DOI: 10.1016/j.jde.2011.08.022
arXiv: 1011.6271 (https://arxiv.org/pdf/1011.6271v3.pdf)
Long-time dynamics of Kirchhoff wave models with strong nonlinear damping
12 Jan 2011; January 13, 2011

Igor Chueshov
Department of Mechanics and Mathematics, Kharkov National University, 61077 Kharkov, Ukraine

AMS 2010 subject classification: Primary 37L30; Secondary 37L15, 35B40, 35B41
Keywords: nonlinear Kirchhoff wave model, state-dependent nonlocal damping, supercritical source, well-posedness, global attractor

Abstract. We study well-posedness and long-time dynamics of a class of quasilinear wave equations with a strong damping. We accept the Kirchhoff hypotheses and assume that the stiffness and damping coefficients are C¹ functions of the L²-norm of the gradient of the displacement. We first prove the existence and uniqueness of weak solutions and study their properties for a rather wide class of nonlinearities, which covers the case of possible degeneration (or even negativity) of the stiffness coefficient and the case of a supercritical source term. Our main results deal with global attractors. In the case of strictly positive stiffness factors we prove that in the natural energy space, endowed with a partially strong topology, there exists a global attractor whose fractal dimension is finite. In the non-supercritical case the partially strong topology becomes strong, and a finite-dimensional attractor exists in the strong topology of the energy space. Moreover, in this case we also establish the existence of a fractal exponential attractor and give conditions that guarantee the existence of a finite number of determining functionals. Our arguments involve a recently developed method based on "compensated" compactness and quasi-stability estimates.
Introduction
In a bounded smooth domain Ω ⊂ R^d we consider the following Kirchhoff wave model with a strong nonlinear damping:

∂_tt u − σ(‖∇u‖²)Δ∂_t u − φ(‖∇u‖²)Δu + f(u) = h(x),  x ∈ Ω, t > 0,
u|_{∂Ω} = 0,  u(0) = u_0,  ∂_t u(0) = u_1.   (1)

Here Δ is the Laplace operator, σ and φ are scalar functions specified below, f(u) is a given source term, h is a given function in L²(Ω), and ‖·‖ is the norm in L²(Ω). This kind of wave model goes back to G. Kirchhoff (d = 1, φ(s) = ϕ_0 + ϕ_1 s, σ(s) ≡ 0, f(u) ≡ 0) and has been studied by many authors under different types of hypotheses. We refer to [4, 28, 41] and to the literature cited in the survey [32]; see also [5, 17, 19, 22, 31, 34, 35, 36, 37, 46, 47, 48, 49] and the references therein.
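To make the dissipative mechanism of this model concrete, the following toy sketch (our illustration, not part of the paper) integrates a two-mode Galerkin truncation of (1) on Ω = (0, π) with the hypothetical coefficients φ(s) = 1 + s, σ(s) ≡ 1 and f = h = 0; with u(x, t) = g_1(t) sin x + g_2(t) sin 2x and S = ‖∇u‖² = (π/2)Σ k²g_k², each mode obeys g_k″ + σ(S)k²g_k′ + φ(S)k²g_k = 0, and the strong damping makes the energy nonincreasing.

```python
import math

# Toy two-mode Galerkin truncation of the Kirchhoff model (illustration only).
# Hypothetical coefficients: phi(s) = 1 + s, sigma(s) = 1, f = h = 0.
# Energy: E = (pi/4) sum g_k'^2 + (1/2) Phi(S), with S = (pi/2) sum k^2 g_k^2.

def phi(s):   return 1.0 + s
def Phi(s):   return s + 0.5 * s * s      # primitive of phi
def sigma(s): return 1.0

def grad_sq(g):
    return (math.pi / 2) * sum((k + 1) ** 2 * g[k] ** 2 for k in range(2))

def energy(g, v):
    return (math.pi / 4) * sum(vk ** 2 for vk in v) + 0.5 * Phi(grad_sq(g))

def accel(g, v):
    # g_k'' = -k^2 (sigma(S) g_k' + phi(S) g_k)
    S = grad_sq(g)
    return [-(k + 1) ** 2 * (sigma(S) * v[k] + phi(S) * g[k]) for k in range(2)]

def rk4_step(g, v, dt):
    """One classical Runge-Kutta step for the coupled mode system."""
    def deriv(g_, v_):
        return v_, accel(g_, v_)
    k1g, k1v = deriv(g, v)
    k2g, k2v = deriv([g[i] + dt / 2 * k1g[i] for i in range(2)],
                     [v[i] + dt / 2 * k1v[i] for i in range(2)])
    k3g, k3v = deriv([g[i] + dt / 2 * k2g[i] for i in range(2)],
                     [v[i] + dt / 2 * k2v[i] for i in range(2)])
    k4g, k4v = deriv([g[i] + dt * k3g[i] for i in range(2)],
                     [v[i] + dt * k3v[i] for i in range(2)])
    g = [g[i] + dt / 6 * (k1g[i] + 2 * k2g[i] + 2 * k3g[i] + k4g[i]) for i in range(2)]
    v = [v[i] + dt / 6 * (k1v[i] + 2 * k2v[i] + 2 * k3v[i] + k4v[i]) for i in range(2)]
    return g, v

g, v = [1.0, 0.5], [0.0, 0.0]
E0 = energy(g, v)
for _ in range(2000):                     # integrate up to t = 2 with dt = 1e-3
    g, v = rk4_step(g, v, 1e-3)
E1 = energy(g, v)
print(E0, E1)                             # the strong damping dissipates energy
```

In this truncation a direct computation gives dE/dt = −σ(S)‖∇u_t‖², a finite-dimensional shadow of the energy identity proved for the full model below.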
Our main goal in this paper is to study the well-posedness and long-time dynamics of problem (1) under the following set of hypotheses:

(ii) f(u) is a C¹ function such that f(0) = 0 (without loss of generality); (c) if d ≥ 3 then either

μ_f := lim inf_{|s|→∞} s^{−1} f(s) > −∞,   (4)

|f′(u)| ≤ C [ 1 + |u|^{p−1} ] with some 1 ≤ p ≤ p* ≡ (d + 2)/(d − 2),   (5)

or else

c_0 |u|^{p−1} − c_1 ≤ f′(u) ≤ c_2 [ 1 + |u|^{p−1} ] with some p* < p < p** ≡ (d + 4)/(d − 4)_+,   (6)

where the c_i are positive constants and s_+ = (s + |s|)/2.
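As a concrete instance (our illustration, not stated at this point in the text), the pure power source satisfies the supercritical requirement (6):

```latex
% Example: f(u) = |u|^{p-1}u with p_* < p < p_{**}.
% Then f'(u) = p|u|^{p-1}, so (6) holds with c_0 = c_2 = p and c_1 = 0,
% and the lower bound in (4) is automatic: \mu_f = +\infty for p > 1.
f(u) = |u|^{p-1}u, \qquad f'(u) = p\,|u|^{p-1}, \qquad
p\,|u|^{p-1} \;\le\; f'(u) \;\le\; p\left(1 + |u|^{p-1}\right).
```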
Remark 1.2. (1) The coercive behavior in (2) and (3) holds with η_0 = c_1 = 0 if we assume, for instance, that lim inf_{s→+∞} {sφ(s)} > 0. The standard example is φ(s) = φ_0 + φ_1 s^α with φ_0 ∈ R, φ_1 > 0 and α ≥ 1. However, we can also take φ(s) with finite support, or even φ(s) ≡ const ≤ 0. In this case we need additional hypotheses concerning the behavior of σ(s) as s → +∞. We note that the physically justified situation (see, e.g., the survey [32]) corresponds to the case when the stiffness coefficient φ(s) is positive almost everywhere. However, we include in our considerations the case of possibly negative φ, because the argument we use to prove well-posedness involves positivity properties of φ only in a rather mild form (see, e.g., (2) and (3)).

(2) We note that in the case when d ≤ 2, or d ≥ 3 and (5) holds with p < p*, the Nemytskii operator u → f(u) is a locally Lipschitz mapping from the Sobolev space H¹_0(Ω) into H^{−1+δ}(Ω) for some δ > 0. If d ≥ 3 and (5) holds with p = p*, this fact is valid with δ = 0. These properties of the source nonlinearity f(u) are of importance in the study of wave dynamics with strong damping (see, e.g., [6, 7, 38, 45] and the references therein). Below we refer to this situation as the non-supercritical case (subcritical when δ > 0 and critical when δ = 0). To deal with the supercritical case (the inequality in (5) holds with p > p*) we borrow some ideas from [23], and we need a lower bound for f′(u) of the same order as its upper bound (see the requirement in (6)). The second critical exponent p** arises in dimension d ≥ 5 from the requirement H²(Ω) ⊂ L^{p+1}(Ω), which we need in order to estimate the source term in some negative Sobolev space; see also Remark 2.6 below.

(3) We also note that in the case (6) the condition in (4) holds automatically (with μ_f = +∞). This condition can be relaxed depending on the properties of φ. For instance, in the case when φ(s) = φ_0 + φ_1 s^α with φ_1 > 0, instead of (4) we can assume that f(s)s ≥ −c_1|s|^l − c_2 for some l ≤ min{2α + 2 − ε, 2d/(d − 2)_+} with arbitrarily small ε > 0. Therefore, for this choice of φ we need no coercivity assumptions concerning f in the non-supercritical case provided p < 2α + 1. However, we do not pursue these possible generalizations and prefer to keep the hypotheses concerning φ and σ as general as possible.
Well-posedness issues for Kirchhoff-type models like (1) have been studied intensively in recent years. The main attention was paid to the case when the strong damping term −σΔu_t is absent and the source term f(u) is either absent or subcritical. We refer to [19, 36, 49] and also to the survey [32]. In these papers the authors have studied sets of initial data for which solutions exist and are unique. The papers [19, 36] also consider the case of a degenerate stiffness coefficient (φ(s) ∼ s^α near zero). We also mention the paper [31], which deals with global existence (for a restricted class of initial data) in the case of a strictly positive stiffness factor of the form φ(s) = φ_0 + φ_1 s^α, with the nonlinear damping |u_t|^q u_t and the source term f(u) = −|u|^p u for some range of the exponents q and p; see also the recent paper [43], which concentrates on the local existence issue for the same type of damping and source terms but for a wider range of the exponents p and q.

Introducing the strong (Kelvin–Voigt) damping term −σΔu_t provides an additional a priori estimate and simplifies the issue. There are several well-posedness results available in the literature for this case (see [5, 33, 35, 37, 46, 48, 47]). However, all these publications assume that the damping coefficient σ(s) ≡ σ_0 > 0 is constant and deal with a subcritical or absent source term. Moreover, all of them (except [37]) assume that the stiffness factor is non-degenerate (i.e., φ(s) ≥ φ_0 > 0), while [37] assumes small initial energy, i.e., deals with local (in phase space) dynamics. Recently the existence and uniqueness of weak (energy) solutions of (1) was reported (without detailed proofs) in [23] for the case of a supercritical source satisfying (6). However, the authors in [23] assume (in addition to our hypotheses) that d = 3, the damping is linear (i.e., σ(s) = const > 0), and the stiffness factor φ is a uniformly positive C¹ function satisfying the inequality ∫_0^s φ(ξ) dξ ≤ sφ(s) for all s ≥ 0. As for nonlinear strong damping, to the best of our knowledge there is only one publication, [26]. This paper deals with nonlinear damping of the form σ(‖A^α u‖²)A^α u_t with 0 < α ≤ 1. The main result of [26] states only the existence of weak solutions for uniformly positive φ and σ in the case when f(u) ≡ 0.
The main achievements of our well-posedness result are the following: (a) we do not assume any kind of non-degeneracy conditions concerning φ (this function may be zero or even negative); (b) we consider a nonlinear state-dependent strong damping and do not assume uniform positivity of the damping factor σ; (c) we cover the cases of critical and supercritical source terms f.
Our second result deals with a global attractor for the dynamical system generated by (1). There are many papers on stabilization to the zero equilibrium for Kirchhoff-type models (see, e.g., [1, 5, 31, 32] and the references therein) and only a few recent results devoted to (non-trivial) attractors for systems like (1). We refer to [34] for studies of local attractors in the case of viscous damping and to [17, 35, 46, 47, 48] in the case of a strong linear damping (possibly perturbed by nonlinear viscous terms). All these papers assume subcriticality of the force f(u) and deal with a uniformly positive stiffness coefficient of the form φ(s) = φ_0 + φ_1 s^α with φ_0 > 0. In the long-time dynamics context we can point only to the paper [1], which contains a result (see [1, Theorem 4.4]) on stabilization to zero in the case when φ(s) ≡ σ(s) = a + bs^γ with a > 0 and a possibly supercritical source with the property f(u)u + aμu² ≥ 0, where μ > 0 is small enough. In this case the global attractor A = {0} is trivial. However, this paper does not discuss well-posedness issues and assumes the existence of sufficiently smooth solutions as a starting point for all the considerations.

Our main novelty is that we consider long-time dynamics for much more general stiffness and damping coefficients and cover the supercritical case. Namely, under some additional non-degeneracy assumptions we prove the existence of a finite-dimensional global attractor which uniformly attracts trajectories in a partially strong sense (see Definition 3.1). In the non-supercritical case this result can be improved: we establish the convergence property with respect to the strong topology of the phase (energy) space. Moreover, in this case we prove the existence of a fractal exponential attractor and give conditions for the existence of finite sets of determining functionals. To establish these results we rely on a recently developed approach (see [12] and also [13] and [14, Chapters 7, 8]) which involves stabilizability estimates, the notion of a quasi-stable system, and also the idea of "short" trajectories due to [29, 30]. In the supercritical case, to prove that the attractor has finite dimension we also use a recent observation made in [23] concerning a stabilizability estimate in the extended space. In the non-supercritical case we first prove that the corresponding system is quasi-stable in the sense of the definition given in [14, Section 7.9] and then apply the general theorems on properties of quasi-stable systems from this source.

We also note that the long-time dynamics of second-order equations with nonlinear damping has been studied by many authors. We refer to [3, 11, 20, 24, 39, 40] for the case of a damping with a displacement-dependent coefficient, and to [12, 13, 14] and the references therein for a velocity-dependent damping. Models with different types of strong (linear) damping in wave equations were considered in [6, 7, 23, 38, 45]; see also the literature quoted in these references.
The paper is organized as follows. In Section 2 we introduce some notations and prove Theorem 2.2 which provides us with well-posedness of our model and contains some additional properties of solutions. In Section 3 we study long-time dynamics of the evolution semigroup S(t) generated by (1). We first establish some continuity properties of S(t) (see Proposition 3.2) and its dissipativity (Proposition 3.5). These results do not require any non-degeneracy hypotheses concerning the stiffness coefficient φ. Then in the case of strictly positive φ we prove asymptotic compactness of S(t) (see Theorem 3.9 and Corollary 3.10). Our main results in Section 3 state the existence of global attractors and describe their properties in both the general case (Theorems 3.11 and 3.13) and the non-supercritical case (Theorems 3.16 and 3.18).
Well-posedness
We first describe some notation. Let H^σ(Ω) be the L²-based Sobolev space of order σ, with the norm denoted by ‖·‖_σ, and let H^σ_0(Ω) be the completion of C^∞_0(Ω) in H^σ(Ω) for σ > 0. Below we also denote by ‖·‖ and (·, ·) the norm and the inner product in L²(Ω).
In the space H = L²(Ω) we introduce the operator A = −Δ_D with the domain

D(A) = { u ∈ H²(Ω) : u = 0 on ∂Ω } ≡ H²(Ω) ∩ H¹_0(Ω),

where Δ_D is the Laplace operator in Ω with the Dirichlet boundary conditions. The operator A is a linear self-adjoint positive operator densely defined on H = L²(Ω). The resolvent of A is compact in H. Below we denote by {e_k} the orthonormal basis in H consisting of eigenfunctions of the operator A:

A e_k = λ_k e_k,  0 < λ_1 ≤ λ_2 ≤ · · · ,  lim_{k→∞} λ_k = ∞.
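As a concrete check of this spectral setup (an illustration added here, not part of the paper), take Ω = (0, 1) in dimension d = 1: then e_k(x) = sin(kπx) and λ_k = (kπ)², which a centered second difference confirms numerically:

```python
import math

# For Omega = (0, 1), the Dirichlet Laplacian A = -d^2/dx^2 has
# eigenfunctions e_k(x) = sin(k pi x) and eigenvalues lambda_k = (k pi)^2.
def e(k, x):
    return math.sin(k * math.pi * x)

h = 1e-4
for k in (1, 2, 3):
    lam = (k * math.pi) ** 2
    x = 0.3                                   # any interior sample point
    # centered second difference approximates -e_k''(x) = lambda_k e_k(x)
    Ae = -(e(k, x + h) - 2 * e(k, x) + e(k, x - h)) / h**2
    rel_err = abs(Ae - lam * e(k, x)) / abs(lam * e(k, x))
    assert rel_err < 1e-6, rel_err
```

The eigenvalues are positive and increase without bound, matching the abstract properties of A used below.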
We also denote H = [H¹_0(Ω) ∩ L^{p+1}(Ω)] × L²(Ω). In the non-supercritical case (when d ≤ 2, or d ≥ 3 and p ≤ p* = (d + 2)(d − 2)^{−1}) we have H¹_0(Ω) ⊂ L^{p+1}(Ω)¹, and thus the space H coincides with H¹_0(Ω) × L²(Ω). We define the norm in H by the relation

‖(u_0; u_1)‖²_H = ‖∇u_0‖² + α‖u_0‖²_{L^{p+1}(Ω)} + ‖u_1‖²,   (7)

where α = 1 in the case when d ≥ 3 and p > p*, and α = 0 in the other cases.
and (1) is satisfied in the sense of distributions.
Our main result in this section is Theorem 2.2 on well-posedness of problem (1). This theorem also contains some auxiliary properties of solutions which we need for the results on the asymptotic dynamics.
1. The function t → (u(t); u_t(t)) is (strongly) continuous in H = [H¹_0 ∩ L^{p+1}](Ω) × L²(Ω) and

u_tt ∈ L²(0, T; H^{−1}(Ω)) + L^∞(0, T; L^{1+1/p}(Ω)).

Moreover, there exists a constant C_{R,T} > 0 such that

‖u_t(t)‖² + ‖∇u(t)‖² + c_0‖u(t)‖²_{L^{p+1}(Ω)} + ∫_0^t ‖∇u_t(τ)‖² dτ ≤ C_{R,T}   (10)

for every t ∈ [0, T] and initial data ‖(u_0; u_1)‖_H ≤ R, where c_0 = 1 in the case when (6) holds and c_0 = 0 in the other cases. We also have the following additional regularity:

u_t ∈ L^∞(a, T; H¹_0(Ω)),  u_tt ∈ L^∞(a, T; H^{−1}(Ω)) ∩ L²(a, T; L²(Ω))

for every 0 < a < T, and there exist β > 0 and c_{R,T} > 0 such that

‖u_tt(t)‖²_{−1} + ‖∇u_t(t)‖² + ∫_t^{t+1} [ ‖u_tt(τ)‖² + c_0 ∫_Ω |u(x, τ)|^{p−1}|u_t(x, τ)|² dx ] dτ ≤ c_{R,T} t^{−β}   (11)

for every t ∈ (0, T], where as above ‖(u_0; u_1)‖_H ≤ R, and c_0 > 0 in the supercritical case only.

¹ To unify the presentation we suppose that p ≥ 1 is arbitrary in all appearances in the case d = 1.
2. The following energy identity holds for every t > s ≥ 0:

E(u(t), u_t(t)) + ∫_s^t σ(‖∇u(τ)‖²) ‖∇u_t(τ)‖² dτ = E(u(s), u_t(s)),   (12)

where the energy E is defined by the relation

E(u_0, u_1) = ½‖u_1‖² + ½Φ(‖∇u_0‖²) + ∫_Ω F(u_0) dx − ∫_Ω h u_0 dx,  (u_0; u_1) ∈ H,

with Φ(s) = ∫_0^s φ(ξ) dξ and F(s) = ∫_0^s f(ξ) dξ.

3. If u¹(t) and u²(t) are two weak solutions such that ‖(u^i(0); u^i_t(0))‖_H ≤ R, i = 1, 2, then there exists b_{R,T} > 0 such that the difference z(t) = u¹(t) − u²(t) satisfies the relation

‖z_t(t)‖²_{−1} + ‖∇z(t)‖² + ∫_0^t ‖z_t(τ)‖² dτ ≤ b_{R,T} [ ‖z_t(0)‖²_{−1} + ‖∇z(0)‖² ]   (13)

for all t ∈ [0, T], and, if (6) holds, we also have that

∫_0^T [ ∫_Ω |z|^{p+1} dx + ∫_Ω (|u¹|^{p−1} + |u²|^{p−1})|z|² dx ] dτ ≤ b_{R,T} [ ‖z_t(0)‖²_{−1} + ‖∇z(0)‖² ].   (14)

4. If we assume in addition that u_0 ∈ (H² ∩ H¹_0)(Ω), then u ∈ C_w(0, T; (H² ∩ H¹_0)(Ω)), where C_w(0, T; X) stands for the space of weakly continuous functions with values in X, and under the condition ‖(u_0; u_1)‖_H ≤ R we have that

‖u_t(t)‖² + ‖Δu(t)‖² ≤ C_R(T) [ 1 + ‖Δu_0‖² ] for every t ∈ [0, T].   (15)
Proof. Let Σ(s) = ∫_0^s σ(ξ) dξ. For every η > 0 we introduce the following functional on H:

E^η_+(u_0, u_1) = ‖u_1‖² + Φ(‖∇u_0‖²) + ηΣ(‖∇u_0‖²) − a(η) + α‖u_0‖^{p+1}_{L^{p+1}(Ω)} + ‖u_0‖²   (16)

with a(η) = inf_{s∈R_+} {Φ(s) + ηΣ(s)}, where α = 1 in the case when (6) holds and α = 0 in the other cases. By (2) this functional is finite for every η ≥ η_0.

Let ν ∈ R_+ and

W_{η,ν}(u_0, u_1) = E(u_0, u_1) + η [ (u_0, u_1) + ½Σ(‖∇u_0‖²) ] + ν‖u_0‖².   (17)

One can see that for every η ≥ η_0 we can choose ν = ν(η, μ_f) ≥ 0, positive constants a_i and a monotone positive function M(s) such that

a_0 E^η_+(u_0, u_1) − a_1 ≤ W_{η,ν}(u_0, u_1) ≤ a_2 [ E^η_+(u_0, u_1) + M(‖∇u_0‖²) ],  ∀ (u_0; u_1) ∈ H.   (18)
To prove the existence of solutions we use the standard Galerkin method. We start with the case when u_0 ∈ (H² ∩ H¹_0)(Ω) and assume that ‖(u_0; u_1)‖_H ≤ R for some R > 0. We seek approximate solutions of the form

u^N(t) = Σ_{k=1}^N g_k(t) e_k,  N = 1, 2, . . . ,

that satisfy the finite-dimensional projections of (1). Moreover, we assume that

‖(u^N(0); u^N_t(0))‖_H ≤ C_R and ‖u^N(0) − u_0‖_2 → 0 as N → ∞.

Such solutions exist (at least locally), and after multiplication of the corresponding projection of (1) by u^N_t(t) we get that u^N(t) satisfies the energy relation in (12). Similarly, one can see from (3) and (4) that

d/dt [ (u^N, u^N_t) + ½Σ(‖∇u^N‖²) ] = ‖u^N_t‖² − φ(‖∇u^N‖²)‖∇u^N‖² − (f(u^N), u^N) + (h, u^N) ≤ ‖u^N_t‖² + C_1Σ(‖∇u^N‖²) + C_2‖u^N‖² + C_3.

One can see from (2) that for every η > η_0 there exist c_i > 0 such that

Σ(s) ≤ c_1 [ Φ(s) + ηΣ(s) − a(η) ] + c_2,  s ∈ R_+.
Thus, using (18), we see that the function W^{η,ν}_N(t) ≡ W_{η,ν}(u^N(t), u^N_t(t)) satisfies the inequality

d/dt W^{η,ν}_N(t) ≤ η [ ‖u^N_t‖² + C_1Σ(‖∇u^N‖²) + C_2‖u^N‖² + C_3 ] ≤ c_1 W^{η,ν}_N(t) + c_2

for η > η_0 with ν depending on η and f. Therefore, using a Gronwall-type argument and also relation (18), we obtain

E^η_+(u^N(t); u^N_t(t)) ≤ C_{R,T} for all t ∈ [0, T], N = 1, 2, 3, . . . ,

for every η > η_0. By the coercivity requirement in (2) we conclude that

‖(u^N(t); u^N_t(t))‖_H ≤ C_{R,T} for all t ∈ [0, T], N = 1, 2, 3, . . . .   (19)

Since σ(s) > 0, this implies that σ(‖∇u^N(t)‖²) > σ_{R,T} > 0 for all t ∈ [0, T]. Therefore the energy relation (12) for u^N yields that

∫_0^T ‖∇u^N_t(t)‖² dt ≤ C(R, T),  N = 1, 2, . . . , for any T > 0.   (20)
Now we use the multiplier −Δu (below we omit the superscript N for brevity). We obviously have that

d/dt [ −(u_t, Δu) + ½σ(‖∇u‖²)‖Δu‖² ] + φ(‖∇u‖²)‖Δu‖² + (f′(u), |∇u|²) ≤ ‖∇u_t‖² + |σ′(‖∇u‖²)(∇u, ∇u_t)| ‖Δu‖² + ‖h‖ ‖Δu‖.   (21)

In the case when d ≥ 3 and (6) holds, we have

(f′(u), |∇u|²) ≥ c_0 ∫_Ω |u|^{p−1}|∇u|² dx − c_1‖∇u‖²,  c_0, c_1 > 0.

In the other (non-supercritical) cases, due to the embedding H¹(Ω) ⊂ L^{p+1}(Ω), from (19) we have the relation |(f′(u), |∇u|²)| ≤ c_{R,T}‖Δu‖². This implies that

d/dt [ −(u_t, Δu) + ½σ(‖∇u‖²)‖Δu‖² ] ≤ ‖∇u_t‖² + c_{R,T}(1 + ‖∇u_t‖)·‖Δu‖² + C_{R,T}   (22)

for every t ∈ [0, T]. Let

Ψ(t) = E(u(t), u_t(t)) + η [ −(u_t, Δu) + ½σ(‖∇u‖²)‖Δu‖² ]

with η > 0. We note that there exists η_* = η(R, T) > 0 such that

Ψ(t) ≥ α_{R,T,η} [ ‖u_t‖² + ‖Δu‖² ] − C_{R,T},  t ∈ [0, T],   (23)

for every 0 < η < η_*. Therefore, using the energy relation (12) for the approximate solutions and also (22), one can choose η > 0 such that

d/dt Ψ(t) ≤ c_0 [ Ψ(t) + c_1 ](1 + ‖∇u_t‖²),  t ∈ [0, T],

with appropriate c_i > 0. By (20) and (23) this implies the estimate

‖u^N_t(t)‖² + ‖Δu^N(t)‖² ≤ C_R(T) [ 1 + ‖Δu^N(0)‖² ],  t ∈ [0, T].
The above a priori estimates show that (u^N; ∂_t u^N) is *-weakly compact in

W_T ≡ L^∞(0, T; (H²(Ω) ∩ L^{p+1}(Ω))) × [ L^∞(0, T; L²(Ω)) ∩ L²(0, T; H¹_0(Ω)) ]

for every T > 0. Moreover, using the equation for u^N(t), we can show in the standard way that

∫_0^T ‖∂_tt u^N(t)‖²_{−m} dt ≤ C_T(R),  N = 1, 2, . . . ,   (24)

for some m ≥ max{1, d/2}. Thus the Aubin–Dubinskii theorem (see [42, Corollary 4]) yields that (u^N; ∂_t u^N) is also compact in

C(0, T; H^{2−ε}(Ω)) × [ C(0, T; H^{−ε}(Ω)) ∩ L²(0, T; H^{1−ε}(Ω)) ] for every ε > 0.

Thus there exists an element (u; u_t) in W_T such that (along a subsequence) the following convergence holds:

max_{[0,T]} ‖u^N(t) − u(t)‖²_{2−ε} + ∫_0^T ‖u^N_t(t) − u_t(t)‖²_{1−ε} dt → 0 as N → ∞.

Moreover, by the Lions Lemma (see [27, Chapter 1, Lemma 1.3]) we have that f(u^N(x, t)) → f(u(x, t)) weakly in L^{1+1/p}([0, T] × Ω). This allows us to pass to the limit in the nonlinear terms and prove the existence of a weak solution under the additional condition u_0 ∈ (H² ∩ H¹_0)(Ω). One can see that this solution possesses the properties (9), (10), (15) and satisfies the corresponding energy inequality.

Now we prove that (13) (and also (14) in the supercritical case) holds for every couple u¹(t) and u²(t) of weak solutions. For this we use the same idea as in [23] and start with the following preparatory lemma, which we also use in further considerations.
Lemma 2.3. Let u¹(t) and u²(t) be two weak solutions to (1) with different initial data (u^i_0; u^i_1) from H such that

‖u^i_t(t)‖² + ‖∇u^i(t)‖² ≤ R² for all t ∈ [0, T] and for some R > 0.   (25)

Then for z(t) = u¹(t) − u²(t) we have the relation

d/dt [ (z, z_t) + ¼σ_{12}(t)‖∇z‖² ] + ½φ_{12}(t)‖∇z‖² + (f(u¹) − f(u²), z) + φ̂_{12}(t)|(∇(u¹ + u²), ∇z)|² ≤ ‖z_t‖² + C_R [ ‖∇u¹_t‖ + ‖∇u²_t‖ ] ‖∇z‖²   (26)

for all t ∈ [0, T], where σ_{12}(t) = σ_1(t) + σ_2(t) and φ_{12}(t) = φ_1(t) + φ_2(t) with σ_i(t) = σ(‖∇u^i(t)‖²) and φ_i(t) = φ(‖∇u^i(t)‖²). We also use the following notation:

φ̂_{12}(t) = ½ ∫_0^1 φ′(λ‖∇u¹(t)‖² + (1 − λ)‖∇u²(t)‖²) dλ.   (27)

Remark 2.4. It follows directly from Definition 2.1 that (9) holds for every weak solution. This and also (8) allow us to show that (z, z_t) + σ_{12}(t)‖∇z‖²/4 is absolutely continuous with respect to t, and thus the relation in (26) is meaningful for every couple of weak solutions.
Proof. One can see that z(t) = u¹(t) − u²(t) solves the equation

z_tt − ½σ_{12}(t)Δz_t − ½φ_{12}(t)Δz + G(u¹, u²; t) = 0,   (28)

where

G(u¹, u²; t) = −½ { [σ_1(t) − σ_2(t)]Δ(u¹_t + u²_t) + [φ_1(t) − φ_2(t)]Δ(u¹ + u²) } + f(u¹) − f(u²).

Since G ∈ L²(0, T; H^{−1}(Ω)) + L^∞(0, T; L^{1+1/p}(Ω)) and z ∈ L^∞(0, T; (H¹_0 ∩ L^{p+1})(Ω)) for any couple u¹ and u² of weak solutions, we can multiply equation (28) by z in L²(Ω). Therefore, using the relation

|σ′_{12}(t)| ≤ C_R [ ‖∇u¹_t‖ + ‖∇u²_t‖ ]

and also the observation made in Remark 2.4, we conclude that

d/dt [ (z, z_t) + ¼σ_{12}(t)‖∇z‖² ] + ½φ_{12}(t)‖∇z‖² + (G(u¹, u², t), z) ≤ ‖z_t‖² + C_R [ ‖∇u¹_t‖ + ‖∇u²_t‖ ] ‖∇z‖².

One can see that

φ_1(t) − φ_2(t) = 2(∇(u¹ + u²), ∇z)·φ̂_{12}(t),

where φ̂_{12} is given by (27), and

|[σ_1(t) − σ_2(t)](∇(u¹_t + u²_t), ∇z)| ≤ C_R [ ‖∇u¹_t‖ + ‖∇u²_t‖ ]·‖∇z‖².

Thus, using the structure of the term G(u¹, u²; t), we obtain (26).
Lemma 2.5. Assume that f(u) satisfies Assumption 1.1 and the additional requirement² that f′(u) ≥ −c for some c ≥ 0. Then for z = u¹ − u² we have that

∫_Ω (f(u¹) − f(u²))(u¹ − u²) dx ≥ −c_0‖z‖² + c_1 ∫_Ω (|u¹|^{p−1} + |u²|^{p−1})|z|² dx   (29)

and

∫_Ω (f(u¹) − f(u²))(u¹ − u²) dx ≥ −c_0‖z‖² + c_1 ∫_Ω |z|^{p+1} dx,   (30)

where c_0 ≥ 0, and c_1 > 0 in the case when (6) holds and c_1 = 0 in the other cases.

Proof. It is sufficient to consider the case when (6) holds. The relation in (29) follows from the obvious inequality

∫_0^1 |(1 − λ)u_1 + λu_2|^r dλ ≥ c_r [ |u_1|^r + |u_2|^r ],  r ≥ 0, u_i ∈ R,

which can be obtained by direct calculation of the integral. As for (30), we use the obvious representation

∫_{u_1}^{u_2} |ξ|^r dξ = (r + 1)^{−1} [ |u_2|^r u_2 − |u_1|^r u_1 ],  r ≥ 0, u_i ∈ R, u_1 < u_2,
and the argument given in [14, Remark 3.2.9].
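The constant c_r in the first inequality of the proof above can be estimated numerically (our illustration, not part of the paper); for r = 2 a direct computation gives the best constant 1/6, attained at u_2 = −u_1:

```python
import math

# Numerical check (illustration) of
#   int_0^1 |(1-lam) u1 + lam u2|^r dlam  >=  c_r (|u1|^r + |u2|^r)
# for r = 2.  Both sides are homogeneous of degree r, so scanning the unit
# circle (u1, u2) = (cos t, sin t) determines the best constant c_r.
def lam_integral(u1, u2, r, n=1000):
    # composite midpoint rule for the lambda-integral
    return sum(abs((1 - (j + 0.5) / n) * u1 + (j + 0.5) / n * u2) ** r
               for j in range(n)) / n

r = 2
c_r = min(lam_integral(math.cos(t), math.sin(t), r)
          / (abs(math.cos(t)) ** r + abs(math.sin(t)) ** r)
          for t in (math.pi * i / 360 for i in range(360)))
print(c_r)   # approximately 1/6, attained at u2 = -u1
assert abs(c_r - 1 / 6) < 1e-3
```

For r = 2 the integral equals (u_1² + u_1u_2 + u_2²)/3, and minimizing the ratio over the unit circle reproduces c_2 = 1/6, so the constant is strictly positive, as the lemma requires.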
Now we return to the proof of relations (13) and (14). Let u¹ and u² be weak solutions satisfying (25), and also the inequality ‖u^i(t)‖_{L^{p+1}(Ω)} ≤ R for all t ∈ [0, T] in the supercritical case. We first note that in the non-supercritical case, by the embedding H¹(Ω) ⊂ L^r(Ω) (with r = ∞ in the case d = 1, arbitrary 1 ≤ r < ∞ when d = 2, and r = 2d(d − 2)^{−1} in the case d ≥ 3), we have that

‖f(u¹) − f(u²)‖_{−1} ≤ C_R ‖∇(u¹ − u²)‖,  u¹, u² ∈ H¹_0(Ω), ‖∇u^i‖ ≤ R,   (31)

which implies that |(f(u¹) − f(u²), z)| ≤ C_R ‖∇z‖². Therefore it follows from Lemma 2.3, and from Lemma 2.5 in the supercritical case, that

d/dt [ (z, z_t) + ¼σ_{12}(t)‖∇z‖² ] + ½φ_{12}(t)‖∇z‖² + c_0 [ ∫_Ω |z|^{p+1} dx + ∫_Ω (|u¹|^{p−1} + |u²|^{p−1})|z|² dx ] ≤ ‖z_t‖² + C_R [ 1 + ‖∇u¹_t‖ + ‖∇u²_t‖ ] ‖∇z‖²,   (32)

where c_0 is positive in the supercritical case only. Now we consider the multiplier A^{−1}z_t. Since H^{2−η}(Ω) ⊂ L^{p+1}(Ω) for some η > 0 under the condition p < p** = (d + 4)/(d − 4)_+, we easily obtain that

‖A^{−1}z_t‖²_{L^{p+1}} ≤ C‖A^{−η/2}z_t‖² ≤ ε‖z_t‖² + C_ε‖A^{−1/2}z_t‖² for every ε > 0.   (33)
Thus we can multiply equation (28) by A^{−1}z_t and obtain that

½ d/dt ‖A^{−1/2}z_t‖² + ½φ_{12}(t)(z, z_t) + ½σ_{12}(t)‖z_t‖² + (G(u¹, u²; t), A^{−1}z_t) = 0,   (34)

where

(G(u¹, u²; t), A^{−1}z_t) = G_1(t) + G_2(t) + G_3(t).   (35)

Here

G_1(t) = −½ [σ_1(t) − σ_2(t)] (Δ(u¹_t + u²_t), A^{−1}z_t),
G_2(t) = φ̂_{12}(t)(∇(u¹ + u²), ∇z)(∇(u¹ + u²), ∇A^{−1}z_t)

with φ̂_{12}(t) given by (27), and G_3(t) = (f(u¹) − f(u²), A^{−1}z_t). One can see that

|G_1(t) + G_2(t)| ≤ C_R ‖z_t‖·‖∇z‖.

In the non-supercritical case, by (31) we have the same estimate for |G_3(t)|. In the supercritical case we obviously have that

∫_Ω |f(u¹) − f(u²)||A^{−1}z_t| dx ≤ ε ∫_Ω (1 + |u¹|^{p−1} + |u²|^{p−1})|z|² dx + C_ε ∫_Ω (1 + |u¹|^{p−1} + |u²|^{p−1})|A^{−1}z_t|² dx   (36)

≤ ε ∫_Ω (1 + |u¹|^{p−1} + |u²|^{p−1})|z|² dx + C_ε [ ∫_Ω (1 + |u¹|^{p+1} + |u²|^{p+1}) dx ]^{(p−1)/(p+1)} ‖A^{−1}z_t‖²_{L^{p+1}}.

Therefore, using (33), we have that

|(G(u¹, u²; t), A^{−1}z_t)| ≤ C_R ‖z_t‖·‖∇z‖ + ε [ ∫_Ω (|u¹|^{p−1} + |u²|^{p−1})|z|² dx + ‖z‖² + ‖z_t‖² ] + C_ε(R)‖A^{−1/2}z_t‖²

for any ε > 0. Thus from (34) we obtain

½ d/dt ‖A^{−1/2}z_t‖² + ½σ_{12}(t)‖z_t‖² ≤ C_R ‖z_t‖·‖∇z‖ + ε c_0 ∫_Ω (|u¹|^{p−1} + |u²|^{p−1})|z|² dx + ε [ ‖z‖² + ‖z_t‖² ] + c_0 C_ε(R)‖A^{−1/2}z_t‖²   (37)
for any ε > 0, where c_0 = 0 in the non-supercritical case. Let

Ψ(t) = ½‖A^{−1/2}z_t‖² + η [ (z, z_t) + ¼σ_{12}(t)‖∇z‖² ]   (38)

for η > 0 small enough. It is obvious that for η ≤ η_0(R) we have

a_R η [ ‖A^{−1/2}z_t‖² + ‖∇z‖² ] ≤ Ψ(t) ≤ b_R [ ‖A^{−1/2}z_t‖² + ‖∇z‖² ].   (39)

From (32) and (37) we also have that

dΨ/dt + [ ½σ_{12}(t) − η − cε ] ‖z_t‖² + c_0 η ∫_Ω |z|^{p+1} dx + c_0(η − ε) ∫_Ω (|u¹|^{p−1} + |u²|^{p−1})|z|² dx ≤ C_ε(R) [ ‖∇z‖² + ‖A^{−1/2}z_t‖² ].

After selecting appropriate η and ε, this implies the desired conclusions (13) and (14).

We can use (13) and (14) to prove the existence of weak solutions for initial data (u_0; u_1) ∈ H by a limit transition from smoother solutions. Indeed, we can choose a sequence (u^n_0; u^n_1) of elements from (H² ∩ H¹_0)(Ω) × L²(Ω) such that (u^n_0; u^n_1) → (u_0; u_1) in H. Due to (13) and (14), the corresponding solutions u^n(t) converge to a function u(t) in the sense that

max_{t∈[0,T]} [ ‖u^n_t(t) − u_t(t)‖²_{−1} + ‖u^n(t) − u(t)‖²_1 ] + ∫_0^T ‖u^n(τ) − u(τ)‖^{p+1}_{L^{p+1}(Ω)} dτ → 0.

From the boundedness provided by the energy relation in (10) for u^n we also have *-weak convergence of (u^n; u^n_t) to (u; u_t) in the space

L^∞(0, T; (H¹(Ω) ∩ L^{p+1}(Ω))) × [ L^∞(0, T; L²(Ω)) ∩ L²(0, T; H¹_0(Ω)) ].

This implies that u(t) is a weak solution, which by (13) is unique. Moreover, this solution satisfies the corresponding energy inequality.
Now we prove the smoothness properties of weak solutions stated in (11), using the same method as in [23] (see also [2]). As usual, the argument below can be justified by considering Galerkin approximations. Let u(t) be a solution such that ‖(u(t); u_t(t))‖_H ≤ R for t ∈ [0, T]. Formal differentiation gives that v = u_t(t) solves the equation

v_tt − σ(‖∇u‖²)Δv_t − φ(‖∇u‖²)Δv + f′(u)v + G_*(u, u_t; t) = 0,   (40)

where

G_*(u, u_t; t) = −2 [ σ′(‖∇u‖²)Δu_t + φ′(‖∇u‖²)Δu ] (∇u, ∇u_t).

Thus, multiplying equation (40) by v, we have that

d/dt [ (v, v_t) + ½σ(‖∇u‖²)‖∇v‖² ] + φ(‖∇u‖²)‖∇v‖² + (f′(u)v, v) ≤ ‖v_t‖² + C_R [ |(∇u, ∇v)|² + |(∇u, ∇v)| ‖∇v‖² ].

This implies that

d/dt [ (v, v_t) + ½σ(‖∇u‖²)‖∇v‖² ] + c_0 ∫_Ω |u|^{p−1}v² dx ≤ ‖v_t‖² + C_R [ 1 + ‖∇u_t‖ ] ‖∇v‖²,
where c_0 > 0 in the supercritical case only. Using the multiplier A^{−1}v_t in (40), we obtain that

½ d/dt ‖A^{−1/2}v_t‖² + σ(‖∇u‖²)‖v_t‖² ≤ C_R [ ‖∇v‖ ‖v_t‖ + ∫_Ω |f′(u) v A^{−1}v_t| dx ].

As above (cf. (36)), in the supercritical case we have that

∫_Ω |f′(u) v A^{−1}v_t| dx ≤ ε ∫_Ω (1 + |u|^{p−1})|v|² dx + C_{R,ε} ‖A^{−1}v_t‖²_{L^{p+1}}

for any ε > 0. Thus

½ d/dt ‖A^{−1/2}v_t‖² + σ(‖∇u‖²)‖v_t‖² ≤ ε [ ‖v_t‖² + c_0 ∫_Ω |u|^{p−1}v² dx ] + C_{R,ε} [ ‖∇v‖² + ‖A^{−1}v_t‖²_{L^{p+1}} ].

We now introduce the functional

Ψ_*(t) = ½‖A^{−1/2}v_t‖² + η [ (v, v_t) + ½σ(‖∇u‖²)‖∇v‖² ]

for η > 0 small enough. It is obvious that for η ≤ η_0(R) we have

a_R η [ ‖A^{−1/2}v_t‖² + ‖∇v‖² ] ≤ Ψ_*(t) ≤ b_R [ ‖A^{−1/2}v_t‖² + ‖∇v‖² ].

Using (33) we also have that

dΨ_*/dt + [ σ(‖∇u‖²) − η − ε ] ‖v_t‖² + c_0 [ η − ε ] ∫_Ω |u|^{p−1}v² dx ≤ C_{R,ε} [ 1 + ‖∇u_t‖² ] [ ‖A^{−1/2}v_t‖² + ‖∇v‖² ].
In particular, for η > 0 small enough there exists α_R > 0 such that

dΨ_*/dt + α_R [ ‖v_t‖² + c_0 ∫_Ω |u|^{p−1}v² dx ] ≤ C_R [ 1 + ‖∇u_t‖² ] Ψ_*(t),   (41)

where c_0 > 0 in the supercritical case only. This implies that

‖u_tt(t)‖²_{−1} + ‖∇u_t(t)‖² + ∫_0^t [ ‖u_tt(τ)‖² + c_0 ∫_Ω |u(x, τ)|^{p−1}|u_t(x, τ)|² dx ] dτ ≤ C_{R,T} [ ‖u_tt(0)‖²_{−1} + ‖∇u_t(0)‖² ] for t ∈ [0, T],

where c_0 = 0 in the non-supercritical case. This formula demonstrates the preservation of some smoothness. To obtain (11) we multiply (41) by t^α. This gives us the relation

d/dt (t^α Ψ_*) + α_R t^α ‖v_t‖² ≤ C_R [ 1 + ‖∇u_t‖² ] [ t^α Ψ_* ] + α t^{α−1} b_R [ ‖A^{−1/2}v_t‖² + ‖∇v‖² ].   (42)

One can see that

t^{α−1}‖∇v‖² ≤ 1 + t^{2(α−1)}‖∇u_t‖²‖∇v‖² ≤ C_T [ 1 + ‖∇u_t‖² (t^α Ψ_*) ],  t ∈ [0, T],

provided α ≥ 2. We also have that ‖A^{−1/2}v_t‖² ≤ C ‖v_t‖^δ ‖A^{−m}u_tt‖^{2−δ} for any m ≥ 1 with δ = δ(m) ∈ [1, 2). Since

A^{−m}u_tt = σ(‖∇u‖²)A^{−m+1}u_t + φ(‖∇u‖²)A^{−m+1}u − A^{−m}(f(u) − h),

one can see that ‖A^{−m}u_tt‖ ≤ C [ R + ∫_Ω |f(u)| dx ] ≤ C̃_R for m ≥ max{1, d/2}. Therefore

t^{α−1}‖A^{−1/2}v_t‖² ≤ C_δ t^{α−1}‖v_t‖^δ ≤ ε t^α ‖v_t‖² + C_{R,T,δ,ε},  t ∈ [0, T],

provided 2(α − 1)/δ ≥ α. Thus from (42) we have that

d/dt (t^α Ψ_*) ≤ C_{R,T} + C_{R,T} [ 1 + ‖∇u_t‖² ] [ t^α Ψ_* ].
This implies (11) with some β > 0.

Now we prove that the function t → (u(t); u_t(t)) is (strongly) continuous in H = [H¹_0(Ω) ∩ L^{p+1}(Ω)] × L²(Ω) and establish the energy relation (12). We concentrate on the supercritical case only (the other cases are much simpler). We first note that the function t → (u(t); u_t(t)) is weakly continuous in H for every t ≥ 0, and t → u(t) is strongly continuous in H¹_0(Ω) for t ≥ 0. Moreover, (11) implies that t → (u(t); u_t(t)) is continuous in H¹_0(Ω) × L²(Ω) at every point t_0 > 0. Let us prove that t → ‖u(t)‖^{p+1}_{L^{p+1}(Ω)} is continuous at t_0 > 0. From (11) and from the energy inequality for weak solutions we have that

∫_a^b ∫_Ω |u|^{p−1}(|u|² + |u_t|²) dx dt ≤ C_{a,b} for all 0 < a < b ≤ T.   (43)
On smooth functions we also have that
d dt u(t) p+1 L p+1 (Ω) = (p + 1) Ω |u| p u t dx ≤ p + 1 2 Ω |u| p−1 (|u| 2 + |u t | 2 )dx.
Therefore by (43) for t 2 > t 1 > a we have that
u(t 2 ) p+1 L p+1 (Ω) − u(t 1 ) p+1 L p+1 (Ω) ≤ p + 1 2 t 2 t 1 Ω |u| p−1 (|u| 2 + |u t | 2 )dxdt → 0 as t 2 − t 1 → 0.
Thus the function t → u(t) p+1 L p+1 (Ω) is continuous for t > 0. Since u(t) is weakly continuous in L p+1 (Ω) for t > 0 and L p+1 (Ω) is uniformly convex, we conclude that u(t) is norm-continuous in L p+1 (Ω) at every point t 0 > 0.
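The last step uses the Radon-Riesz property of uniformly convex spaces; spelled out in our setting (a standard fact, recorded here for convenience):

```latex
\[
  u(t_n)\rightharpoonup u(t_0)\ \text{in } L^{p+1}(\Omega)
  \quad\text{and}\quad
  \|u(t_n)\|_{L^{p+1}(\Omega)}\to\|u(t_0)\|_{L^{p+1}(\Omega)}
  \;\Longrightarrow\;
  u(t_n)\to u(t_0)\ \text{strongly in } L^{p+1}(\Omega).
\]
```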
In the next step we establish energy relation (12) for every t > s > 0. For this we note that by (11) equation (1) is satisfied on any interval [a, b], 0 < a < b ≤ T , as an equality in the space H −1 (Ω) + L 1+1/p (Ω). Moreover, one can see that f (u)u t ∈ L 1 ([a, b]×Ω). This allows us to multiply equation (1) by u t and prove (12) for t ≥ s > 0.
To prove energy relation (12) for s = 0 we note that it follows from (12) for t ≥ s > 0 that the limit of E(u(s), u t (s)) as s → 0 exists and
E * ≡ lim s→0 E(u(s), u t (s)) = E(u(t), u t (t)) + t 0 σ( ∇u(τ ) 2 ) ∇u t (τ ) 2 dτ.
Since u(t) is continuous in H 1 0 (Ω) on [0, +∞), we conclude that there is a sequence {s n }, s n → 0, such that u(x, s n ) → u 0 (x) almost everywhere. Since F (u) ≥ −c for all u ∈ R, from Fatou's lemma we have that
Ω F (u 0 (x))dx ≤ lim inf s→0 Ω F (u(x, s))dx.
The property of weak continuity of u t (t) at zero implies that u 1 2 ≤ lim inf s→0 u t (s) 2 . Thus we arrive at the relation E(u 0 , u 1 ) ≤ E * . Therefore from the energy inequality for weak solutions we obtain (12) for all t ≥ s ≥ 0.
Now we conclude the proof of strong continuity of t → (u(t); u t (t)) in H at t = 0. From the continuity of t → E(u(t), ∂ t u(t)) and the fact that u(t) → u 0 in H 1 0 (Ω) as t → 0 one can see by contradiction that
lim t→0 u t (t) 2 = u 1 2 , lim t→0 Ω F (u(x, t))dx = Ω F (u 0 (x))dx.
The first relation implies that u t (t) is continuous in L 2 (Ω) at t = 0. It follows from Assumption 1.1 that |u(x, t)| p+1 ≤ C 1 F (u(x, t)) + C 2 for almost all x ∈ Ω, t > 0.
We also have that |u(x, t)| p+1 → |u 0 (x)| p+1 almost everywhere along some sequence as t → 0. Therefore from the Lebesgue dominated convergence theorem we conclude that
u(t) p+1 L p+1 (Ω) → u 0 p+1 L p+1 (Ω) as t → 0
along a subsequence. Using again uniform convexity of the space L p+1 (Ω) we conclude that u(t) is strongly continuous in L p+1 (Ω). The proof of Theorem 2.2 is complete.
Remark 2.6
We do not know how to avoid the assumption p < p * * = (d + 4)/(d − 4) + (which arises in dimensions d greater than 4) in the proof of well-posedness. The point is that we cannot use smoother multipliers like A −2l z t and A −2l z to achieve the goal because the term ∇z 2 enters the picture in the estimate for G. If we use the multipliers A −2l z t and z in the proof of uniqueness of solutions, then we get a problem with the corresponding two-sided estimate for the analog of the function Ψ(t) given by (38).
As for the existence of weak solutions without the requirement p < p * * in the case d ≥ 5, we note that the standard a priori estimates for u N (t) (see (19), (20) and (24)) can also be easily obtained in this case. The main difficulty in this situation is the limit transition in the nonlocal terms φ( ∇u N (t) 2 ) and σ( ∇u N (t) 2 ). To do this we can apply the same procedure as in [5] with σ = const, f (u) ≡ 0. We do not provide details because we do not know how to establish uniqueness in this case.
Remark 2.7 In addition to Assumption 1.1 assume that either Φ(s) ≡ s 0 φ(ξ)dξ → +∞ as s → +∞ and µ f > 0, (44) or else
µ φ := lim inf s→+∞ φ(s) > 0 and µ φ λ 1 + µ f > 0,(45)
where µ f is defined by (4) and λ 1 is the first eigenvalue of the minus Laplace operator in Ω with the Dirichlet boundary conditions (if µ φ = +∞, then µ f > −∞ can be arbitrary). In this case it is easy to see that (18) holds with η = ν = 0. Therefore the energy relation in (12) yields
sup t∈R + E 0 + (u(t), u t (t)) ≤ C R provided E 0 + (u 0 , u 1 ) ≤ R,(46)
where R > 0 is arbitrary and E 0 + is defined by (16) with η = 0. Now using either (44) or (45) we can conclude from (46) that
sup t∈R + ∇u(t) ≤ C R and inf t∈R + σ( ∇u(t) 2 ) ≥ σ R > 0.(47)
Therefore under the conditions above the energy relation in (12) along with (46) implies that
sup t∈R + E 0 + (u(t), u t (t)) + ∞ 0 ∇u t (τ ) 2 dτ ≤ C R(48)
for any initial data such that E 0 + (u 0 , u 1 ) ≤ R. We note that in the case considered the energy type function E 0 + is topologically equivalent to the norm on H in the sense that E 0 + (u 0 , u 1 ) ≤ R for some R > 0 if and only if (u 0 ; u 1 ) H ≤ R * for some R * > 0.
To describe continuity properties of S(t) it is convenient to introduce the following notion.
Definition 3.1 (Partially strong topology) A sequence {(u n 0 ; u n 1 )} ⊂ H is said to be partially strongly convergent to (u 0 ; u 1 ) ∈ H if u n 0 → u 0 strongly in H 1 0 (Ω), u n 0 → u 0 weakly in L p+1 (Ω) and u n 1 → u 1 strongly in L 2 (Ω) as n → ∞ (in the case when d ≤ 2 we take 1 < p < ∞ arbitrary).
It is obvious that the partially strong convergence becomes strong in the non-supercritical case (H 1 0 (Ω) ⊂ L p+1 (Ω)).
S(t)y 1 − S(t)y 2 H ≤ a R,T y 1 − y 2 H , t ∈ [0, T ],
for all y 1 , y 2 ∈ H = H 1 0 (Ω) × L 2 (Ω) such that y i H ≤ R. Thus, in this case S(t) is a locally Lipschitz continuous mapping in H with respect to the strong topology.
Proof. Let (u n 0 ; u n 1 ) → (u 0 ; u 1 ) in H as n → ∞. From the energy relation we have that
lim n→∞ E(u n (t), u n t (t)) + t 0 σ( ∇u n (τ ) 2 ) ∇u n t (τ ) 2 dτ = lim n→∞ E(u n 0 , u n 1 ) = E(u 0 , u 1 ) = E(u(t), u t (t)) + t 0 σ( ∇u(τ ) 2 ) ∇u t (τ ) 2 dτ, (50)
where u n (t) and u(t) are weak solutions with initial data (u n 0 ; u n 1 ) and (u 0 ; u 1 ). Using (13) and the lower semicontinuity of the norm with respect to weak convergence one can see from (50) that u n (t) → u(t) in H 1 0 (Ω) and also lim n→∞
1 2 u n t (t) 2 + Ω F (u n (x, t))dx = 1 2 u t (t) 2 + Ω F (u(x, t))dx.
As in the proof of the strong time continuity of weak solutions in Theorem 2.2 this allows us to obtain the strong continuity with respect to initial data. Now we establish the additional continuity properties stated in (A) and (B).

(A) This easily follows from the uniform boundedness of u n t (t) and u n (t) L p+1 (Ω) on each interval [0, T ] (which implies the corresponding weak compactness) and from the Lipschitz-type estimate in (13) for the difference of two solutions. We also use the fact that by (11) ∇u n t (t) is uniformly bounded for each t > 0.
(B) Let S(t)y i = (u i (t); u i t (t)), i = 1, 2. Then in the non-supercritical case we have (31). Therefore using (10) and Lemma 2.3 we obtain that
d dt (z, z t ) + 1 4 σ 12 (t) ∇z 2 ≤ z t 2 + C R,T 1 + ∇u 1 t + ∇u 2 t ∇z 2 ,
where z = u 1 − u 2 and σ 12 (t) is defined in Lemma 2.3.
In the case considered we can multiply equation (28) by z t and obtain that 1 2
d dt z t 2 + 1 2 σ 12 (t) ∇z t 2 + G(t) = − 1 2 φ 12 (t)(∇z, ∇z t ) ≤ C R,T ∇z t ∇z(51)
Here above
G(t) ≡ (G(u 1 , u 2 ; t), z t ) = H 1 (t) + H 2 (t) + H 3 (t),(52)
where
H 1 (t) = 1 2 [σ 1 (t) − σ 2 (t)](∇(u 1 t + u 2 t ), ∇z t ),H 2 (t) = φ 12 (t)(∇(u 1 + u 2 ), ∇z)(∇(u 1 + u 2 ), ∇z t )
with φ 12 (t) given by (27), and H 3 (t) = (f (u 1 ) − f (u 2 ), z t ). Using these representations one can see that
|(G(u 1 , u 2 ; t), z t )| ≤ C R,T (1 + ∇u 1 t + ∇u 2 t ) ∇z t ∇z ≤ ε ∇z t 2 + C R,T,ε (1 + ∇u 1 t 2 + ∇u 2 t 2 ) ∇z 2 .
for any ε > 0. Therefore the function
V (t) = 1 2 z t 2 + η (z, z t ) + 1 4 σ 12 (t) ∇z 2
for η > 0 small enough satisfies the relations
a R,T z t 2 + ∇z 2 ≤ V (t) ≤ b R,T z t 2 + ∇z 2 and d dt V (t) ≤ c R,T (1 + ∇u 1 t 2 + ∇u 2 t 2 )V (t)
with positive constants a R,T , b R,T and c R,T . Thus Gronwall's lemma and the finiteness of the dissipation integral in (10) imply the desired conclusion.
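For the reader's convenience, the Gronwall step can be written out explicitly (a routine computation, not part of the original text): setting c(t) = c_{R,T}(1 + ‖∇u¹_t(t)‖² + ‖∇u²_t(t)‖²), the differential inequality for V yields

```latex
\[
  V(t)\;\le\; V(0)\,\exp\!\Big(\int_0^t c(\tau)\,d\tau\Big),
\]
```

and the integral in the exponent is bounded on [0, T ] by the finiteness of the dissipation integral in (10); together with the two-sided bound on V this gives the Lipschitz estimate in part (B).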
Dissipativity
Now we establish some dissipativity properties of the semigroup S(t). For this we need the following hypothesis. Then there exists R * > 0 such that for any R > 0 we can find t R ≥ 0 such that
(u(t); u t (t)) H ≤ R * for all t ≥ t R ,
where u(t) is a solution to (1) with initial data (u 0 ; u 1 ) ∈ H such that (u 0 ; u 1 ) H ≤ R. In particular, the evolution semigroup S(t) is dissipative in H and
B * = {(u 0 ; u 1 ) ∈ H : (u 0 ; u 1 ) H ≤ R * } is an absorbing set.(54)
Proof. Let u(t) be a solution to (1) with initial data possessing the property (u 0 ; u 1 ) H ≤ R.
Multiplying equation (1) by u we obtain that
d dt (u, u t ) + 1 2 Σ( ∇u 2 ) − u t 2 + φ( ∇u 2 ) ∇u 2 + (f (u), u) − (h, u) = 0,
where Σ(s) = s 0 σ(ξ)dξ. Therefore using the energy relation in (12) for the function W (t) = W η,0 (u(t), u t (t)) with W η,ν given by (17) we obtain that
d dt W (t) + σ( ∇u 2 ) ∇u t 2 − η u t 2 + ηφ( ∇u 2 ) ∇u 2 + η(f (u), u) − η(h, u) = 0.
Since (53) implies (44), we have (47). By (4) and (6) we have that
(u, f (u)) ≥ d 0 u p+1 L p+1 (Ω) + d 1 (µ f − δ) u 2 − d 2 (δ), ∀ δ > 0,
where d 0 > 0, d 1 = 0 in the supercritical case and d 0 = 0, d 1 = 1 in other cases. In both cases (either (45) or (53)) this yields
d dt W (t) + (σ R − η) u t 2 + ηc 0 φ( ∇u 2 ) ∇u 2 + ηd 0 2 u p+1 L p+1 (Ω) + ηc 1 u 2 ≤ ηc 2
with positive c i independent of R and d 0 > 0 in the supercritical case only. Thus there exist constants a 0 , a 1 > 0 independent of R and also 0 < η R ≤ 1 such that
d dt W η,0 (u(t), u t (t)) + ηa 0 u t 2 + φ( ∇u 2 ) ∇u 2 + d 0 u p+1 L p+1 (Ω) + u 2 ≤ ηa 1 ,
for all initial data (u 0 ; u 1 ) ∈ H such that (u 0 ; u 1 ) H ≤ R and for each 0 < η ≤ η R . Moreover, for this choice of η we have relation (18) with ν = 0 and a(η) ≥ a(0). Therefore the "barrier" method (see, e.g., [...]) yields the desired conclusion.

Remark 3.6 Let B 0 = [∪ t≥1+t * S(t)B * ] ps , where B * is given by (54), t * ≥ 0 is chosen such that S(t)B * ⊂ B * for t ≥ t * and [·] ps denotes the closure in the partially strong topology. By the standard argument (see, e.g., [44]) one can see that B 0 is a closed forward invariant bounded absorbing set which lies in B * . Moreover, by (11) the set B 0 is bounded in
H 1 0 (Ω) × H 1 0 (Ω).
For a strictly positive stiffness coefficient we can also prove a dissipativity property in the space H * = (H 2 ∩ H 1 0 )(Ω) × L 2 (Ω) 4 . Indeed, we have the following assertion.
Proposition 3.7 In addition to the hypotheses of Proposition 3.5 we assume that φ(s) is strictly positive (i.e., φ(s) ≥ φ 0 > 0 for all s ∈ R + ) and f ′ (s) ≥ −c for all s ∈ R in the case when (5) holds with p = p * . Let u(t) be a solution to (1) with initial data (u 0 ; u 1 ) ∈ H such that u 0 ∈ H 2 (Ω) and (u 0 ; u 1 ) H ≤ R for some R. Then there exist B > 0 and γ > 0 independent of R and C R > 0 such that
∆u(t) 2 ≤ C R (1 + ∆u 0 2 )e −γt + B for all t ≥ 0.(55)
Proof. By Proposition 3.5 we have that (u(t); u t (t)) H ≤ R * for all t ≥ t R . Therefore it follows from (21) that d dt
χ(t) + φ 0 2 ∆u(t) 2 ≤ ∇u t (t) 2 + C R * ∇u t (t) 2 ∆u(t) 2 + C R * for all t ≥ t R ,
where χ(t) = −(u t (t), ∆u(t)) + σ( ∇u(t) 2 ) ∆u(t) 2 /2. One can see that
a 1 ∆u(t) 2 − a 2 ≤ χ(t) ≤ a 3 ∆u(t) 2 + a 4 for all t ≥ t R ,(56)
where a i = a i (R * ) are positive constants. Therefore using the finiteness of the dissipation integral ∞ t R ∇u t (t) 2 dt < C R * we can conclude that
χ(t) ≤ C R |χ(t R )|e −γ(t−t R ) + C R * for all t ≥ t R .
Thus (56) and (15) yield (55).
Remark 3.8 Using (15) one can show that the evolution operator S(t) generated by (1) maps the space H * = (H 2 ∩ H 1 0 )(Ω) × L 2 (Ω) into itself and is weakly continuous with respect to t and the initial data. Therefore under the hypotheses of Proposition 3.7 by [2, Theorem 1, Sect.II.2] S(t) possesses a weak global attractor in H * . Unfortunately we cannot derive from Proposition 3.5 a similar result in the space H because we cannot prove that S(t) is a weakly closed mapping in H (a mapping S : H → H is said to be weakly closed if the weak convergences u n → u and Su n → v imply Su = v). Below we prove the existence of a global attractor in H under additional hypotheses concerning the stiffness coefficient φ.
Asymptotic compactness
In this section we prove several properties of asymptotic compactness of the semigroup S(t).
We start with the following theorem.
Theorem 3.9 Let Assumptions 1.1 and 3.4 be in force. Assume also that φ(s) is strictly positive (i.e., φ(s) > 0 for all s ∈ R + ) and f ′ (s) ≥ −c for all s ∈ R in the non-supercritical case (the bounds in (6) are not valid). Then there exist a bounded set K in the space H 1 = (H 2 ∩H 1 0 )(Ω)× H 1 0 (Ω) and constants C, γ > 0 such that
sup { dist H 1 0 (Ω)×H 1 0 (Ω) (S(t)y, K ) : y ∈ B } ≤ Ce −γ(t−t B ) , t ≥ t B ,(57)
for any bounded set B from H. Moreover, we have that K ⊂ B 0 , where B 0 is the positively invariant set constructed in Remark 3.6.
Proof. We use a splitting method relying on the idea presented in [38] (see also [23]). We first note that it is sufficient to prove (57) for B = B 0 , where B 0 ⊂ H ∩ (H 1 0 × H 1 0 )(Ω) is the invariant absorbing set constructed in Remark 3.6.
From (11) and (48) we obviously have that
∇u(t) 2 + ∇u t (t) 2 + t+1 t u tt (τ ) 2 dτ + ∞ 0 ∇u t (τ ) 2 dτ ≤ C B 0 , t ≥ 0,(58)
for any solution u(t) with initial data (u 0 ; u 1 ) from B 0 . Thus we need only to show that there exists a ball B = {u ∈ (H 2 ∩ H 1 0 )(Ω) : ∆u ≤ ρ} which attracts in H 1 0 (Ω) any solution u(t) satisfying (58) with a uniform exponential rate.
We denote σ(t) = σ( ∇u(t) 2 ) and φ(t) = φ( ∇u(t) 2 ). Since both σ and φ are strictly positive, we have that
0 < c 1 ≤ σ(t), φ(t) ≤ c 2 , t ≥ 0,
where the constants c 1 and c 2 depend only on the size of the absorbing set B 0 . Let ν > 0 be a parameter (which we choose large enough). Assume that w(t) solves the problem
−σ(t)∆w t − φ(t)∆w + νw + f (w) = h u (t) ≡ −u tt + νu + h(x), x ∈ Ω, t > 0, w| ∂Ω = 0, w(0) = 0.(59)
Then one can see that v(t) = w(t) − u(t) satisfies the equation
−σ(t)∆v t − φ(t)∆v + νv + f (w + v) − f (w) = 0, x ∈ Ω, t > 0, v| ∂Ω = 0, v(0) = u 0 .(60)
As in the proof of Proposition 3.7 using the multiplier −∆w in (59) one can see that
1 2 d dt σ(t) ∆w(t) 2 + φ(t) ∆w(t) 2 ≤ ε + C ε ∇u t (t) 2 ∆w(t) 2 + C ε h u (t) 2
for all t > 0. Therefore using a Gronwall-type argument and the bounds in (58) we obtain that
∆w(t) 2 ≤ C t 0 e −γ(t−τ ) h u (τ ) 2 dτ ≤ C B 0 , ∀ t ≥ 0.(61)
where C B 0 > 0 does not depend on t. Multiplying (60) by v in a similar way we obtain
1 2 d dt σ(t) ∇v(t) 2 + φ(t) ∇v(t) 2 ≤ ε + C ε ∇u t (t) 2 ∇v(t) 2 , t ≥ 0, which implies that ∇v(t) 2 ≤ C ∇u(0) 2 e −2γt , t ≥ 0.(62)
Let B = {u ∈ H 2 (Ω) ∩ H 1 0 (Ω) : ∆u 2 ≤ C B 0 }, where C B 0 is the constant from (61). It follows from (61) and (62) that
dist H 1 0 (Ω) (u(t), B) = inf b∈B w(t) + v(t) − b 1 ≤ v(t) 1 ≤ Ce −γt , t ≥ 0.(63)
This implies the existence of the set K desired in the statement of Theorem 3.9.
Now we consider the set B 0 defined in Remark 3.6 as a topological space equipped with the partially strong topology (see Definition 3.1). Since B 0 is bounded in H ∩ (H 1 0 × H 1 0 )(Ω), this topology can be defined by the metric
R(y, y * ) = u 0 − u * 0 1 + u 1 − u * 1 + ∞ n=1 2 −n |(u 0 − u * 0 , g n )| 1 + |(u 0 − u * 0 , g n )|(64)
for y = (u 0 ; u 1 ) and y * = (u * 0 ; u * 1 ) from B 0 , where {g n } is a sequence in L (p+1)/p (Ω) ∩ H −1 (Ω) such that g n −1 = 1 and Span{g n : n ∈ N} is dense in L (p+1)/p (Ω).

Corollary 3.10 Let the hypotheses of Theorem 3.9 be in force and let K and B 0 be the same sets as in Theorem 3.9. Then there exist C, γ > 0 such that
sup { inf z∈K R(S(t)y, z) : y ∈ B 0 } ≤ Ce −γt for all t ≥ 0.(65)
Proof. As in the proof of Theorem 3.9 using the splitting given by (59) and (60) we have that
inf z∈K R(S(t)y, z) ≤ v(t) 1 + ∞ n=1 2 −n |(v(t), g n )| 1 + |(v(t), g n )| ≤ v(t) 1 + 2 −N +1 + N n=1 2 −n |(v(t), g n )| 1 + |(v(t), g n )| ≤ v(t) 1 1 + N n=1 2 −n g n −1 + 2 −N +1 ≤ 2 v(t) 1 + 2 −N +1
for every N ∈ N, where S(t)y = (u(t); u t (t)) with y = (u 0 ; u 1 ) ∈ B 0 , and v solves (60).
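As a side illustration (not part of the proof), the metric R in (64) is a standard device for metrizing the weak part of the convergence on bounded sets; a toy numerical version, with the pairings (u_0 − u_0^*, g_n) supplied as plain numbers, might look as follows.

```python
def partially_strong_metric(u0_diff_h1, u1_diff_l2, pairings):
    """Toy version of the metric R in (64).

    u0_diff_h1 : the strong part ||u_0 - u_0^*||_1 (H^1_0 norm of the difference)
    u1_diff_l2 : the strong part ||u_1 - u_1^*|| (L^2 norm of the difference)
    pairings   : the values (u_0 - u_0^*, g_n) against a fixed sequence {g_n}
                 of normalized functionals (the weak part of the topology)
    """
    # Each summand is < 2^{-(n+1)}, so the series converges for any input.
    weak_part = sum(
        2.0 ** (-(n + 1)) * abs(p) / (1.0 + abs(p))
        for n, p in enumerate(pairings)
    )
    return u0_diff_h1 + u1_diff_l2 + weak_part

# The weak part stays below 1, however large the pairings are.
d = partially_strong_metric(0.5, 0.25, [3.0, -1.0, 0.0, 7.0])
```

This mirrors the tail estimate 2^{-N+1} used in the proof of Corollary 3.10: truncating the series after N terms changes the metric by less than 2^{-N}.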
Global attractor in partially strong topology
We recall the notion of a global attractor and some dynamical characteristics for the semigroup S(t) which depend on a choice of the topology in the phase space (see, e.g., [2,9,21,44] for the general theory). A bounded set A ⊂ H is said to be a global partially strong attractor for S(t) if (i) A is closed with respect to the partially strong topology (see Definition 3.1), (ii) A is strictly invariant, and (iii) A uniformly attracts all bounded sets in this topology.

The fractal dimension dim X f M of a compact set M in a complete metric space X is defined as
dim X f M = lim sup ε→0 ln N (M, ε) ln(1/ε) ,
where N (M, ε) is the minimal number of closed sets in X of diameter 2ε which cover M . We also recall (see, e.g., [2]) that the unstable set M + (N ) emanating from some set N ⊂ H is a subset of H such that for each z ∈ M + (N ) there exists a full trajectory {y(t) : t ∈ R} satisfying y(0) = z and dist H (y(t), N ) → 0 as t → −∞.
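As a numerical aside (not used in the proofs), the box-counting form of this definition translates directly into an estimator: approximate N(M, ε) by the number of grid cells of side ε that meet the set, and read off the slope of log N against log(1/ε). The sketch below, for a point cloud in the plane, is a generic illustration rather than anything specific to the attractor A.

```python
import math

def box_counting_dimension(points, epsilons):
    """Estimate the fractal dimension of a planar point cloud.

    N(M, eps) is approximated by the number of distinct grid cells of
    side eps containing at least one point; the dimension is the
    least-squares slope of log N(M, eps) versus log(1/eps).
    """
    xs, ys = [], []
    for eps in epsilons:
        cells = {(math.floor(x / eps), math.floor(y / eps)) for (x, y) in points}
        xs.append(math.log(1.0 / eps))
        ys.append(math.log(len(cells)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

# A line segment in the plane has fractal dimension 1.
segment = [(t / 20000.0, 0.5 * t / 20000.0) for t in range(20001)]
dim = box_counting_dimension(segment, [0.1, 0.05, 0.025, 0.0125])
```

For the segment the estimated slope is close to 1, as expected for a smooth curve.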
Our first main result in this section is the following theorem.
∆u(t) 2 + ∇u t (t) 2 + u tt (t) 2 −1 + t+1 t u tt (τ ) 2 dτ ≤ C A(66)
for any full trajectory γ = {(u(t); u t (t)) : t ∈ R} from the attractor A. We also have that
A = M + (N ), where N = {(u; 0) ∈ H : φ( A 1/2 u 2 )Au + f (u) = h}.(67)
Proof. Since B 0 is an absorbing positively invariant set (see Remark 3.6), to prove the theorem it is sufficient to consider the restriction of S(t) on the metric space B 0 endowed with the metric R given by (64). By Corollary 3.10 the dynamical system (B 0 , S(t)) is asymptotically compact. Thus (see, e.g., [2,8,44]) this system possesses a compact (with respect to the metric R) global attractor A which belongs to K . It is clear that A is a global partially strong attractor for (H, S(t)) with the regularity properties stated in (66). The attractor A is a strictly invariant compact set in H. By Remark 3.3 the semigroup S(t) is gradient on A. Therefore the standard results on gradient systems with compact attractors (see, e.g., [2,9,44]) yield (67). Thus the proof of Theorem 3.11 is complete.
where c 0 = 0 in the non-supercritical case. Now, as in the proof of Theorem 2.2, we use the multiplier A −1 z t . However, now our consideration of the term |(G(u 1 , u 2 ; t), A −1 z t )| of the form (35) involves the additional positivity-type requirement imposed on φ.
Using the inequality A −1/2 z t 2 ≤ η z t 2 + C η A −l z t 2 for any η > 0 and l ≥ 1/2, one can see that
|G 1 (t)| ≤ ε z t 2 + C R,ε ∇u 1 t 2 + ∇u 2 t 2 ∇z 2
and also, involving (69),
|G 2 (t)| ≤ ε z t 2 + C R,ε A −l z t 2 + z 2
for any ε > 0 and for every l ≥ 1/2. Therefore from (36) we obtain that
|(G(u 1 , u 2 ; t), A −1 z t )| ≤ C R,ε ∇u 1 t 2 + ∇u 2 t 2 ∇z 2 + z 2 + A −l z t 2 + ε z t 2 + c 0 Ω (|u 1 | p−1 + |u 2 | p−1 )|z| 2 dx
for any ε > 0, where c 0 = 0 in the non-supercritical case. Consequently by (34) and (70) the function Ψ(t) given by (38) satisfies the relation
dΨ dt + η 2 φ 12 (t) · ∇z 2 + 1 2 σ 12 (t) − η − ε z t 2 + c 0 (η − ε) Ω (|u 1 | p−1 + |u 2 | p−1 )|z| 2 dx ≤ C ε (R) d 12 (t) ∇z 2 + A −l z t 2 + z 2 ,
where d 12 (t) = ∇u 1 t (t) 2 + ∇u 2 t (t) 2 . Therefore after an appropriate choice of η and ε we have that dΨ dt
+ α 12 (t)Ψ ≤ c R A −l z t 2 + z 2 with α 12 (t) = η 2 φ 12 (t) − c R d 12 (t),
This implies that
Ψ(t) ≤ c R exp − t 0 α 12 (τ )dτ Ψ(0) + c R t 0 exp − t τ α 12 (ξ)dξ A −l z t (τ ) 2 + z(τ ) 2 dτ.
(71) Under Assumption 3.12 by Remark 2.7 we have estimate (48) which yields that
t τ α 12 (ξ)dξ ≥ ηφ R · (t − τ ) − c R t τ d 12 (ξ)dξ ≥ ηφ R · (t − τ ) − C R
for all t > τ ≥ 0 with positive φ R and C R . Thus from (71) and (39) we obtain (68).
Lemma 3.15
Let the hypotheses of Proposition 3.14 be in force. Then the difference z(t) = u 1 (t) − u 2 (t) of two weak solutions satisfies the relation
T 0 A −l z tt (τ ) 2 dτ ≤ C R z t (0) 2 −1 + ∇z(0) 2 + C R T T 0 z(τ ) 2 + A −l z t (τ ) 2 dτ (72)
for every T ≥ 1, where C R > 0 is a constant and l ≥ 3/2 is any number such that L 1 (Ω) ⊂ H −2l (Ω), i.e., l > d/4.
Proof. It follows from (28) that
A −l z tt ≤ C R ( A −l+1 z + A −l+1 z t ) + A −l G(u 1 , u 2 ; t) .
By the embedding L 1 (Ω) ⊂ H −2l (Ω) we obviously have that
A −l G(u 1 , u 2 ; t) ≤ C R A 1/2 z + C Ω |f (u 1 ) − f (u 2 )|dx ≤ C R A 1/2 z + C Ω 1 + |u 1 | p−1 + |u 2 | p−1 |z|dx.
Therefore using (13) (and also (14) in the supercritical case) we obtain that
b a A −l z tt (τ ) 2 dτ ≤ C R z t (a) 2 −1 + ∇z(a) 2
for every a < b such that b − a ≤ 1. Therefore
T 0 A −l z tt (τ ) 2 dτ ≤ [T ]−1 k=0 k+1 k A −l z tt (τ ) 2 dτ + T [T ] A −l z tt (τ ) 2 dτ ≤ C R [T ] k=0 z t (k) 2 −1 + ∇z(k) 2 ,
where [T ] denotes the integer part of T . Now we can apply the stabilizability estimate in (68) with t = k for each k and obtain (72).
Proof of Theorem 3.11
We use the idea due to Málek-Nečas [29] (see also [30] and [13]). For some T ≥ 1 which we specify later and for some l > max{d, 6}/4 we consider the space W T = u ∈ C(0, T ; H 1 0 (Ω)) : u t ∈ C(0, T ; H −1 (Ω)), u tt ∈ L 2 (0, T ; H −2l (Ω)) with the norm
|u| 2 W T = max t∈[0,T ] ∇u(t) 2 + u t (t) 2 −1 + T 0 u tt (t) 2 −2l dt.
Let A T be the set of weak solutions to (1) on the interval [0, T ] with initial data (u(0); u t (0)) from the attractor A. It is clear that A T is a closed bounded set in W T . Indeed, if a sequence of solutions u n (t) with initial data in A T is fundamental in W T , then we have that u n (0) → u 0 strongly in H 1 0 (Ω), u n (0) → u 0 weakly in L p+1 (Ω) and u n t (0) → u 1 weakly in L 2 (Ω) for some (u 0 ; u 1 ) ∈ A. By (13) and (72) this implies that u n (t) converges in W T to the solution with initial data (u 0 ; u 1 ). This yields the closedness of A T in W T . The boundedness of A T is obvious.
On A T we define the shift operator V by the formula
V : A T → A T , [V u](t) = u(T + t), t ∈ [0, T ].
It is clear that A T is strictly invariant with respect to V , i.e. V A T = A T . It follows from (13) and (72) that
|V U 1 − V U 2 | W T ≤ C T |U 1 − U 2 | W T , U 1 , U 2 ∈ A T .
By Proposition 3.14 we have that
max s∈[0,T ] z t (T + s) 2 −1 + ∇z(T + s) 2 ≤ ae −γT max s∈[0,T ] z t (s) 2 −1 + ∇z(s) 2 + b 2T 0 z(τ ) 2 + A −l z t (τ ) 2 dτ,
where a, b, γ > 0 depend on the size of the set A in H 1 = [H 2 ∩ H 1 0 ](Ω) × H 1 0 (Ω). Lemma 3.15 and Proposition 3.14 also yield that
2T T A −l z tt 2 dτ ≤ Ce −γT z t (0) 2 −1 + ∇z(0) 2 + C(1 + T ) 2T 0 z 2 + A −l z t 2 dτ.
Therefore we obtain that
|V U 1 − V U 2 | 2 W T ≤ q T |U 1 − U 2 | 2 W T + C T n 2 T (U 1 − U 2 ) + n 2 T (V U 1 − V U 2 )(73)
for every U 1 , U 2 ∈ A T , where q T = Ce −γT and the seminorm n T (U ) has the form
n 2 T (U ) ≡ T 0 u 2 + A −l u t 2 dτ for U = {u(t)} ∈ W T .
One can see that this seminorm is compact on W T . Therefore we can choose T ≥ 1 such that q T < 1 in (73) and apply Theorem 2.15 [13] to conclude that A T has a finite fractal dimension in W T . One can also see that A = {(u(t); u t (t)) t=s : u(·) ∈ A T } does not depend on s. Therefore the fractal dimension of A is finite in the space H̃ = H 1 0 (Ω) × H −1 (Ω). By an interpolation argument it follows from (66) and (13) that S(t) A is a Hölder continuous mapping from H̃ into H r for each t > 0. Since dim H̃ f A < ∞, this implies that dim Hr f A is finite.
Attractor in the energy space. Non-supercritical case
In this section we deal with the attractor in the strong topology of the energy space, which we understand in the standard sense (see, e.g., [2,9,21,44]). Namely, the global attractor of the evolution semigroup S(t) is defined as a bounded closed set A ⊂ H which is strictly invariant (S(t)A = A for all t > 0) and uniformly attracts all other bounded sets. Since H = H 1 0 (Ω) × L 2 (Ω) in the non-supercritical case, Theorem 3.9 implies the existence of a compact set in H which attracts bounded sets in the strong topology. This leads to the following assertion. (67)). Moreover, we have that dist H (S(t)y, N ) → 0 as t → ∞ for any y ∈ H.
(74)
If in addition we assume Assumption 3.12(ii), then A has a finite fractal dimension in the space H r = [H 1+r ∩ H 1 0 ](Ω) × H r (Ω) for every r < 1.
Proof. We apply Theorems 3.11 and 3.13. To obtain (74) we only note that by Remark 3.3 the semigroup S(t) is gradient on the whole space H. Thus the standard results on gradient systems (see, e.g., [2,9,44]) lead to the conclusion in (74).
Under additional hypotheses we can establish other dynamical properties of the system under consideration. We impose now the following set of requirements.
Assumption 3.17 We assume that φ ∈ C 2 (R + ) is a nondecreasing function (φ ′ (s) ≥ 0 for s ≥ 0), f ′ (s) ≥ −c for some c ≥ 0, and one of the following requirements is fulfilled:
(a) either f is subcritical: d ≤ 2 or (5) holds with p < p * ≡ (d + 2)(d − 2) −1 , d ≥ 3;
(b) or else 3 ≤ d ≤ 6, f ∈ C 2 (R) is critical, i.e., |f ′′ (u)| ≤ C 1 + |u| p * −2 , u ∈ R, p * = (d + 2)(d − 2) −1 .
Our second main result in this section is the following theorem.

and there is R > 0 such that sup γ⊂A sup t∈R ∆u(t) 2 + ∇u t (t) 2 + u tt (t) 2 ≤ R 2 .(76)
(2) There exists a fractal exponential attractor A exp in H.

implies that lim t→∞ S(t)y 1 − S(t)y 2 H = 0. Here above S(t)y i = (u i (t); ∂ t u i (t)), i = 1, 2.
We recall (see, e.g., [16] and also [9,13,14]) that a compact set A exp ⊂ H is said to be a fractal exponential attractor for the dynamical system (H, S(t)) iff A exp is a positively invariant set of finite fractal dimension in H and for every bounded set D ⊂ H there exist positive constants t D , C D and γ D such that
d X {S(t)D | A exp } ≡ sup x∈D dist H (S(t)x, A exp ) ≤ C D · e −γ D (t−t D ) , t ≥ t D .
We also mention that the notion of determining functionals goes back to the papers by Foias and Prodi [18] and by Ladyzhenskaya [25] for the 2D Navier-Stokes equations. For the further development of the theory we refer to [15] and to the survey [8]; see also the references quoted in these publications. We note that determining functionals for second order (in time) evolution equations with a nonlinear damping were considered for the first time in [10]; see also the discussion in [14, Section 8.9]. We also refer to [8] and [9, Chap.5] for a description of sets of functionals with small completeness defect.

and
d dt W (t) + c R W (t) ≤ C R d 12 (t) ∇z 2 + C z(t) 2
with positive constants. Thus the finiteness of the integral in (48) and the standard Gronwall argument imply the result in (77) in the critical case. In the subcritical case we use the same argument but for the functional E * without the term containing f ′ .
Completion of the proof of Theorem 3.16: Proposition 3.19 means that the semigroup S(t) is quasi-stable on the absorbing set B 0 defined in Remark 3.6 in the sense of Definition 7.9.2 [14]. Therefore to obtain the result on regularity stated in (75) and (76) we first apply Theorem 7.9.8 [14] which gives us that sup t∈R ∇u t (t) 2 + u tt (t) 2 ≤ C A for any trajectory γ = {(u(t); u t (t)) : t ∈ R} ⊂ A.
Applying (66) we obtain (75) and (76). By (11) any weak solution u(t) possesses the property t+1 t u tt (τ ) 2 dτ ≤ C R,T for t ∈ [0, T ], ∀ T > 0, provided (u 0 ; u 1 ) ∈ S(1)B 0 , where B 0 is the absorbing set defined in Remark 3.6. This implies that t → S(t)y is a 1/2-Hölder continuous function with values in H for every y ∈ S(1)B 0 . Therefore the existence of a fractal exponential attractor follows from Theorem 7.9.9 [14].
To prove the statement concerning determining functionals we use the same idea as in the proof of Theorem 8.9.3 [14].
The damping (σ) and the stiffness (φ) factors are C 1 functions on the semi-axis R + = [0, +∞). Moreover, σ(s) > 0 for all s ∈ R + and there exist c i ≥ 0 and η 0 ≥ 0 such that s 0 [φ(ξ) + η 0 σ(ξ)] dξ → +∞ as s → +∞ and s 0 [φ(ξ) + η 0 σ(ξ)] dξ ≥ −c 2 for s ∈ R + .
and the following properties hold: (a) if d = 1, then f is arbitrary; (b) if d = 2 then |f ′ (u)| ≤ C 1 + |u| p−1 for some p ≥ 1;
Definition 2.1 A function u(t) is said to be a weak solution to (1) on an interval [0, T ] if u ∈ L ∞ (0, T ; H 1 0 (Ω) ∩ L p+1 (Ω)), ∂ t u ∈ L ∞ (0, T ; L 2 (Ω)) ∩ L 2 (0, T ; H 1 0 (Ω))
Theorem 2.2 (Well-posedness) Let Assumption 1.1 be in force and (u 0 ; u 1 ) ∈ H. Then for every T > 0 problem (1) has a unique weak solution u(t) on [0, T ]. This solution possesses the following properties:
an evolution semigroup S(t) in the space H by the formula S(t)y = (u(t); ∂ t u(t)), where y = (u 0 ; u 1 ) ∈ H and u(t) solves (1)
Proposition 3.2 Let Assumption 1.1 be in force. Then the evolution semigroup S(t) given by (49) is a continuous mapping in H with respect to the strong topology. Moreover, (A) General case: For every t > 0 S(t) maps H into itself continuously in the partially strong topology. (B) Non-supercritical case ((6) fails): For any R > 0 and T > 0 there exists a R,T > 0 such that
Remark 3.3 One can see from the energy relation in (12) that the dynamical system generated by the semigroup S(t) is gradient on H (with respect to the strong topology), i.e., there exists a continuous functional Ψ(y) on H (called a strict Lyapunov function) possessing the properties (i) Ψ S(t)y ≤ Ψ(y) for all t ≥ 0 and y ∈ H; (ii) the equality Ψ(y) = Ψ(S(t)y) may take place for all t > 0 only if y is a stationary point of S(t). In our case the full energy E(u 0 ; u 1 ) is a strict Lyapunov function.
Assumption 3.4 We assume 3 that either (45) holds or else φ(s)s → +∞ as s → +∞ and µ f = lim inf |s|→∞ s −1 f (s) > 0. (53)

Proposition 3.5 Let Assumptions 1.1 and 3.4 be in force.
Definition 3.1) topology, (ii) A is strictly invariant (S(t)A = A for all t > 0), and (iii) A uniformly attracts in the partially strong topology all other bounded sets: for any (partially strong) vicinity O of A and for any bounded set B in H there exists t * = t * (O, B) such that S(t)B ⊂ O for all t ≥ t * .
Theorem 3.11 Let Assumptions 1.1 and 3.4 be in force. Assume also that (i) φ(s) is strictly positive (i.e., φ(s) > 0 for all s ∈ R + ) and (ii) f ′ (s) ≥ −c for all s ∈ R in the non-supercritical case (when (6) does not hold). Then the semigroup S(t) given by (49) possesses a global partially strong attractor A in the space H. Moreover, A ⊂ H 1 = [H 2 ∩ H 1 0 ](Ω) × H 1 0 (Ω) and sup t∈R
lim t→∞ sup{dist H (S(t)y, A) : y ∈ B} = 0 for any bounded set B in H.
Theorem 3.16 Let Assumptions 1.1 and 3.4 be in force. Assume also that φ(s) is strictly positive (i.e., φ(s) > 0 for all s ∈ R + ) and f ′ (s) ≥ −c for all s ∈ R in the non-supercritical case (when the bounds in(6) are not valid). Then the evolution semigroup S(t) possesses a compact global attractor A in H. This attractor A coincides with the partially strong attractor given byTheorem 3.11 and thus (i) A ⊂ H 1 = [H 2 ∩ H 1 0 ](Ω) × H 1 0 (Ω); (ii) the relation in (66) hold; (iii) A = M + (N ), where N is the set of equilibria (see
Theorem 3.18 Let Assumptions 1.1(ii), 3.12, and 3.17 be in force. Then
(1) Any trajectory γ = {(u(t); u t (t)) : t ∈ R} from the attractor A given by Theorem 3.16 possesses the properties (u; u t ; u tt ) ∈ L ∞ (R; [H 2 ∩ H 1 0 ](Ω) × H 1 0 (Ω) × L 2 (Ω)) (75)
(3) Let L = {l j : j = 1, ..., N } be a finite set of functionals on H 1 0 (Ω) and ǫ L = ǫ L (H 1 0 (Ω), L 2 (Ω)) ≡ sup u : u ∈ H 1 0 (Ω), l j (u) = 0, j = 1, ..., N, u 1 ≤ 1 be the corresponding completeness defect. Then there exists ε 0 > 0 such that under the condition ǫ L ≤ ε 0 the set L is (asymptotically) determining in the sense that the property lim t→∞ max j t+1 t |l j (u 1 (s) − u 2 (s))| 2 ds = 0
This requirement holds automatically in the supercritical case, see (6).

Under these additional conditions the properties in (2) and (3) hold automatically with η 0 = c 1 = 0.

We note that H * ⊂ H because H 2 (Ω) ⊂ L p+1 (Ω) for p < p * * .
To obtain the result on the dimension of the attractor A we need the following amplification of the requirements listed in the first part of Assumption 1.1.

Assumption 3.12. The functions σ and φ belong to C^1(R_+) and possess the properties: (i) σ(s) > 0 and φ(s) > 0 for all s ∈ R_+; (ii) λ_1 μ̄_φ + μ_f > 0, where μ̄_φ is defined in (45), μ_f is given by (4), and λ_1 is the first eigenvalue of the minus Laplace operator in Ω with the Dirichlet boundary conditions (in the supercritical case this requirement holds automatically).

The main ingredient of the proof is the following weak quasi-stability estimate.

Proposition 3.14 (Weak quasi-stability). Assume that the hypotheses of Theorem 3.13 are in force. Let u^1(t) and u^2(t) be two weak solutions such that ‖u^i(t)‖_2^2 + ‖u^i_t(t)‖_1^2 ≤ R^2 for all t ≥ 0, i = 1, 2. Then their difference z(t) = u^1(t) − u^2(t) satisfies the relation …, where a_R, b_R, γ_R are positive constants and l ≥ 1/2 can be taken arbitrary.

Proof. Our additional hypothesis on φ, and also the bounds imposed on the solutions u^i, allow us to improve the argument which led to (13). … For our case, it follows from Lemmas 2.3 and 2.5 that …

Proof of Theorem 3.16. The main ingredient of the proof is a quasi-stability property of S(t) in the energy space H, which is stated in the following assertion: …, where a_R, b_R, γ_R are positive constants.

Proof. As a starting point we consider the energy type relation (51) for the difference z (which we already used in the proof of the second part of Proposition 3.2) and estimate the term … given by (52) using the additional hypotheses imposed. One can see that … Here and below we use the fact that ‖u^i_t(t)‖^2 + ‖∇u^i(t)‖^2 ≤ C_R for all t ≥ 0 (see (48)). We also have that …, where |Ĥ_2(t)| ≤ C_R (‖∇u^1_t‖ + ‖∇u^2_t‖) ‖∇z‖^2. If f is subcritical, i.e., Assumption 3.17(a) holds, then the estimate for H_3(t) is direct: … for some δ > 0 and for any ε > 0. Therefore in the argument below we concentrate on the critical case described in Assumption 3.17(b).

In this case we have that … By the growth condition on f′′ we have that … Therefore the Hölder inequality and the Sobolev embedding H^1(Ω) ⊂ L^{p*+1}(Ω) imply that …

Now we introduce the energy type functional … From (51) and the calculations above we obviously have that …, where d_12(t) = ‖∇u^1_t(t)‖^2 + ‖∇u^2_t(t)‖^2. Therefore, using Lemma 2.3, we obtain that the function W_*(t) satisfies the relation

d/dt W_*(t) + [(1/2) σ_12(t) − ε] ‖∇z_t‖^2 − η ‖z_t‖^2 + η [(1/2) φ_12(t) ‖∇z‖^2 + φ_12(t) |(∇(u^1 + u^2), ∇z)|^2] + η ∫_0^1 ∫_Ω f′(u^2 + λ(u^1 − u^2)) |z|^2 dλ dx ≤ ε ‖∇z‖^2 + C_{R,ε} d_12(t) ‖∇z‖^2.

Therefore, if we introduce W̃(t) = W_*(t) + C ‖z(t)‖^2 with an appropriate C > 0 and with η > 0 small enough, then we obtain that

a_R (‖z_t(t)‖^2 + ‖∇z(t)‖^2) ≤ W̃(t) ≤ b_R (‖z_t(t)‖^2 + ‖∇z(t)‖^2).
An Efficient Mixture of Deep and Machine Learning Models for COVID-19 and Tuberculosis Detection Using X-Ray Images in Resource Limited Settings

Ali H. Al-Timemy ([email protected]), Biomedical Engineering Department, Al-Khwarizmi College of Engineering, University of Baghdad, Jaderiyah, Baghdad, Iraq
Rami N. Khushaba ([email protected]), Australian Centre for Field Robotics, University of Sydney, 8 Little Queen Street, Chippendale NSW 2008, Australia
Zahraa M. Mosa, Department of Physics, College of Science for Women, University of Baghdad, Baghdad, Iraq
Javier Escudero, School of Engineering, Institute for Digital Communications, The University of Edinburgh, King's Buildings, Edinburgh EH9 3JL, UK

DOI: 10.1007/978-3-030-69744-0_6
arXiv: 2007.08223 (https://arxiv.org/pdf/2007.08223v1.pdf)

Abstract. Clinicians in the frontline need to assess quickly whether a patient with symptoms indeed has COVID-19 or not. The difficulty of this task is exacerbated in low resource settings that may not have access to biotechnology tests. Furthermore, Tuberculosis (TB) remains a major health problem in several low- and middle-income countries and its common symptoms include fever, cough and tiredness, similarly to COVID-19. In order to help in the detection of COVID-19, we propose the extraction of deep features (DF) from chest X-ray images, a technology available in most hospitals, and their subsequent classification using machine learning methods that do not require large computational resources. We compiled a five-class dataset of X-ray chest images including a balanced number of COVID-19, viral pneumonia, bacterial pneumonia, TB, and healthy cases. We compared the performance of pipelines combining 14 individual state-of-the-art pre-trained deep networks for DF extraction with traditional machine learning classifiers. A pipeline consisting of ResNet-50 for DF computation and an ensemble of subspace discriminant classifiers was the best performer in the classification of the five classes, achieving a detection accuracy of 91.6 ± 2.6% (accuracy ± Confidence Interval (CI) at the 95% confidence level). Furthermore, the same pipeline achieved accuracies of 98.6 ± 1.4% and 99.9 ± 0.5% (± CI) in simpler three-class and two-class classification problems focused on distinguishing COVID-19, TB and healthy cases, and COVID-19 and healthy images, respectively. The pipeline was computationally efficient, requiring just 0.19 seconds to extract DF per X-ray image and 2 minutes to train a traditional classifier with more than 2000 images on a CPU machine. The results suggest the potential benefits of using our pipeline in the detection of COVID-19, particularly in resource-limited settings, as it relies on accessible X-rays and it can run with limited computational resources.

The final constructed dataset, named the COVID-19 five-class balanced dataset, is available from: https://drive.google.com/drive/folders/1toMymyHTy0DR_fyE7hjO3LSBGWtVoPNf?usp=sharing
Keywords: COVID-19, deep features, machine learning, pneumonia, ResNet-50, tuberculosis
Introduction
In December 2019, the world witnessed the outbreak of a new, previously unknown virus in Wuhan, China, resulting in an infectious disease known as Coronavirus Disease (COVID-19), which is caused by a recently discovered coronavirus. Coronaviruses are a large family of viruses which may cause respiratory infections ranging from the common cold to more severe diseases such as Middle East Respiratory Syndrome (MERS) and Severe Acute Respiratory Syndrome (SARS). The most common symptoms of COVID-19 include fever, cough, difficulty in breathing, and tiredness, and the disease may develop into pneumonia and potentially lead to death [1,2]. The symptoms of pulmonary diseases can be confused with one another, so COVID-19 cases may be misdiagnosed as other pulmonary diseases, such as other types of viral pneumonia, bacterial pneumonia, and Tuberculosis (TB). This complicates the doctor's task of detecting COVID-19 quickly in order to manage the disease, isolate the patient, and trace recent contacts if relevant.
In the current situation of the COVID-19 crisis, there are more than 12 million cases around the world and the numbers are rising at a rate of 200,000 new cases per day. Multiple countries are experiencing an increase in the number of patients admitted to hospital, which may compromise their healthcare systems. The situation is even more precarious in low-income countries that may have difficulties in accessing some of the most recent testing technology.
The current test used to confirm COVID-19 cases is the Reverse Transcription Polymerase Chain Reaction (RT-PCR) [3]. The accuracy of the test is around 60-70% and it takes from a few hours to 2 days to obtain the results [4]. Other tests recommended by the World Health Organization (WHO), but not in mass use yet, include the Cobas SARS-CoV-2 Qualitative assay. Due to the high number of new cases every day, the waiting time to obtain the results may be longer. Furthermore, tests are difficult to access, expensive, and unavailable to certain hospitals or regions.
TB is a very serious and deadly pulmonary disease. In 2018 alone, 10 million people had TB worldwide and it caused 1.5 million deaths. It is regarded as a major cause of death from a single infectious agent worldwide [5]. Low- and middle-income countries account for two thirds of the TB cases, including India, which has the largest number of TB cases, followed by China, Indonesia, the Philippines, Pakistan, Nigeria and Bangladesh [5]. Multidrug-Resistant (MDR) TB, a severe type of TB which requires chemotherapy and expensive toxic medications for treatment, remains a crisis and a threat to public health, and its main burden occurs in three countries: India, China and the Russian Federation. TB is a particular concern in the current COVID-19 pandemic. Both TB and COVID-19 spread by close contact between people, and the symptoms of TB, such as fever, cough and tiredness, are similar to those of COVID-19 [6]. Thus, low-income countries may face a health crisis where the COVID-19 emergency is compounded by the presence of TB. Large numbers of COVID-19 cases are reported in India (766,000 cases), Pakistan (230,000 cases) and Bangladesh (170,000 cases), which results in less attention being given to TB. In such settings, there is a need to achieve fast TB and COVID-19 detection. The COVID-19 pandemic may cause a global reduction of 25% in the detection of TB, which could lead to a 13% increase in TB deaths due to disruption to TB detection and treatment [7].

Footnotes:
1. https://www.who.int/emergencies/diseases/novel-coronavirus-2019/question-and-answers-hub/q-adetail/q-a-coronaviruses#:~:text=symptoms
2. https://www.worldometers.info/coronavirus/
3. https://www.who.int/news-room/detail/07-04-2020-who-lists-two-covid-19-tests-for-emergency-use
Radiology can help in the detection of COVID-19 through images of Computerized Tomography (CT) [3,8-10] or X-rays [4,11,12], since darkened spots and ground-glass-like opacities can be observed [3]. Thus, X-ray and CT scans can help to screen and quantify COVID-19 and aid doctors in detecting this disease quickly [12,13]. CT is a relatively complex modality and it is not available in all clinics. On the other hand, an X-ray scan is a simple modality which can be available in all hospitals and in small clinics. Hence, it has been proposed for Computer Aided Diagnosis (CAD) to aid COVID-19 detection [14-17] and TB screening [18,19].
In the healthcare field, a subbranch of artificial intelligence algorithms known as Deep Learning (DL) is being increasingly developed as a CAD tool to help doctors achieve better clinical decisions with high precision [20], achieving good performance on disease detection problems.
Similarly, DL methods can aid doctors by improving the quality of the detection of COVID-19 [4,11,12,21] and TB [18,19]. In particular, automatic screening of TB has been proposed with deep learning [18,19,22]. However, deep learning typically requires a GPU, which is not available in limited-resource settings.
A convolutional neural network (CNN), one of the DL methods, has been applied in the medical field [23-25] to tackle a variety of problems, such as COVID-19 detection with CT and X-ray images [8-10,14,15,17,26-28]. Several new CNN designs have been proposed for COVID-19 detection, such as COVID-Net [21], CovidxNet [14], CoroNet [12], DarkCovidNet [4], COVIDiagnosis-Net [29], and nCOVNet [30]. Despite this remarkable research, the main challenges with CNNs include the need for a large number of images for training the network, in addition to the long training time, even with GPU support.
On the other hand, Transfer Learning (TL) has been proposed to deal with the need for a large set of images and the long training times of CNNs. A pretrained CNN such as ResNet-50 [17], VGG-19 [17], DenseNet-201 [31] or MobileNet-v2 [32] can be utilized to learn a new task by fine-tuning its last fully connected layers. The pretrained networks are trained on the ImageNet dataset [33], which has more than a million images from 1000 classes. Despite its simplicity, this approach still requires a long training time. TL has been applied to COVID-19 detection from X-ray images [4,12,14-18,21,22,26-28,35-38], where pretrained networks such as ResNet-50 [26,35,39], VGG-19 [16,40], DenseNet-201 [41] and Xception [39] have been used.
Inspiring research has been done to tackle COVID-19 detection with X-ray images. Panwar et al. proposed nCOVNet [30] for COVID-19 detection on a 2-class dataset (142 normal and 142 COVID-19 images); a detection accuracy of 88% was obtained with a 70%/30% training and testing split. Wang and Wong [21] developed COVID-Net and validated it on a dataset of 358 COVID-19, 5,538 normal and 8,066 pneumonia images, reaching 91% sensitivity for COVID-19 cases with a 70%/30% training and testing data split; it should be noted that the images of the 3 classes were not balanced. DarkCovidNet was proposed in [4] to detect COVID-19 on a 3-class dataset including 125 COVID-19, 500 pneumonia and 500 normal images, with an accuracy of 87.2%, utilizing 5-fold cross validation to avoid overfitting; however, only 125 COVID-19 cases were included in their model. Khan et al. [12] proposed CoroNet for COVID-19 detection from X-ray images.

Deep Feature (DF) extraction is a process in which features are acquired directly from the last fully connected layer of a pretrained deep learning network, e.g., AlexNet, ResNet-50 or VGG-19, without the need to re-train the network. The main advantage of employing a pretrained model is the huge saving in the computational time required to train these models from scratch, and hence the possibility of running these models on a CPU machine without the need for expensive GPU-enabled computational power. An initial attempt by Sethy et al. [35] utilized 127 images of 3 classes: COVID-19, pneumonia (viral and bacterial) and normal; an 80%/20% training and testing data split was used to evaluate the model. An accuracy of 95.33% was obtained with ResNet-50 deep features and a Support Vector Machine (SVM) classifier. However, it is not known how the performance of ResNet-50 compares to other pretrained DL models, especially given that the dataset contained a small number of images. In [36], ResNet-152 and XGBoost classifiers were evaluated on a 3-class dataset (1341 normal images, 1345 pneumonia images and only 62 COVID-19 images) with 30% holdout. Thus, although DF have been proposed for COVID-19 detection in these two studies, they have not been investigated in depth.
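The deep-feature idea described above, i.e. forwarding an input through every layer of a pretrained network except the final classification layer and keeping the late-layer activations as a feature vector, can be illustrated with a minimal, dependency-free Python sketch. The two-layer toy network and its weights below are hypothetical stand-ins for a real pretrained model such as ResNet-50 (whose feature layer is 1000-dimensional); this is an illustration, not the authors' MATLAB implementation.

```python
def make_toy_network():
    # Hypothetical 2-layer "pretrained network": a hidden layer mapping
    # 4 inputs to 3 features, and a final classification layer mapping
    # those 3 features to 2 class scores.
    hidden_w = [[0.2, -0.1, 0.5, 0.3],
                [0.1, 0.4, -0.2, 0.0],
                [-0.3, 0.2, 0.1, 0.6]]
    out_w = [[1.0, -1.0, 0.5],
             [-0.5, 0.8, 0.2]]

    def dot(w, x):
        # Plain matrix-vector product (one output per weight row)
        return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

    return [lambda x: dot(hidden_w, x),   # feature layer
            lambda x: dot(out_w, x)]      # classification layer


def extract_deep_features(layers, x):
    # Deep features: forward pass through every layer except the last
    # (classification) layer; the remaining activations are the features.
    for layer in layers[:-1]:
        x = layer(x)
    return x


layers = make_toy_network()
features = extract_deep_features(layers, [1.0, 0.0, 0.0, 0.0])
```

In practice, `layers` would be the layer stack of a pretrained CNN and `x` a preprocessed X-ray image; the returned activations then feed a traditional classifier.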
The main challenges in the previous literature are related to: 1) the small number of COVID-19 images included in the datasets, in comparison to the other diseases considered, thus leading to unbalanced datasets; 2) the need for GPU resources to train the newly proposed CNNs or to fine-tune the pretrained CNNs; and 3) the combination of COVID-19 and TB, which, to the best of our knowledge, has not been investigated previously. Moreover, the class separability of COVID-19 DF is not known in problems where a larger number of diseases is considered.
In this paper, we harness the power of the rich feature representations of 14 state-of-the-art pretrained deep networks and the simplicity of machine learning classifiers for COVID-19 detection on a CPU machine, with the whole training and testing taking just under ten minutes, which makes the proposed CAD suitable for low computational power settings. In addition, the proposed DF pipeline is evaluated on a balanced set of 2186 X-ray images of 5 classes: COVID-19, normal, bacterial pneumonia, viral pneumonia and TB.
The main contributions of the current study are:
1) We constructed a five-class COVID-19 dataset, named the COVID-19 five-class balanced dataset, including a large number of COVID-19 and tuberculosis images, a combination which, to the best of our knowledge, has not been investigated before.
2) Pipelines of features from 14 individual state-of-the-art pretrained deep networks combined with machine learning classifiers are investigated using a five-fold cross validation scheme to avoid overfitting, without the need to train the pretrained networks.
3) The proposed pipeline can run on a CPU machine, which makes it simple, efficient, and suitable for low-middle income countries.
4) Five-class separability analysis of COVID-19 DF using the Silhouette clustering evaluation criterion and high-dimensional t-SNE visualization was carried out to understand the separation of the 2186 five-class images.
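The five-fold cross-validation scheme mentioned above can be sketched as follows. This is a generic, library-free illustration (not the authors' MATLAB Classification Learner setup): the sample indices are shuffled once and dealt into five disjoint test folds, so each of the 2186 images is tested exactly once across the five folds.

```python
import random

def k_fold_splits(n_samples, n_folds=5, seed=0):
    # Shuffle sample indices once, deal them round-robin into n_folds
    # disjoint test folds, and pair each test fold with the remaining
    # indices as its training set.
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[k::n_folds] for k in range(n_folds)]
    splits = []
    for k in range(n_folds):
        train = [i for j, fold in enumerate(folds) if j != k for i in fold]
        splits.append((train, folds[k]))
    return splits

# Five folds over the 2186 images of the balanced dataset
splits = k_fold_splits(2186, n_folds=5)
```

Reporting the mean accuracy over the five test folds, rather than a single holdout score, reduces the risk of an over-optimistic estimate on a dataset of this size.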
Methodology
COVID-19 5-Class Balanced Dataset
The 5-class COVID-19 dataset constructed in this study draws on 3 main sources. The COVID-19 cases are acquired from the dataset of Cohen et al. [37], which is a collection of X-ray and CT images of a variety of pulmonary diseases, including COVID-19, SARS and MERS, compiled from many sources. The dataset is updated on a regular basis to include more cases. As of 15/6/2020, it had 752 X-ray images, among them 435 COVID-19 X-ray images, which were utilized in this study. This dataset has the largest number of X-ray images of COVID-19 cases. The CT and lateral X-ray images were excluded from this study. The average age of the COVID-19 patients is 54.6 ± 16.7 years (mean ± standard deviation), including 256 males and 136 females; the gender for 43 images was not provided, since the metadata accompanying the dataset is not complete.
The second source of images is the pneumonia and normal dataset [38], which has 5863 X-ray images of 3 classes: normal, bacterial pneumonia and viral pneumonia. We randomly selected 439 images of each class from the pneumonia dataset to be included in the study, in order to construct the balanced COVID-19 5-class dataset.
The last source of X-ray images is the TB dataset from the U.S. National Library of Medicine, consisting of two chest X-ray datasets made available to expedite research in CAD of pulmonary diseases, especially TB [18,22]. The TB dataset includes 394 images from two sources (336 images from the China set and 58 images from the Montgomery County set) [18,22]. (Dataset URLs: COVID-19: https://github.com/ieee8023/covid-chestxray-dataset; pneumonia/normal: https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia; TB: https://www.kaggle.com/kmader/pulmonary-chest-xray-abnormalities?select=ChinaSet_AllFiles.) The TB class has slightly fewer X-ray images than the other classes in this study, which may influence detection performance due to class imbalance [42]. To balance the TB class, augmentation [43] was performed by rescaling 40 randomly chosen X-ray images. The details of the constructed COVID-19 5-class balanced dataset are given in Table 1; it has a larger number of COVID-19 X-ray images than previous studies [4,11,12,35]. Samples of X-ray images of the 5 classes included in this study, with their labels, are shown in Fig. 1.
Fig. 1 Samples of the five-class X-ray images for COVID-19, normal, pneumonia bacterial, pneumonia viral and TB.
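The class-balancing step described above (randomly down-sampling the pneumonia classes to 439 images and padding the TB class with 40 rescale-augmented copies) can be sketched as follows. This is an illustrative stand-in, not the authors' actual code; the file names and the `("rescaled", ...)` tagging convention are hypothetical:

```python
import random

def balance_class(paths, target, seed=0):
    """Bring one class to `target` images: down-sample larger classes,
    or pad smaller ones with randomly chosen images marked for
    rescale-based augmentation (illustrative stand-in for [43])."""
    rng = random.Random(seed)
    if len(paths) >= target:
        return rng.sample(paths, target)              # e.g. 439 of 5863 pneumonia images
    picked = rng.sample(paths, target - len(paths))   # e.g. 40 of the 394 TB images
    return paths + [("rescaled", p) for p in picked]  # tag copies slated for rescaling

# Hypothetical TB file list mirroring Table 1 (394 originals -> 434 balanced)
tb = [f"tb_{i:03d}.png" for i in range(394)]
balanced_tb = balance_class(tb, 434)
```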
The Pipeline of Deep Feature Extraction from Pretrained Networks and Machine Learning Classification
In this section, the details of the proposed COVID-19 detection method are presented. Fig. 2 shows the block diagram of the proposed pipeline for COVID-19 detection with deep features and machine learning classifiers on the 5-class COVID-19 balanced dataset. The details of each part of the methodology are presented in the following paragraphs.
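All classification experiments in this pipeline are evaluated with five-fold cross-validation (detailed below). A minimal stratified split that keeps the class balance in every fold can be sketched as follows; this is illustrative, not the MATLAB Classification Learner's internal implementation:

```python
import random

def five_fold_indices(labels, seed=0):
    """Stratified 5-fold split: shuffle each class, then deal its images
    round-robin so every fold keeps the per-class balance."""
    rng = random.Random(seed)
    folds = [[] for _ in range(5)]
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for i, idx in enumerate(idxs):
            folds[i % 5].append(idx)
    return folds

# Toy labels: two balanced classes of 10 images each -> 5 folds of 4 images
folds = five_fold_indices(["covid"] * 10 + ["normal"] * 10)
```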
Fig. 2 The block diagram of the proposed 5-class COVID-19 detection method.
In order to extract the deep features, 14 individual state-of-the-art pretrained networks were included in this study, ranging from networks with simpler designs such as AlexNet and ShuffleNet to advanced complex designs such as VGG-19 and NASNet-Large, which require large GPU power for training. These networks are pretrained on the ImageNet dataset [33], which has one million images over 1000 classes. In this study, we input the X-ray images to each individual pretrained network and extract the feature vectors directly at the fully connected layer, without training the network on the X-ray images. The full details of the 14 pretrained networks utilized for feature extraction, with their input sizes, numbers of layers, the names of the fully connected layers and the numbers of parameters, are given in Table 2. Preprocessing was performed to resize the X-ray images to match the input of the targeted pretrained network (Table 2). The extracted feature matrix was of size 2186 x 1000 (number of X-ray images x number of DF) for each of the 14 networks. To classify the extracted features of each network, we used machine learning classifiers in the Classification Learner (CL) App in MATLAB 2019b, with five-fold Cross Validation (CV) throughout to avoid overfitting, as done in [4,12,26]. To select the best classifiers for the COVID-19 5-class dataset, we carried out a pilot study (described below); the best three classifiers were used with the following settings: 1) quadratic SVM (kernel function: quadratic, box constraint level = 1, one vs. one classification); 2) medium Gaussian SVM (kernel function: Gaussian, kernel scale = 32, box constraint level =, one vs. one classification); 3) ensemble of subspace discriminant: random subspace ensembles [52] of discriminant analysis classifiers (learner: discriminant, learning rate = 0.1, number of learners = 30, subspace dimension = 500).
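The random-subspace idea behind the ensemble of subspace discriminant classifier (each learner sees a random 500 of the 1000 DF dimensions; predictions are combined by voting) can be illustrated with the following numpy sketch. A nearest-centroid rule stands in for MATLAB's discriminant base learner, so this is a simplified analogue rather than the CL App's implementation:

```python
import numpy as np

class SubspaceEnsemble:
    """Random-subspace ensemble: each member is fit on a random subset of
    feature dimensions; class predictions are combined by majority vote."""
    def __init__(self, n_learners=30, subspace_dim=500, seed=0):
        self.n_learners, self.subspace_dim = n_learners, subspace_dim
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.members = []
        for _ in range(self.n_learners):
            dims = self.rng.choice(X.shape[1], self.subspace_dim, replace=False)
            # Nearest-centroid stand-in for the discriminant base learner
            centroids = np.stack([X[y == c][:, dims].mean(0) for c in self.classes_])
            self.members.append((dims, centroids))
        return self

    def predict(self, X):
        votes = np.zeros((len(X), len(self.classes_)), dtype=int)
        for dims, centroids in self.members:
            d = ((X[:, None, dims] - centroids[None]) ** 2).sum(-1)
            votes[np.arange(len(X)), d.argmin(1)] += 1
        return self.classes_[votes.argmax(1)]

# Demo on synthetic "deep features": 5 well-separated classes, 1000 dimensions
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(3 * c, 1.0, size=(20, 1000)) for c in range(5)])
y = np.repeat(np.arange(5), 20)
clf = SubspaceEnsemble(n_learners=10, subspace_dim=200).fit(X, y)
```

Training members on different feature subsets decorrelates their errors, which is why the vote tends to beat any single discriminant learner on high-dimensional DF.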
Performance Evaluation of the Proposed COVID-19 Detection Pipeline
The performance of the proposed COVID-19 detection pipeline was evaluated with the following classification performance measures: accuracy, precision, recall, specificity and F-score. In addition, the Silhouette criterion, a clustering evaluation measure [55], was used to evaluate the quality of the 5-class COVID-19 features.
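The per-class measures listed above can be derived one-vs-rest from a confusion matrix. A small numpy sketch (not the CL App's own computation) makes the definitions explicit:

```python
import numpy as np

def per_class_metrics(cm):
    """One-vs-rest precision, recall (sensitivity), specificity and F-score
    from a confusion matrix (rows = true class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp        # predicted as this class but wrong
    fn = cm.sum(axis=1) - tp        # this class predicted as something else
    tn = cm.sum() - tp - fp - fn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, specificity, f_score

# Toy 2-class confusion matrix: 8/10 of class 0 and 9/10 of class 1 correct
p, r, s, f = per_class_metrics([[8, 2], [1, 9]])
```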
Results
In this section, we present the results of the analysis of the five-class COVID-19 balanced dataset with the proposed deep feature extraction and machine learning classification pipeline; classification accuracies for all networks are shown in Fig. 3 and timing results in Table 3.
The pipeline consisting of ResNet-50 and the ensemble of subspace discriminant classifier is the best performer among all networks and classifiers (detection accuracy 91.6 ± 2.6%, accuracy ± CI), as confirmed statistically, in addition to being efficient (6.9 minutes for DF extraction and 2 minutes for classification, see Table 3). It was therefore chosen for the subsequent analysis in this study. The confusion matrix for the 5-class COVID-19 dataset with ResNet-50 and the ensemble of subspace discriminant classifier is shown in Fig. 4. COVID-19 and TB achieved the best performance compared to the other classes, with less than 1% of the COVID-19 cases mis-detected as normal or TB (Fig. 4), while the lowest performance was for pneumonia bacterial and viral. In addition, per-class precision, recall, specificity, F-score and AUC were computed (Table 4). The highest specificity was for the COVID-19, normal and TB classes, which shows the potential of our proposed method to aid quick COVID-19 and TB detection. The second part of the analysis investigated class separability with state-of-the-art high-dimensional t-SNE visualization, to understand how the features are distributed in the feature space (Fig. 5). Silhouette criterion values, where higher values indicate better class separation, showed that COVID-19 and normal have high values whereas pneumonia-bacterial has the lowest; pneumonia-viral scored 0.3220 and tuberculosis -0.3190 (Table 5).
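The Silhouette values reported in Table 5 come from MATLAB's clustering evaluation; the criterion itself can be sketched in a few lines of numpy (an illustrative reimplementation, not the code used in the study):

```python
import numpy as np

def silhouette_values(X, labels):
    """Silhouette value per sample: (b - a) / max(a, b), where a is the mean
    intra-class distance and b the mean distance to the nearest other class."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    D = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))   # pairwise distances
    s = np.empty(len(X))
    for i, c in enumerate(labels):
        same = labels == c
        same[i] = False                                   # exclude the sample itself
        a = D[i, same].mean() if same.any() else 0.0
        b = min(D[i, labels == o].mean() for o in np.unique(labels) if o != c)
        s[i] = 0.0 if max(a, b) == 0 else (b - a) / max(a, b)
    return s

# Two tight, well-separated clusters: silhouette close to +1
X = np.array([[0.0, 0.0], [0.0, 0.1], [10.0, 10.0], [10.0, 10.1]])
labels = np.array([0, 0, 1, 1])
s = silhouette_values(X, labels)
```

Values near +1 mean a sample sits far from other classes; negative values (as for pneumonia-bacterial and TB in Table 5) mean samples lie closer, on average, to another class or to a second cluster of their own class.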
Since there is an area of overlap between the two classes of COVID-19 and TB relative to normal, we investigated the three-class problem COVID-19 vs. normal vs. TB, due to the relevance of TB to some low- and middle-income countries [5]. Table 6 shows the classification performance measures for the three-class problem, and the corresponding confusion matrix is illustrated in Fig. 6. The overall accuracy for the three-class problem was 98.6 ± 1.4% (accuracy ± CI). The t-SNE visualization displayed in Fig. 7 shows a clear area of separation between normal and both COVID-19 and TB, although some TB cases overlap with COVID-19 cases. Finally, we investigated the performance of our proposed method for the 2-class problem COVID-19 vs. normal, similar to [4,11,12]. The results are shown in Fig. 8 and the t-SNE visualization in Fig. 9; a high detection accuracy of 99.9% was obtained on 847 images of COVID-19 and normal with five-fold CV. We then ran a test with our proposed method on the four-class COVID-19 dataset [12] with four-fold CV for a straightforward comparison with that study. Our detection accuracy was 90.2%, which outperformed their accuracy (89.6%) [12]. Notably, our approach only required a CPU computer while their work needed a Tesla GPU machine.
Fig. 10 The confusion matrix of our proposed method on the four-class dataset used in [12].
Discussion
We presented in this paper an efficient approach employing existing pretrained DL models for the classification of five-class COVID-19 images with three traditional machine learning classifiers. The main power of the features produced by these models lies in the fact that the networks have already learned the most discriminant parameters for classifying the images on which they were originally trained, without any hand-crafted feature extraction algorithms. Interestingly, the DF were extracted from models not specifically trained on COVID-19 images but on general images of 1000 classes, with each model having learned its own features or descriptors that best separate the classes it considered.
When observing the accuracy of the utilized DL models, it was found that ResNet-50 [46] had the highest classification accuracy across the quadratic SVM and the ensemble of subspace discriminant classifier, while being close to the best model results when using the medium Gaussian SVM classifier.
Our n-way ANOVA supported the statistical analysis of these results, indicating a significant difference between the performance of the classification models across the 14 utilized DL models, with p-values < 0.001 for all tests.
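The two-factor layout behind this test (14 DL models x 3 classifiers, one mean accuracy per cell) can be sketched as follows. Only the F statistics are computed here; p-values would follow from the F distribution, as in MATLAB's anovan, and the synthetic accuracy table is hypothetical:

```python
import numpy as np

def two_way_anova_F(acc):
    """F statistics for a two-factor accuracy table without replication:
    acc[i, j] = mean accuracy of DL model i with classifier j."""
    acc = np.asarray(acc, dtype=float)
    a, b = acc.shape
    grand = acc.mean()
    ss_models = b * ((acc.mean(axis=1) - grand) ** 2).sum()   # DL-model factor
    ss_clfs = a * ((acc.mean(axis=0) - grand) ** 2).sum()     # classifier factor
    ss_err = ((acc - grand) ** 2).sum() - ss_models - ss_clfs
    ms_err = ss_err / ((a - 1) * (b - 1))
    return ss_models / (a - 1) / ms_err, ss_clfs / (b - 1) / ms_err

# Synthetic 14 x 3 table: strong model effect, weaker classifier effect
rng = np.random.default_rng(0)
acc = 0.9 + np.add.outer(0.005 * np.arange(14), 0.002 * np.arange(3))
acc += rng.normal(0, 0.0005, acc.shape)
F_models, F_clfs = two_way_anova_F(acc)
```

A large F for a factor means the between-level variation dwarfs the residual variation, which is what the paper's p < 0.001 results express for both factors.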
Once the results were statistically verified, we then studied the computational time requirements of each network (Table 3).
COVID-19 and TB are the two classes best detected among the five investigated in this study, with specificities of 99.8% and 99.3%, respectively (Table 4). This may be clinically important, especially for some low- and middle-income countries, where detecting both TB and COVID-19 matters since TB has a higher mortality rate than COVID-19. Moreover, the proposed pipeline can run in low-resource settings (a CPU computer instead of a GPU).
The Silhouette criterion values for the TB cases were low (Table 5), despite a good detection accuracy (Table 4). This may be attributed to TB forming two distinct clusters (Fig. 5 and Fig. 7), which increases the distances between samples belonging to the TB class, despite it being distinct from normal and the other types of pneumonia. When inspecting the t-SNE visualizations of the five-class balanced COVID-19 features extracted from ResNet-50 for the 2186 X-ray images, COVID-19 and TB were completely separated from normal and from pneumonia bacterial and viral. However, there was some degree of overlap between COVID-19 and TB when visually inspecting the t-SNE feature projections across the first two dimensions; this visual overlap was handled by the power of our nonlinear classification models, as also suggested by the 3D t-SNE visualization.
We also investigated the classification performance for the three-class problem (COVID-19 vs. normal vs. TB) with ResNet-50 and the ensemble of subspace discriminant classifier, and for the two-class problem (COVID-19 vs. normal). In both sets of tests, COVID-19 is completely separated from normal and clearly separated from TB.
Having quantified the performance of our proposed combination of DL and machine learning models, we then analysed the performance reported by several other research groups, as shown in Table 7. Our proposed COVID-19 detection pipeline outperformed the state of the art illustrated there. It should be noted that each model was tested on a different dataset, with a different testing split and different classes; hence the comparison is for illustrative purposes, but it shows the potential of our approach combining DF and simple classifiers for this task. The study has the potential limitation of a relatively small number of COVID-19 and TB images, despite being the largest compared to the previous literature so far. More COVID-19 and TB images are needed to improve the robustness of the proposed method in future research. The separation of the TB cases into two clusters in the t-SNE visualization should also be investigated in the future.
Conclusion
We proposed a combination of deep and machine learning models for the classification of X-ray chest images into five classes: COVID-19, viral pneumonia, bacterial pneumonia, TB, and normal healthy subjects. Our work was motivated by the challenging conditions in low-resource environments, where TB and COVID-19 may be major healthcare problems.
A key characteristic of our study is that the pretrained networks can be utilized without GPU support, as no (re-)training of the deep networks is required. Thus, we are able to extract DF of the X-ray images efficiently with a CPU-enabled computer. Our developments were tested using five-fold cross-validation, achieving 91.6 ± 2.6% accuracy (accuracy ± CI) with a pipeline consisting of ResNet-50 for DF computation and an ensemble of subspace discriminant classifiers in the distinction of the five groups or classes of diseases. In addition, we explored the classification accuracy in the three- and two-class problems (COVID-19, TB and healthy cases; and COVID-19 and healthy cases, respectively) and obtained accuracies over 98% in both cases. Our study is limited by the relatively small number of COVID-19 and TB images, but it shows the promise of a pipeline requiring low computational resources to contribute to the detection of COVID-19 and TB using X-ray images where other more advanced or computationally demanding techniques may not be available.
Declaration of Competing Interests
None
Khan et al. [12] proposed the CoroNet CNN to detect COVID-19 from X-ray images of a 4-class dataset of 284 COVID-19, 310 normal, 330 pneumonia bacterial and 327 pneumonia viral images; their dataset represents the largest balanced 4-class COVID-19 dataset so far. A detection accuracy of 89.6% was obtained with 4-fold cross-validation on a Google Colaboratory Ubuntu server with a Tesla K80 graphics card.
For the pilot study, multiple classifiers in the CL App were evaluated, including k-Nearest Neighbors (KNN), Linear Discriminant Analysis (LDA), Support Vector Machines (SVM), and a variety of ensemble classifiers including boosting, bagging and subspace discriminant. The best three classifiers in the pilot study were quadratic SVM, medium Gaussian SVM and the ensemble of subspace discriminant classifiers.
These measures were computed from TP (True Positive), TN (True Negative), FN (False Negative) and FP (False Positive) counts. We also estimated the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) in the CL App in MATLAB 2019b. An n-way analysis of variance (ANOVA) statistical test was employed to investigate the effects of two factors on the mean accuracy: the first factor relates to the selected DL models, with 14 levels (14 models), and the second to the three traditional classifiers (quadratic SVM, medium Gaussian SVM, and ensemble of subspace discriminant classifiers). We also studied other class combinations, including 3-class (COVID-19 vs. normal vs. TB) and 2-class (COVID-19 vs. normal) problems, to show the robustness of the proposed method with different numbers of classes. To investigate class separability of the COVID-19 features, we utilized t-Distributed Stochastic Neighbor Embedding (t-SNE) data visualization [53,54] to reduce the dimensions of the deep features computed with the pretrained networks. This was done for the 5-class, 3-class and 2-class COVID-19 features.
Fig. 3 shows the classification accuracy of the 14 state-of-the-art deep feature extraction methods integrated with the three classifiers (quadratic SVM, medium Gaussian SVM and ensemble of subspace discriminant). We also estimated the Confidence Interval (CI) for the detection accuracy at the 95% confidence level (Fig. 3). ResNet-50 and ResNet-101 gave the best DF, with all classifiers, compared to the other networks, and the ensemble of subspace discriminant classifier was the best on average among the classifiers. To show the efficiency of the proposed method, we measured the time for feature extraction of all 2186 images with all pretrained networks (Table 3), which ranges from a few minutes for simple networks (AlexNet and ResNet-50) to more than 30 minutes for big networks such as VGG-19 and NASNet-Large; on average, 0.09 s is needed to extract features with AlexNet for a single image. These times were measured on a Core i5 CPU computer with 16 GB RAM. The time to perform the classification of the whole dataset with five-fold cross-validation was 0.5 minutes for the SVM classifiers and 2 minutes for the ensemble of subspace discriminant classifier; all classification in this study was done with 5-fold CV to avoid overfitting. Using the n-way ANOVA, a p-value < 0.001 indicated that the average classification accuracies are significantly different across the three levels of the classifier factor (three traditional classifiers). This was followed by a Bonferroni-corrected t-test, which further indicated that, across all DL models, significant differences existed between the quadratic SVM and the ensemble of subspace discriminant classifier (p-value = 0.002) and between the medium Gaussian SVM and the ensemble of subspace discriminant classifiers (p-value < 0.001), but not between the two versions of SVM.
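The paper does not spell out how the "accuracy ± CI" values are obtained. One plausible reading for five-fold CV is a t-interval over the per-fold accuracies; the sketch below follows that assumption, and the per-fold numbers are hypothetical:

```python
import math
import statistics

T_975_DF4 = 2.776  # two-sided 95% t critical value for 5 folds (df = 4)

def cv_ci(fold_accs, t=T_975_DF4):
    """Mean accuracy and 95% CI half-width from per-fold accuracies."""
    m = statistics.mean(fold_accs)
    half = t * statistics.stdev(fold_accs) / math.sqrt(len(fold_accs))
    return m, half

# Hypothetical per-fold accuracies averaging 91.6%
m, half = cv_ci([0.90, 0.91, 0.92, 0.93, 0.92])
```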
Similarly, there were also statistically significant differences across the 14 levels of the DL models factor, with a p-value < 0.001. Additionally, the combination of ResNet-50 with the ensemble of subspace discriminant classifiers was the most significantly different from the other combinations of DL models and classifiers, followed by ResNet-101 with the ensemble of subspace discriminant classifiers. However, ResNet-101 required almost double the computational time of ResNet-50, per the results in Table 3.
Fig. 3 Classification accuracy for DF extracted from the 14 deep networks investigated in this study on 5-class COVID-19 images with 3 machine learning classifiers. The average over all classifiers is also shown, with error bars representing standard deviation. Whiskers represent the Confidence Interval (CI) of the detection accuracy calculated at the 95% confidence level.
Fig. 4 Confusion matrix for ResNet-50 for 5-class COVID-19 classification with the ensemble of subspace discriminant classifier (5-fold CV).
In the t-SNE projection (Fig. 5), clear separation of the COVID-19 and TB classes from the normal class can be observed, while there is an area of overlap between bacterial and viral pneumonia; the corresponding Silhouette criterion values are given in Table 5.
Fig. 5 t-SNE visualizations of the five-class balanced COVID-19 features extracted from ResNet-50 for the 2186 X-ray images.
Fig. 6 Confusion matrix for the 3-class problem (COVID-19 vs. normal vs. TB).
Fig. 7 t-SNE visualization in 3 dimensions for the 3-class DF of ResNet-50 for COVID-19 vs. normal vs. TB. See the corresponding confusion matrix in Fig. 6.
Fig. 8 Confusion matrix for the 2-class problem COVID-19 vs. normal. The detection accuracy is 99.9 ± 0.5% (accuracy ± CI at the 95% confidence level).
Fig. 9 2-class t-SNE visualization for COVID-19 vs. normal. See the corresponding confusion matrix in Fig. 8.
The subsequent analysis focused on ResNet-50, as we decided to further understand the performance of this DL network across each of the individual classes in the five-class dataset considered here. The importance of this step can be appreciated when considering the similarity between some of the symptoms of the considered diseases, especially fever, tiredness and cough for TB and COVID-19. Interestingly, the ResNet-50-based pipeline showed only 1% errors in the proper identification of COVID-19 and TB, while most of the confusion took place between pneumonia bacterial and viral, with pneumonia viral being the most frequently confused class. These results were further supported by the precision, recall, specificity, F-score and AUC measures for ResNet-50, and support the potential use of the proposed model as a quick and reliable CAD tool for COVID-19 and TB detection.
Table 1 The details of the COVID-19 5-class dataset.

Class                 | Number of X-ray images
COVID-19              | 435
Normal                | 439
Pneumonia-bacterial   | 439
Pneumonia-viral       | 439
Tuberculosis          | 434 (394 + 40 augmented)
All 5-class dataset   | 2186
Table 2 The details of the 14 pretrained deep networks utilized in this study.

No. | Network name        | Reference                   | Input size | Fully connected layer | No. of layers | No. of parameters (Million)
1   | AlexNet             | Krizhevsky et al. [44]      | 227x227x3  | fc8                   | 25   | 61
2   | Inception-ResNet-v2 | Szegedy et al. [45]         | 299x299x3  | predictions           | 824  | 23.2
3   | DenseNet-201        | Huang et al. [31]           | 224x224x3  | fc1000                | 708  | 20
4   | ResNet-18           | He et al. [46]              | 224x224x3  | fc1000                | 71   | 11.7
5   | ResNet-50           | He et al. [46]              | 224x224x3  | fc1000                | 177  | 25.6
6   | ResNet-101          | He et al. [46]              | 224x224x3  | fc1000                | 347  | 44.6
7   | VGG-16              | Simonyan and Zisserman [47] | 224x224x3  | fc8                   | 41   | 138
8   | VGG-19              | Simonyan and Zisserman [47] | 224x224x3  | fc8                   | 47   | 144
9   | MobileNet-v2        | Sandler et al. [32]         | 224x224x3  | Logits                | 154  | 3.5
10  | ShuffleNet          | Zhang et al. [48]           | 224x224x3  | node_202              | 172  | 1.4
11  | GoogLeNet           | Szegedy et al. [49]         | 224x224x3  | loss3-classifier      | 144  | 7
12  | Xception            | Chollet et al. [50]         | 299x299x3  | predictions           | 170  | 22.9
13  | NASNet-Mobile       | Zoph et al. [51]            | 224x224x3  | predictions           | 913  | 5.3
14  | NASNet-Large        | Zoph et al. [51]            | 331x331x3  | predictions           | 1243 | 88.9
Table 3 The time needed to extract deep features from all pretrained networks for all 2186 images. Times estimated for 5-fold CV on a Core i5 CPU computer with 16 GB RAM.

No. | Network name        | Time for deep feature extraction (minutes) for 2186 images
1   | AlexNet             | 3.25
2   | Inception-ResNet-v2 | 33.54
3   | DenseNet-201        | 20.79
4   | ResNet-18           | 6.50
5   | ResNet-50           | 6.90
6   | ResNet-101          | 11.60
7   | VGG-16              | 31.85
8   | VGG-19              | 38.72
9   | MobileNet-v2        | 5.37
10  | ShuffleNet          | 4.62
11  | GoogLeNet           | 7.16
12  | Xception            | 24.75
13  | NASNet-Mobile       | 8.07
14  | NASNet-Large        | 94.05
Table 4 The classification performance (precision, recall, specificity, F-score and AUC) for ResNet-50 and the ensemble of subspace discriminant classifier. The overall detection accuracy is 91.6 ± 2.6% (accuracy ± Confidence Interval (CI) at the 95% confidence level).

Class               | Precision | Recall | Specificity | F-score | AUC
COVID-19            | 99        | 98.6   | 99.8        | 98.8    | 1
Normal              | 94        | 97.2   | 98.5        | 95.6    | 0.99
Pneumonia-bacterial | 81        | 85.4   | 95          | 83.1    | 0.97
Pneumonia-viral     | 86.3      | 77.2   | 97          | 81.5    | 0.97
Tuberculosis        | 97.3      | 99.3   | 99.3        | 98.3    | 1
Table 5 Silhouette criterion values for the 5-class COVID-19 balanced dataset.

Class               | Silhouette criterion value
COVID-19            | 0.5345
Normal              | 0.5843
Pneumonia-bacterial | -0.7114
Pneumonia-viral     | 0.3220
Tuberculosis        | -0.3190
Table 6 The classification performance for the 3-class problem (COVID-19 vs. normal vs. TB) with ResNet-50 and the ensemble of subspace discriminant classifier with 5-fold CV. The overall detection accuracy is 98.6 ± 1.4% (accuracy ± CI at the 95% confidence level).

Class        | Precision | Recall | Specificity | F-score | AUC
COVID-19     | 98.39     | 98.39  | 99.20       | 98.39   | 1
Normal       | 100       | 98.86  | 100         | 99.43   | 1
Tuberculosis | 97.49     | 98.62  | 98.74       | 98.05   | 1
Table 7 Comparison to the previous literature on COVID-19 detection with X-ray images.

Study                 | Dataset                                                                                        | Evaluation method | Techniques used                                                     | Detection accuracy
Narin et al. [26]     | 2-class: 50 COVID-19 / 50 normal                                                               | 5-fold CV         | Transfer learning with ResNet50 and InceptionV3                     | 98%
Panwar et al. [30]    | 2-class: 142 COVID-19 / 142 normal                                                             | Holdout 30%       | nCOVnet CNN                                                         | 88%
Altan et al. [56]     | 3-class: 219 COVID-19 / 1341 normal / 1345 pneumonia viral                                     | Holdout 27%       | 2D curvelet transform, chaotic salp swarm algorithm (CSSA), EfficientNet-B0 | 99%
Chowdhury et al. [11] | 3-class: 423 COVID-19 / 1579 normal / 1485 pneumonia viral                                     | 5-fold CV         | Transfer learning with ChexNet                                      | 97.7%
Wang and Wong [21]    | 3-class: 358 COVID-19 / 5538 normal / 8066 pneumonia                                           | Holdout 30%       | COVID-Net                                                           | 93.3%
Kumar et al. [36]     | 3-class: 62 COVID-19 / 1341 normal / 1345 pneumonia                                            | Holdout 30%       | ResNet152 features and XGBoost classifier                           | 90%
Sethy and Behera [35] | 3-class: 127 COVID-19 / 127 normal / 127 pneumonia                                             | Holdout 20%       | ResNet50 features and SVM                                           | 95.33%
Ozturk et al. [4]     | 3-class: 125 COVID-19 / 500 normal / 500 pneumonia                                             | 5-fold CV         | DarkCovidNet CNN                                                    | 87.2%
Khan et al. [12]      | 4-class: 284 COVID-19 / 310 normal / 330 pneumonia bacterial / 327 pneumonia viral             | 4-fold CV         | CoroNet CNN                                                         | 89.6%
This study            | 5-class: 435 COVID-19 / 439 normal / 439 pneumonia bacterial / 439 pneumonia viral / 434 tuberculosis | 5-fold CV  | ResNet50 features and ensemble of subspace discriminant classifier  | 91.6 ± 2.6%*
This study            | 3-class: 435 COVID-19 / 439 normal / 434 tuberculosis                                          | 5-fold CV         | ResNet50 features and ensemble of subspace discriminant classifier  | 98.6 ± 1.4%*

* Detection accuracy (Acc.) with 95% Confidence Interval (CI)
** The CI for the previous literature is not provided.
Acknowledgments
The authors are grateful to the research groups who provided the X-ray images. We would also like to thank all medical staff around the world who are in the first line of defense against COVID-19. We are grateful to the research community around the world who made research papers freely available for immediate download to facilitate worldwide research sharing and to help in the current COVID-19 crisis.
References
Rothan HA, Byrareddy SN. The epidemiology and pathogenesis of coronavirus disease (COVID-19) outbreak. J Autoimmun 2020:102433.
World Health Organization (WHO). Coronaviruses (COVID-19). 2020.
Xie X, Zhong Z, Zhao W, Zheng C, Wang F, Liu J. Chest CT for typical 2019-nCoV pneumonia: relationship to negative RT-PCR testing. Radiology 2020:200343.
Ozturk T, Talo M, Yildirim EA, Baloglu UB, Yildirim O, Acharya UR. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput Biol Med 2020:103792.
Glaziou P. Predicted impact of the COVID-19 pandemic on global tuberculosis deaths in 2020. medRxiv 2020.
Pathak Y, Shukla PK, Tiwari A, Stalin S, Singh S, Shukla PK. Deep transfer learning based classification model for COVID-19 disease. IRBM 2020.
Wang S, Kang B, Ma J, Zeng X, Xiao M, Guo J, et al. A deep learning algorithm using CT images to screen for Corona Virus Disease (COVID-19). medRxiv 2020.
Ardakani AA, Kanafi AR, Acharya UR, Khadem N, Mohammadi A. Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: results of 10 convolutional neural networks. Comput Biol Med 2020:103795.
Chowdhury MEH, Rahman T, Khandakar A, Mazhar R, Kadir MA, Mahbub ZB, et al. Can AI help in screening viral and COVID-19 pneumonia? arXiv preprint arXiv:2003.13145, 2020.
Khan AI, Shah JL, Bhat MM. CoroNet: a deep neural network for detection and diagnosis of COVID-19 from chest X-ray images. Comput Methods Programs Biomed 2020:105581.
Lalmuanawma S, Hussain J, Chhakchhuak L. Applications of machine learning and artificial intelligence for Covid-19 (SARS-CoV-2) pandemic: a review. Chaos, Solitons & Fractals 2020:110059.
Hemdan EE-D, Shouman MA, Karar ME. COVIDX-Net: a framework of deep learning classifiers to diagnose COVID-19 in X-ray images. arXiv preprint arXiv:2003.11055, 2020.
Minaee S, Kafieh R, Sonka M, Yazdani S, Soufi GJ. Deep-COVID: predicting COVID-19 from chest X-ray images using deep transfer learning. arXiv preprint arXiv:2004.09363, 2020.
Apostolopoulos ID, Mpesiana TA. Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Phys Eng Sci Med 2020:1.
Hammoudi K, Benhabiles H, Melkemi M, Dornaika F, Arganda-Carreras I, Collard D, et al. Deep learning on chest X-ray images to detect and evaluate pneumonia cases at the era of COVID-19. arXiv preprint arXiv:2004.03399, 2020.
Jaeger S, Karargyris A, Candemir S, Folio L, Siegelman J, Callaghan F, et al. Automatic tuberculosis screening using chest radiographs. IEEE Trans Med Imaging 2013;33:233-45.
Cao Y, Liu C, Liu B, Brunette MJ, Zhang N, Sun T, et al. Improving tuberculosis diagnostics using deep learning and mobile health technologies among resource-poor and marginalized communities. IEEE Int Conf Connect Health Appl Syst Eng Technol, 2016, p. 274-81.
Ravì D, Wong C, Deligianni F, Berthelot M, Andreu-Perez J, Lo B. Deep learning for health informatics. IEEE J Biomed Health Inform 2017;21:4-21.
Wang L, Wong A. COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. arXiv preprint arXiv:2003.09871, 2020.
Candemir S, Jaeger S, Palaniappan K, Musco JP, Singh RK, Xue Z, et al. Lung segmentation in chest radiographs using anatomical atlases with nonrigid registration. IEEE Trans Med Imaging 2013;33:577-90.
Alzubaidi L, Fadhel MA, Oleiwi SR, Al-Shamma O, Zhang J. DFU_QUTNet: diabetic foot ulcer classification using novel deep convolutional neural network. Multimed Tools Appl 2019:1-23.
Chen H, Ni D, Qin J, Li S, Yang X, Wang T, et al. Standard plane localization in fetal ultrasound via domain transferred deep neural networks. IEEE J Biomed Health Inform 2015;19:1627-36.
Norouzifard M, Nemati A, GholamHosseini H, Klette R, Nouri-Mahdavi K, Yousefi S. Automated glaucoma diagnosis using deep and transfer learning: proposal of a system for clinical testing. Int Conf Image Vis Comput New Zealand, IEEE; 2018, p. 1-6.
Narin A, Kaya C, Pamuk Z. Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. arXiv preprint arXiv:2003.10849, 2020.
Loey M, Smarandache F, Khalifa NEM. Within the lack of chest COVID-19 X-ray dataset: a novel detection model based on GAN and deep transfer learning. Symmetry (Basel) 2020;12:651.
Luz E, Silva PL, Silva R, Moreira G. Towards an efficient deep learning model for COVID-19 patterns detection in X-ray images. arXiv preprint arXiv:2004.05717, 2020.
Ucar F, Korkmaz D. COVIDiagnosis-Net: deep Bayes-SqueezeNet based diagnostic of the coronavirus disease 2019 (COVID-19) from X-ray images. Med Hypotheses 2020:109761.
Panwar H, Gupta PK, Siddiqui MK, Morales-Menendez R, Singh V. Application of deep learning for fast detection of COVID-19 in X-rays using nCOVnet. Chaos, Solitons & Fractals 2020:109944.
Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. Proc IEEE Conf Comput Vis Pattern Recognit, 2017, p. 4700-8.
M Sandler, A Howard, M Zhu, A Zhmoginov, L-C Chen, Mobilenetv2: Inverted residuals and linear bottlenecks. Proc. IEEE Conf. Comput. Vis. pattern Recognit. Sandler M, Howard A, Zhu M, Zhmoginov A, Chen L-C. Mobilenetv2: Inverted residuals and linear bottlenecks. Proc. IEEE Conf. Comput. Vis. pattern Recognit., 2018, p. 4510-20.
Imagenet large scale visual recognition challenge. O Russakovsky, J Deng, H Su, J Krause, S Satheesh, S Ma, Int J Comput Vis. 115Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, et al. Imagenet large scale visual recognition challenge. Int J Comput Vis 2015;115:211-52.
Deep Learning Toolbox TM User's Guide. M H Beale, M T Hagan, H B Demuth, MathWorks Inc. 2019Beale MH, Hagan MT, Demuth HB. Deep Learning Toolbox TM User's Guide. MathWorks Inc 2019:1- 20.
Detection of coronavirus disease (covid-19) based on deep features. P K Sethy, S K Behera, Preprints. 20200303002020Sethy PK, Behera SK. Detection of coronavirus disease (covid-19) based on deep features. Preprints 2020;2020030300:2020.
Accurate Prediction of COVID-19 using Chest X-Ray Images through Deep Feature Learning model with SMOTE and Machine Learning Classifiers. R Kumar, R Arora, V Bansal, V J Sahayasheela, H Buckchash, J Imran, Kumar R, Arora R, Bansal V, Sahayasheela VJ, Buckchash H, Imran J, et al. Accurate Prediction of COVID-19 using Chest X-Ray Images through Deep Feature Learning model with SMOTE and Machine Learning Classifiers. MedRxiv 2020.
COVID-19 image data collection. J P Cohen, P Morrison, L Dao, ArXiv PreprCohen JP, Morrison P, Dao L. COVID-19 image data collection. ArXiv Prepr ArXiv200311597 2020.
Identifying medical diagnoses and treatable diseases by image-based deep learning. D S Kermany, M Goldbaum, W Cai, Ccs Valentim, H Liang, S L Baxter, Cell. 172Kermany DS, Goldbaum M, Cai W, Valentim CCS, Liang H, Baxter SL, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 2018;172:1122-31.
A modified deep convolutional neural network for detecting COVID-19 and pneumonia from chest X-ray images based on the concatenation of Xception and ResNet50V2. M Rahimzadeh, A Attar, Informatics Med Unlocked. 2020100360Rahimzadeh M, Attar A. A modified deep convolutional neural network for detecting COVID-19 and pneumonia from chest X-ray images based on the concatenation of Xception and ResNet50V2. Informatics Med Unlocked 2020:100360.
Explainable Deep Learning for Pulmonary Disease and Coronavirus COVID-19 Detection from X-rays. L Brunese, F Mercaldo, A Reginelli, A Santone, Comput Methods Programs Biomed. 2020105608Brunese L, Mercaldo F, Reginelli A, Santone A. Explainable Deep Learning for Pulmonary Disease and Coronavirus COVID-19 Detection from X-rays. Comput Methods Programs Biomed 2020:105608.
Sensor Applications and Physiological Features in Drivers’ Drowsiness Detection: A Review. A Chowdhury, R Shankaran, M Kavakli, M M Haque, 10.1109/JSEN.2018.2807245IEEE Sens J. 1748Chowdhury A, Shankaran R, Kavakli M, Haque MM. Sensor Applications and Physiological Features in Drivers’ Drowsiness Detection: A Review. IEEE Sens J 2018;1748:3055-67. https://doi.org/10.1109/JSEN.2018.2807245.
Computer aided diagnosis for suspect keratoconus detection. I Issarti, A Consejo, C Koppen, J J Rozema, M Jiménez-García, S Hershko, Comput Biol Med. 109Issarti I, Consejo A, Koppen C, Rozema JJ, Jiménez-garcía M, Hershko S. Computer aided diagnosis for suspect keratoconus detection. Comput Biol Med 2019;109:33-42.
. 10.1016/j.compbiomed.2019.04.024https://doi.org/10.1016/j.compbiomed.2019.04.024.
Survey on deep learning with class imbalance. J M Johnson, T M Khoshgoftaar, 10.1186/s40537-019-0192-5J Big Data. 6Johnson JM, Khoshgoftaar TM. Survey on deep learning with class imbalance. J Big Data 2019;6. https://doi.org/10.1186/s40537-019-0192-5.
Imagenet classification with deep convolutional neural networks. A Krizhevsky, I Sutskever, G E Hinton, Adv. Neural Inf. Process. Syst. Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst., 2012, p. 1097-105.
Inception-v4, inception-resnet and the impact of residual connections on learning. C Szegedy, S Ioffe, V Vanhoucke, A A Alemi, Thirty-first AAAI Conf. Artif. Intell. Szegedy C, Ioffe S, Vanhoucke V, Alemi AA. Inception-v4, inception-resnet and the impact of residual connections on learning. Thirty-first AAAI Conf. Artif. Intell., 2017.
Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, Proc. IEEE Conf. Comput. Vis. pattern Recognit. IEEE Conf. Comput. Vis. pattern RecognitHe K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. Proc. IEEE Conf. Comput. Vis. pattern Recognit., 2016, p. 770-8.
Very deep convolutional networks for large-scale image recognition. K Simonyan, A Zisserman, ArXiv PreprSimonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. ArXiv Prepr ArXiv14091556 2014.
An extremely efficient convolutional neural network for mobile devices. X Zhang, X Zhou, Lin M , Sun J Shufflenet, Proc. IEEE Conf. Comput. Vis. pattern Recognit. IEEE Conf. Comput. Vis. pattern RecognitZhang X, Zhou X, Lin M, Sun J. Shufflenet: An extremely efficient convolutional neural network for mobile devices. Proc. IEEE Conf. Comput. Vis. pattern Recognit., 2018, p. 6848-56.
Going deeper with convolutions. C Szegedy, W Liu, Y Jia, P Sermanet, S Reed, D Anguelov, Proc. IEEE Conf. Comput. Vis. pattern Recognit. IEEE Conf. Comput. Vis. pattern RecognitSzegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. Proc. IEEE Conf. Comput. Vis. pattern Recognit., 2015, p. 1-9.
Xception: Deep learning with depthwise separable convolutions. F Chollet, Proc. IEEE Conf. Comput. Vis. pattern Recognit. IEEE Conf. Comput. Vis. pattern RecognitChollet F. Xception: Deep learning with depthwise separable convolutions. Proc. IEEE Conf. Comput. Vis. pattern Recognit., 2017, p. 1251-8.
Learning transferable architectures for scalable image recognition. B Zoph, V Vasudevan, J Shlens, Q Le, Proc. IEEE Conf. Comput. Vis. pattern Recognit. IEEE Conf. Comput. Vis. pattern RecognitZoph B, Vasudevan V, Shlens J, Le Q V. Learning transferable architectures for scalable image recognition. Proc. IEEE Conf. Comput. Vis. pattern Recognit., 2018, p. 8697-710.
The random subspace method for constructing decision forests. T K Ho, IEEE Trans Pattern Anal Mach Intell. 20Ho TK. The random subspace method for constructing decision forests. IEEE Trans Pattern Anal Mach Intell 1998;20:832-44.
Barnes-hut-sne. L Van Der Maaten, Proc. Int. Conf. Learn. Represent. Int. Conf. Learn. RepresentVan Der Maaten L. Barnes-hut-sne. Proc. Int. Conf. Learn. Represent., 2013.
Visualizing data using t-SNE. Maaten L Van Der, G Hinton, J Mach Learn Res. 9Maaten L van der, Hinton G. Visualizing data using t-SNE. J Mach Learn Res 2008;9:2579-605.
Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. P J Rousseeuw, J Comput Appl Math. 20Rousseeuw PJ. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J Comput Appl Math 1987;20:53-65.
Recognition of COVID-19 disease from X-ray images by hybrid model consisting of 2D curvelet transform, chaotic salp swarm algorithm and deep learning technique. A Altan, S Karasu, Chaos, Solitons & Fractals. 110071Altan A, Karasu S. Recognition of COVID-19 disease from X-ray images by hybrid model consisting of 2D curvelet transform, chaotic salp swarm algorithm and deep learning technique. Chaos, Solitons & Fractals 2020:110071.
Using X-ray images and deep learning for automated detection of coronavirus disease. K Elasnaoui, Y Chawki, J Biomol Struct Dyn. 2020Elasnaoui K, Chawki Y. Using X-ray images and deep learning for automated detection of coronavirus disease. J Biomol Struct Dyn 2020:1-22.
|
[
"https://github.com/ieee8023/covid-chestxray-dataset"
] |
[
"Negative skewness of radial pairwise velocity in the quasi-nonlinear regime: Zel'dovich approximation",
"Negative skewness of radial pairwise velocity in the quasi-nonlinear regime: Zel'dovich approximation"
] |
[
"Ayako Yoshisato \nDepartment of Physics\nOchanomizu University\nOtsuka 2-2-1112-0012BunkyoTokyoJapan\n",
"Masahiro 2⋆ \nResearch Institute of Systems Planning\nSakuragaoka-cho 2-9150-0031SibuyaTokyoJapan\n",
"Morikawa \nDepartment of Physics\nOchanomizu University\nOtsuka 2-2-1112-0012BunkyoTokyoJapan\n",
"Hideaki Mouri \nMeteorological Research Institute\nNagamine 1-1305-0052TsukubaJapan\n"
] |
[
"Department of Physics\nOchanomizu University\nOtsuka 2-2-1112-0012BunkyoTokyoJapan",
"Research Institute of Systems Planning\nSakuragaoka-cho 2-9150-0031SibuyaTokyoJapan",
"Department of Physics\nOchanomizu University\nOtsuka 2-2-1112-0012BunkyoTokyoJapan",
"Meteorological Research Institute\nNagamine 1-1305-0052TsukubaJapan"
] |
[
"Mon. Not. R. Astron. Soc"
] |
According to N -body numerical simulations, the radial pairwise velocities of galaxies have negative skewness in the quasi-nonlinear regime. To understand its origin, we calculate the probability distribution function of the radial pairwise velocity using the Zel'dovich approximation, i.e., an analytical approximation for gravitational clustering. The calculated probability distribution function is in good agreement with the result of N -body simulations. Thus the negative skewness originates in relative motions of galaxies in the clustering process that the infall dominates over the expansion.
|
10.1046/j.1365-8711.2003.06760.x
|
[
"https://arxiv.org/pdf/astro-ph/0110137v2.pdf"
] | 15,227,810 |
astro-ph/0110137
|
fcc52d735f06cf59197da8339d44892683518a8e
|
Negative skewness of radial pairwise velocity in the quasi-nonlinear regime: Zel'dovich approximation
2003
Ayako Yoshisato
Department of Physics
Ochanomizu University
Otsuka 2-2-1112-0012BunkyoTokyoJapan
Masahiro 2⋆
Research Institute of Systems Planning
Sakuragaoka-cho 2-9150-0031SibuyaTokyoJapan
Morikawa
Department of Physics
Ochanomizu University
Otsuka 2-2-1112-0012BunkyoTokyoJapan
Hideaki Mouri
Meteorological Research Institute
Nagamine 1-1305-0052TsukubaJapan
Mon. Not. R. Astron. Soc. 000 (2003). Accepted 2003 May 1; in original form 2002 October 10. arXiv:astro-ph/0110137v2 (MN LaTeX style file v2.2). Key words: cosmology: theory - large-scale structure of universe.
INTRODUCTION
Let us consider two galaxies separated from each other by the vector r. If their peculiar velocities are u1 and u2, the pairwise velocity v is defined as
$$v = u_2 - u_1 = v_\parallel\, n_\parallel + v_\perp\, n_\perp. \tag{1}$$
Here the unit vectors n∥ and n⊥ are parallel and perpendicular to the separation vector r. We study probability distribution functions (PDFs) of v∥ and v⊥, which are respectively defined as the radial and transverse components of the pairwise velocity (−∞ < v∥ < +∞, 0 ≤ v⊥ < +∞). The radial pairwise velocity is of fundamental importance in observational cosmology (Peebles 1993, section 20). It serves as a probe of the matter distribution, and its PDF is essential to converting observed data in redshift space into those in real space.
For the quasi-nonlinear regime of the present-day universe (r = |r| ≃ 10¹-10² Mpc), N-body numerical simulations have shown that the radial-velocity PDF exhibits a negative average and negatively skewed pronounced tails (Efstathiou et al. 1988; Fisher et al. 1994; Zurek et al. 1994; Magira, Jing & Suto 2000). These properties are attributable to coherent motions of galaxies approaching each other, since gravitational clustering is the predominant process in the quasi-nonlinear regime.
However, with the results of N-body simulations alone, we cannot understand how the negative average, negative skewness and pronounced tails of the radial-velocity PDF are related to gravitational clustering. To clarify this relation, an analytical treatment is desirable.
The linear perturbation, i.e., the most convenient analytic tool, is insufficient to study the above-mentioned properties of the radial-velocity PDF, because these properties are nonlinear. The linear perturbation merely amplifies the peculiar velocities of all the galaxies by the same factor. Thus the radial-velocity PDF retains its Gaussianity from the initial stage, where motions of the individual galaxies are random and independent. Although the leading terms of the average ⟨v∥⟩ and dispersion ⟨v∥²⟩ happen to be squares of the linear perturbation, those of the higher-order moments, e.g., ⟨v∥³⟩ and ⟨v∥⁴⟩, are nonlinear perturbations (see Juszkiewicz, Fisher & Szapudi 1998). Juszkiewicz et al. (1998) used a second-order Eulerian perturbation and successfully reproduced the moments up to the third order, ⟨v∥⟩, ⟨v∥²⟩ and ⟨v∥³⟩, as well as the negatively skewed PDF. However, the PDF was obtained with a conventional but unjustified ansatz, instead of the rigorous theory that requires complicated calculations. Seto & Yokoyama (1998) used a first-order Lagrangian perturbation, i.e., the Zel'dovich approximation (ZA; Zel'dovich 1970). Although their radial-velocity PDF exhibits pronounced tails, they failed to reproduce the negative average and had to shift their PDF by hand in order to achieve agreement with N-body simulations. Overall, the mechanism that determines the radial-velocity PDF is still controversial.
We use ZA to reexamine the PDF. It would be surprising if ZA really reproduced the pronounced tails but not the negative average ⟨v∥⟩, since the former is associated with higher moments such as ⟨v∥⁴⟩. For the quasi-nonlinear regime, ZA is a good approximation. Each of the galaxies is assumed to move along the gravitational force that is determined by the initial density distribution. Thus the galaxies cluster together. The velocity and density fields undergo a nonlinear evolution, which is expected to cause the negative average, negative skewness and pronounced tails of the radial-velocity PDF.
We have to be careful about the extent to which ZA can describe gravitational clustering. Of particular importance is the range of the separation r where ZA is applicable. It has been known that ZA is applicable only to a limited range, but the applicable range has not been estimated so far. The quantitative estimation is done for the first time in this paper. We subsequently demonstrate that Seto & Yokoyama (1998) made their ZA analysis outside its applicable range.
Throughout the main text, we adopt the Einstein-de Sitter universe with a Hubble constant H₀ = 50 km s⁻¹ Mpc⁻¹ for comparison with the previous works. In Section 2, an analytic form of the joint PDF for the radial and transverse velocities is derived with ZA. The applicable separation range of ZA is estimated in Section 3. We compare ZA with N-body simulations in Section 4. The origin of the negative average and negative skewness is discussed in Section 5. We discuss the existing relevant models in Section 6. The conclusion is presented in Section 7. The application to other cosmological models and so on are discussed in Appendices.
PAIRWISE VELOCITY DISTRIBUTION
Analytical form
Using ZA, we newly derive an analytic form of the joint PDF for the radial and transverse velocities at the separation r and the time t, P (v , v ⊥ |r, t). It is assumed that the vectors v n and v ⊥ n ⊥ have isotropic distributions. The joint PDF is normalized as
$$\int_{-\infty}^{+\infty}\!\int_{0}^{+\infty} P(v_\parallel, v_\perp \,|\, r, t)\, dv_\parallel\, 2\pi v_\perp\, dv_\perp = \xi(r,t) + 1, \tag{2}$$
where ξ(r, t) is the two-point correlation function for the number density of galaxies. The radial-velocity PDF, P(v∥|r, t), is obtained by integrating the joint PDF over the transverse velocity v⊥. Let us consider a relative motion of two galaxies. Their separation vector r = r(t) and pairwise velocity v = v(t) are written with ZA as
$$r = x_2 - x_1 = r_i + \frac{D}{\dot{D}_i}\, v_i, \qquad v = u_2 - u_1 = v_\parallel n_\parallel + v_\perp n_\perp = \frac{\dot{D}}{\dot{D}_i}\, v_i. \tag{3}$$
Here D = D(t) ∝ t^{2/3} is the linear growth factor of the density fluctuation. We set D = 1 for the present-day universe. The suffix i indicates quantities at the initial time t = ti. Since the galaxies move along straight lines, the relative motion lies on the plane determined by the vectors ri and vi.
With an increase of the time from t to t + dt, a galaxy pair is defined to move from (v∥, v⊥, r) to (v∥′, v⊥′, r′). From conservation of the number of galaxies, we have
$$P(v_\parallel', v_\perp' \,|\, r', t+dt)\, dv_\parallel'\, 2\pi v_\perp' dv_\perp'\, 4\pi r'^2 dr' = P(v_\parallel, v_\perp \,|\, r, t)\, dv_\parallel\, 2\pi v_\perp dv_\perp\, 4\pi r^2 dr. \tag{4}$$
From equation (3), the time evolution of dv∥ 2πv⊥dv⊥ 4πr²dr is obtained as
$$dv_\parallel'\, 2\pi v_\perp' dv_\perp'\, 4\pi r'^2 dr' = \left(1 + 3\,\frac{\ddot{D}}{\dot{D}}\, dt\right) dv_\parallel\, 2\pi v_\perp dv_\perp\, 4\pi r^2 dr. \tag{5}$$
The solution is
$$dv_\parallel\, 2\pi v_\perp dv_\perp\, 4\pi r^2 dr = \left(\frac{\dot{D}}{\dot{D}_i}\right)^{3} dv_{\parallel,i}\, 2\pi v_{\perp,i} dv_{\perp,i}\, 4\pi r_i^2 dr_i. \tag{6}$$
Thus the joint PDF at the time t is related to the PDF at the initial time ti as
$$P(v_\parallel, v_\perp \,|\, r, t) = \left(\frac{\dot{D}_i}{\dot{D}}\right)^{3} P(v_{\parallel,i}, v_{\perp,i} \,|\, r_i, t_i). \tag{7}$$
From equation (3), the initial separation ri is obtained as
$$r_i = \sqrt{\left(r - \frac{D}{\dot{D}}\, v_\parallel\right)^{2} + \left(\frac{D}{\dot{D}}\, v_\perp\right)^{2}}. \tag{8}$$
Likewise, the initial pairwise velocities v ,i and v ⊥,i are obtained as
$$v_{\parallel,i} = \frac{\dot{D}_i}{\dot{D}}\, \frac{1}{r_i} \left[ r v_\parallel - \frac{D}{\dot{D}} \left(v_\parallel^2 + v_\perp^2\right) \right], \qquad v_{\perp,i} = \frac{\dot{D}_i}{\dot{D}}\, \frac{r v_\perp}{r_i}, \tag{9}$$
(see also Appendix A). Seto & Yokoyama (1998) used ZA to derive directly an analytic form of the radial-velocity PDF, P(v∥|r, t). We prefer the joint PDF, P(v∥, v⊥|r, t), which has a simpler form and provides more straightforward information about, e.g., the origin of the negative skewness in the radial velocity (Section 5).
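The forward map (3) and the inverse relations (8) and (9) are purely algebraic, so they can be verified against each other numerically. The following sketch is an editorial check, not part of the paper; the growth-factor values D, Ḋ, Di, Ḋi are arbitrary placeholders. It evolves an initial pair along a straight line and recovers the initial separation and velocity components:

```python
import numpy as np

# Arbitrary placeholder values of the growth factor and its time derivative.
D, Ddot = 1.0, 1.0      # "present" time
Di, Ddoti = 0.1, 0.5    # initial time

# Initial separation vector and initial pairwise velocity (in the pair's plane).
ri_vec = np.array([10.0, 0.0])
vi_vec = np.array([-1.0, 0.5])

# Forward map, equation (3): straight-line relative motion.
r_vec = ri_vec + (D / Ddoti) * vi_vec
v_vec = (Ddot / Ddoti) * vi_vec

# Decompose the present velocity into radial and transverse components.
r = np.linalg.norm(r_vec)
n_par = r_vec / r
v_par = v_vec @ n_par
v_perp = np.linalg.norm(v_vec - v_par * n_par)

# Inverse map, equations (8) and (9).
ri = np.hypot(r - (D / Ddot) * v_par, (D / Ddot) * v_perp)
vi_par = (Ddoti / Ddot) * (r * v_par - (D / Ddot) * (v_par**2 + v_perp**2)) / ri
vi_perp = (Ddoti / Ddot) * r * v_perp / ri

# Compare with the directly computed initial components.
ni_par = ri_vec / np.linalg.norm(ri_vec)
assert np.isclose(ri, np.linalg.norm(ri_vec))
assert np.isclose(vi_par, vi_vec @ ni_par)
assert np.isclose(vi_perp, np.linalg.norm(vi_vec - (vi_vec @ ni_par) * ni_par))
```

The check confirms that (8) and (9) invert the straight-line motion (3) exactly, with the sign of v∥,i recovered.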
Initial condition
Here we determine the initial condition for P(v∥, v⊥|r, t). Although the same formulation was adopted in Seto & Yokoyama (1998), we give a more complete description for our discussion in Sections 3 and 5. The basic assumption is that the initial density fluctuation is random Gaussian while the initial peculiar-velocity field is homogeneous, isotropic, random and vorticity-free.
Since the initial density fluctuation is random Gaussian, the corresponding PDF of the radial and transverse velocities is also Gaussian (Fisher 1995),
$$P(v_{\parallel,i}, v_{\perp,i} \,|\, r_i, t_i) = \frac{1}{\sqrt{(2\pi)^3\, \sigma_\parallel^2(r_i)\, \sigma_\perp^4(r_i)}} \exp\left[ -\frac{1}{2} \left( \frac{v_{\parallel,i}^2}{\sigma_\parallel^2(r_i)} + \frac{v_{\perp,i}^2}{\sigma_\perp^2(r_i)} \right) \right]. \tag{10}$$
Thus we only have to obtain the dispersions of the initial pairwise velocities, σ∥²(ri) and σ⊥²(ri). The averages ⟨v∥,i⟩ and ⟨v⊥,i⟩ do not have to be considered because they vanish in the limit Di → 0.
Since the initial peculiar-velocity field is homogeneous, the pairwise-velocity dispersions are derived from the one-point velocity dispersion and the two-point velocity correlation:
$$\sigma_\parallel^2(r_i) = 2\left\langle u_{\parallel,i}^2(x_i) \right\rangle_s - 2\left\langle u_{\parallel,i}(x_i)\, u_{\parallel,i}(x_i + r_i) \right\rangle_s, \qquad \sigma_\perp^2(r_i) = 2\left\langle u_{\perp,i}^2(x_i) \right\rangle_s - 2\left\langle u_{\perp,i}(x_i)\, u_{\perp,i}(x_i + r_i) \right\rangle_s. \tag{11}$$
Here ⟨·⟩ₛ denotes a spatial average and should not be confused with ⟨·⟩, which denotes an average over galaxy pairs; u∥,i and u⊥,i are the radial and transverse components of the peculiar velocity ui, respectively. We relate the velocity correlations to the power spectrum of the density fluctuation as follows (Górski 1988). Using a theory for a vector field that is homogeneous, isotropic, random and vorticity-free (Monin & Yaglom 1975, section 12), we have
$$\left\langle u_{\parallel,i}(x_i)\, u_{\parallel,i}(x_i + r_i) \right\rangle_s = 2\int_0^\infty \left[ j_0(k_i r_i) - 2\, \frac{j_1(k_i r_i)}{k_i r_i} \right] E_i(k_i)\, dk_i, \qquad \left\langle u_{\perp,i}(x_i)\, u_{\perp,i}(x_i + r_i) \right\rangle_s = 2\int_0^\infty \frac{j_1(k_i r_i)}{k_i r_i}\, E_i(k_i)\, dk_i. \tag{12}$$
Here j0 and j1 are first-kind spherical Bessel functions,
$$j_0(x) = \frac{\sin x}{x} \qquad \text{and} \qquad j_1(x) = \frac{\sin x}{x^2} - \frac{\cos x}{x}, \tag{13}$$
and Ei(ki) is the power spectrum of the initial peculiar-velocity field,
$$\int_0^\infty E_i(k_i)\, dk_i = \frac{1}{2} \left\langle |u_i(x_i)|^2 \right\rangle_s. \tag{14}$$
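The closed forms in equation (13) can be sanity-checked against SciPy's spherical Bessel routine (this snippet is an editorial check, not part of the paper):

```python
import numpy as np
from scipy.special import spherical_jn

x = np.linspace(0.1, 20.0, 200)
j0 = np.sin(x) / x                       # equation (13)
j1 = np.sin(x) / x**2 - np.cos(x) / x    # equation (13)

assert np.allclose(j0, spherical_jn(0, x))
assert np.allclose(j1, spherical_jn(1, x))
```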
Using the linear perturbation, we relate the power spectrum of the initial peculiar-velocity field Ei(ki) to that of the initial density fluctuation Pi(ki). The linear peculiar-velocity field is described by the linear density field as
$$u_i(x_i) = i\, \frac{\dot{D}_i}{D_i} \int \frac{k_i}{k_i^2}\, \tilde{\delta}_i(k_i)\, \exp(i k_i \cdot x_i)\, \frac{dk_i}{(2\pi)^{3/2}}, \tag{15}$$
where δ̃i(ki) is the Fourier transform of the density contrast δi(xi) (Peebles 1993, section 21). The power spectrum Pi(ki) of the initial density fluctuation is
$$\left\langle \tilde{\delta}_i(k_i)\, \tilde{\delta}_i(k_i')^{*} \right\rangle_s = (2\pi)^3 P_i(k_i)\, \delta(k_i - k_i'), \tag{16}$$
where * denotes the complex conjugate. From equations (14)- (16), the power spectrum Ei(ki) of the initial peculiar-velocity field is obtained as
$$E_i(k_i) = 2\pi \left(\frac{\dot{D}_i}{D_i}\right)^{2} P_i(k_i). \tag{17}$$
Therefore, the radial- and transverse-velocity dispersions at the initial stage are
$$\sigma_\parallel^2(r_i) = \frac{8\pi}{3} \left(\frac{\dot{D}_i}{D_i}\right)^{2} \int \left[ 1 - 3 j_0(k_i r_i) + 6\, \frac{j_1(k_i r_i)}{k_i r_i} \right] P_i(k_i)\, dk_i = \left(\frac{\dot{D}_i}{D_i}\right)^{2} R_\parallel^2(r_i),$$
$$\sigma_\perp^2(r_i) = \frac{8\pi}{3} \left(\frac{\dot{D}_i}{D_i}\right)^{2} \int \left[ 1 - 3\, \frac{j_1(k_i r_i)}{k_i r_i} \right] P_i(k_i)\, dk_i = \left(\frac{\dot{D}_i}{D_i}\right)^{2} R_\perp^2(r_i). \tag{18}$$
The initial condition for P(v∥, v⊥|r, t) is determined by equations (10) and (18).
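Equations (7)-(10), together with the mapping (8) and (9), assemble into a compact prescription for the joint PDF. The sketch below is an editorial illustration with hypothetical ingredients: toy dispersion curves σ∥(ri) = σ⊥(ri) = 0.02√ri (chosen only so that σ∥ grows with ri, as equation (29) later requires) and arbitrary growth-factor values. It shows the resulting PDF favouring infall, P(−V) > P(+V), as discussed in Section 5:

```python
import numpy as np

D, Ddot = 1.0, 1.0       # placeholders for the growth factor and its rate
Di, Ddoti = 0.1, 0.1

def sigma(ri):
    # Toy dispersion, increasing with separation (cf. equation 29).
    return 0.02 * np.sqrt(ri)

def joint_pdf(v_par, v_perp, r):
    """ZA joint PDF P(v_par, v_perp | r), equations (7)-(10)."""
    ri = np.hypot(r - (D / Ddot) * v_par, (D / Ddot) * v_perp)       # eq. (8)
    vi_par = (Ddoti / Ddot) * (r * v_par
             - (D / Ddot) * (v_par**2 + v_perp**2)) / ri             # eq. (9)
    vi_perp = (Ddoti / Ddot) * r * v_perp / ri                       # eq. (9)
    s_par, s_perp = sigma(ri), sigma(ri)
    gauss = np.exp(-0.5 * ((vi_par / s_par)**2 + (vi_perp / s_perp)**2))
    norm = np.sqrt((2.0 * np.pi)**3 * s_par**2 * s_perp**4)          # eq. (10)
    return (Ddoti / Ddot)**3 * gauss / norm                          # eq. (7)

r, V, v_perp = 10.0, 2.0, 0.5
p_infall = joint_pdf(-V, v_perp, r)
p_expand = joint_pdf(+V, v_perp, r)
assert p_infall > p_expand   # infall dominates: negative average and skewness
```

The asymmetry arises entirely from σ∥(ri) increasing with ri; with a constant σ, the same code gives a symmetric PDF.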
Since the peculiar velocity depends on the gravitational potential produced by the density fluctuation, equation (18) includes the information that galaxies are about to cluster together. In fact, the radial-velocity dispersion σ∥²(ri) reflects the two-point velocity correlation ⟨u∥,i(xi) u∥,i(xi + ri)⟩ₛ. At a small separation ri, where galaxies are about to move in the same direction, the correlation is positive. At a moderately large separation, where galaxies are about to cluster from the opposite sides, the correlation is negative. At a very large separation, the correlation is absent.
Power spectrum of initial density fluctuation
The power spectrum of the initial density fluctuation Pi(ki) is adopted from the standard cold dark matter model (Peebles 1993, section 25):
$$P_i(k_i) = \frac{B D_i^2\, k_i}{\left[ 1 + \left\{ \alpha k_i + (\beta k_i)^{3/2} + (\gamma k_i)^2 \right\}^{\nu} \right]^{2/\nu}}, \tag{19}$$
where α = 25.6 Mpc, β = 12 Mpc, γ = 6.8 Mpc and ν = 1.13 for the Einstein-de Sitter universe with H₀ = 50 km s⁻¹ Mpc⁻¹ (Bond & Efstathiou 1984; Efstathiou, Bond & White 1992). The normalization factor B is determined from the temperature fluctuation of the cosmic microwave background:
$$B = \frac{12\, \Omega_0^{-1.54}\, c^4}{5\pi H_0^4} \left(\frac{Q_{\rm rms}}{T_0}\right)^{2}. \tag{20}$$
Here the quadrupole fluctuation amplitude Qrms is set to be 9.5 µK with the present-day temperature T0 = 2.73 K. The same initial power spectrum was adopted in N -body simulations of Fisher et al. (1994) and Zurek et al. (1994), which are to be compared with our calculation, and also in the ZA calculation of Seto & Yokoyama (1998). Other initial power spectra and cosmological models are studied in Appendix B.
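Equation (18) can be evaluated by direct quadrature over the spectrum (19). The sketch below is an editorial check that sets B Di² = 1, since only the spectral shape matters for the ratios tested here (the actual normalization follows from equation 20). It verifies two limits implied by equations (18) and (25): the dispersions vanish as ri → 0 (close pairs move coherently) and approach R∞ at large separation (uncorrelated velocities):

```python
import numpy as np

alpha, beta, gamma, nu = 25.6, 12.0, 6.8, 1.13   # Mpc, equation (19)

def P(k):
    # Initial CDM spectrum, equation (19), with B * Di**2 set to 1.
    return k / (1.0 + (alpha*k + (beta*k)**1.5 + (gamma*k)**2)**nu)**(2.0/nu)

k = np.linspace(1e-4, 10.0, 200001)   # wavenumber grid in Mpc^-1

def trapz(f, x):
    # Simple trapezoidal quadrature.
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def R2(r, transverse=False):
    # Squared R_par or R_perp of equation (18) at separation r.
    x = k * r
    j0 = np.sin(x) / x
    j1_over_x = (np.sin(x) / x**2 - np.cos(x) / x) / x
    w = 1.0 - 3.0*j1_over_x if transverse else 1.0 - 3.0*j0 + 6.0*j1_over_x
    return 8.0*np.pi/3.0 * trapz(w * P(k), k)

R_inf2 = 8.0*np.pi/3.0 * trapz(P(k), k)          # equation (25)

# ri -> 0: both dispersions vanish.
assert R2(0.01) < 1e-2 * R_inf2 and R2(0.01, True) < 1e-2 * R_inf2
# ri -> infinity: both approach R_inf.
assert abs(R2(5000.0) - R_inf2) < 0.1 * R_inf2
assert abs(R2(5000.0, True) - R_inf2) < 0.1 * R_inf2
```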
Note that there are relations v∥,i ∝ Ḋi, v⊥,i ∝ Ḋi, σ∥(ri) ∝ Ḋi, σ⊥(ri) ∝ Ḋi and Pi(ki) ∝ Di² in equations (9), (18) and (19). Thus the initial condition for P(v∥, v⊥|r, t) does not depend on the values of Di and Ḋi.
The initial spectrum Pi(ki) is used without any modification. Coles, Melott & Shandarin (1993) suggested that ZA's performance is improved if small-scale fluctuations are removed from the initial spectrum by using a window function. Although this appears to be true in the calculation of the matter distribution, the amplitudes of the pairwise velocity dispersions would then be incorrectly underestimated, as shown in Seto & Yokoyama (1998). Therefore, we do not adopt this modification.
APPLICABLE RANGE OF ZA

Analytical estimation
Let us consider a pair of galaxies that are approaching each other. If its initial separation is ri, its typical initial radial velocity is estimated as v∥,i ≃ −σ∥(ri) = −(Ḋi/Di) R∥(ri). From equation (3), we have
$$r_i - r \simeq -\frac{D}{\dot{D}_i}\, v_{\parallel,i} \simeq \frac{D}{D_i}\, R_\parallel(r_i). \tag{21}$$
The shell crossing occurs if r = 0. Thus ZA is valid as far as ri ≫ (D/Di) R∥(ri). Since this condition implies r ≃ ri, we replace ri with r and obtain
$$r \gg \frac{D}{D_i}\, R_\parallel(r). \tag{22}$$
This is ZA's applicable range. Fig. 1 compares the right-hand side of equation (22) with the left-hand side as a function of the separation r at D = 0.1, 0.2, 0.4, 1, 2 and 10. ZA is applicable over the separations where the dashed line (the left-hand side) is well above the solid line (the right-hand side). It is evident that the applicable range becomes progressively narrower as time progresses. At D = 1, ZA is applicable to r ≫ 10 Mpc.
Numerical estimation
The shell crossing affects the time evolution of the two-point correlation function. This is because, at a given separation, galaxies become less clustered after the onset of the shell crossing. Using ZA, we numerically calculate the two-point correlation function ξ and its time derivative ∂ξ/∂t at D = 1 (see equation 2). The results are plotted as a function of the separation r in Fig. 2. The time growth ∂ξ/∂t of the two-point correlation function is maximal at r ≃ 10 Mpc. This separation coincides with that for r = (D/Di) R∥(r) (fig. 1 in Section 3.1). Thus ZA's applicable range is r ≫ 10 Mpc at D = 1. Any discussion of ZA has to be restricted to this range.
Applicability in velocity range
Even if the separation r is within ZA's applicable range, ZA is valid only for the velocity range
$$v_\parallel \ll \frac{\dot{D}}{D}\, r. \tag{23}$$
For v∥ > (Ḋ/D) r > 0, the initial radial velocity is negative, as shown in equation (9). Such a galaxy pair has once clustered together with a negative radial velocity, experienced the shell crossing, and then turned to have a positive radial velocity. In the limit |v∥| → ∞, equations (7)-(10) and (18) yield
$$P(v_\parallel, v_\perp \,|\, r, t) \to \frac{1}{(2\pi)^{3/2} R_\infty^3} \left(\frac{D_i}{\dot{D}}\right)^{3} \exp\left[ -\frac{v_\parallel^2 + v_\perp^2}{2 R_\infty^2} \left(\frac{D_i}{\dot{D}}\right)^{2} \right], \tag{24}$$
where
$$R_\infty^2 = \frac{8\pi}{3} \int_0^\infty P_i(k_i)\, dk_i. \tag{25}$$
This is an overestimate for the positive radial velocity because of the shell crossing, as stated above.
COMPARISON WITH N-BODY SIMULATIONS
The radial-velocity PDFs are shown in Fig. 3 for r = 1, 5, 10 and 23 Mpc at D = 0.1, 0.2, 0.4, 1, 2 and 10 (solid lines). They are compared with the results of N -body simulations of Zurek et al. (1994) and Fisher et al. (1994) at D = 1 (dashed lines).
For reference, we also show Gaussian PDFs (dotted lines). Fisher et al. (1994) presented the radial-velocity PDF at D = 1 for ξ = 0.1, which corresponds to r ≃ 23 Mpc (fig. 2a). This separation is within ZA's applicable range. The PDF for ξ = 0.1 in the N-body simulation is in agreement with the PDF at r = 23 Mpc in ZA (fig. 3). Both PDFs exhibit a negative average as well as negative skewness. These are surely due to gravitational clustering of galaxies.
The average ⟨v∥⟩, standard deviation ⟨(v∥ − ⟨v∥⟩)²⟩^{1/2} and skewness ⟨(v∥ − ⟨v∥⟩)³⟩/⟨(v∥ − ⟨v∥⟩)²⟩^{3/2} of the radial velocities are −190 km s⁻¹, 430 km s⁻¹ and −0.69, respectively, in the N-body simulation. They are −133 km s⁻¹, 462 km s⁻¹ and −0.36, respectively, in ZA. The velocity limit (Ḋ/D) r is 1150 km s⁻¹. The average and standard deviation in the N-body simulation are nearly the same as those in ZA. However, the skewness in the N-body simulation is somewhat different from that in ZA: the radial-velocity PDF in the N-body simulation has a more pronounced tail on the negative side. This is because gravitational acceleration of clustering galaxies is more significant than that assumed in ZA, which simply extrapolates the initial velocity vi (equation 3).
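The moments quoted above follow the standard sample definitions. As a small editorial illustration with synthetic data (not the simulation samples), a negatively skewed sample gives a negative average and negative skewness:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic, negatively skewed sample (sign-flipped lognormal draws).
v = -np.exp(rng.standard_normal(100000))

mean = v.mean()
std = ((v - mean)**2).mean()**0.5
skew = ((v - mean)**3).mean() / ((v - mean)**2).mean()**1.5

assert mean < 0 and skew < 0   # negative average and negative skewness
```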
For the separation r = 23 Mpc, the shell crossing becomes significant and ZA becomes invalid at D ≃ 2 (fig. 1). The PDF becomes symmetric (fig. 3). Eventually, almost all the galaxy pairs experience the shell crossing, and the PDF becomes Gaussian, as observed at D = 10. Zurek et al. (1994) presented the radial-velocity PDF at D = 1 for r ≃ 1, 5 and 10 Mpc. These separations are outside ZA's applicable range at D = 1 (figs 1 and 2). Fig. 3 shows the evolution of the PDF. At D = 0.1, ZA is valid, and the PDF exhibits a negative average and negative skewness. With increasing D, the shell crossing becomes significant, ZA becomes invalid, and the PDF becomes symmetric. At D = 1, the PDF in ZA is quite different from that in the N-body simulation. While ZA yields ⟨v∥⟩ = +8, +9 and −54 km s⁻¹ for r = 1, 5 and 10 Mpc, respectively, the N-body simulation yields ⟨v∥⟩ = −280, −355 and −276 km s⁻¹. Seto & Yokoyama (1998) made a ZA calculation for r = 1-11 Mpc at D = 1. Since ZA and N-body simulations yield different radial-velocity PDFs, they concluded that gravitational clustering is not incorporated completely in ZA. Since the PDFs for r ≃ 1 and 5 Mpc in ZA and N-body simulations exhibit exponential tails, ln P(v∥|r, t) ∝ −|v∥|, they also concluded that simple kinematics as in ZA results in these tails. However, the separations r = 1-11 Mpc at D = 1 are outside ZA's applicable range. The exponential tails at r = 1 and 5 Mpc in ZA are spurious and attributable to galaxies that have experienced the shell crossing, for the positive side of the PDF, and to galaxies that are still in gravitational infall, for the negative side (see also Section 6).
ORIGIN OF NEGATIVE SKEWNESS
The negative average and negative skewness of the radial-velocity PDF originate in galaxy clustering, i.e., the infall v∥ < 0 dominates over the expansion v∥ > 0. We discuss how this process is incorporated in ZA. Since ZA is valid at (D/Di) R∥(r) ≪ r, we separately study the velocity ranges |v∥| ≪ (Ḋ/Di) R∥, (Ḋ/Di) R∥ ≪ |v∥| ≪ (Ḋ/D) r and (Ḋ/D) r ≪ |v∥|, where (Ḋ/Di) R∥ = 495 km s⁻¹ and (Ḋ/D) r = 1150 km s⁻¹ at D = 1 for r = 23 Mpc.
First, we study the velocity range (Ḋ/Di) R∥ ≪ |v∥| ≪ (Ḋ/D) r. Since most of the galaxy pairs have relatively small transverse velocities v⊥ ≃ (Ḋ/Ḋi) σ⊥(r) ≃ (Ḋ/Di) R⊥(r) ≃ (Ḋ/Di) R∥(r), equations (8) and (9) are simplified as
$$r_i \simeq r \left( 1 - \frac{D}{\dot{D}}\, \frac{v_\parallel}{r} \right), \tag{26}$$
$$v_{\parallel,i} \simeq \frac{\dot{D}_i}{\dot{D}}\, v_\parallel \qquad \text{and} \qquad v_{\perp,i} \simeq \frac{\dot{D}_i}{\dot{D}}\, v_\perp. \tag{27}$$
Thus the initial separation ri depends on the sign of the present radial velocity v∥:
ri(v = +V ) < ri(v = −V ),(28)
where V > 0. For the initial separation ri, the radial-velocity dispersion σ∥(ri) has the following dependence, as shown in Fig. 4a:
$$\sigma_\parallel(r_i') < \sigma_\parallel(r_i'') \qquad \text{for} \quad r_i' < r_i'' \lesssim 100\ \mathrm{Mpc}. \tag{29}$$
The initial PDF, P(v∥,i, v⊥,i|ri, ti), is Gaussian with the standard deviation σ∥(ri). Equation (27) implies |v∥,i| ≫ σ∥(ri). For such an argument that exceeds the standard deviation, a Gaussian function with a larger standard deviation yields a larger value, as shown in Fig. 4b. Hence we expect
P v ,i , v ⊥,i |r ′ i , ti < P v ,i , v ⊥,i |r ′′ i , ti for r ′ i < r ′′ i ,(30)
which leads to
P v = +V, v ⊥ |r, t < P v = −V, v ⊥ |r, t .(31)
Thus the infall v_∥ < 0 dominates over the expansion v_∥ > 0 in this velocity range. The PDF is accordingly asymmetric.

Second, we study the velocity range |v_∥| ≪ Ḋ D_i⁻¹ R_∥. Since the absolute value of v_∥ tends to be much smaller than v_⊥, the sign of v_∥ is not important to the values of r_i and v_∥,i in equations (8) and (9). We expect P(v_∥ = +V, v_⊥ | r, t) ≃ P(v_∥ = −V, v_⊥ | r, t). The radial velocities among galaxies with small peculiar velocities retain the initial Gaussian character. These galaxies have not moved significantly from their initial positions.
Third, we study the velocity range |v_∥| ≫ Ḋ D⁻¹ r. Equations (8) and (9) yield r_i ≃ D Ḋ⁻¹ |v_∥| and hence v_∥,i ≃ −Ḋ_i Ḋ⁻¹ |v_∥|, which in turn yields P(v_∥ = +V, v_⊥ | r, t) ≃ P(v_∥ = −V, v_⊥ | r, t). This is due to shell crossing as discussed in Section 3.3. For the real universe, the radial-velocity PDF is expected to be asymmetric in this velocity range.

Therefore, the negative average and negative skewness in ZA stem from the inequality P(v_∥ = +V, v_⊥ | r, t) < P(v_∥ = −V, v_⊥ | r, t) in the velocity range Ḋ D_i⁻¹ R_∥ ≪ |v_∥| ≪ Ḋ D⁻¹ r.
The existence of this velocity range is assured by the condition for ZA's applicability, D D_i⁻¹ R_∥ ≪ r. Whenever ZA is applicable, its radial-velocity PDF has a negative average and negative skewness.
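The inequality chain of equations (26)-(31) can be checked with toy numbers. This sketch uses arbitrary units and an illustrative, monotonically increasing dispersion profile; none of the numerical values are taken from the paper:

```python
import math

def gaussian(v, sigma):
    """Zero-mean 1-D Gaussian PDF."""
    return math.exp(-0.5 * (v / sigma) ** 2) / (math.sqrt(2.0 * math.pi) * sigma)

# --- toy inputs (illustrative only, not the paper's values) ---
r = 23.0       # present separation
shift = 4.0    # (D/Ddot) * V: displacement built up by a pair with |v_par| = V

# eq. (26): initial separations for expanding (+V) and infalling (-V) pairs
ri_plus = r - shift    # r_i(v_par = +V)
ri_minus = r + shift   # r_i(v_par = -V)
assert ri_plus < ri_minus                      # eq. (28)

def sigma_par(ri):
    """Toy dispersion profile, increasing with r_i (eq. 29, r_i below ~100 Mpc)."""
    return 1.0 + 0.05 * ri

# eq. (27): the initial radial speed is the same for +V and -V pairs and
# greatly exceeds the initial dispersion, |v_par_i| >> sigma_par(r_i)
v_par_i = 10.0

p_plus = gaussian(v_par_i, sigma_par(ri_plus))    # maps to P(v_par = +V | r, t)
p_minus = gaussian(v_par_i, sigma_par(ri_minus))  # maps to P(v_par = -V | r, t)
assert p_plus < p_minus   # eqs. (30)-(31): infall dominates over expansion
```

Because the argument v_par_i sits far out in the Gaussian tail, the PDF evaluated with the larger dispersion (the infalling branch) always wins, which is exactly the mechanism behind the negative average.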
The dependence of the radial-velocity dispersion σ_∥(r_i) on the separation r_i in equation (29) is also an assured behavior. As discussed in Section 2.2, the radial-velocity dispersion σ_∥(r_i) reflects the difference between the one-point velocity dispersion ⟨u_∥,i²(x_i)⟩_s and the two-point velocity correlation ⟨u_∥,i(x_i) u_∥,i(x_i + r_i)⟩_s. They are identical at r_i = 0. With an increase of the separation r_i, the two-point correlation decreases slowly, in accordance with motions of galaxies that are about to move coherently and cluster together.
6 DISCUSSION ON RELEVANT MODELS
The pairwise velocity was studied by Juszkiewicz et al. (1998) using a second-order Eulerian perturbation. At the separation of 20.8 Mpc, the average, standard deviation and skewness of the radial velocities are −200 km s⁻¹, 430 km s⁻¹ and −1.00, respectively. They are −190 km s⁻¹, 440 km s⁻¹ and −0.75, respectively, in the N-body simulation under the same conditions. The agreement is better than it is in ZA (Section 4), probably because ZA does not conserve momentum (Juszkiewicz et al. 1998). This could be a disadvantage of ZA, in addition to the problem of shell crossing. The second-order Eulerian perturbation has its own disadvantage: the formulation is too complicated to yield the radial-velocity PDF. Thus ZA and the Eulerian perturbation are complementary. Juszkiewicz et al. (1998) showed that ⟨(v_∥ − ⟨v_∥⟩)³⟩ approximately scales as ⟨v_∥⟩⟨(v_∥ − ⟨v_∥⟩)²⟩.
The negative skewness is induced by the negative average that reflects gravitational clustering of galaxies as discussed in Section 5.
The Lagrangian and Eulerian perturbations are useless in the strongly nonlinear regime, ξ(r) ≫ 1 and accordingly r ≪ 10 Mpc at D = 1, for which N-body simulations have shown that the radial-velocity PDF has an exponential tail (Section 4). We favor the model of Sheth (1996) and Diaferio & Geller (1996), in which the exponential tails were reproduced by averaging relative velocities of galaxy pairs belonging to the same galaxy clusters. These clusters were assumed to be virialized as well as isothermal and to have Gaussian velocity distributions with various dispersions according to the theory of Press & Schechter (1974). The PDF is symmetric, i.e., ⟨v_∥⟩ = 0, because gravitational clustering was assumed not to exist. Since the N-body simulation yields ⟨v_∥⟩ < 0 for r = 1 and 5 Mpc (Section 4), we consider that gravitational clustering still exists in practice at these separations.
7 CONCLUSION
The radial pairwise velocities of galaxies exhibit a negative average and negative skewness, with pronounced tails, in the quasi-nonlinear regime. To understand their origin, we have used ZA, an approximation for gravitational clustering, and analytically studied the radial-velocity PDF.
We have estimated for the first time the separation range where ZA is applicable. For the cold dark matter model of the Einstein-de Sitter universe with H0 = 50 km s⁻¹ Mpc⁻¹, the applicable range is r ≫ 10 Mpc at D = 1 (Section 3). Any discussion on ZA has to be restricted within this range.
We have compared our ZA calculation with the result of the N-body simulation done under the same initial condition by Fisher et al. (1994). ZA successfully reproduces the negative average and negative skewness of the radial-velocity PDF (Section 4).
The negative average and negative skewness originate in galaxy clustering according to the initial gravitational potential, i.e., the infall v_∥ < 0 dominates over the expansion v_∥ > 0, as pointed out by Juszkiewicz et al. (1998). This information is contained in the dependence of the initial velocity dispersion on the initial separation (Section 5).
The radial-velocity PDF in N-body simulations has a pronounced tail on the negative side. Since ZA cannot fully reproduce this property, we have attributed it to gravitational acceleration of clustering galaxies that is not fully incorporated in ZA (Section 4). To confirm this conclusion, we would require higher-order approximations, e.g., post-ZA and post-post-ZA, which use not only the initial velocity v_i but also the initial acceleration v̇_i and so on (Bouchet et al. 1995; see also ). The approximations Padé-post-ZA and Padé-post-post-ZA developed by would also be useful.

(Ω0 = 0.3, λ0 = 0.0, h = 0.83 and Γ = 0.25). Here Ω0 is the density parameter, λ0 is the normalized cosmological constant, and h is defined as H0/(100 km s⁻¹ Mpc⁻¹). Thus SCDM is the Einstein-de Sitter universe, while LCDM is a flat universe with a cosmological constant and OCDM is an open universe. The power spectrum of the initial density fluctuation P_i(k_i) was taken from the cold dark matter model of Bardeen et al. (1986; see also Sugiyama 1995):
P_i(k_i) = B D_i² k_i [ln(1 + 2.34q)/(2.34q)]² [1 + 3.89q + (16.1q)² + (5.46q)³ + (6.71q)⁴]^(−1/2),
where q = k_i/(Γh). The normalization factor B was determined by comparing the radial-velocity dispersion at infinity, σ_∥(r → ∞), in the linear theory of equation (18)
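The quoted fit can be coded directly. Here B and D_i are left as free normalization factors (set to 1), and the default Γ and h are the SCDM values given in the appendix; this is a sketch of the formula, not the paper's normalized spectrum:

```python
import math

def P_i(ki, B=1.0, Di=1.0, Gamma=0.5, h=0.5):
    """Initial CDM power spectrum from the fit quoted above (Bardeen et al. 1986)."""
    q = ki / (Gamma * h)
    transfer_sq = (math.log(1.0 + 2.34 * q) / (2.34 * q)) ** 2
    poly = 1.0 + 3.89 * q + (16.1 * q) ** 2 + (5.46 * q) ** 3 + (6.71 * q) ** 4
    return B * Di ** 2 * ki * transfer_sq * poly ** -0.5

# sanity checks: P_i ~ B * Di^2 * ki on large scales (q -> 0),
# and is strongly suppressed on small scales
assert abs(P_i(1e-6) / 1e-6 - 1.0) < 1e-2
assert P_i(10.0) < P_i(0.05)
```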
Figure 1. Graphical representation of equation (22) at D = 0.1, 0.2, 0.4, 1, 2 and 10 as a function of the separation r. The dashed line denotes the left-hand side of the equation. The solid lines denote the right-hand side. They are in units of Mpc. ZA is applicable over the separations where the dashed line is well above the solid line.
Figure 2. Two-point correlation function ξ (a) and its time derivative ∂ξ/∂t (b) at D = 1 in ZA. We show ∂ξ/∂t in units of t0⁻¹, where t0 is the present age of the universe. The abscissa is the separation r in units of Mpc.
Figure 3. Radial-velocity PDFs. Solid lines: ZA calculation at D = 0.1, 0.2, 0.4, 1, 2 and 10 for r = 1, 5, 10 and 23 Mpc. Dashed lines: N-body simulations at D = 1 of Zurek et al. (1994) for r = 1–2, 5–6 and 10–11 Mpc and of Fisher et al. (1994) for r = 23 Mpc. Dotted lines: Gaussian PDFs. The abscissa is in units of the radial-velocity dispersion σ_∥(r) in ZA, which is 462 km s⁻¹ at D = 1 for r = 23 Mpc.
Figure 4. (a) Dependence of the initial pairwise-velocity dispersions σ_∥(r_i) (solid line) and σ_⊥(r_i) (dashed line) on the initial separation r_i. The dispersions are in arbitrary units. The separation is in units of Mpc. (b) Two Gaussian PDFs with different standard deviations.
Figure B1. Radial-velocity PDFs for SCDM, LCDM and OCDM at r = 22.8 h⁻¹ Mpc. Solid lines: ZA calculation. Dashed lines: N-body simulation of Magira et al. (2000). Dotted lines: Gaussian PDFs with the same dispersions σ_∥(r) as ZA's PDFs. The abscissa is in units of km s⁻¹.
with that in the N-body simulation (591, 606 and 603 km s⁻¹ for SCDM, LCDM and OCDM). Fig. B1 shows the radial-velocity PDF at 22.8 h⁻¹ Mpc. This separation is within ZA's applicable range. ZA and the N-body simulation are in satisfactory agreement, except for the tails of the PDF, which are pronounced in the N-body simulation. In SCDM, the average, standard deviation and skewness are −66 km s⁻¹, 589 km s⁻¹ and −0.22 in the N-body simulation, while they are −88 km s⁻¹, 551 km s⁻¹ and −0.18 in ZA. In LCDM, the average, standard deviation and skewness are −153 km s⁻¹, 616 km s⁻¹ and −0.51 in the N-body simulation, while they are −155 km s⁻¹, 496 km s⁻¹ and −0.38 in ZA. In OCDM, the average, standard deviation and skewness are −127 km s⁻¹, 618 km s⁻¹ and −0.40 in the N-body simulation, while they are −136 km s⁻¹, 512 km s⁻¹ and −0.28 in ZA.
⋆ E-mail: [email protected], [email protected] (AY); [email protected] (MM); [email protected] (HM)
3 APPLICABLE RANGE OF ZEL'DOVICH APPROXIMATION

Galaxies in ZA continue to move straight, and pass by each other even if they once cluster together. Since actual clustering galaxies are bounded gravitationally, ZA is no longer valid after this so-called shell crossing. At a fixed time, ZA is not applicable to small separations where shell crossing is statistically significant. In Section 3.1, we analytically estimate the separation range where ZA is applicable. In Section 3.2, we make a numerical estimate on the basis of the time evolution of the two-point correlation function. The applicability in the velocity range is also discussed in Section 3.3.
ACKNOWLEDGMENTS

This paper is in part the result of AY's research toward fulfillment of the requirements of the Ph.D. degree at Ochanomizu University. The authors are grateful to the referee for helpful comments.

APPENDIX A: TIME EVOLUTION OF PROBABILITY DISTRIBUTION FUNCTION

Let us expand the joint PDF, P(v_∥′, v_⊥′ | r′, t + dt), around the time t using ZA (equations 3 and 5); the resulting expansion is equation (A1). From comparison between equations (4) and (A1), we obtain the time-evolution equation of the joint PDF, equation (A2). This equation is equivalent to a collisionless Boltzmann equation because ZA ignores encounters among the individual galaxies. The first, second and third terms represent the acceleration, rotation and translation of the galaxies, respectively. However, (v_∥, v_⊥, r) do not constitute generalized coordinates. This fact yields the third term, 3P, in the parenthesis of the first term (see equation 5).

The integration of equation (A2) by dv_∥ 2πv_⊥ dv_⊥ yields the evolution of the two-point correlation function ξ(r, t), which corresponds to a continuity equation. This is a reasonable result because the galaxy number is conserved, as in equation (4).

APPENDIX B: PAIRWISE VELOCITIES IN OTHER COSMOLOGICAL MODELS

Thus far we have assumed the Einstein-de Sitter universe with a Hubble constant H0 = 50 km s⁻¹ Mpc⁻¹. The application to other cosmological models is straightforward. We only have to know the values of D and Ḋ and substitute them into equations (7)-(9). The functional forms of D are summarized in Suto (1993). We compare ZA with N-body simulations of Magira et al. (2000, see also Suto et al. 1999). They considered three models, SCDM (Ω0 = 1.0, λ0 = 0.0, h = 0.50 and Γ = 0.50), LCDM (Ω0 = 0.3, λ0 = 0.7, h = 0.70 and Γ = 0.21) and OCDM
REFERENCES

Bardeen J.M., Bond J.R., Kaiser N., Szalay A.S., 1986, ApJ, 304, 15
Bond J.R., Efstathiou G., 1984, ApJ, 285, L45
Bouchet F.R., Colombi S., Hivon E., Juszkiewicz R., 1995, A&A, 296, 575
Coles P., Melott A.L., Shandarin S.F., 1993, MNRAS, 260, 765
Diaferio A., Geller M.J., 1996, ApJ, 467, 19
Efstathiou G., Bond J.R., White S.D.M., 1992, MNRAS, 258, 1p
Efstathiou G., Frenk C.S., White S.D.M., Davis M., 1988, MNRAS, 235, 715
Fisher K.B., 1995, ApJ, 448, 494
Fisher K.B., Davis M., Strauss M.A., Yahil A., Huchra J.P., 1994, MNRAS, 267, 927
Górski K., 1988, ApJ, 332, L7
Juszkiewicz R., Fisher K.B., Szapudi I., 1998, ApJ, 504, L1
Magira H., Jing Y.P., Suto Y., 2000, ApJ, 528, 30
Matsubara T., Yoshisato A., Morikawa M., 1998, ApJ, 504, 7
Monin A.S., Yaglom A.M., 1975, Statistical Fluid Mechanics: Mechanics of Turbulence, vol. 2, ed. J.L. Lumley. MIT Press, Cambridge, MA
Peebles P.J.E., 1993, Principles of Physical Cosmology. Princeton Univ. Press, Princeton, NJ
Press W., Schechter P., 1974, ApJ, 187, 425
Seto N., Yokoyama J., 1998, ApJ, 492, 421
Sheth R.K., 1996, MNRAS, 279, 1310
Sugiyama N., 1995, ApJS, 100, 281
Suto Y., 1993, Prog. Theor. Phys., 90, 1173
Suto Y., Magira H., Jing Y.P., Matsubara T., Yamamoto K., 1999, Prog. Theor. Phys. Suppl., 133, 183
Yoshisato A., Matsubara T., Morikawa M., 1998, ApJ, 498, 48
Zel'dovich Y.B., 1970, A&A, 5, 84
Zurek W.H., Quinn P.J., Salmon J.K., Warren M.S., 1994, ApJ, 431, 559
doi: 10.1103/PhysRevX.10.011003
arXiv: 1903.09287
Spatiotemporal Mapping of Photocurrent in a Monolayer Semiconductor Using a Diamond Quantum Sensor

Brian B. Zhou*, Paul C. Jerger*, Kan-Heng Lee, Masaya Fukami, Fauzia Mujid, Jiwoong Park, David D. Awschalom

Institute for Molecular Engineering, University of Chicago, Chicago, Illinois 60637, USA
Department of Physics, Boston College, Chestnut Hill, Massachusetts 02467, USA
School of Applied and Engineering Physics, Cornell University, Ithaca, NY 14853, USA
Department of Chemistry and James Franck Institute, University of Chicago, Chicago, Illinois 60637, USA
Institute for Molecular Engineering and Materials Science Division, Argonne National Laboratory, Lemont, Illinois 60439, USA

* These authors contributed equally to this work.

The detection of photocurrents is central to understanding and harnessing the interaction of light with matter. Although widely used, transport-based detection averages over spatial distributions and can suffer from low photocarrier collection efficiency. Here, we introduce a contact-free method to spatially resolve local photocurrent densities using a proximal quantum magnetometer. We interface monolayer MoS2 with a near-surface ensemble of nitrogen-vacancy centers in diamond and map the generated photothermal current distribution through its magnetic field profile. By synchronizing the photoexcitation with dynamical decoupling of the sensor spin, we extend the sensor's quantum coherence and achieve sensitivities to alternating current densities as small as 20 nA/µm. Our spatiotemporal measurements reveal that the photocurrent circulates as vortices, manifesting the Nernst effect, and rises with a timescale indicative of the system's thermal properties. Our method establishes an unprecedented probe for optoelectronic phenomena, ideally suited to the emerging class of two-dimensional materials, and stimulates applications towards large-area photodetectors and stick-on sources of magnetic fields for quantum control.
Introduction
The extraordinary features of two-dimensional van der Waals (2D vdW) systems have opened new directions for tailoring the interaction of light with matter, with potential to impact technologies for imaging, communications, and energy harvesting. The detection of photo-induced carriers is critical to realizing practical photosensing and photovoltaic devices [1][2][3] , as well as to characterizing novel photo-responses, including optical manipulation of spin and valley indices 4,5 , circular 6-8 and shift 9,10 photocurrents driven by non-trivial Berry curvature and scattering-protected photocurrents at a Dirac point 11 . Transport-based detection of photocurrents in 2D materials is susceptible to inefficient photocarrier extraction, requiring light to be directed near junctions with strong built-in electric field, and thus complicates the scale-up of devices to practical sizes [1][2][3] . To expand our understanding of light-matter interaction and overcome existing technical limitations, new photodetection approaches that offer high spatial and temporal resolution are needed.
In this work, we demonstrate a novel technique using an embedded quantum magnetometer [12][13][14] to detect and spatially resolve photocurrent densities via their local magnetic field signature. We transfer a monolayer MoS 2 (1L-MoS 2 ) sheet grown by metal-organic chemical vapor deposition 15 (MOCVD) onto a diamond chip hosting a near-surface ensemble of nitrogenvacancy (NV) centers. The magnetic field due to photocurrents, driven in this case by the photothermoelectric effect (PTE) 16,17 , modifies the quantum precession of the NV center spin, inducing a phase that can be optically detected [18][19][20] . Due to the near-field nature of our probe, our method does not require remote carrier extraction, and thus eliminates the need for electrical contacts and avoids challenges due to carrier trapping and potential fluctuations in large-area devices 2 . Moreover, in contrast to scanning photocurrent microscopy [6][7][8][9][10][11]16 , our technique provides diffraction-limited spatial resolution for both excitation and detection (Fig. 1a). This enables detailed spatial information to be extracted even when the net photocurrent between two contacts in a conventional measurement is zero.
With wide phase space applicability and potential nanoscale spatial resolution, NV magnetometry has emerged as a premier tool for probing current distributions in materials 13, revealing insights on the structure of vortices in high-Tc superconductors 21,22 and the effect of microscopic inhomogeneity on transport in graphene 23 and nanowires 24. These demonstrations all probed direct current (dc) flow and were accordingly limited in sensitivity by the inhomogeneous dephasing time T2* of the NV center. Here, we leverage the ability to control the timing of the photoexcitation to implement for the first time a "quantum lock-in" protocol to isolate alternating photocurrents. This protocol simultaneously decouples the NV center from wideband magnetic noise, extending its coherence time to the homogeneous T2 limit, and achieves a sensitivity of 20 nA/µm for alternating current (ac) densities, fifty times smaller than previous work on dc current sensing in graphene (~1 µA/µm in Ref. 23). Moreover, in contrast to dc sensing, our ac technique opens the investigation of the dynamics of photocarrier generation. Through changing the repetition rate of the photo-excitation pulses, we can resolve non-equilibrium response over a temporal bandwidth spanning from ~10 kHz to an upper range of ~10 MHz, set by achievable driving speeds on the NV center spin 25.

I. Hybrid NV-MoS2 Photosensing Platform

nm excitation partially overlaps the NV emission spectrum and overwhelms the signal of single NV centers, precluding their identification upon coverage (Fig. 1e). To increase the NV signal and facilitate arbitrary spatial mapping, we instead utilize an engineered diamond sample hosting an ensemble of near-surface NV centers (~40 nm deep; ~85 NV centers per focused optical spot).
Additionally, we band-pass filter the detected PL between 690 nm and 830 nm to predominantly isolate NV center emission, as shown in the room-temperature PL spectra of monolayer MoS 2 and a typical ensemble NV sample (Fig. 1f). Importantly, the excitation wavelength for photocarriers in MoS 2 must be sufficiently longer than the zero-phonon line of the NV center (637 nm) to minimize thermally-assisted absorption 28 by the NV center during the photocurrent sensing duration. Excitation and subsequent decay of the NV center will decohere the spin superposition, leading to reduced read-out contrast and sensitivity 29 . We excite at 661 nm, but have verified that our effects persist for longer excitation wavelength (see Supplementary Section 5.1).
Our photocurrent sensing protocol is based on an XY8-N dynamical decoupling sequence commonly used for the NV-based detection of ac fields from the precession of remote nuclei [18][19][20] .
However, in contrast to applications in nuclear magnetic resonance, here we directly control both the frequency and the phase of the targeted oscillating field through the timing of the photoexcitation pulses (Fig. 2a). This enables us to sweep the phase of the oscillating field relative to the NV sensing sequence and examine the full dependence, increasing sensitivity compared to measuring an averaged response over random phases. We prepare the superposition state |ψ⟩ = (1/√2)(|0⟩ + |−1⟩) and allow it to evolve to |ψ_f⟩ = (1/√2)(|0⟩ + e^(iφ)|−1⟩) under the influence of photo-excitation and the XY8-N rephasing pulses (N repetitions of 8 π-pulses).
We first match the spacing τ between the π-pulses to a half period of the photo-excitation frequency f (f = 1/(2τ) here) and probe the acquired phase φ for varying relative delays t_d between the start of the π-pulses and the photo-excitation (Fig. 2a). The X and Y projections, ⟨X⟩ = cos φ and ⟨Y⟩ = sin φ, respectively, of the final state |ψ_f⟩ are optically detected after an appropriate projection pulse on the NV center.
In essence, the delay t_d of our "quantum lock-in" protocol plays the analogous role to the relative phase between the signal and reference oscillator in a classical lock-in measurement. If photoexcitation generates an instantaneous, square-pulse current density j⃗ in the MoS2 monolayer, then the phase accumulated by the NV center will be maximized for an optimal delay t_d = 0°. We denote this maximal accumulated phase as Φ, with its amplitude and sign determined by the amplitude and direction of the local current density. Alternatively, if the photocurrent rises and falls with a characteristic timescale (purple trace in Fig. 2a), maximum phase accumulation will occur for a nonzero optimal delay (t_d ≠ 0°) (see Supplementary Section 4.1). For a current density j⃗ with sinusoidal time-dependence, the accumulated phase will depend on t_d as φ = Φ cos(t_d). This form represents a good approximation to our data due to the smoothing effect of the photocurrent rise and fall times. In Fig. 2b, we plot the analytical behaviors for ⟨X⟩ and ⟨Y⟩ under this model as a function of the delay t_d and the maximal phase Φ.
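The delay dependence can be illustrated with a toy simulation of the phase accumulated under ideal, instantaneous π-pulses and a square-wave field. All quantities here (pulse count, amplitudes, γ = 1, dimensionless times) are illustrative placeholders, not the experimental parameters. For square excitation pulses the response is maximal at zero delay, cancels at a quarter-period delay, and reverses sign at a half-period delay:

```python
import numpy as np

tau = 1.0                 # pi-pulse spacing = half period of the gated excitation
n = 16                    # 16 pi-pulses (an XY8-2 block)
T = n * tau               # total phase-accumulation time
t = np.linspace(0.0, T, 160001)
dt = t[1] - t[0]

def accumulated_phase(t_d, b0=1.0, gamma=1.0):
    """Phase picked up when the square-wave photocurrent field is delayed by t_d.
    Assumes an instantaneous photocurrent (square pulses) and ideal pi-pulses."""
    s = np.where(np.floor(t / tau) % 2 == 0, 1.0, -1.0)        # pi-pulse toggling
    b = b0 * np.sign(np.sin(np.pi * (t - t_d) / tau) + 1e-12)  # 50%-duty field
    return gamma * np.sum(s * b) * dt                          # phi = gamma * int(s*b)

phi0 = accumulated_phase(0.0)
assert abs(phi0 - T) < 0.05                        # maximal phase ~ gamma*b0*T at t_d = 0
assert abs(accumulated_phase(0.5 * tau)) < 0.05    # quadrature delay: phase cancels
assert abs(accumulated_phase(tau) + phi0) < 0.05   # half-period delay flips the sign
```

A smoothed (finite rise time) field would round this square-pulse response toward the Φ cos(t_d) form used in the text.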
II. Detection and Mapping of Photo-Nernst Currents
We first perform a photocurrent sensing protocol with N = 2 and τ = 7.6 µs over an uncovered area of diamond (probe and excitation beams slightly offset). Consistent with negligible absorption by the NV center or bulk diamond at 661 nm, we cannot detect the presence of photoexcitation and measure φ = 0 for all t_d (Supplementary Fig. S6). Remarkably, when we shift to an area where monolayer MoS2 covers the diamond, we detect oscillations in ⟨X⟩ and ⟨Y⟩ as t_d is varied. By increasing the number N of repetitions of the XY8 block, we can increase Φ linearly (Fig. 2f).
The maximum accumulated phase Φ represents a weighted time-integral of the ac magnetic field along the NV axis produced by the photocurrents. By modeling the pulse shape of j⃗, we can estimate the final instantaneous field B_NV from the time-integrated field (Fig. 2a) via:
B_NV = κΦ/(γ_NV · 0.5 · 8N · 2τ),  (1)

where κ is a pulse-shape-dependent factor, γ_NV is the NV gyromagnetic ratio, Φ is measured in radians, and the factor of 0.5 stems from the 50% duty-cycle of the photo-excitation. The factor κ increases monotonically from 1 for square pulses toward 2 as the rise time grows relative to the pulse duration (see Supplementary Section 4.1). Unless otherwise stated, we utilize a constant factor κ = 1.25, corresponding roughly to the range of our typical measurements (τ = 7.6 µs). In Fig. 2e, we resolve B_NV as small as 0.84 ± 0.08 mG for about 2 hours of averaging time (0.14 ± 0.05 mG for additional data shown in Supplementary Fig. S8). From this, we assert a minimum sensitivity of ~20 nA/µm to a sheet current density flowing perpendicular to the NV axis (along the y-direction).
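As an order-of-magnitude cross-check of the quoted ~20 nA/µm sensitivity (my estimate, not a calculation from the paper), one can use the textbook field of an infinite current sheet, B = µ0K/2, which is distance-independent and reasonable here because the ~40 nm NV depth is far smaller than the micron-scale current distribution:

```python
import math

mu0 = 4.0 * math.pi * 1e-7   # vacuum permeability [T m / A]

# Infinite-sheet estimate: a uniform sheet current density K gives B = mu0*K/2
# on either side of the sheet.
K = 20e-9 / 1e-6             # 20 nA/um expressed as a sheet density in A/m
B_tesla = mu0 * K / 2.0
B_mG = B_tesla * 1e4 * 1e3   # tesla -> gauss -> milligauss
print(round(B_mG, 2))        # ~0.13 mG, on par with the quoted 0.14 mG field resolution
```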
In Fig. 3c, we plot the current distribution ⃗ used to approximate the experimental field profile together with independent thermal modeling of the laser-induced temperature distribution in monolayer MoS 2 . The modeled ⃗ peaks at ~ 1.0 , in close agreement with the predicted location of the maximum thermal gradient (Supplementary Fig. S15). At our base temperature for the diamond substrate ( = 6 K), the photocurrent vortex is enhanced by the reduced thermal conductivity of monolayer MoS 2 and the large thermal interface resistance to the substrate, which permit large thermal gradients (~18 K/ max) and a spatial distribution significantly larger than the excitation spot size (Gaussian intensity with standard deviation 0.45 ). As increases, the thermal conductivity 32 and thermal interface conductance 33 both increase and we find that the detected at 0.95 diminishes, disappearing around 20 K (Supplementary Fig. S10).
Using ⃗ , we estimate that the integrated current for one side of the vortex is ~1.3 µA for an excitation power of 25 uW before the objective (85% transmission). This implies a Nernst photoresponsivity of ~60 mA/W for 226 G parallel to the NV axis (130 G perpendicular to the sample).
For the same magnetic field, this value for ungated monolayer MoS 2 is higher than the giant Nernst photo-responsivities reported for a graphene-hexagonal boron nitride heterostructure that is gatetuned to its van Hove singularities 31 . This enhancement in MoS 2 is consistent with its lower thermal conductivity and higher Seebeck coefficient stemming from a favorable density of states for its gapped band structure 16, 17 . In Supplementary Fig. S10, we verify that the Nernst photocurrent is linear in the external magnetic field and non-saturating up to 500 G, as expected for the low field regime 31 .
Our unique probe provides additional insight into the dynamics of photocarrier generation. In Fig. 3d, we examine the optimal delay between the NV driving and photoexcitation pulses as a function of , using a sequence with = 7.6 s. As the probe beam moves away from the excitation spot, increases. This effect can be explained if the rise time for the local photocurrent, which dominates the contribution to the local field, increases for larger | |. To corroborate this hypothesis, we map the leading edge of the photocurrent rise by varying the pulse spacing in the synchronized sensing protocol. To deduce , the value of the field at the end of the pulse, we need to account for variations in pulse shape as changes. For each set with different , we utilize the measured delay to infer the factor within our pulse shape model.
In Fig. 3e, we compare the end-of-pulse field for two different locations. We confirm an exponential rise of the photocurrent with a time constant ~1 µs that increases for larger |R_x|. The extracted rise times are sufficient to explain the measured t_d, suggesting that no additional effects, such as carrier propagation from the excitation spot, contribute significantly to the delay (see Supplementary Section 5.5). Indeed, we find that t_d is, to within error, independent of the external magnetic field, which would affect carrier propagation.
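The exponential-rise analysis described above can be sketched numerically. The following is an illustrative, dependency-light fit of B(τ) = B_max(1 − e^(−τ/t_r)) to synthetic data; the functional form is the standard exponential rise implied by the text, and all numerical values are made up for illustration:

```python
import numpy as np

def rise(tau, B_max, t_r):
    """Exponential rise of the end-of-pulse field with pulse spacing tau."""
    return B_max * (1.0 - np.exp(-tau / t_r))

# Synthetic data standing in for the measured rise at one probe location
rng = np.random.default_rng(0)
tau = np.linspace(0.5, 20.0, 25)          # pulse spacing (us)
true_Bmax, true_tr = 1.0, 3.0             # illustrative amplitude and rise time
data = rise(tau, true_Bmax, true_tr) + 0.01 * rng.standard_normal(tau.size)

# Brute-force least squares over (B_max, t_r): crude but dependency-free
B_grid = np.linspace(0.5, 1.5, 101)
t_grid = np.linspace(0.5, 8.0, 151)
resid = [((data - rise(tau, B, t)) ** 2).sum() for B in B_grid for t in t_grid]
i = int(np.argmin(resid))
B_fit, t_fit = B_grid[i // t_grid.size], t_grid[i % t_grid.size]
```

A grid search is used instead of `scipy.optimize.curve_fit` only to keep the sketch self-contained; any least-squares routine recovers the same parameters.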
The rise times can be compared to a model of the system's transient thermal response.
Indeed, the modeled rise time for the thermal gradient increases for larger |R_x| (Supplementary Section 6). This matches the qualitative experimental trend (Fig. 3c,d) and supports a picture of photocurrents generated locally by the PTE. Interestingly, to approximate the microsecond-scale photocurrent rise times, we need to assume a heat capacity for monolayer MoS 2 that is significantly higher than theoretically predicted 34,35 . Our model estimates ~200 J/(kg·K) for temperatures below ~50 K, while the heat capacity is generally taken 16 as 400 J/(kg·K) for single-crystal monolayer MoS 2 at 300 K. Even considering that we use polycrystalline MoS 2 , this discrepancy may suggest extrinsic contributions to the estimated heat capacity. For example, excess heat capacity could arise from PMMA residue or a layer of cryopumped adsorbates, and the latter is known to significantly raise the measured low-temperature heat capacity of other low-dimensional materials 36,37 (see Supplementary Fig. S2). Further investigations under systematic outgassing and sample cleaning conditions are required to clarify this phenomenon, as well as to explore potential applications toward the sensing of adsorbed gases.
Finally, we demonstrate the ability to detect light without prior knowledge of its frequency or phase. Gating the light at a constant frequency f_g with an independent controller, we examine the X projection of the final state as we scan the spacing τ of the XY8-8 sequence. When the frequency 1/(2τ) of the decoupling sequence matches f_g to within the sequence bandwidth, the average value of X over random starting delays is diminished from its initial full projection, resulting in a resonant dip 19 . In Fig. 4, we demonstrate this unsynchronized detection scheme for three different frequencies f_g = 65 kHz, 110 kHz, and 333 kHz. Due to the rise time of the PTE photocurrents, our sensitivity to optical power decreases for higher f_g, necessitating stronger excitation to see the same contrast change. However, the NV sensing protocol itself is effective for frequencies up to several tens of MHz 25 and thus can be combined with faster photocurrent mechanisms for optimal photodetection.
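The resonance condition f_g ≈ 1/(2τ) can be illustrated with the standard toggling-frame picture of an evenly spaced π-pulse train (a textbook construction, not code from this work; the pulse count and sampling are illustrative):

```python
import numpy as np

def toggling_overlap(f, tau, n_pi=64, samples=20000):
    """Overlap of the pi-pulse toggling sign function (spacing tau) with a
    cosine of frequency f -- a numeric stand-in for the sequence's filter
    function, which peaks when f matches 1/(2*tau)."""
    T = n_pi * tau
    t = np.linspace(0.0, T, samples, endpoint=False)
    dt = t[1] - t[0]
    sign = (-1.0) ** np.floor(t / tau + 0.5)   # sign flips at each pi pulse
    return abs(np.sum(sign * np.cos(2 * np.pi * f * t)) * dt) / T

tau = 7.6e-6                                   # pulse spacing used in the text (s)
freqs = np.linspace(20e3, 150e3, 600)
resp = np.array([toggling_overlap(f, tau) for f in freqs])
f_peak = freqs[np.argmax(resp)]
```

With τ = 7.6 µs the response peaks near 1/(2τ) ≈ 66 kHz, consistent with the 65 kHz gating frequency used in Fig. 4; the width of the peak is set by the total sequence length.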
Discussion
Our demonstration broaches wide-ranging opportunities for investigating fascinating optoelectronic phenomena in materials.
Methods
Sample Fabrication
An NV center ensemble was created ~40 nm deep into an [001]-oriented diamond sample by 15N ion implantation. After optically initializing the NV spin into |0⟩, the XY8-N dynamical decoupling sequence applied to the NV center consists of 8N+2 qubit rotations:
(π/2)_X − [π_X − π_Y − π_X − π_Y − π_Y − π_X − π_Y − π_X] × N − (π/2)_{X or Y}
Here, the subscript indicates the axis on the Bloch sphere for the qubit rotation, and {π/2, π} indicates the rotation angle. The axis of the final π/2 projection pulse determines which component, X or Y, of the final spin superposition is rotated onto the |0⟩ (bright) state for readout.
The π pulses are uniformly spaced by the interval τ, whereas the π/2 pulses are spaced by τ/2 from the adjacent π pulses.
The timing (frequency, delay) of the photoexcitation and NV microwave pulses can be synchronously controlled to nanosecond resolution using an arbitrary waveform generator.
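The pulse layout described above can be sketched programmatically. This is a minimal, illustrative generator using the standard XYXYYXYX axis pattern for XY8; the function name and conventions (first π pulse at τ/2 after the initial π/2) are ours, not the authors':

```python
def xy8_pulse_times(N, tau, t0=0.0):
    """Return (time, axis) for the 8N pi pulses of an XY8-N train.

    The first pi pulse sits tau/2 after the initial pi/2 pulse at t0,
    and successive pi pulses are spaced by tau (standard convention).
    """
    axes = "XYXYYXYX" * N
    return [(t0 + tau / 2 + k * tau, axes[k]) for k in range(8 * N)]

pulses = xy8_pulse_times(N=2, tau=7.6e-6)
assert len(pulses) == 16                     # 8N pi pulses for N = 2
# uniform spacing tau between successive pi pulses
gaps = {round(pulses[k + 1][0] - pulses[k][0], 12) for k in range(15)}
```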
Data Analysis and Modeling
All error bars reported in our paper are 95% confidence intervals. Complete details of the data fitting, thermal modeling, and stray-field modeling are supplied in the Supplementary Information.
Figure 1a displays the experimental setup for detecting photocurrents in monolayer MoS 2 .
Scanning the delay t_d, we observe oscillations in the final spin projections (Fig. 2c; R_x = 1.07 µm, R_y = 0 µm), revealing the presence of an ac magnetic field due to photocurrents. Fitting the X and Y projections simultaneously to their expected behavior (solid lines), we deduce that the maximal phase Φ increases in magnitude as the photoexcitation power increases and that the optimized delay is nonzero, indicating a finite photocurrent rise time (Fig. 2c). In principle, MoS 2 -assisted heating of the diamond substrate could induce a time-periodic change in the zero-field splitting D and also lead to effective precession via detuning of our microwave pulses from resonance. However, we rule out this scenario by showing that the phase accumulated by the initial state (1/√2)(|0⟩ + |+1⟩) is opposite to that accumulated by (1/√2)(|0⟩ + |−1⟩), while changes in D are expected to affect both |±1⟩ states symmetrically (Supplementary Fig. S7). Strikingly, when the probe beam is moved to the opposite side of the excitation beam (R_x = −0.67 µm, R_y = 0 µm), the phase accumulated by the NV center switches sign, signifying a reversal in direction of the local photocurrent density J⃗ (Fig. 2d). For both locations, we summarize the dependence of the maximum phase Φ on the photoexcitation power, showing a sublinear increase (Fig. 2e). This behavior is consistent with a PTE origin for the photocurrents, as the thermal gradient induced in monolayer MoS 2 by laser heating begins to saturate based on our thermal modeling (Supplementary Section 6). The absence of any interface potentials that induce directional electric fields in our implementation further suggests a PTE origin, which has been shown to be the dominant photocurrent generation mechanism in monolayer MoS 2 in prior experiments 16,17 . Finally, by increasing the sensing duration through the number of repetitions N, we enhance the current sensitivity achievable at
the depth of the NV center. This improvement in sensitivity compared to dc current sensing (~1 µA/µm in graphene 23 ) is enabled by our synchronized dynamical decoupling protocol, which extends the NV ensemble's coherence time from T2* = 0.51 µs to T2 = 235 µs and provides access to coherent oscillations in sin Φ, more sensitive than cos Φ to small Φ. In steady state, the divergence-free condition ∇·J⃗ = 0 and the rotational symmetry of our experiment imply that any photocurrent must flow as vortices around the excitation spot, explaining the reversal in the direction of J⃗ observed in Fig. 2c and d. Nonzero photocurrent, defining a chirality to the vortex, also requires the breaking of time-reversal symmetry, supplied here by the external magnetic field. We deem the resulting current profile a "photo-Nernst vortex", since the radial temperature gradient induced by the excitation beam and the out-of-plane field result in azimuthal current flow, everywhere transverse to the temperature gradient as in the Nernst effect. Previously, photo-Nernst currents 30,31 were detected by scanning photocurrent microscopy at the edges of exfoliated graphene devices. However, spatial mapping of an unperturbed vortex in the interior of a 2D material has not been possible, as notably it generates zero net current in a transport measurement.
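The relation between the accumulated phase Φ and the local field can be illustrated with a toggling-frame integral, Φ = γ ∫ s(t) B(t) dt. The sketch below assumes a square-wave field synchronized to an evenly spaced π-pulse train; the field amplitude, pulse count, and sampling are illustrative, and only the 2.8 MHz/G conversion is taken from the text:

```python
import numpy as np

GAMMA = 2 * np.pi * 2.8e10   # NV gyromagnetic ratio in rad s^-1 T^-1 (2.8 MHz/G)

def accumulated_phase(delay, B0=10e-9, tau=7.6e-6, n_pi=16, samples=40000):
    """Phase picked up by the NV superposition for a square-wave field of
    amplitude B0 and period 2*tau, shifted in time by `delay`, sensed
    through an evenly spaced pi-pulse train (toggling-frame sign)."""
    T = n_pi * tau
    t = np.linspace(0.0, T, samples, endpoint=False)
    dt = t[1] - t[0]
    s = np.sign(np.cos(np.pi * t / tau))             # toggling sign from the pi pulses
    B = B0 * np.sign(np.cos(np.pi * (t - delay) / tau))
    return GAMMA * np.sum(s * B) * dt

phi_max = accumulated_phase(0.0)            # field aligned with the sequence
phi_mid = accumulated_phase(0.5 * 7.6e-6)   # quarter-period shift -> zero phase
```

At the optimal delay the full phase γB0T accumulates, while a quarter-period shift cancels it, reproducing the delay dependence sketched in Fig. 2b.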
Figure 3a and b present the simulated field profiles at the NV center depth for a model of the photocurrent distribution J⃗. We assume an azimuthal flow with amplitude proportional to the radial temperature gradient, since J⃗ is expected to be proportional to the gradient of an approximately Gaussian photoinduced temperature distribution (Supplementary Section 7). Although we phenomenologically incorporate deviations from a perfectly circular excitation beam to better match the experimental profile, the salient features of a vortex current density are clear. The field is nonzero at the vortex center due to the z-component of the field produced by the current loops. Cutting across the vortex along x, the fringing fields beneath the plane of the current loops either align or anti-align with the oblique [111]-oriented NV axis, leading to the asymmetry in the magnitude of the field for positive and negative R_x.
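The qualitative features above (a nonzero field at the vortex center, and an asymmetry along x from the tilted NV axis) can be reproduced with a bare-bones Biot-Savart sum over current rings. This is our own sketch, not the authors' simulation: the amplitude, ring discretization, and 40 nm depth are illustrative, with σ = 0.45 µm taken from the text and 54.7° the standard tilt of a [111] NV axis from the [001] surface normal:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def vortex_field(rx, depth=40e-9, sigma=0.45e-6, I0=1e-6):
    """Field at (rx, 0, -depth) from an azimuthal sheet current with
    J_phi(r) ~ r * exp(-r^2 / (2 sigma^2)) (the gradient of a Gaussian
    temperature profile), discretized into rings of line segments and
    summed with plain Biot-Savart. All parameters are illustrative."""
    obs = np.array([rx, 0.0, -depth])
    B = np.zeros(3)
    radii = np.linspace(0.05, 4.0, 60) * sigma
    dr = radii[1] - radii[0]
    for r in radii:
        # current carried by this ring (A)
        I = I0 * (r / sigma) * np.exp(-r**2 / (2 * sigma**2)) * dr / sigma
        phi = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
        pts = np.stack([r * np.cos(phi), r * np.sin(phi), np.zeros_like(phi)], axis=1)
        dl = np.diff(np.vstack([pts, pts[:1]]), axis=0)   # segment vectors
        sep = obs - (pts + 0.5 * dl)                      # midpoint -> observer
        dist = np.linalg.norm(sep, axis=1, keepdims=True)
        B += (MU0 * I / (4 * np.pi)) * np.sum(np.cross(dl, sep) / dist**3, axis=0)
    return B

theta = np.deg2rad(54.7)                 # tilt of the [111] NV axis from the normal
n_nv = np.array([np.sin(theta), 0.0, np.cos(theta)])

B_left = vortex_field(-1.0e-6) @ n_nv    # projections on the NV axis (T)
B_zero = vortex_field(0.0) @ n_nv
B_right = vortex_field(+1.0e-6) @ n_nv
```

Projected on the tilted NV axis, the field is finite at the center (z-component of the loops) and differs on the two sides of the vortex, matching the asymmetric line scan of Fig. 3a.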
Highlighted by our mapping of a photo-Nernst vortex, the spatial resolution lacking in transport-based techniques could allow clear-cut characterization of chiral photocurrents localized at edges 38 , directional currents controlled by coherent optical injection 39 , and the scattering of valley-polarized 4 , Weyl-point 8-10 , or Dirac-point 11 photocurrents from disorder. Beyond fundamental interests, nano-engineering of the diamond surface into pillared arrays would reduce the thermal interface conductance and possibly extend the results here to room temperature. This, in conjunction with the use of wide-field imaging techniques 23 and isotopically purified samples 40 that prolong the NV's quantum coherence, could enable ultrasensitive, large-area photodetector arrays using our technique. Moreover, the optical generation of ac magnetic fields that are spatially localized, but also available on demand anywhere across the area covered by MoS 2 , provides a simplified, stick-on alternative for fabricating devices to manipulate solid-state spins. Adding to capabilities such as electrical spin readout 41 , spontaneous emission tuning 42 , and 2D ferromagnetism 43 , our demonstration of spin-photocurrent interaction widens the perspective for integrated quantum technologies based on the quantum emitter-2D material platform.
The implantation energy was 30 keV, with an area dose of 10^12 ions/cm 2 . The samples were annealed at 850 °C and then 1100 °C to form roughly 85 NVs per optical spot. The external field was supplied by a permanent magnet, and microwave pulses were delivered by a wire coil suspended above the sample surface.

Monolayer MoS 2 was grown on SiO 2 /Si via MOCVD. It was then spin-coated with PMMA and baked at 180 °C. After applying thermal release tape (TRT), the stack was mechanically peeled off the SiO 2 /Si substrate and transferred to the diamond in a vacuum chamber. Lastly, the TRT and PMMA were removed. Three separate MoS 2 samples were transferred and investigated in this work, all displaying NV-based photocurrent detection. Further details of the sample growth and fabrication can be found in the Supplementary Information.

NV Photocurrent Sensing Technique

The photocurrent sensing sequence consists of simultaneous ac optical excitation of the monolayer MoS 2 and dynamical decoupling pulses applied to the NV center spin. A 532 nm laser for NV spin readout and a 661 nm laser for photocurrent excitation were focused by an objective onto the sample held at 6 K inside a closed-cycle cryostat (Montana Instruments). The 661 nm laser spot was laterally displaced by impinging on the objective's back aperture at a slight angle away from normal. The displacement was measured by collecting reflected light off the sample into a camera and determining the center locations of the beams using calibrated pixel sizes. The 661 nm excitation was pulsed by modulating its polarization with an electro-optical modulator and passing the output through a Glan-Thompson polarizer. The polarization of the 661 nm beam was thereafter set to be right circularly polarized; however, the results here are not dependent on the polarization.
Figure 1 | Photosensing platform based on monolayer MoS 2 and NV centers. a) Experimental setup. A MOCVD-grown monolayer MoS 2 sheet is transferred on top of a bulk diamond sample hosting a near-surface ensemble of NV centers. Illuminating the MoS 2 with an excitation laser (661 nm) generates a temperature distribution within the monolayer that drives a photocurrent distribution J⃗. The NV center spin state senses the local magnetic field produced by the photocurrents and is optically read out by a separate green laser (532 nm). DC1 - dichroic mirror (550 nm); DC2 - dichroic mirror (685 nm); BP - bandpass (690-830 nm); PL - photoluminescence. b) Crystal structure of the NV center in the diamond lattice. We address the subset of NV centers aligned with the [111] direction and define the coordinate directions x and y as indicated. c) Energy levels of the NV ground-state triplet as a function of B_∥, the magnetic field parallel to the NV center axis. Resonant microwave pulses (RF) shown prepare and manipulate an equal superposition of the |0⟩ and |−1⟩ states for phase acquisition. d) Optical micrograph of monolayer MoS 2 on top of a diamond substrate after vacuum transfer. To enhance optical contrast, this micrograph is taken prior to cleaning off the poly(methyl methacrylate) (PMMA) coating layer, which supports the MoS 2 during transfer. e) Room-temperature PL image of the boundary region between monolayer MoS 2 and a bare diamond substrate, containing single NV centers. Image taken with DC1 and DC2 filters only. f) Room-temperature PL spectrum of monolayer MoS 2 and an ensemble NV sample under 532 nm illumination. The photoexcitation wavelength (661 nm) used in the sensing sequence is longer than the NV zero-phonon line (ZPL; 637 nm) to minimize optical excitation of the NV center. The orange shaded region (690-830 nm) depicts the region of collected PL for NV spin readout.
Figure 2 | Dynamically-decoupled sensing protocol for detection of ac photocurrents. a) Synchronized photoexcitation and dynamical decoupling sequence. The excitation laser is gated at a frequency 1/(2τ), while an XY8-N dynamical decoupling sequence with spacing τ between the π pulses is applied to the NV center spin. The delay between the first π pulse and the end of the last photoexcitation pulse is denoted as t_d, which is given as a phase (e.g., 225° for the pulse trains shown). The phase of the last RF π/2 projection pulse prior to readout by the probe laser determines which projection (X or Y) of the final NV center superposition state is measured. Bottom: schematic of the time-dependent current density generated in monolayer MoS 2 . b) Expected dependence of the final NV spin projections X and Y on the delay t_d (horizontal axis) and the maximum phase Φ acquired at the optimal delay (vertical axis). The sign and magnitude of Φ are determined by the direction and magnitude of the local photocurrents. c) Experimental X and Y projections of the final NV spin state as the delay t_d is varied, for two different optical powers (sequence with N = 2, τ = 7.6 µs). The probe beam (R_x = 1.07 µm) is positioned to the right of the excitation beam. The solid lines are simultaneous fits to both projections, allowing determination of Φ and t_d. d) Same measurement but with the probe beam (R_x = −0.67 µm) to the left of the excitation beam. The remaining projection (not shown) looks similar to c), but inverted, indicating that the local photocurrent direction is reversed. e) Dependence of the maximum phase Φ on the optical power for R_x = −0.67 µm (purple) and R_x = 1.07 µm (green). The right-hand axis converts Φ to the maximal field amplitude of the pulse along the NV axis. f) Dependence of Φ on the number of repetitions N of the XY8-N sequence (τ = 7.6 µs).
Figure 3 | Spatiotemporal mapping of a photo-Nernst vortex. a) Line scan of the maximal stray field B as the probe moves along the x-axis of the vortex (R_y = 0 µm, P = 25 µW). B is extracted from the phase acquired while sweeping the delay t_d. When the direction of the external magnetic field is reversed, B changes sign in accordance with the Nernst effect (here we fix the positive field direction to be along [111]). b) Line scan of B across the y-axis (R_x = 0 µm). For the y-axis, the stray fields of the photocurrent vortex have a strong component perpendicular to the NV axis, and B is dominated by the z-component of the total field, which diminishes away from the vortex center. The solid lines in a) and b) are simulated stray fields along the NV axis for a modeled vortex current density J⃗; slightly different parameters are used for a) and b). c) Comparison of the modeled current density J⃗, the simulated laser-induced temperature distribution in the monolayer MoS 2 , and the power density of the excitation beam for P = 25 µW, assuming rotational symmetry and using the measured excitation beamwidth. The spatial profile of J⃗ is in close agreement with the gradient of the temperature. d) Dependence of the optimal delay t_d on the coordinate R_x. The solid line is a linear fit. e) Measured field versus the spacing τ of the synchronized photosensing sequence for two different values of R_x, using N = 38. Here, the timing of the photoexcitation and NV driving pulses are changed together. The rise time increases for larger |R_x|, corroborating the trend in t_d shown in d). The solid lines are exponential fits.
Two independently steerable laser beams (probe: 532 nm, excitation: 661 nm) are joined by dichroic mirrors and focused by a confocal microscope onto the MoS 2 -diamond stack, held at a base temperature of 6 K (see Supplementary Section 1). We define the coordinate axes {x,y,z} to be parallel to the edges of the [001]-faced diamond sample with [110]-cut edges, as depicted in Fig. 1b. The position of the probe spot relative to the excitation spot on the sample is denoted by the vector (R_x, R_y). By aligning the external magnetic field along the [111] axis, we selectively address a single subset out of the four possible NV lattice orientations, noting that full vector magnetometry can be achieved by extending our measurements to multiple orientations. An insulated wire coil placed over the MoS 2 monolayer delivers the resonant radio-frequency (RF) pulses for NV spin manipulation. Figure 1c diagrams the energy levels of the NV spin-triplet ground state, showing the |±1⟩ sublevels separated from |0⟩ by the zero-field splitting parameter D = 2.87 GHz. Applying
a magnetic field B_∥ along the NV center axis lifts the degeneracy of the |±1⟩ states, which acquire equal and opposite Zeeman shifts at a rate of 2.8 MHz/Gauss.
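The two microwave resonance frequencies follow directly from the numbers above; a quick check using the field value quoted in the Results (226 G along the NV axis):

```python
D = 2.870e9      # zero-field splitting (Hz)
gamma = 2.8e6    # Zeeman shift rate (Hz/G)
B_par = 226.0    # field along the NV axis quoted in the text (G)

f_plus = D + gamma * B_par    # |0> -> |+1> transition (Hz)
f_minus = D - gamma * B_par   # |0> -> |-1> transition (Hz)
print(f"{f_minus/1e9:.3f} GHz, {f_plus/1e9:.3f} GHz")  # prints "2.237 GHz, 3.503 GHz"
```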
In Fig. 1d, we display an optical micrograph of a MoS 2 -diamond stack assembled by our
vacuum stacking technique 26 . The high-quality MOCVD-grown monolayer 15 readily covers our
entire 2x2x0.5 mm diamond sample, but we deliberately expose a portion of the diamond to
facilitate control measurements (see Supplementary Section 2). The slight absorption 27 of the 532
nm probe laser by the single atomic layer of MoS 2 does not interfere with initialization of the NV
center spin state. However, monolayer MoS 2 's strong intrinsic photoluminescence (PL) from 532
Acknowledgments

We thank C. F. de las Casas, A. L. Yeats, C. P. Anderson for experimental suggestions, and K.

Figure 4 | Detection of unsynchronized, variable-frequency ac photocurrents. The X projection of the final NV state using an unsynchronized XY8-8 sensing sequence. The photoexcitation frequency is fixed for each set and the frequency of the NV driving pulses
1. Koppens, F. H. L. et al. Photodetectors based on graphene, other two-dimensional materials and hybrid systems. Nat. Nanotech. 9, 780-793 (2014).
2. Mak, K. F. & Shan, J. Photonics and optoelectronics of 2D semiconductor transition metal dichalcogenides. Nat. Photon. 10, 216-226 (2016).
3. Liu, Y. et al. Van der Waals heterostructures and devices. Nat. Rev. Mater. 1, 16042 (2016).
4. Mak, K. F., McGill, K. L., Park, J. & McEuen, P. L. The valley Hall effect in MoS2 transistors. Science 344, 1489-1492 (2014).
5. Ye, Z., Sun, D. & Heinz, T. F. Optical manipulation of valley pseudospin. Nat. Phys. 13, 26-29 (2016).
6. Yuan, H. et al. Generation and electric control of spin-valley-coupled circular photogalvanic current in WSe2. Nat. Nanotech. 9, 851-857 (2014).
7. Eginligil, M. et al. Dichroic spin-valley photocurrent in monolayer molybdenum disulphide. Nat. Commun. 6, 7636 (2015).
8. Ma, Q. et al. Direct optical detection of Weyl fermion chirality in a topological semimetal. Nat. Phys. 13, 842-847 (2017).
9. Osterhoudt, G. B. et al. Colossal mid-infrared bulk photovoltaic effect in a type-I Weyl semimetal. Nat. Mater. (2019). doi:10.1038/s41563-019-0297-4
10. Ma, J. et al. Nonlinear photoresponse of type-II Weyl semimetals. Nat. Mater. (2019). doi:10.1038/s41563-019-0296-5
11. Ma, Q. et al. Giant intrinsic photoresponse in pristine graphene. Nat. Nanotech. 14, 145-150 (2019).
12. Degen, C. L., Reinhard, F. & Cappellaro, P. Quantum sensing. Rev. Mod. Phys. 89, 035002 (2017).
13. Casola, F., van der Sar, T. & Yacoby, A. Probing condensed matter physics with magnetometry based on nitrogen-vacancy centres in diamond. Nat. Rev. Mater. 3, 17088 (2018).
14. Awschalom, D. D., Hanson, R., Wrachtrup, J. & Zhou, B. B. Quantum technologies with optically interfaced solid-state spins. Nat. Photon. 12, 516-527 (2018).
15. Kang, K. et al. High-mobility three-atom-thick semiconducting films with wafer-scale homogeneity. Nature 520, 656-660 (2015).
16. Buscema, M. et al. Large and tunable photothermoelectric effect in single-layer MoS2. Nano Lett. 13, 358-363 (2013).
17. Hippalgaonkar, K. et al. High thermoelectric power factor in two-dimensional crystals of MoS2. Phys. Rev. B 95, 115407 (2017).
18. Staudacher, T. et al. Nuclear magnetic resonance spectroscopy on a (5-nanometer)3 sample volume. Science 339, 561-563 (2013).
19. DeVience, S. J. et al. Nanoscale NMR spectroscopy and imaging of multiple nuclear species. Nat. Nanotech. 10, 129-134 (2015).
20. Lovchinsky, I. et al. Magnetic resonance spectroscopy of an atomically thin material using a single-spin qubit. Science 355, 503-507 (2017).
21. Thiel, L. et al. Quantitative nanoscale vortex imaging using a cryogenic quantum magnetometer. Nat. Nanotech. 11, 677-681 (2016).
22. Pelliccione, M. et al. Scanned probe imaging of nanoscale magnetism at cryogenic temperatures with a single-spin quantum sensor. Nat. Nanotech. 11, 700-705 (2016).
23. Tetienne, J. et al. Quantum imaging of current flow in graphene. Sci. Adv. 3, e1602429 (2017).
24. Chang, K., Eichler, A., Rhensius, J., Lorenzelli, L. & Degen, C. L. Nanoscale imaging of current density with a single-spin magnetometer. Nano Lett. 17, 2367-2373 (2017).
25. Romach, Y. et al. Spectroscopy of surface-induced noise using shallow spins in diamond. Phys. Rev. Lett. 114, 017601 (2015).
26. Kang, K. et al. Layer-by-layer assembly of two-dimensional materials into wafer-scale heterostructures. Nature 550, 229-233 (2017).
27. Bae, J. J. et al. Thickness-dependent in-plane thermal conductivity of suspended MoS2 grown by chemical vapor deposition. Nanoscale 9, 2541-2547 (2017).
28. Kern, M. et al. Optical cryocooling of diamond. Phys. Rev. B 95, 235306 (2017).
29. Fuchs, G. D., Falk, A. L., Dobrovitski, V. V. & Awschalom, D. D. Spin coherence during optical excitation of a single nitrogen-vacancy center in diamond. Phys. Rev. Lett. 108, 157602 (2012).
30. Cao, H. et al. Photo-Nernst current in graphene. Nat. Phys. 12, 236-239 (2016).
31. Wu, S. et al. Multiple hot-carrier collection in photo-excited graphene Moiré superlattices. Sci. Adv. 2, e1600002 (2016).
32. Yarali, M. et al. Effects of defects on the temperature-dependent thermal conductivity of suspended monolayer molybdenum disulfide grown by chemical vapor deposition. Adv. Funct. Mater. 27, 1704357 (2017).
33. Ong, Z.-Y., Cai, Y. & Zhang, G. Theory of substrate-directed heat dissipation for single-layer graphene and other two-dimensional crystals. Phys. Rev. B 94, 165427 (2016).
34. Su, J., Liu, Z., Feng, L. & Li, N. Effect of temperature on thermal properties of monolayer MoS2 sheet. J. Alloys Compd. 622, 777-782 (2015).
35. Saha, D. & Mahapatra, S. Analytical insight into the lattice thermal conductivity and heat capacity of monolayer MoS2. Phys. E 83, 455-460 (2016).
36. Hone, J., Batlogg, B., Benes, Z., Johnson, A. T. & Fischer, J. E. Quantized phonon spectrum of single-wall carbon nanotubes. Science 289, 1730-1733 (2000).
37. Lasjaunias, J. C., Biljaković, K., Benes, Z., Fischer, J. E. & Monceau, P. Low-temperature specific heat of single-wall carbon nanotubes. Phys. Rev. B 65, 113409 (2002).
38. Karch, J. et al. Terahertz radiation driven chiral edge currents in graphene. Phys. Rev. Lett. 107, 276601 (2011).
39. Sun, D. et al. Coherent control of ballistic photocurrents in multilayer epitaxial graphene using quantum interference. Nano Lett. 10, 1293-1296 (2010).
40. Balasubramanian, G. et al. Ultralong spin coherence time in isotopically engineered diamond. Nat. Mater. 8, 383-387 (2009).
41. Brenneis, A. et al. Ultrafast electronic readout of diamond nitrogen-vacancy centres coupled to graphene. Nat. Nanotech. 10, 135-139 (2014).
42. Tielrooij, K. J. et al. Electrical control of optical emitter relaxation pathways enabled by graphene. Nat. Phys. 11, 281-287 (2015).
43. Thiel, L. et al. Probing magnetism in 2D materials at the nanoscale with single spin microscopy. Preprint at arXiv:1902.01406 (2019).
Topography-induced persistence of atmospheric patterns

D. Ciro, B. Raphaldini, C. Raupp
Institute for Astronomy, Geophysics and Atmospheric Sciences, University of Sao Paulo

arXiv:1905.05071 (13 May 2019)

Abstract. Atmospheric blockings are persistent large-scale climate patterns with durations between days and weeks. In principle, blockings might involve a large number of modes interacting non-linearly, and a conclusive description of their onset and duration is still elusive. In this paper we introduce a simplified account of this phenomenon by means of a single triad of Rossby-Haurwitz waves perturbed by one topography mode. It is shown that the dynamical features of persistent atmospheric patterns have zero measure in the phase space of an unperturbed triad, but this measure becomes finite for the perturbed dynamics. By this account we suggest that static inhomogeneities in the two-dimensional atmospheric layer are required for locking flow patterns in real space.
I. INTRODUCTION
Atmospheric blocking is a phenomenon whereby the basic, predominantly zonal flow of the atmosphere breaks, acquiring a significant meridional component, possibly correlated with inhomogeneities such as topography and thermal forcing, e.g. by ocean/continent contrasts. Although the time fraction of blocked states of the atmosphere is relatively small, estimated between 2% and 22% in the Northern Hemisphere [3], it is enough to influence the climatological mean and to result in weather conditions that depart from the mean and are often extreme, causing important socio-economic impacts. The effects of blocking are most strongly felt at mid-latitudes in the Northern Hemisphere, where blocked states result in the advection of cold winds coming from the Arctic, causing episodes of extreme cold during the winter [7], [4]. Although less significant, blocking events also occur in the Southern Hemisphere [2], [5].
Proper representation of the onset and duration of blocking events still poses difficulties in climate modeling and weather forecasting, often limiting predictability [3], and their frequency of occurrence is underestimated [6]. Further research on the basic mechanisms behind blocking transitions may lead to a better understanding and to improvements in predictability.
Theoretical models of these transitions usually rely on the barotropic quasi-geostrophic equation with the inclusion of topography and/or inhomogeneous thermal forcing; such models can be either forced-dissipative or Hamiltonian. In the first scenario, zonal and different blocked states constitute equilibria of the underlying dynamical system. These equilibria can usually be characterized as selective-decay states of the equations, namely zonal flows and flows correlated with the topography [1], so that transitions between zonal and blocked states can be understood as trajectories of the system connecting the vicinities of different equilibria in phase space.
In this paper we are interested in the conservative dynamics of the system, so viscosity is neglected and the system becomes Hamiltonian. From the point of view of wave turbulence, transitions between flow patterns are the result of the nonlinear interaction between Rossby waves, where the zonal flow is represented by a Rossby mode with zero zonal wave number, and therefore zero frequency, while other modes, with meridional flows and nonzero frequencies, absorb its energy.
Modal decomposition of the barotropic quasi-geostrophic equation in Rossby waves leads to the existence of wave triads, in which three modes exchange energy non-linearly, and each mode can be a member of several triads simultaneously. This picture allows us to interpret the complex behavior of the full system in terms of the energy flow between triads, which are the non-linear units of the full dynamical system.
In this paper, we investigate in detail the dynamics of a single triad of the barotropic quasi-geostrophic equation with and without topography. In particular, we are interested in the existence of triads developing high-amplitude, non-drifting Rossby waves, in correspondence with persistent blocking patterns in the atmosphere. It is shown that in the simple case without topography such states exist amid the periodic solutions of the triad mismatch phase, but they have zero measure in the solution space, i.e. they are statistically impossible. However, when one congruent topography mode is introduced as a small perturbation to the system, the measure of such solutions becomes finite, and high-amplitude, non-drifting states become available for a range of initial conditions in phase space. This feature emerges from important topological changes in the invariants of the perturbed problem, illustrating the critical role of topography in this simplified account of the phenomenon.
The paper is organized as follows. In Sec. II we introduce the Rossby-wave decomposition of the barotropic quasi-geostrophic model with topography; in Sec. III the dynamics of a single triad is studied in detail and the zero-drift solutions are characterized. In Sec. IV the topography is introduced and its effects on the zero-drift solutions are investigated numerically. In Sec. V we present our conclusions and perspectives. In the Appendix (Sec. VI) the relevant calculations and approximations are presented.
II. THE MODEL
In this work we consider a simplified barotropic quasi-geostrophic model of the atmosphere with variable depth between the topography and the free upper surface. Decomposing the depth into a constant and a fluctuating part, H = H̄ + H̃, and performing the perturbative expansion described in Appendix VI A, it can be shown that the relative vorticity satisfies
∂ζ/∂t + J_a(ψ, ζ + f − h) = 0, (1)
where ψ is the stream function, ζ = ∇²ψ is the relative vorticity, f = 2Ω cos θ is the planetary vorticity (the Coriolis parameter), and h = f H̃/H̄ is the depth fluctuation, modulated by the Coriolis parameter and measured in units of the mean atmospheric depth. This fluctuation is due to both the topography and the free surface, but here we will refer to it simply as the topography. In the following, the Jacobian J is defined in terms of regular spherical coordinates with θ = 0 at the north pole,
J_a(g1, g2) = (1/(a² sin θ)) [ (∂g1/∂θ)(∂g2/∂φ) − (∂g1/∂φ)(∂g2/∂θ) ],
where a is the mean planetary radius. It is worth noting that in the derivation of this model we did not employ the beta-plane approximation, so ψ is the global stream function, and the velocity can be obtained anywhere via
u = (1 − H̃/H̄) n̂ × ∇ψ,

where n̂ is the unit vector normal to the planetary surface and u is a non-solenoidal field, in contrast to the usual barotropic vorticity models.
To perform a more universal analysis we measure velocities in units of the characteristic flow velocity u0, distances in units of the mean planetary radius a, and time in units of (2Ω)⁻¹, leading to a dimensionless version of (1):
∂ζ/∂t + ∂ψ/∂φ + J1(ψ, λζ − εh) = 0, (2)
where λ = u0/2Ωa is the ratio between the characteristic flow velocity and the equatorial planetary velocity, the stream function ψ is measured in units of u0·a, ζ in units of u0/a, and the Jacobian operator is taken on a unit sphere. The parameter ε = |H̃|max/H̄ is the maximum relative topography value, so that the relative topography h(θ, φ) is of order one, as are ψ(θ, φ) and ζ(θ, φ).
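As a rough numerical orientation, λ and ε can be estimated as follows; the sketch below uses representative Earth values assumed here for illustration, not figures quoted in the text.

```python
# Order-of-magnitude check of the dimensionless parameters of Eq. (2).
# The numbers below are representative Earth values assumed for
# illustration; they are not quoted in the text.
u0 = 10.0          # characteristic flow velocity [m/s]
Omega = 7.292e-5   # planetary rotation rate [1/s]
a = 6.371e6        # mean planetary radius [m]

lam = u0 / (2 * Omega * a)    # lambda = u0 / (2 Omega a)
print(f"lambda ~ {lam:.4f}")  # a small parameter, ~ 1e-2

# With |H~|_max ~ 1 km over a mean depth ~ 10 km, eps is also small:
eps = 1.0e3 / 1.0e4
print(f"eps ~ {eps}")
```

Both parameters come out of order 10⁻², consistent with the weakly nonlinear treatment that follows.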
Since λ and ε are small parameters and all the fields are of the same order, the dominant role of the Rossby-Haurwitz waves becomes explicit, and the nonlinear terms in the Jacobian provide small corrections to the evolution equation that enable energy transfers between modes. Owing to the completeness of the spherical harmonics we can expand the stream function ψ and the depth h as Laplace series,
ψ(θ, φ, t) = Σ_{m,n} a_{m,n}(t) Y_n^m(θ, φ), (3)
h(θ, φ) = Σ_{m,n} h_{m,n} Y_n^m(θ, φ), (4)
where, in the linear regime, the stream amplitudes oscillate with the Rossby-Haurwitz frequencies (here re-scaled by 2Ω),
ω_{m,n} = m/(n(n + 1)) = m/k_n. (5)
Plugging (3) and (4) into (2) and using the orthogonality of the spherical harmonics together with well-known selection rules, we can derive a reduced dynamical system involving the amplitudes a_i = a_{m_i,n_i}, which can be organized in triads {ψ_α, ψ_β, ψ_γ} satisfying m_α + m_β = m_γ. In general, triads are coupled by shared modes that transfer energy between them, and the energy of the system can flow in complicated patterns in modal space.
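As a minimal illustration of this bookkeeping, the sketch below enumerates low-order modes, evaluates the rescaled frequency (5), and lists candidate triads; for brevity it applies only the zonal selection rule m_α + m_β = m_γ, omitting the remaining rules of Appendix VI B.

```python
# Sketch: enumerate low-order spherical-harmonic modes (m, n), evaluate
# the rescaled Rossby-Haurwitz frequency (5), and list candidate triads
# obeying the zonal selection rule m_alpha + m_beta = m_gamma. The
# remaining selection rules (Appendix VI B) are omitted here.
from itertools import combinations

def omega(m, n):
    # Eq. (5): omega_{m,n} = m / (n (n + 1)) = m / k_n
    return m / (n * (n + 1))

modes = [(m, n) for n in range(1, 5) for m in range(0, n + 1)]

triads = [(p, q, g)
          for p, q in combinations(modes, 2)
          for g in modes
          if p[0] + q[0] == g[0] and len({p, q, g}) == 3]

print(omega(1, 2))      # mode (m, n) = (1, 2) oscillates at 1/6
print(len(triads))      # number of candidate triads
```

Zonal modes (m = 0) have zero frequency, which is why a zonal flow can only exchange energy through its nonlinear coupling to meridional modes.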
A. Equations for a single triad with topography
Instead of considering complex modal structures, we focus our attention on the dynamics of a single triad with topography, which sets the basic development scenarios that get modified by the interaction with other modes. The construction of the dynamical system and the equations for the general problem with unspecified modal structure can be found in Appendix VI B.
For a single triad {ψ1, ψ2, ψ3} and its conjugate modes, with one topography mode compatible with ψ3, the amplitudes evolve according to

i n1(n1 + 1) ȧ1 = −m1 a1 + λK μ32 a2* a3 + εK h a2*,
i n2(n2 + 1) ȧ2 = −m2 a2 + λK μ13 a1* a3 − εK h a1*,
i n3(n3 + 1) ȧ3 = −m3 a3 + λK μ12 a1 a2,

where μ_βα = k_β − k_α, and K is the interaction coefficient between the modes, as described in Appendix VI B. By requiring the stream function to be real-valued we obtain the form
ψ(θ, φ, t) = Σ_{i=1}^{3} |a_i| P_{n_i}^{m_i}(cos θ) cos[m_i φ + β_i(t)], (6)
where β_i(t) is the phase of the complex variable a_i. In its present form the system can easily be fed with physical parameters, but more useful predictions can be made if we re-scale the variables to identify a more fundamental combination of parameters that allows us to compare all triads on the same footing.
To do this we define the constants
α1 = k2⁻¹ − k3⁻¹, α2 = k3⁻¹ − k1⁻¹, α3 = k2⁻¹ − k1⁻¹,
and a new set of dynamical variables
z_i = λK (α_j α_k)^{1/2} k_i a_i e^{iϕ_{0,i}}, i ≠ j ≠ k,
where the phases ϕ_{0,i} are yet to be determined and satisfy ϕ_{0,3} = 2ϕ_{0,1}, ϕ_{0,1} = ϕ_{0,2}. Then we put the topography amplitude in its polar form, h = |h| e^{iϕ_h}, and introduce two dependent perturbation amplitudes
ε1 = [k3/(k3 − k1)] K √|α1α2| |h| ε, (7)
ε2 = [k3/(k3 − k2)] K √|α1α2| |h| ε, (8)
so that the phase of εh becomes ϕ_h + β12, where β12 = π/2 if sgn(α1α2) = −1, and β12 = 0 otherwise. To express the system in terms of universal parameters alone we can set ϕ_{0,1} = −ϕ_h − β12, which fixes all the phases and leads to the most compact form of this dynamical system:
iż1 = −ω1 z1 + z2* z3 + ε1 z2*,
iż2 = −ω2 z2 + z1* z3 + ε2 z1*, (9)
iż3 = −ω3 z3 + z1 z2,
where ε1,2 are real, can be positive or negative, and satisfy ε1/ε2 = (k3 − k2)/(k3 − k1). In general, the amplitudes z_i(t) are complex variables and the previous dynamical system is six-dimensional and non-linear, but it can be tackled analytically in a number of useful particular situations. Instead of studying this complex system numerically, we proceed analytically from the less general single triad without topography to the more complete situation of a single triad with topography. This will allow us to identify the most relevant features of the system and the variables that carry most of its information. Numerical analysis will be employed only as an illustrative tool, after we identify the relevant features of the system.
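The quoted ratio ε1/ε2 follows directly from the definitions (7)-(8); a quick arithmetic check (with sample degrees and the common factors lumped into one placeholder) is:

```python
# Arithmetic check of eps1/eps2 = (k3 - k2)/(k3 - k1) from the
# definitions (7)-(8). The k's follow k_n = n(n+1) for sample degrees
# n = 2, 3, 4; K, sqrt|alpha1 alpha2|, |h| and eps are lumped into a
# single placeholder common factor that cancels in the ratio.
k1, k2, k3 = 2 * 3, 3 * 4, 4 * 5
common = 1.0                      # K * sqrt|alpha1 alpha2| * |h| * eps

eps1 = k3 / (k3 - k1) * common    # Eq. (7)
eps2 = k3 / (k3 - k2) * common    # Eq. (8)
print(eps1 / eps2, (k3 - k2) / (k3 - k1))   # both equal 4/7
```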
III. SINGLE TRIAD DYNAMICS
The dynamics of a single triad without topography is completely integrable, i.e. there are enough constants of motion to reduce the dynamics to two dimensions, where systems are in general integrable. Taking ε1 = ε2 = 0 in (9), the dynamical system can be derived from the Hamilton equations żi = ∂H/∂zi* and żi* = −∂H/∂zi, with Hamiltonian function

H({zi}, {zi*}) = i(ω1|z1|² + ω2|z2|² + ω3|z3|² − z1z2z3* − z1*z2*z3), (10)

which will be relevant below for the construction of a complete set of constants of motion.
Writing the complex amplitudes in polar form, zi(t) = ri(t) e^{iϕi(t)}, we obtain the real-variable dynamical system

ṙ1 = r2r3 sin ϕ, (11)
ṙ2 = r1r3 sin ϕ, (12)
ṙ3 = −r1r2 sin ϕ, (13)
ϕ̇ = ω + (r2r3/r1 + r1r3/r2 − r1r2/r3) cos ϕ, (14)

where ω = ω3 − ω1 − ω2 and ϕ = ϕ3 − ϕ1 − ϕ2 are the frequency and phase mismatch, respectively. In the unperturbed situation the polar representation already reduces the dimension of the problem to four. By combining (11)-(13) we can find two constants of motion,
I1² = r1² + r3², (15)
I2² = r2² + r3², (16)
which represent circles of radii I1 and I2 in the planes r1−r3 and r2−r3 respectively, allowing us to express r1 and r2 in terms of I1, I2 and r3 alone, reducing the dimension of the system to two. Provided that r3 < I1,2 at all times, we can make more universal observations by measuring r3 in units of I_min = min(I1, I2), so that the resulting two-dimensional dynamical system takes the form
dx/dτ = −(1 − x²)^{1/2}(κ² − x²)^{1/2} sin ϕ, (17)
dϕ/dτ = ω̄ + [x/(κ² − x²) + x/(1 − x²) − 1/x] (1 − x²)^{1/2}(κ² − x²)^{1/2} cos ϕ, (18)
where the amplitude and time were re-scaled as x = r3/I_min and τ = I_min t, and there are only two parameters controlling the dynamics, κ = I_max/I_min and ω̄ = ω/I_min.
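This reduction can be checked numerically. The sketch below (frequencies and initial amplitudes are arbitrary test values, not taken from the text) integrates the unperturbed system (9) with a fixed-step RK4 and verifies that the Manley-Rowe constants (15)-(16) and the real invariant derived from (10) are conserved:

```python
import numpy as np

# Sketch: fixed-step RK4 integration of the unperturbed triad (9)
# (eps1 = eps2 = 0). Frequencies and initial amplitudes are arbitrary
# test values. The run verifies that I1^2, I2^2 of (15)-(16) and the
# real invariant derived from (10) are conserved.
w = np.array([0.3, 0.5, 0.6])        # omega_1, omega_2, omega_3

def rhs(z):
    z1, z2, z3 = z
    return np.array([1j * w[0] * z1 - 1j * np.conj(z2) * z3,
                     1j * w[1] * z2 - 1j * np.conj(z1) * z3,
                     1j * w[2] * z3 - 1j * z1 * z2])

def invariants(z):
    z1, z2, z3 = z
    I1sq = abs(z1)**2 + abs(z3)**2
    I2sq = abs(z2)**2 + abs(z3)**2
    E = np.dot(w, np.abs(z)**2) - 2 * (z1 * z2 * np.conj(z3)).real
    return I1sq, I2sq, E

z = np.array([0.4 + 0.1j, 0.3 - 0.2j, 0.5 + 0.0j])
I1_0, I2_0, E0 = invariants(z)

dt = 1e-3
for _ in range(20000):               # integrate up to t = 20
    k1 = rhs(z); k2 = rhs(z + 0.5 * dt * k1)
    k3 = rhs(z + 0.5 * dt * k2); k4 = rhs(z + dt * k3)
    z = z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

I1f, I2f, Ef = invariants(z)
print(abs(I1f - I1_0), abs(I2f - I2_0), abs(Ef - E0))  # all tiny
```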
This system encodes all the dynamics of a single triad for given values of the mismatch ω̄ and of the constants I1, I2, determined by the initial amplitudes of the modes. In principle, these equations can be integrated to determine r3 as a function of time, but the geometry of the phase space x−ϕ contains a great deal of information without requiring integration of the dynamical system. To exploit this fact we just need to determine the fixed points of (17)-(18) and their stability type. In Appendix VI C this is done in some detail, resulting in six equilibria: two pairs of hyperbolic points (saddles), one pair S0± at (ϕ, x) = (±π/2, 0) and the other S1± at (ϕ, x) = (±π/2, 1), and two elliptic points (centers), C0 and Cπ at ϕ = 0 and π, for which the corresponding values of x can be determined numerically from a high-order polynomial equation (see Appendix VI C).
Let us choose, for illustrative purposes, a triad with rescaled mismatch ω̄ = −0.76 and κ = 1.2. In Fig. 1 we show the fixed points and how they organize the phase-space flow, notably the saddle points, which are connected to the separatrices that delimit the regions with oscillations and rotations in ϕ. To express x in terms of ϕ alone we can use the fact that the Hamiltonian function (10) is explicitly independent of time, and consequently a constant of motion. Putting (10) in polar form we can derive a real constant of motion that couples r3 and ϕ in terms of the other system parameters,
Im H(r3, ϕ) = ω1 r1²(r3) + ω2 r2²(r3) + ω3 r3² − 2 r1(r3) r2(r3) r3 cos ϕ,
which can be rescaled in the same fashion as the dynamical system to obtain
E(ϕ, x) = ω̄x²/2 − (1 − x²)^{1/2}(κ² − x²)^{1/2} x cos ϕ, (19)
where the zero of E was chosen appropriately to eliminate the dependence on non-essential parameters. Notice also that the dynamical system (17)-(18) can be obtained from the Hamilton equations ϕ̇ = ∂E/∂J and J̇ = −∂E/∂ϕ, where (19) is the Hamiltonian function and {ϕ, J}, with momentum J = x²/2, is the canonically conjugate pair. This implies that the set of transformations that led from (9) to (17)-(18) preserves the Hamiltonian structure, i.e. they were canonical transformations.
Since E(ϕ(τ), x(τ)) is constant along the solutions of the dynamical system, the level sets of E(x, ϕ) can be used to illustrate the nature of the solutions. In Fig. 1 we divide the phase space ϕ−x into three sub-domains depending on the behavior of ϕ: two regions with bounded oscillations in ϕ (in white), where E < ω̄ or E > 0, separated by a region with unbounded evolution of ϕ (in gray), where ω̄ < E < 0. The evolution of x is always bounded and periodic, as expected from r3 < I_min and from the dimension of the space, except for the infinite-time orbits on the separatrices, which only arrive at (or depart from) the saddle points asymptotically. Expectedly, the frequency of the solutions goes to zero at the separatrices and is finite otherwise, tending to the linear frequencies for small-amplitude oscillations close to the centers C0, Cπ.
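The conservation of E along trajectories can be verified directly. The sketch below integrates (17)-(18) with a fixed-step RK4, for the illustrative parameters ω̄ = −0.76, κ = 1.2 of Fig. 1 and an arbitrary interior initial condition, and checks that E(ϕ, x) of (19) stays constant:

```python
import numpy as np

# Sketch: RK4 integration of the reduced system (17)-(18) for the
# illustrative triad of Fig. 1 (omega_bar = -0.76, kappa = 1.2) from an
# arbitrary interior initial condition; E of Eq. (19) must stay constant.
wbar, kap = -0.76, 1.2

def rhs(s):
    x, phi = s
    A, B = 1 - x**2, kap**2 - x**2
    dx = -np.sqrt(A * B) * np.sin(phi)                                    # Eq. (17)
    dphi = wbar + (x / B + x / A - 1 / x) * np.sqrt(A * B) * np.cos(phi)  # Eq. (18)
    return np.array([dx, dphi])

def energy(s):
    x, phi = s
    return wbar * x**2 / 2 - np.sqrt((1 - x**2) * (kap**2 - x**2)) * x * np.cos(phi)

s = np.array([0.5, 2.5])    # (x, phi): a bounded orbit about phi = pi
E0 = energy(s)

dt = 1e-3
for _ in range(10000):      # integrate up to tau = 10
    k1 = rhs(s); k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2); k4 = rhs(s + dt * k3)
    s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(abs(energy(s) - E0))  # ~ integrator round-off
```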
To summarize, in this particular situation, for E < ω̄ < 0 the amplitude of the third wave, r3 ∝ x, oscillates around a large value and its phase oscillates about ϕ = 0. As we increase the energy to ω̄ < E < 0, the fluctuations in the wave amplitude grow, but now occur about an intermediate value, and the mismatch phase ϕ moves counterclockwise and is unbounded. Finally, if we increase the energy to E > 0, the fluctuations in the amplitude decrease and occur about an even lower value, while the mismatch phase becomes confined again, but oscillates about ϕ = π. As will be illustrated now, this analysis is reversed when ω̄ > 0. In Fig. 2 we depict a few instances of phase space for different parameters ω̄ and κ. The x-coordinates of the centers C0 and Cπ interchange as ω̄ goes from negative to positive, and the separatrices flip the bounded region when the phase mismatch flips direction as well. Also, since the unbounded motion occurs between E = ω̄ and E = 0, larger magnitudes of ω̄ lead to larger regions of unbounded motion. Conversely, increases in κ push the separatrices toward each other, reducing the size of the rotation region.
A. Inherent modal drift
In this work, we are interested in the development of atmospheric patterns and their persistence in relation to atmospheric blockings. As mentioned before, the contribution to the flow pattern from a mode α = (m, n) has the form

ψ_α ∝ r_α(t) P_n^m(cos θ) cos(mφ + ϕ_α(t)),
where the modal phase ϕ_α(t) introduces a time-dependent offset in the zonal direction for the mode contribution to the global flow pattern. In the context of atmospheric blocking, we can expect that the relevant mode ψ3 sustains a relatively large amplitude r3 for a long time, and that its phase becomes stagnant or oscillates about the blocking longitude ϕ_b, which might be correlated with the topography phase ϕ_h that already sets the zero of the mode phases in (9). Interesting candidates for the blocking states are the orbits close to the high-amplitude center at mismatch value ϕ = 0 (available for ω̄ < 0), where the amplitude r3 is stably large and ϕ(t) oscillates about zero. This, however, does not imply that the individual phases ϕ_i(t) oscillate too, but only their mismatch. In the following we address the conditions under which the relevant phase ϕ3 tends to oscillate about zero as well.
Consider the rescaled equation for the phase of the topography-compatible mode ϕ3,

dϕ3/dτ = ω̄3 − A(x) cos ϕ,

where A(x) = (1 − x²)^{1/2}(κ² − x²)^{1/2}/x and ω̄3 = ω3/I_min.
Now, we consider a periodic solution near the fixed point C 0 = (0, x 0 ), which can be approximated in the linear region by
x(τ) ≈ x0 + a_x cos(ω0 τ),
ϕ(τ) ≈ a_ϕ sin(ω0 τ),
where ω0 is the linear frequency at the fixed point C0, and the fluctuation amplitudes a_x, a_ϕ satisfy |a_x| ≪ x0 and |a_ϕ| ≪ 1. To second order, the evolution of the phase velocity along the linear solution is

dϕ3/dτ ≈ ω̄3 − A(x0) − a_x A′(x0) cos(ω0 τ) − (a_x²/2) A″(x0) cos²(ω0 τ) + (a_ϕ²/2) A(x0) sin²(ω0 τ),
and the drift velocity of the third phase is the average of ϕ̇3 taken along the orbit,

u3 = (1/T) ∫₀^T (dϕ3/dτ) dτ,
which leads to the simple form
u3 = ω̄3 − A0 + (a_ϕ² A0 − a_x² A0″)/4, (20)
giving the actual drift velocity of a wave pattern in the zonal direction as a function of the amplitudes of the fluctuations, a_x and a_ϕ, and the coordinate x0 of the fixed point C0 in the phase space ϕ−x.
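Formula (20) can be checked against a direct orbit average. The sketch below (all parameter values are arbitrary test values) averages ϕ̇3 = ω̄3 − A(x) cos ϕ along the assumed linear solution over one period and compares the result with (20), using a finite-difference estimate of A″:

```python
import numpy as np

# Sketch: numerical check of the orbit-averaged drift formula (20).
# We average phi3_dot = omega3_bar - A(x) cos(phi) along the assumed
# linear solution x = x0 + a_x cos(w0 tau), phi = a_phi sin(w0 tau)
# over one period (taking w0 = 1) and compare with (20). All values
# are arbitrary test values; A'' is taken by central finite differences.
kap = 1.66
A = lambda x: np.sqrt((1 - x**2) * (kap**2 - x**2)) / x

x0, ax, aphi, w3bar = 0.8, 0.02, 0.05, 1.3
tau = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)
u3_numeric = np.mean(w3bar - A(x0 + ax * np.cos(tau)) * np.cos(aphi * np.sin(tau)))

h = 1e-4
App = (A(x0 + h) - 2 * A(x0) + A(x0 - h)) / h**2            # A''(x0)
u3_formula = w3bar - A(x0) + (aphi**2 * A(x0) - ax**2 * App) / 4

print(abs(u3_numeric - u3_formula))   # only higher-order terms remain
```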
In principle, the zonal drift (20) of ψ3 can vanish for a particular choice of system parameters, at one single particular solution in phase space; e.g., if we take an oscillation in x−ϕ for which ω̄3 = A0 − (a_ϕ² A0 − a_x² A0″)/4, the corresponding level set in ϕ−x becomes the boundary between positive and negative drifting evolution for ϕ3.
In Fig. 3 we show a particular triad that supports zero-drift solutions for ϕ3; the corresponding parameters are ω̄ = −1.18, κ = 1.66 and x0 = 0.8. As described above, the zero-drift solution occurs at a single level set in phase space, and other solutions exhibit positive or negative drift depending on whether they are inside or outside that contour. Since the zero-drift solutions of (20) have zero measure in phase space, it is clear that the mode locking required for atmospheric blocking is not statistically representative in the space of solutions, and consequently is not a feature of the dynamics of a single triad; in general, all modes will drift with finite velocity in real space. In the following, we will show that mode locking is a more general feature of the dynamics when a topography mode interacts with the triad at particular system parameters, i.e. there is a nonzero-measure region in phase space where this phenomenon occurs.
IV. SINGLE TRIAD WITH TOPOGRAPHY
Now that we have identified the fundamental parameters of a single triad without topography, we can better characterize the effects of the interaction with a single topography mode congruent with the third wave pattern. Before rewriting the original system (9) in polar form, it can be seen that I1 and I2 as defined in (15)-(16) are no longer constants of motion, but we can use them to derive a new invariant of the form
M² = ±[ε2(|z1|² + |z3|²) − ε1(|z2|² + |z3|²)], (21)
where the sign is chosen in such a way that M > 0. Provided that M is a constant, the largest Manley-Rowe quantity (multiplied by the corresponding ε) will always remain the largest, maintaining its distance to the other quantity. In general, we can express r3 in terms of r1, r2 using (21) and the system parameters; the original system (9) can then be recast in polar form to obtain the four-dimensional system

ṙ1 = r2 r3(r1, r2) sin ϕ + ε1 r2 sin(ϕ − ϕ3),
ṙ2 = r1 r3(r1, r2) sin ϕ + ε2 r1 sin(ϕ − ϕ3),
ϕ̇ = ω + [(r2/r1 + r1/r2) r3(r1, r2) − r1r2/r3(r1, r2)] cos ϕ + [ε1 r2/r1 + ε2 r1/r2] cos(ϕ − ϕ3),
ϕ̇3 = ω3 − [r1r2/r3(r1, r2)] cos ϕ,
where the function r3 takes an appropriate form, for instance r3(r1, r2) = [(M² + ε1r2² − ε2r1²)/(ε2 − ε1)]^{1/2}. In the case without topography the evolution was entirely determined by the initial phase mismatch ϕ = ϕ3 − ϕ2 − ϕ1 and the modal amplitudes, but now the initial phase ϕ3 must be specified as well.
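The invariant (21) can be verified numerically on the full complex system. The sketch below (arbitrary test frequencies and amplitudes; ε1, ε2 taken as in Sec. IV) integrates (9) with topography and checks that M² is conserved while I1² drifts:

```python
import numpy as np

# Sketch: RK4 integration of the perturbed triad (9) with topography.
# Frequencies and initial amplitudes are arbitrary test values; eps1,
# eps2 match the values used in Sec. IV. The run verifies that M^2 of
# Eq. (21) is conserved while the Manley-Rowe quantity I1^2 drifts.
w = np.array([0.3, 0.5, 0.6])
e1, e2 = 0.01, 0.0175

def rhs(z):
    z1, z2, z3 = z
    return np.array([
        1j * w[0] * z1 - 1j * np.conj(z2) * z3 - 1j * e1 * np.conj(z2),
        1j * w[1] * z2 - 1j * np.conj(z1) * z3 - 1j * e2 * np.conj(z1),
        1j * w[2] * z3 - 1j * z1 * z2])

def M2(z):
    return e2 * (abs(z[0])**2 + abs(z[2])**2) - e1 * (abs(z[1])**2 + abs(z[2])**2)

z = np.array([0.4 + 0.1j, 0.3 - 0.2j, 0.5 + 0.0j])
M0 = M2(z)
I1_0 = abs(z[0])**2 + abs(z[2])**2

dt, max_dev = 1e-3, 0.0
for _ in range(20000):               # integrate up to t = 20
    k1 = rhs(z); k2 = rhs(z + 0.5 * dt * k1)
    k3 = rhs(z + 0.5 * dt * k2); k4 = rhs(z + dt * k3)
    z = z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    max_dev = max(max_dev, abs(abs(z[0])**2 + abs(z[2])**2 - I1_0))

print(abs(M2(z) - M0))   # conserved to integrator accuracy
print(max_dev)           # I1^2 is no longer constant
```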
To understand the effect of the topography on the third phase, notice that the perturbation does not affect the equation for ϕ̇3 directly, but only through its cumulative effects on r1(t) and r2(t), which are no longer uniquely specified by the value of r3 and the constants of motion. As before, consider the average of the equation for ϕ̇3 along a trajectory close to C0,
u3 = ω3 − (1/T) ∫₀^T [r1(t) r2(t)/r3(r1(t), r2(t))] cos ϕ(t) dt,
where T is the period of the corresponding unperturbed orbit around C0. Since r1(t), r2(t) and ϕ(t) are no longer exactly periodic, the resulting drift velocity for one cycle differs slightly from that of the next, and the drift velocity becomes a function of time.
In Fig. 4 we show the results of the numerical integration of the equations of motion for one triad with ω̄ = −1.0, κ = 1.34, with and without topography, for different initial conditions in r3−ϕ. For this choice of parameters the system contains an elliptic fixed point with large amplitude, x0 = r3,0/I_min = 0.8, around which we study the motion with perturbation amplitudes ε1 = 0.01, ε2 = 0.0175. As mentioned above, the drift velocity in the unperturbed case is constant and the phase of the third mode in general oscillates while drifting away. This occurs even when the initial condition in phase space corresponds to bounded solutions of the mismatch phase ϕ (which is the case here). On the other hand, when we introduce a topography mode congruent with the wave ψ3, for the same initial conditions we obtain a bounded, periodic phase drift for ϕ3 about its initial value; in other words, the topography effectively locks its corresponding Rossby wave spatially for a finite region of phase space. In contrast with the unperturbed triad, the perturbed solutions exhibiting bounded drifts have finite measure in phase space, and consequently finite probability when sorting initial configurations.

Figure 5: Comparison between the corresponding third-mode amplitudes for different initial mismatches (same as Fig. 4) in the perturbed and unperturbed situations.
A complementary important feature of the perturbed motion is that it does not significantly affect the amplitude of the relevant wave, but only modulates it slightly in time. With this we combine the two required features of simplified atmospheric blocking: a bounded drift of the third Rossby wave with a sustained large amplitude.
V. CONCLUSIONS
In this work we have shown that individual triads of the barotropic quasi-geostrophic model present unbounded phase drifts in each mode, implying that the underlying flow pattern of each wave oscillates about a rotating phase in real space. The drift motion is present even when the triad mismatch phase performs bounded oscillations in phase space at high amplitudes of a relevant mode. Particular solutions without drift do exist, but they have zero measure in phase space. Not surprisingly, this implies that a single triad is unable to represent the meridional persistence of atmospheric blocking, one of its most basic features. This scenario changes drastically with the introduction of a single topography mode consistent with one wave in the Rossby-Haurwitz triad. The topography here acts as a small perturbation on the Hamiltonian system, leading to periodic changes in the drift velocity and resulting in the non-linear phase capture of the corresponding mode. This generates a finite volume of solutions in phase space without net phase drift for the mode consistent with the topography, i.e. persistent patterns have a finite probability when initial conditions are randomly selected in phase space. Fortunately, such important changes in the phase dynamics do not drastically affect the characteristic large amplitude of the mode corresponding to the topographic pattern; consequently, the topography-related mode can be large while its phase remains bounded meridionally in real space. The results obtained in this work are very general, as the mechanisms described here were not obtained for particular modal structures; numerical examples were used for illustrative purposes, but they do not restrict our analysis, which is based on a rescaled, dimensionless dynamical system encompassing an infinite family of triads.
With this we imply that non-linear phase capture is a common mechanism triggered by topography (or any inhomogeneity in the atmospheric layer), and that it leads to dynamical features consistent with atmospheric blockings. The emergence of such states and their duration might involve the interaction with other triads, and will be investigated in a future paper, but the presence of persistent patterns in the elementary triads is already expected from this reductionist treatment.
VI. APPENDIX
A. Derivation of the quasi-geostrophic model
In general, the conservation of the potential vorticity in an incompressible two-dimensional flow in a rotating frame with angular velocity Ω is given by
D_t [(ζ + f)/H] = 0, (22)
where ζ = n̂ · ∇ × u is the relative vorticity, with n̂ the normal to the planet surface, f = 2Ω cos θ the planetary vorticity, and H(θ, φ, t) the fluid depth at latitude θ, longitude φ and time t. The material derivative is defined in terms of the relative velocity in the rotating frame, D_t = ∂/∂t + u · ∇, so that mass conservation reads
D t H = −H∇ · u(23)
which inserted in (22) leads to
D t (ζ + f ) + (ζ + f )∇ · u = 0,(24)
which can be recast in conservation form as
∂ t ζ + ∇ · [(ζ + f ) u] = 0,(25)
where it becomes clear that the potential vorticity is carried by the velocity flow. Now, in order to obtain a useful yet simple model of the atmosphere, we consider the particular situation in which the depth H does not change in time, so that the flow Hu becomes incompressible and can be written in terms of a suitable stream function ξ in the form

Hu = n̂ × ∇ξ. (26)
Decomposing the depth into a large mean H̄ and a small fluctuation H̃, and dividing both sides of (26) by the mean depth H̄, we can write the non-solenoidal velocity field as
u = [n̂ × ∇ψ]/(1 + η), (27)
where η = H̃(θ, φ)/H̄ and ψ = ξ/H̄. Replacing this form of the velocity in (25), we obtain a new form of the conservation of the absolute vorticity,
∂ζ/∂t + J(ψ, (ζ + f)/(1 + η)) = 0, (28)
where J(f1, f2) = n̂ · (∇f1 × ∇f2). This equation, now in terms of scalar fields alone, requires an additional relation between the stream function ψ and the relative vorticity ζ, given by
ζ = ∇²ψ/(1 + η) − ∇η · ∇ψ/(1 + η)², (29)
which formally allows us to determine the evolution of the stream function, provided that we know the static relative fluctuation η. The fluctuation is due to both the topography and the free surface, but here we will refer to it as the topography.
Perturbative expansion and hybrid model
In the limit without topography, η → 0, the velocity field becomes solenoidal and (28) becomes the regular non-linear form of the barotropic vorticity (BV) equation. This suggests a functional dependence between the stream function ψ and the topography η, which can be weighted by a tunable parameter ε; we then perform a perturbative expansion of the form

ψ = ψ0 + εψ1 + · · · .
The leading orders of (29) are

ζ0 + εζ1 = ∇²ψ0 + ε(∇²ψ1 − η∇²ψ0 − ∇η · ∇ψ0), (30)

which, replaced in (28), leads to

∂/∂t (ζ0 + εζ1) + J[ψ0 + εψ1, (ζ0 + εζ1 + f)(1 − εη)] = 0, (31)
where ζ0 = ∇²ψ0 and ζ1 = ∇²ψ1 − η∇²ψ0 − ∇η · ∇ψ0 are the zeroth- and first-order vorticities. At this point, if we perform a regular separation in powers of ε, we recover the regular barotropic vorticity equation at order zero and obtain a complicated expression at first order, which includes the topography. Instead of proceeding in this fashion, we consider the influence of the topography on the zeroth-order atmospheric patterns and collect the remaining terms in an auxiliary equation intended to correct the stream function to first order:
∂ζ0/∂t + J(ψ0, ζ0 + f − ηf) = 0, (32)
∂ζ1/∂t + J(ψ1, ζ0 + f) + J(ψ0, ζ1 − ηζ0) = 0. (33)
Equation (32) resembles the usual BV equation but now includes a topographic forcing weighted by the Coriolis parameter, h = ηf = 2ΩH̃ cos θ/H̄. Dropping the indices, we obtain the hybrid model between orders zero and one used throughout the text,
∂ζ/∂t + J(ψ, ζ + f − h) = 0, (34)
where ζ = ∇²ψ, and the velocity is non-solenoidal because of the topography correction,
u = (1 − h/f) n̂ × ∇ψ. (35)
B. Reduced dynamical system and modal interaction
In general, introducing the stream-function expansion (3) and the topography decomposition (4) into the barotropic quasi-geostrophic equation (2), and using the orthogonality of the spherical harmonics, it can be shown that the dynamics of the amplitudes is given by
k_γ ȧ_γ = i m_γ a_γ + i Σ_{α,β} K_γβα [λ k_β a_α a_β + ε a_α h_β], (36)
where (α, β, γ) are collective indices of the form γ = (m_γ, n_γ) and k_γ = n_γ(n_γ + 1). The products between amplitudes are a consequence of the nonlinearity of the Jacobian, and are weighted by the interaction coefficients K defined by
K_γβα = ∫₋₁¹ P_γ [m_β P_β (dP_α/dz) − m_α P_α (dP_β/dz)] dz, (37)
where P_γ(z) = P_n^m(cos θ) are the associated Legendre polynomials, and it can be shown that the K's vanish unless the following conditions are satisfied: m_α + m_β = m_γ; m_α² + m_β² ≠ 0; n_α n_β n_γ ≠ 0; n_α ≠ n_β; n_α + n_β + n_γ is odd; (n_α − |m_α|)² + (n_β − |m_β|)² ≠ 0; |n_α − n_β| < n_γ < n_α + n_β; β ≠ γ̄ and α ≠ γ̄, where γ̄ = (−m_γ, n_γ) for γ = (m_γ, n_γ).
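The selection rules above (read with the inequality signs restored, which is an interpretation assumed here) are easy to encode as a predicate on (m, n) pairs:

```python
# Sketch: the selection rules for a nonvanishing K, encoded as a
# predicate on collective indices (m, n). The "!= 0" readings of the
# conditions are an interpretation assumed here.
def allowed(a, b, g):
    (ma, na), (mb, nb), (mg, ng) = a, b, g
    gbar = (-mg, ng)
    return (ma + mb == mg                       # zonal wavenumber rule
            and ma**2 + mb**2 != 0              # not both zonal
            and na * nb * ng != 0
            and na != nb
            and (na + nb + ng) % 2 == 1         # parity rule
            and (na - abs(ma))**2 + (nb - abs(mb))**2 != 0
            and abs(na - nb) < ng < na + nb     # triangle rule
            and b != gbar and a != gbar)

print(allowed((1, 2), (1, 3), (2, 4)))   # True
print(allowed((1, 1), (1, 2), (2, 3)))   # False: n-sum is even
```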
Retaining all the topography modes compatible with a single triad, the amplitude equations read

i n1(n1 + 1) ȧ1 = −m1 a1 + λK μ32 a2* a3 − εK(h2* a3 − h3 a2*),
i n2(n2 + 1) ȧ2 = −m2 a2 + λK μ13 a1* a3 + εK(h1* a3 − h3 a1*),
i n3(n3 + 1) ȧ3 = −m3 a3 + λK μ12 a1 a2 − εK(h2 a1 − h1 a2).
C. Fixed points of a single triad
In the following we classify the fixed points and their stability for the single-triad two-dimensional system (17)-(18). By requiring dx/dτ = 0, and recalling that x ≤ 1 and κ ≥ 1, we obtain the conditions
x1 = 1, ϕ2 = 0, ϕ3 = π. (38)
Now we examine these conditions with respect to the equation for the evolution of ϕ,

dϕ/dτ = ω̄ + [x/(κ² − x²) + x/(1 − x²) − 1/x] (1 − x²)^{1/2}(κ² − x²)^{1/2} cos ϕ,
which, when x → x1 = 1, takes the form dϕ/dτ → ω̄ + (κ² − 1)^{1/2}(1 − x²)^{−1/2} cos ϕ, so that as x → 1 the motion in phase space becomes mostly horizontal; but, because of the factor cos ϕ, the direction of dϕ/dτ reverses at ϕ = ±π/2, regardless of the finite offset caused by ω̄. We thus have two fixed points,

x1 = 1, ϕ1⁺ = π/2, ϕ1⁻ = −π/2. (39)
Notice also that dϕ/dτ diverges as x → 0 as well, which means that, even if dx/dτ is finite, the flow in phase space becomes horizontal but, by the previous argument, reverses at ϕ = ±π/2, leading to a pair of new fixed points, x4 = 0, ϕ4⁺ = π/2, ϕ4⁻ = −π/2.
For the remaining fixed points we insert ϕ2 and ϕ3 in dϕ/dτ = 0 to obtain

ω̄ x √[(1 − x²)(κ² − x²)] ± [2(1 + κ²)x² − 3x⁴ − κ²] = 0, (40)

which can be written as a quartic polynomial equation in x² and solved using computational algebra, but such a solution is extensive and of little value for phenomenological interpretation. However, we can guarantee that solutions exist for both ϕ2 and ϕ3. To see this, note that the left-hand side of (40) can be written as f_lhs(x) = ω̄ f1(x) ± f2(x), where f1(0) = f1(1) = 0, and f2(0) = −κ², f2(1) = κ² − 1. Since f2(0) < 0 and f2(1) > 0, the function f_lhs changes sign between x = 0 and x = 1 regardless of the sign of ω̄. Given that f_lhs(x) has no singularities, it must vanish at least once in 0 < x < 1. This guarantees that there is at least one pair of fixed-point solutions, ϕ2 = 0 with 0 < x2 < 1 and ϕ3 = π with 0 < x3 < 1; with further analysis the intervals can be narrowed, but here we are only interested in the existence of such fixed points. To summarize, we have

S1±: (ϕ1±, x1) = (±π/2, 1) → hyperbolic,
S0±: (ϕ4±, x4) = (±π/2, 0) → hyperbolic,
C0: (ϕ2, x2) = (0, 0 < x2 < 1) → elliptic,
Cπ: (ϕ3, x3) = (π, 0 < x3 < 1) → elliptic.
The classification of these equilibria was made using the discriminant of the Jacobian matrix and will not be detailed here.
Figure 1: Contours of (19) and fixed points of (17-18) for a triad with ω = −0.76 and κ = 1.2. The phase space is separated into regions with bounded (white) and unbounded (gray) motions in ϕ. The separatrices, in blue and red, contain solutions with infinite period and delimit the regions with finite-period solutions.
Figure 2: Phase-space dependence on the control parameters. Large magnitudes of ω increase the unbounded solutions, while large values of κ reduce the amount of bounded solutions.
Figure 3: Dynamics of the third phase ϕ₃ for a triad supporting a zero-drift solution (black waveform and contour) at a stationary, high amplitude of the third wave. At the stagnant amplitude x₀, the phase may drift in a positive or negative direction, depending on the initial mismatch ϕ₀. The zero-drift solution starting at (ϕ₀, x₀) = (0.8, 0.2) lies on a closed contour in ϕ − x about the elliptic point C₀ and separates positive from negative drift solutions for the third wave.
Figure 4: Evolution of the third phase for a single triad (dotted) and a triad with topography (continuous), for the same initial amplitudes and different values of the initial mismatch ϕ(τ = 0).
|
[] |
[
"Liuer Mihou: A Practical Framework for Generating and Evaluating Grey-box Adversarial Attacks against NIDS",
"Liuer Mihou: A Practical Framework for Generating and Evaluating Grey-box Adversarial Attacks against NIDS"
] |
[
"Anonymous Author "
] |
[] |
[] |
Due to its high expressiveness and speed, Deep Learning (DL) has become an increasingly popular choice as the detection algorithm for Network-based Intrusion Detection Systems (NIDSes). Unfortunately, DL algorithms are vulnerable to adversarial examples that inject imperceptible modifications to the input and cause the DL algorithm to misclassify it. Existing adversarial attacks in the NIDS domain often manipulate the traffic features directly, which holds no practical significance because traffic features cannot be replayed in a real network. It remains a research challenge to generate practical and evasive adversarial attacks. This paper presents the Liuer Mihou attack, which generates practical and replayable adversarial network packets that can bypass anomaly-based NIDS deployed in Internet of Things (IoT) networks. The core idea behind Liuer Mihou is to exploit adversarial transferability and generate adversarial packets on a surrogate NIDS constrained by predefined mutation operations to ensure practicality. We objectively analyse the evasiveness of Liuer Mihou against four ML-based algorithms (LOF, OCSVM, RRCF, and SOM) and the state-of-the-art NIDS, Kitsune. From the results of our experiments, we gain valuable insights into necessary conditions for the adversarial transferability of anomaly detection algorithms. Going beyond a theoretical setting, we replay the adversarial attack in a real IoT testbed to examine the practicality of Liuer Mihou. Furthermore, we demonstrate that existing feature-level adversarial defences cannot defend against Liuer Mihou and constructively criticise the limitations of feature-level adversarial defences.
|
10.48550/arxiv.2204.06113
|
[
"https://arxiv.org/pdf/2204.06113v1.pdf"
] | 248,157,259 |
2204.06113
|
d5439d101906e9e52ccb7e7b83fb9c8b050dc48d
|
Liuer Mihou: A Practical Framework for Generating and Evaluating Grey-box Adversarial Attacks against NIDS
Anonymous Author
Liuer Mihou: A Practical Framework for Generating and Evaluating Grey-box Adversarial Attacks against NIDS
10.1145/nnnnnnn.nnnnnnn
CCS CONCEPTS: • Security and privacy → Network security; Intrusion detection systems; • Computing methodologies → Artificial intelligence.
KEYWORDS: NIDS, Deep Learning, Adversarial Attacks
INTRODUCTION
Deep Learning (DL) has gained notable popularity over recent years as it outperforms traditional Machine Learning (ML) algorithms across various domains such as Natural Language Processing (NLP) [43] and Computer Vision (CV) [10,22]. The success of DL is primarily due to its ability to utilise large volumes of data to learn highly abstract representations that are incomprehensible to traditional ML algorithms and humans [30], making them a perfect fit for Network-based Intrusion Detection Systems (NIDSes) where large volumes of data are generated each day.
Unfortunately, a significant weakness of DL is that it is vulnerable to adversarial examples. Adversarial examples are created by purposely adding an imperceptible perturbation to an input of a DL algorithm that causes misclassification of the input by the DL algorithm. Various adversarial attacks against DL algorithms have been developed across multiple domains such as images [40,48], physical world [29], and malware [24].
The use of Deep Neural Networks (DNNs) in NIDS exposes a new attack surface for attackers to exploit and is particularly devastating because they are under constant adversarial threats [46]. Most adversarial attacks against NIDS have focused on modifying the network features directly [13,25,31,42]. However, feature-level adversarial attacks are not practical since they only modify the traffic features, which cannot be replayed directly in the network to conduct the intended malicious activities. Hence, practical adversarial attacks against NIDS should consider problem/packet space [41] modifications that directly alter the packets.
Previous works have made limited progress towards practical packet-level adversarial attacks. Early packet-level attacks often rely on randomly applying predefined mutations with vigorous trial-and-error to find a feasible solution [21,23], which provide little to no theoretical guidance and insight. Recent works in packet-level adversarial attacks [20,28] formulate the adversarial attack as a bi-level optimisation problem that first searches for the adversarial features and then modifies the packets to mimic the adversarial features. However, the bi-level design is overly complicated and introduces numerical instabilities that make finding the optimal solution challenging.
In this paper, we present a novel and practical transfer-based [39] adversarial attack called Liuer Mihou 1 to attack NIDS. Liuer Mihou first trains a surrogate NIDS that mimics the decision boundaries of the target NIDS. Next, the attack finds the optimal set of semantic preserving packet mutations with a hybrid heuristic search algorithm to minimise the anomaly score of any packets identified as malicious by the surrogate NIDS. As a result, Liuer Mihou produces a realistic sequence of packets that can be replayed in real-time in the network.
To fully understand the strengths and weaknesses of Liuer Mihou, we comprehensively evaluate its evasiveness in a real Internet of Things (IoT) network against a wide range of ML/DL based NIDS. Going beyond the theoretical setting, we replay the generated adversarial traffic in the original IoT network to evaluate its evasiveness and maliciousness. Finally, we examine the strength of our attack by launching it against adversarial defences such as feature squeezing [51] and Mag-Net [33].
Contributions. In summary, our key contributions include: (1) We design a novel and practical adversarial attack tailored to attack anomaly-based NIDS, called Liuer Mihou, and provide source code for download [4]. (2) We conduct a comprehensive evaluation of Liuer Mihou in a real IoT testbed against four ML-based anomaly detection algorithms (SOM, RRCF, LOF, and OCSVM) and the state-of-the-art DL-based IoT NIDS, Kitsune [36]. (3) We demonstrate the strength of Liuer Mihou by assessing our attacks on Kitsune with adversarial detection defences such as Feature Squeezing [51] and Mag-Net [33]. (4) We provide empirical results and findings of our experiments, which provide insights for future adversarial attacks against NIDS.

Organisation. The rest of this paper is organised as follows. Section 2 presents the Liuer Mihou framework along with the definitions and threat model used. Section 3 describes the experiment setup, followed by the experiment results presented in Section 4. Next, Section 5 provides background and related work on anomaly-based NIDS and adversarial attacks and defences in the NIDS domain. Then, we discuss our findings, limitations, and future work in Section 6. Finally, we conclude the paper in Section 7. We have also provided additional information in the Appendix to give more details of our experiments.
LIUER MIHOU
This section first provides definitions of the terms used throughout the paper and explicitly defines the threat model of Liuer Mihou. Following that, we present an overview of Liuer Mihou and provide details of the critical components.
Definitions
Definition 2.1 (Network Traffic Space). We refer to the network traffic space as the set of all possible packets that the NIDS can capture, denoted 𝒫. Depending on the nature of the network traffic, 𝒫 can be further divided into five categories: benign, malicious, clean, adversarial, and replay.

Definition 2.1.1 (Benign Traffic Space, 𝒫_b). 𝒫_b contains all packets captured during normal operational time.

Definition 2.1.2 (Malicious Traffic Space, 𝒫_m). 𝒫_m contains all packets captured when the network is under attack. Note that not all packets in the malicious traffic space are directly responsible for the malicious activities; it may contain packets in common with benign traffic (e.g., packets generated by the TCP three-way handshake).

Definition 2.1.3 (Clean Traffic Space, 𝒫_c). 𝒫_c contains all packets that have not been adversarially modified, i.e., all of the benign and malicious traffic (𝒫_c = 𝒫_b ∪ 𝒫_m).

Definition 2.1.4 (Adversarial Traffic Space, 𝒫_a). 𝒫_a contains all theoretically crafted packets that are generated by applying mutation operations to malicious traffic.

Definition 2.1.5 (Replay Traffic Space, 𝒫_r). 𝒫_r contains all packets captured during the replay of the adversarial traffic. Note that we make a distinction between 𝒫_a and 𝒫_r because inherent transmission and processing delays during replay cause the traffic pattern of 𝒫_r to differ from 𝒫_a, shown later in Figure 4.
Definition 2.2 (Feature Extraction). Feature extraction (F : 𝒫 → ℝⁿ) is a function that extracts an n-dimensional feature vector x ∈ ℝⁿ from a network packet p such that F(p) = x. Due to the sheer size of network traffic, NIDS often extract aggregate information from the traffic and use the network features, rather than the network packets, for detection.
Definition 2.3 (NIDS). The NIDS (D : ℝⁿ → ℝ) can be modelled as a function that produces a one-dimensional anomaly score s = D(x) ∈ ℝ based on the input features x. If the anomaly score is above a predefined threshold θ, D classifies the corresponding packet p as malicious; otherwise p is benign.
Definition 2.4 (Mutation Operations). Mutation operations (m ∈ ℳ, m : 𝒫_c → 𝒫_a) take a clean packet p ∈ 𝒫_c and modify it into a set of adversarial packets P′ ⊆ 𝒫_a. Note that the adversarial packets P′ form a set because a mutation operation may inject several redundant packets before p.
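Definitions 2.2 and 2.3 can be sketched in code. The feature extractor, anomaly scorer, benign baseline, and threshold below are illustrative stand-ins, not the paper's actual F, D, or θ:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    timestamp: float  # arrival time in seconds
    size: int         # payload size in bytes

def extract_features(window):
    """Toy stand-in for F: mean inter-arrival time and mean payload size."""
    gaps = [b.timestamp - a.timestamp for a, b in zip(window, window[1:])] or [0.0]
    sizes = [p.size for p in window]
    return [sum(gaps) / len(gaps), sum(sizes) / len(sizes)]

def anomaly_score(x, baseline=(0.1, 100.0)):
    """Toy stand-in for D: Euclidean distance from a benign baseline."""
    return sum((a - b) ** 2 for a, b in zip(x, baseline)) ** 0.5

THETA = 50.0  # illustrative threshold

def classify(window):
    """Score above the threshold -> malicious, otherwise benign (Definition 2.3)."""
    return "malicious" if anomaly_score(extract_features(window)) > THETA else "benign"

benign_window = [Packet(0.0, 100), Packet(0.1, 100), Packet(0.2, 100)]
flood_window = [Packet(0.0, 1500), Packet(0.001, 1500), Packet(0.002, 1500)]
print(classify(benign_window), classify(flood_window))
```

The flood-like window scores far from the baseline and crosses the threshold, while the baseline-like window does not; a real NIDS replaces both toy functions with its own extractor and detector.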
Threat Model
We strictly target outlier detection based NIDS instead of classification based NIDS because they are more practical. The abundance of network traffic generated every day makes labelling time-consuming, and correctly labelling the traffic requires expert knowledge. Furthermore, we only consider packet-level features because flow-level features need to wait for the connection to finish before producing the features, which is not practical for real-time detection.
The attacker's goal, knowledge, and capability are defined as follows and are in line with prior works on adversarial attacks against NIDS [20,23,31]:

Goal. The attacker's goals are two-fold: the attacker wishes to fully or partially maintain the security violations caused by the malicious attack while having the malicious attack classified as benign by the target NIDS.

Knowledge. The attacker operates under a grey-box setting with complete knowledge of the network features extracted (F) but knows nothing about the classifier (D), except that it uses an outlier detection algorithm. Knowing the features extracted is a reasonable assumption, as feature extractors used in NIDS often extract similar features, such as statistics of arrival time and payload size over various time intervals [20]. Moreover, feature extractors for most recent datasets are publicly available online, e.g., CIC-FlowMeter [45] and AfterImage [36].

Capability. We assume the attacker is inside the network and can sniff both benign and malicious traffic. In IoT networks, where devices communicate wirelessly with little to no encryption, packets can be sniffed easily with a wireless sniffer. The attacker can also modify and inject crafted packets in malicious traffic and replay the modified traffic in the network. Under our threat model, the attacker can therefore efficiently train a surrogate NIDS, D′, with threshold θ′ and arbitrary architecture, based on the benign traffic (𝒫_b).

Figure 1: Overview of the Liuer Mihou attack. The surrogate model classifies each packet as either benign (green) or malicious (blue). For each malicious packet, Liuer Mihou finds an optimal mutation operation that minimises the anomaly scores produced by the surrogate model. Next, the mutation operation is applied to the malicious packet to generate a set of adversarial packets, including the original packet (red) and redundant packets (grey). Finally, the benign and adversarial packets are written in place to the output file.
Overview of Liuer Mihou
Liuer Mihou is a practical adversarial generation algorithm tailored specifically for NIDS. It operates iteratively on each packet of the malicious traffic, illustrated in Figure 1. For each packet, the surrogate NIDS first classifies the packet. Suppose the packet is classified as benign by the surrogate. In that case, the packet is likely to be classified as benign by the target NIDS, so no modifications are needed, and Liuer Mihou writes the packet to the output straight away (green). On the other hand, if the surrogate classifies the packet as malicious, it is likely to be classified as malicious by the target model (blue). For each malicious packet identified by the surrogate, Liuer Mihou first searches for an optimal set of mutation operations on the packet that minimises the anomaly score produced by the surrogate NIDS (which also reduces the anomaly scores produced by the target NIDS). Next, the optimal mutation operations are applied to the malicious packet, transforming it into adversarial packets containing the modified malicious packet (red) and some redundant packets (grey) as byproducts. Finally, the adversarial packets are written to the output file in place.
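The per-packet pipeline described above can be sketched as a single loop. All helper names (surrogate, search, apply_mutation) are illustrative placeholders, not the released implementation:

```python
def liuer_mihou(packets, surrogate, threshold, search, apply_mutation):
    """One pass of the per-packet pipeline in Figure 1.

    surrogate(p): anomaly score of packet p on the surrogate NIDS.
    search(p): best mutation found for p by minimising the surrogate score.
    apply_mutation(m, p): list of adversarial packets (redundant packets
    first, the modified original last). All names are illustrative.
    """
    output = []
    for p in packets:
        if surrogate(p) <= threshold:
            output.append(p)                     # surrogate says benign: keep as-is
        else:
            m = search(p)                        # optimise on the surrogate
            output.extend(apply_mutation(m, p))  # write adversarial set in place
    return output

# Toy demo: packets are plain numbers, the score is the value itself, and the
# "mutation" simply scales the packet.
adv = liuer_mihou([1, 10], surrogate=lambda p: p, threshold=5,
                  search=lambda p: 0.5, apply_mutation=lambda m, p: [p * m])
print(adv)
```

In the toy run, the benign-looking packet (score 1) passes through untouched, while the malicious one (score 10) is mutated until its surrogate score drops to 5.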
Mutation Operations
In order to manipulate the packets, we have to define a set of mutation operations ℳ that can be applied to a malicious packet to change its extracted features with minimal change to its content. The choice of ℳ depends mainly on the feature extractor; by inspecting existing open-source traffic feature extractors [36,45], we have found that most feature extractors measure statistics of the inter-arrival time and packet size. Therefore, we propose two simple mutation operations that change these two features:

Packet Delay. Delay the arrival time of a packet, which changes the inter-arrival time distribution.

Packet Injection. Inject redundant packets before a packet in the same connection, which changes the packet-size distribution.

The maximum time delay and the maximum number of injected redundant packets are constrained to ensure the adversarial traffic does not differ too much from the malicious traffic and to reduce the size of the search space. The full set of Liuer Mihou hyperparameters is presented in Appendix A.
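A minimal sketch of the two mutation operations, assuming packets are reduced to (timestamp, size) pairs; the cap values and helper names are illustrative, not Liuer Mihou's actual hyperparameters:

```python
def packet_delay(packet, delay, max_delay=1.0):
    """Delay a (timestamp, size) packet; the delay is capped so the adversarial
    traffic stays close to the original (cap value is illustrative)."""
    t, size = packet
    return (t + min(delay, max_delay), size)

def packet_injection(packet, n, payload_size, prev_time, max_inject=5):
    """Inject up to max_inject redundant packets before `packet`, evenly spaced
    between the previous packet's arrival time and the packet's own."""
    t, _ = packet
    n = min(n, max_inject)
    step = (t - prev_time) / (n + 1)
    redundant = [(prev_time + step * (i + 1), payload_size) for i in range(n)]
    return redundant + [packet]

delayed = packet_delay((1.0, 400), 0.4)
adv_set = packet_injection((1.0, 400), 3, 60, prev_time=0.0)
print(delayed, adv_set)
```

Injecting three 60-byte packets before a packet at t = 1.0 s (previous packet at 0.0 s) places them at 0.25, 0.5, and 0.75 s, shifting the size distribution while leaving the malicious payload untouched.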
Pierazzi et al. [41] proposed four general constraints for problem-space modification, and we show that our mutation operations satisfy these constraints.
(1) Available Transformations. It is trivial to see that an attacker can easily delay and inject redundant packets under our threat model. (2) Preserved Semantics. Our mutation operations do not modify the malicious packets' payload, preserving the original intended malicious activity. For attacks that rely on the packets' inter-arrival time, such as DoS attacks, we place constraints on the mutation operations to adjust the maliciousness of the adversarial attack. (3) Plausibility. We have ensured the plausibility of the mutation operation by manually checking the adversarial packets in Wireshark and replaying the packet in the same network, and there are no packets that seem blatantly abnormal. (4) Robustness to Preprocessing. A common non-ML based preprocessing in NIDS is to block the attacker's IP by the victim. Under such circumstances, the attacker can spoof its IP address to bypass blocking.
Attack Objective Function
The objective function of Liuer Mihou is formulated as the following optimisation problem, solved for each malicious packet p_i:

min_{m ∈ ℳ} max{ D′(F(p′)) : p′ ∈ m(p_i) } (1)

where ℳ is the space of all possible mutation operations, m is a specific mutation operation, p_i is the i-th malicious packet, and m(p_i) is the set of adversarial packets after mutation. For simplicity and brevity, we refer to the value of max{ D′(F(p′)) : p′ ∈ m(p_i) } as the cost value, denoted c_i. Intuitively, Equation (1) aims to directly minimise the maximum anomaly score of the adversarial packets obtainable with mutation operations on the malicious packet. Since Liuer Mihou operates under the grey-box scenario, we calculate the anomaly scores based on a surrogate NIDS (D′) that uses an arbitrary outlier detection algorithm. Although the surrogate NIDS will have a slightly different decision function and anomaly threshold, we still expect adversarial examples that bypass the surrogate NIDS to also bypass the target NIDS due to adversarial transferability [39].
Notice that the adversarial packets contain the modified malicious packet and several redundant packets. We minimise the maximum anomaly score over the adversarial packets so that we do not introduce more malicious packets. Moreover, we only minimise the cost value in our attack formulation, without explicitly imposing the constraint c_i < θ′, to allow a more flexible generation of the adversarial traffic. Consider a scenario where the search algorithm fails to find a mutation operation with a cost value below the surrogate threshold due to tight boundary constraints or limited search time. If we had imposed the constraint c_i < θ′, we would have no feasible solution. Thus, we only search for the mutation operation with the lowest anomaly score, which makes the attack more flexible. In the case where some malicious packets remain above the surrogate threshold, we can recursively run Liuer Mihou on the output adversarial packets until all packets fall below the surrogate threshold.
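The cost value of Equation (1) and the recursive re-run can be sketched as follows, with the surrogate scorer and the per-pass generation step passed in as assumed callables:

```python
def cost(mutation, packet, surrogate_score, apply_mutation):
    """Cost value of Eq. (1): the worst surrogate anomaly score over the
    whole adversarial packet set produced by the mutation."""
    return max(surrogate_score(p) for p in apply_mutation(mutation, packet))

def reduce_below_threshold(packets, step, surrogate_score, theta, max_rounds=3):
    """Recursively re-run a generation pass `step` until no packet scores
    above the surrogate threshold theta (or the round limit is reached)."""
    for _ in range(max_rounds):
        if all(surrogate_score(p) <= theta for p in packets):
            break
        packets = step(packets)
    return packets

# Toy demo: packets are numbers, scores are the values themselves, and one
# generation pass halves every packet.
c = cost(0.5, 8, surrogate_score=lambda p: p,
         apply_mutation=lambda m, p: [p * m, p])
out = reduce_below_threshold([8, 1], step=lambda ps: [p / 2 for p in ps],
                             surrogate_score=lambda p: p, theta=2)
print(c, out)
```

In the toy demo the cost is dominated by the unmodified packet in the adversarial set, and two halving passes suffice to push every packet below the threshold.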
Packet Vectorisation
To efficiently search for the mutation operations, we abstract the representation of mutation operations into low-dimensional vectors in the mutation space (Ψ). Each vector in the mutation space represents a set of mutation operations applied to the malicious packet, and optimisation of Equation (1) is done by moving the vectors in the mutation space.
With the mutation operations defined in Section 2.4, we define Ψ as a two-dimensional space with coordinates (t, n):

• t: the modified arrival time of the packet, which represents packet delay.
• n: the number of redundant packets inserted before the packet, which represents packet injection.

For example, suppose the optimal vector is (0.4, 4) in the mutation space. In that case, Liuer Mihou will delay the malicious packet by 0.4 seconds and place four redundant packets before the malicious packet.
Applying packet delay can be trivially achieved by changing the packet's arrival time. However, packet injection is more complicated because we must define each redundant packet's arrival time and payload size. We have experimented with the following three methods of assigning arrival time and payload size of the redundant packets.
Random Assignment (RA). RA randomly assigns the arrival time and payload size of each redundant packet. We have found this method causes the cost function to be non-deterministic, because the arrival time and payload size of the redundant packets at the same position are different in each iteration, as shown in Figure 2a.

Algorithm 1: PSO-DE search (fragment, lines 4-17)
4:  Calculate the cost of each particle;
5:  if random.uniform() < mutation probability then
6:      foreach particle in population do
7:          Identify the best position;
8:          Find the neighbourhood best position;
9:          Calculate the velocity by adding inertia weight, cognitive term, and social term;
10:         Update position with velocity;
11: else
12:     foreach particle in population do
13:         Calculate the mutant;
14:         Calculate the candidate particle;
15:         Calculate the cost of the candidate;
16:         Replace the particle with the candidate if it has a lower cost;
17: Transform the best particle position back to ℳ and return the mutation operation;
The non-deterministic nature of the cost function makes it difficult for the search algorithm to find the optimal solution. Seeded Assignment (SA). SA seeds the random number generator with the rounded value of n before generating the payload sizes of the redundant packets. The goal of seeding is to make the cost function for any real value of n deterministic and continuous on the interval [n − 0.5, n + 0.5) around each integer, shown in Figure 2b. Intuitively, this allows flexible assignment of packet sizes and payloads while making the cost function deterministic. The arrival times of the redundant packets are assumed to be evenly spaced between the arrival time of the previous packet and t.
Uniform Assignment (UA). A new dimension, s, is introduced in the mutation space, which governs the payload size of all redundant packets. As a result, the cost value is deterministic and piecewise constant around each integer value of s (shown in Figure 2c). As with seeded assignment, the arrival times of the crafted packets are uniformly distributed.
We have also experimented with assigning the redundant packets random arrival times, which resulted in worse performance. We hypothesise that a consistent inter-arrival time is characteristic of benign traffic in our dataset, so redundant packets with consistent inter-arrival times always produce lower costs. Similarly, our auxiliary experiments in Appendix B.2 show that UA is the best method for creating redundant packets, which suggests that uniform payload sizes are also characteristic of benign traffic in our dataset.
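The seeded and uniform assignment strategies can be sketched together; seeding the generator with the rounded particle coordinate is what makes the SA cost deterministic on each interval [n − 0.5, n + 0.5). The parameter names and toy size range are illustrative:

```python
import random

def assign_redundant(prev_time, t, n, method="UA", payload=60, max_size=1500):
    """Build (arrival_time, size) pairs for the redundant packets placed
    before a packet delayed to time t. Arrival times are evenly spaced;
    sizes depend on the assignment method."""
    n = int(round(n))
    times = [prev_time + (t - prev_time) * (i + 1) / (n + 1) for i in range(n)]
    if method == "SA":                 # seeded: deterministic per rounded n
        rng = random.Random(n)
        sizes = [rng.randint(1, max_size) for _ in range(n)]
    else:                              # UA: a single shared payload size
        sizes = [int(round(payload))] * n
    return list(zip(times, sizes))

sa1 = assign_redundant(0.0, 1.0, 3.2, method="SA")
sa2 = assign_redundant(0.0, 1.0, 2.8, method="SA")   # same rounded n -> same seed
ua = assign_redundant(0.0, 1.0, 4, method="UA", payload=100)
print(sa1 == sa2, ua)
```

Because 3.2 and 2.8 both round to n = 3, the two SA calls produce identical packets, so the search algorithm sees the same cost anywhere inside [2.5, 3.5).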
Search Algorithms
The standard methods for optimising feature-level attack formulations are gradient-descent algorithms [32,37,40,48]. However, gradient descent cannot be applied to Equation (1) because F is non-differentiable and non-invertible, commonly known as the inverse feature-mapping problem [41]. In addition, our mutation operations may inject redundant packets, which further complicates gradient descent. The typical approach to solve optimisation problems involving non-differentiable and non-invertible functions is to use meta-heuristic algorithms [9]. Meta-heuristic algorithms often have a master strategy that iteratively generates solutions, and the optimal solution is discovered by continuously evolving the solutions according to the fitness function. All meta-heuristic algorithms have an inevitable trade-off between exploration and exploitation, and balancing this trade-off usually makes global optimality achievable. Well-known meta-heuristic algorithms include Differential Evolution (DE) [47], Ant Colony Optimisation (ACO) [14], Particle Swarm Optimisation (PSO) [26], and the Grey Wolf Optimiser (GWO) [34].
Liuer Mihou utilises a hybrid heuristic search algorithm that combines PSO and DE to search for the optimal mutation, which we refer to as PSO-DE. Our auxiliary experiments in Appendix B.1 show that vanilla PSO is particularly prone to getting stuck in local optima, known as the stagnating-particles problem [15]. Hence, DE is introduced to increase the exploration ability of the search algorithm. The pseudocode for PSO-DE is presented in Algorithm 1. The mutation vectors are called particles, as is conventional in the heuristic-optimisation literature. The initial positions of the particles are generated randomly within the mutation space (lines 1 to 2). In each iteration, each particle's cost is calculated with Equation (1), and the particle positions are stochastically updated using either PSO or DE (lines 3 to 5), governed by the mutation probability.
PSO updates its particles by moving each particle according to its velocity (lines 6 to 10). The velocity is calculated additively from three components:

Inertia weight. The fraction of the original velocity kept unchanged, calculated as w × v.
Cognitive term. The velocity towards the personal best solution, calculated as c₁r₁(p_b − p) (exploitation).
Social term. The velocity towards the neighbourhood best solution, calculated as c₂r₂(p_n − p) (exploration).

Here w, c₁, c₂ are hyperparameters, v is the current velocity, r₁, r₂ are random values between 0 and 1, and p_b, p_n, p are the personal best, neighbourhood best, and current positions, respectively.
DE updates its particles via a sequence of mutation, crossover, and update operations (lines 12 to 16):

Mutation. Choose three other particles a, b, c and calculate the mutant v = a + f(b − c) with mutation factor f.
Crossover. Calculate the candidate particle by combining the mutant with the original particle: each dimension takes the mutant value with probability cr and the original particle's value with probability 1 − cr.
Update. If the original particle has a higher cost, it is replaced by the candidate particle.
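The hybrid update can be illustrated with a compact implementation on a toy two-dimensional cost function. This is not the released code; the hyperparameter values and function names are assumptions:

```python
import random

def pso_de(cost, bounds, n_particles=20, iters=60, p_mut=0.5,
           w=0.7, c1=1.5, c2=1.5, f=0.8, cr=0.9, seed=0):
    """Toy PSO-DE hybrid: each iteration runs either a PSO velocity update or
    a DE mutation/crossover/selection step, chosen with probability p_mut."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x):  # keep particles inside the (mutation) space
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    best = [p[:] for p in pos]                      # personal bests
    best_cost = [cost(p) for p in pos]

    for _ in range(iters):
        g = best[min(range(n_particles), key=lambda i: best_cost[i])]
        if rng.random() < p_mut:                    # PSO step
            for i in range(n_particles):
                r1, r2 = rng.random(), rng.random()
                vel[i] = [w * v + c1 * r1 * (b - x) + c2 * r2 * (gb - x)
                          for v, x, b, gb in zip(vel[i], pos[i], best[i], g)]
                pos[i] = clip([x + v for x, v in zip(pos[i], vel[i])])
        else:                                       # DE step
            for i in range(n_particles):
                a, b, c = rng.sample([j for j in range(n_particles) if j != i], 3)
                mutant = [pos[a][d] + f * (pos[b][d] - pos[c][d]) for d in range(dim)]
                trial = clip([m if rng.random() < cr else x
                              for m, x in zip(mutant, pos[i])])
                if cost(trial) < cost(pos[i]):      # greedy selection
                    pos[i] = trial
        for i in range(n_particles):                # refresh personal bests
            ci = cost(pos[i])
            if ci < best_cost[i]:
                best[i], best_cost[i] = pos[i][:], ci
    i_best = min(range(n_particles), key=lambda i: best_cost[i])
    return best[i_best], best_cost[i_best]

# Toy cost with a single minimum at (0.3, -1.0).
sol, val = pso_de(lambda p: (p[0] - 0.3) ** 2 + (p[1] + 1.0) ** 2,
                  [(-2.0, 2.0), (-2.0, 2.0)])
print(sol, val)
```

The DE branch only ever accepts improvements, which is what restores exploration when PSO particles stagnate; in Liuer Mihou the toy cost would be replaced by the surrogate cost of Equation (1) and the bounds by the mutation-space constraints.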
Figure 3: Topology of the IoT testbed (Router, Internet, Switch; wireless and wired connections; TP-Link Tapo C200, EZVIZ Mini O Plus, SKT NU100, Google Home Mini, Phillips Hue Bridge; monitor laptop and attack laptop).
EXPERIMENTAL SETUP
This section describes the experimental setup we used to evaluate Liuer Mihou. We begin with a description of the dataset, followed by the details of the target ML/DL algorithms and metrics used.
Dataset
We evaluate Liuer Mihou against IoT networks. IoT is one of the fastest-growing technologies in the history of computing and is becoming prevalent in smart cities and smart homes [2]. In addition, IoT devices have low computation power and transmit unencrypted traffic, making them highly vulnerable to attacks. Therefore, it is of great interest to both adversarially attack and defend IoT networks.
We use two IoT networks in our experiments, one being the benchmark Kitsune dataset [36] and the other a dataset captured from our own IoT testbed [3]. Measuring the maliciousness of the adversarial traffic is a crucial part of our evaluation. However, replaying the adversarial attack generated with any publicly available network dataset is impossible because we cannot fully replicate the testbed used to capture the data. Hence, we have gathered our own dataset to evaluate the maliciousness of the adversarial attacks. The testbed consists of various IoT devices connected wirelessly, including AI Speakers (Google Home, NUGU), Security Cameras (EZVIZ, Tapo), and a Smart Hub (Hue Bridge). The testbed also includes an attacker laptop used to conduct attacks and a monitor laptop used to capture network packets. Each device can be controlled or monitored by a cell phone, PC, or other connected IoT devices. Figure 3 illustrates the setup of the IoT testbed.
The benign data consists of packets generated via commands such as turning off light bulbs and playing music on Google Home, collected over 30 minutes. The malicious data consists of two main types of attacks, probing and Denial of Service (DoS), all targeted at the Google Home Mini. Distributed attacks such as the Mirai botnet were excluded since they require compromising other IoT devices, which is outside the scope of our threat model. Probing attacks include Port Scan (PS) and OS Detection (OD); we repeat PS twice and OD four times. DoS attacks include HTTP Flooding (HF) with LOIC at the highest intensity for 6 minutes. These attacks were chosen because they are frequently conducted and can be successfully detected by anomaly-based NIDS². To reduce the training time, we filtered the packets so that Liuer Mihou only processes packets with Google Home as sender or receiver, which results in 14,400 benign packets.
The adversarial traffic is generated with Liuer Mihou with the default, rule-of-thumb parameters that show good performance in general, shown in Appendix A. We intentionally chose not to fine-tune any of the hyperparameters to suit our dataset in order to remove any selective data snooping [6]. To reduce the computational time of generating adversarial HTTP Flooding traffic, we only process approximately the first 6% of the original HTTP Flooding traffic, which generates around 36,000 packets. To evaluate the maliciousness of the adversarial traffic, we have replayed the adversarial packets in the same IoT testbed with tcpreplay [5] in real-time and captured the replayed packets. The number of malicious, adversarial, and replayed packets gathered for each attack can be found in Table A4.
Target DL/ML Algorithms
We evaluate our attack on a wide range of DL/ML based anomaly detectors. In particular, the following anomaly detection algorithms are evaluated.
• Kitsune [36], a state-of-the-art NIDS for IoT networks, implemented with the GitHub implementation [35].
• Self-Organising Maps (SOM) [27], implemented with the python package MiniSOM [50].
• Robust Random Cut Forest (RRCF) [18], implemented with the python package rrcf [8].
• Local Outlier Factor (LOF), implemented with sklearn.
• One-Class SVM (OCSVM), implemented with sklearn.
• Isolation Forest and Elliptical Envelope, implemented with sklearn. However, these two algorithms did not perform well on our dataset; see details in Appendix B.4.
All algorithms were trained on the entire set of benign samples with their default parameters. Where applicable, the upper bound on the fraction of training errors is set to 0.001, i.e., the operating point of the FPR is 0.001 or less.
The surrogate NIDS is a vanilla autoencoder written in TensorFlow 2 [17]. The encoder consists of three dense layers with 32, 8, and 2 neurons, respectively, and the decoder consists of three dense layers with 8, 32, and 100 neurons. All layers use the ReLU activation function except for the last layer of the decoder, which uses Sigmoid. The architecture of the surrogate model is arbitrarily chosen, and we intentionally did not conduct any hyperparameter search to find the optimal structure. The surrogate model is trained on all benign packets for one epoch to mimic online detection. The threshold of the surrogate model is set three standard deviations above the mean of the anomaly scores on benign data.
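The surrogate's layer widths and threshold rule can be sketched as follows (a minimal numpy sketch with untrained random weights and no bias terms, standing in for the trained TensorFlow 2 model; the 100-dimensional input and the uniform stand-in features are assumptions for illustration only):

```python
import numpy as np

def relu(x): return np.maximum(x, 0.0)
def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(42)

# layer widths from the paper: 100-d features -> 32 -> 8 -> 2 -> 8 -> 32 -> 100
sizes = [100, 32, 8, 2, 8, 32, 100]
weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes, sizes[1:])]

def autoencoder(x):
    """Forward pass: ReLU on every layer except the sigmoid output layer."""
    for w in weights[:-1]:
        x = relu(x @ w)
    return sigmoid(x @ weights[-1])

def anomaly_score(x):
    """RMSE between the input feature vector and its reconstruction."""
    return float(np.sqrt(np.mean((autoencoder(x) - x) ** 2)))

benign = rng.random((500, 100))                 # stand-in for benign features
scores = np.array([anomaly_score(f) for f in benign])
threshold = scores.mean() + 3 * scores.std()    # three-sigma rule from the paper
```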
The detection results for all traffic are generated offline. This is because the open-source implementation of the feature extractor, AfterImage [35], is not optimised and is too slow for real-time feature extraction. Nonetheless, the detection speed of the NIDS is outside the scope of our work, and the accuracy is the same regardless of online or offline detection.
Metrics
Prior studies in adversarial attacks mainly target classifiers and measure the precision and recall of the model with ground truth labels. However, since our attack targets anomaly detectors with no ground truth labels, we cannot use precision and recall as metrics. Instead, we propose three categories of metrics to evaluate various aspects of the NIDS under the influence of Liuer Mihou: performance, evasion, and semantic. A summary of metrics and symbols is provided in Appendix C.2.
Performance Metrics.
Evasion Metrics. Evasion metrics measure the accuracy of the NIDS under adversarial and replayed traffic and consist of two metrics, Detection Rate (DR) and Evasion Rate (ER), each measured for adversarial traffic (ADR and AER) and replayed traffic (RDR and RER). DR measures the ratio of adversarial/replayed packets classified as malicious and indicates the robustness of the NIDS under adversarial/replayed traffic. However, using DR alone can sometimes be misleading. Consider an attack with an MDR (the detection rate on the unmodified malicious traffic) of 0.1 whose ADR is reduced to 0.09 after adversarial modification. Judging by ADR alone, we might conclude that the perturbation is working very well, but since the MDR is 0.1, our attack has only evaded 10% of the initially detected packets. Hence, we also require ER, which measures the percentage of adversarial/replayed packets that evade detection relative to the original attack and indicates the effectiveness of the adversarial attack.
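The relationship between DR and ER in the example above can be made concrete (a short Python sketch; the closed form ER = (MDR - DR) / MDR is an assumption inferred from the worked example, since the text does not state the formula explicitly):

```python
def detection_rate(alerts, total):
    """DR: fraction of packets the NIDS flags as malicious."""
    return alerts / total

def evasion_rate(mdr, dr):
    """ER: fraction of the originally detected packets that now evade.
    ER = (MDR - DR) / MDR, so ER = 1 means full evasion."""
    return (mdr - dr) / mdr

# the example from the text: MDR = 0.1, ADR = 0.09
mdr, adr = 0.1, 0.09
aer = evasion_rate(mdr, adr)
print(round(aer, 2))  # 0.1 -> only 10% of detected packets evade
```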
EXPERIMENT RESULTS
This section presents the results and findings of the following three main aspects of Liuer Mihou: evasiveness, maliciousness, and evasiveness against adversarial defences.
Evasiveness of Adversarial Traffic
4.1.1 Our Dataset. We first measure the ability of adversarial traffic to bypass NIDS detection on our dataset by comparing the performance of various ML/DL based NIDS under the adversarial traffic generated by Liuer Mihou against their performance under the original attack³. Table 1 shows the performance and evasion metrics of adversarial and replayed traffic on the various NIDS.
We have found several interesting aspects of the results from our evasiveness analysis. First, replayed traffic is less evasive than adversarial traffic, indicated by RDR being higher than ADR for all attacks. By inspecting the adversarial and replayed traffic, we found that the inherent processing delays in the attacker's machine and propagation delays in the transmission medium cause the packets' arrival times at the victim to differ slightly from the arrival times specified in the adversarial traffic. Such delays raise the anomaly scores of the replayed packets, as illustrated in Figure 4. Second, Kitsune is highly vulnerable to Liuer Mihou, but the other anomaly detectors are not. The ADR of Liuer Mihou against Kitsune is 0 for all three attacks, while the other detection algorithms have ADR higher than 0. We have two hypotheses to explain this phenomenon. First, Kitsune is designed with a relatively high threshold value compared to other domains in order to reduce false alarms: even a seemingly low false positive rate of 0.01 can result in thousands of false alarms when millions of packets are processed in the NIDS domain. This high threshold value makes Kitsune more vulnerable to adversarial attacks. Second, the internal feature representation of the surrogate model is not similar to that of the other anomaly detectors. Different anomaly detectors have different design structures that favour different internal representations learned from benign traffic. The surrogate model used in our experiments is a vanilla autoencoder with the same underlying detection algorithm as Kitsune, so the two learn similar feature representations. The other ML algorithms are not based on autoencoders, which means they learn different features than the surrogate model, reducing the transferability of the adversarial examples.
³ We have tried to compare our work with that of Han et al. [20]; however, their open-source implementation did not show good results on our dataset (see Appendix B.3 for more details).
In order to gain more insight into the transferability of adversarial traffic, we conduct further experiments to measure the similarity of anomaly scores between the surrogate and target NIDS and the relative threshold values.
For a fair comparison between the models, we normalise the anomaly scores of the malicious attack and the threshold values of each model to be between 0 and 1. The similarity between the target and surrogate NIDSes is measured by the average Euclidean Distance (ED) between each pair of anomaly scores. Table 2 shows the ED and relative threshold difference between the autoencoder surrogate and the various target NIDSes, and Figure 5 illustrates the difference in anomaly scores. Table 2 reveals that Kitsune and LOF have a low ED to the surrogate model, indicating that they have learned similar decision functions. However, the relative threshold value of LOF is lower than the surrogate model's, so transferability is limited. On the other hand, Kitsune has a higher relative threshold value, which, together with its high similarity, makes it highly vulnerable to an autoencoder surrogate. RRCF, SOM and OCSVM have a larger ED from the surrogate model, mainly because their designs tend to learn feature representations that differ from autoencoders. For example, the tree structure of RRCF causes the output to be discrete, and SOM and OCSVM use a Gaussian topology and an RBF kernel, respectively, which make the anomaly scores conform to a Gaussian distribution. Therefore, the AER of adversarial traffic is low regardless of the threshold difference, and adversarial traffic targeted at the surrogate model does not transfer to the target model.
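The similarity measurement can be sketched as follows (a minimal Python sketch; since each packet's anomaly score is a scalar, the Euclidean distance between a pair of scores reduces to an absolute difference, and the min-max normalisation is our reading of "normalise ... to be between 0 and 1"):

```python
import numpy as np

def minmax(scores):
    """Normalise anomaly scores to [0, 1] so models are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo)

def avg_euclidean_distance(surrogate_scores, target_scores):
    """Average per-packet distance between two models' normalised scores."""
    a, b = minmax(surrogate_scores), minmax(target_scores)
    return float(np.mean(np.abs(a - b)))

rng = np.random.default_rng(1)
s = rng.random(1000)                           # surrogate anomaly scores
t = s + rng.normal(scale=0.01, size=1000)      # a very similar target model
print(avg_euclidean_distance(s, s) == 0.0)     # True: identical models -> ED 0
```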
4.1.2 Kitsune Dataset. In addition to our dataset, we measure the evasiveness of Liuer Mihou on the Kitsune dataset [36]. We ran the attack with the same configuration described in Section 3.2 and compare the reported AER of Liuer Mihou to that of adversarial traffic generated with Traffic Manipulator under similar constraints [20] (Traffic Manipulator has only reported AER for three attacks). Table 3 shows the ADR and AER of the adversarial traffic generated by Liuer Mihou as well as the AER of Traffic Manipulator (TM). The RDR and RER were not measured since we cannot replicate the same IoT testbed as Kitsune. Results show that most attacks have a high AER, indicating that Liuer Mihou can successfully evade Kitsune in general. In addition, our attack outperforms TM on Mirai and SSDP Flooding and is slightly less evasive on Fuzzing attacks. However, none of the adversarial traffic can entirely bypass Kitsune, similar to what we observed on our dataset.
Upon inspecting the Kitsune dataset in detail, we found that every traffic file contains packets with abnormally high anomaly scores compared to the preceding packet, creating sudden spikes in the traffic patterns. When there is a large spike in the anomaly score, Liuer Mihou cannot reduce the anomaly score below the threshold within the predefined boundaries. However, it is still able to lower the anomaly scores of the malicious packets, as illustrated in Figure 6. We suspect the large spikes in anomaly scores are caused by anonymisation: the authors of Kitsune [36] truncated payload sizes to 200 bytes for privacy reasons, which results in a large number of malformed packets that are not representative of real IoT traffic. Nevertheless, Liuer Mihou can still increase the evasiveness of the attack, and the adversarial traffic can be made more evasive by running Liuer Mihou recursively on the adversarial traffic.
Summary. The evasiveness of Liuer Mihou largely depends on the similarity of learned representations and relative threshold values between the surrogate and target model. The evasiveness of the replayed attack will be lower due to inherent transmission and processing delays. Liuer Mihou is less effective when the attack contains large spikes in anomaly scores, but it can be mitigated by recursively running Liuer Mihou on the adversarial attack.
Maliciousness of Adversary
Under our threat model, the adversarial attack must also fully or partially maintain its original malicious functionality. We objectively compare the maliciousness of the replayed packets for each attack to that of the original attack. The goal of HTTP Flooding is to make the Google Home device unresponsive, so we measure the average round trip time using ping, with results shown in Table 4. As expected, the delay in RTT caused by the replayed HTTP Flooding attack is only a small proportion of that caused by the unmodified HTTP Flooding, but it is still larger than normal. We tried using Google Home during the adversarial flooding attacks, and in our experience the increase in RTT under the adversarial traffic generated with Liuer Mihou was barely noticeable.
Probing attacks, such as Port Scan and OS Detection, aim to detect open ports of the device. We compare the number of well-known ports (port numbers up to 1024) detected by the original attack to those detected by the adversarial attack, and compare the relative time needed to get the results. Table 5 shows the attack metrics of OS Detection and Port Scan, respectively. Results show that adversarial OS Detection attacks can fully scan all the ports scanned by the unmodified attack with less than a ten percent increase in RTD for Liuer Mihou. Adversarial Port Scan attacks can scan over 90% of the original ports but take ten times longer than the original Port Scan attack.
We hypothesise that the failure of adversarial HTTP Flooding is mainly due to the maximum time delay being set very high, which causes the intensity to be overly reduced so that the adversarial attack is no longer malicious. Therefore, when generating adversarial traffic for DoS attacks, the maximum time delay of a packet should be set carefully to preserve maliciousness. On the other hand, probing attacks rely on the payload content to conduct malicious activities, so the maximum time delay of a packet will not significantly impact their malicious functionality. Another contributing factor to maliciousness is the surrogate threshold. Intuitively, the surrogate threshold measures how strictly the traffic has to conform to the learned representation of normality. Since the surrogate model has a relatively low threshold, it forces the adversarial traffic to conform to a stricter notion of normality, potentially removing the maliciousness of DoS attacks. Interestingly, probing attacks did not see much reduction in maliciousness with the same threshold. This is because the features extracted by Kitsune focus heavily on, and are most sensitive to, packet inter-arrival times. Hence, probing attacks are detected not because of their payload contents but because their inter-arrival times are shorter than usual (this also explains why Kitsune cannot detect ARP spoofing and poisoning, since those attacks transmit packets at a low frequency). Since the probing attacks' inter-arrival times have no significant effect on their maliciousness, the adversarial probing attacks remain malicious.
Summary. The maximum time delay can significantly affect the maliciousness of DoS attacks that rely on packet arrival times. Therefore, the maximum time delay should be set carefully to retain the malicious functionality. The maliciousness of probing attacks, which mainly rely on packet payloads, is less affected by the mutation operations.
Evasiveness against Defences
We evaluate the strength of Liuer Mihou against Kitsune with adversarial defence mechanisms deployed, because Kitsune is the most vulnerable. Common adversarial defences include adversarial training [32], where the model is trained on adversarial examples with correct labels to increase its robustness, and adversarial detection [33,51], where a secondary classifier is trained to detect adversarial examples. Although adversarial training has shown great potential in defending against adversarial attacks, it is not applicable under our threat model because our NIDS is trained in an unsupervised manner without any label information. Thus, we choose two adversarial detection defences that are model and domain agnostic: Feature Squeezing [51] and Mag-Net [33]. With these defences deployed, the adversarial defence first determines whether the input feature is adversarial or clean; Kitsune then detects whether the input example is benign or malicious.
Feature Squeezing.
Feature Squeezing (FS) [51] aims to reduce the precision of the input feature space and limit the degree of perturbation available to the attacker. The general strategy is to compare the classifier's predictions on the original and squeezed inputs, and if there is a significant difference, the sample is flagged as adversarial. We modify Kitsune to squeeze the features at four precision levels (1, 2, 3, and 4 decimal places), with a threshold value based on the absolute difference between the unsqueezed and squeezed data on benign traffic for each precision level. Table 6 shows the detection results of FS and Kitsune on benign, malicious, adversarial and replayed traffic. The results show that FS fails to classify any adversarial or replayed traffic as adversarial. Instead, it classifies malicious packets as adversarial, which suggests FS is not a valid defence against Liuer Mihou.
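The squeeze-and-compare logic can be sketched as follows (a minimal Python sketch; the toy `score_fn` merely stands in for Kitsune's anomaly score, and only the calibration of the threshold on benign traffic follows the procedure described above):

```python
import numpy as np

def squeeze(features, decimals):
    """Reduce feature precision by rounding to a fixed number of decimals."""
    return np.round(features, decimals)

def is_adversarial(score_fn, x, decimals, threshold):
    """Flag the input if squeezing changes the anomaly score too much."""
    return abs(score_fn(x) - score_fn(squeeze(x, decimals))) > threshold

# toy score function standing in for Kitsune's anomaly score
score_fn = lambda x: float(np.linalg.norm(x))

rng = np.random.default_rng(7)
benign = rng.random((200, 100))
# calibrate the threshold on benign traffic, as described above
diffs = [abs(score_fn(x) - score_fn(squeeze(x, 2))) for x in benign]
threshold = max(diffs)
print(is_adversarial(score_fn, benign[0], 2, threshold))  # False
```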
Mag-Net.
Mag-Net [33] utilises a group of detectors and a reformer to remove any adversarial perturbation. The detectors perform a preliminary check that determines whether the input sample looks clean and raise an alert if it does not. Next, all inputs, regardless of the detector's classification, are passed to the reformer that reconstructs the input to remove any adversarial perturbation. The final detection is performed on the reconstructed input.
We have used the two default detectors provided by the GitHub implementation of Mag-Net [49]. We first train the two detectors and the reformer on benign traffic and set the detector's threshold as the maximum RMSE value on benign traffic. Next, we record the output by Mag-Net's detector and the output of Kitsune on reformed input for benign, malicious, adversarial and replayed traffic. Table 7 presents the experiment result. From the table, we notice that detectors of Mag-Net have detected both malicious and adversarial traffic as adversarial. However, the reconstruction by the reformer removes the malicious and adversarial characteristics of the feature, which causes Kitsune to classify all traffic as benign. Therefore, Mag-Net is also not a suitable adversarial defence for NIDS.
Summary. Feature-level adversarial detectors are not applicable to NIDS because adversarial attacks in NIDS make packet-level modifications that can produce large differences in feature space compared to the original attack. Moreover, the detectors are trained on purely benign data, which does not allow them to distinguish between adversarial and malicious traffic.
RELATED WORK
5.1 Anomaly-based NIDS
The target NIDS of Liuer Mihou is anomaly-based NIDS, where an anomaly detection algorithm is deployed as its detection engine as opposed to classification. The advantage of using anomaly detection instead of classification is that they are unsupervised and do not require any labelled data, which is more suitable for NIDS since labelling millions of packets each day is not feasible.
The general process of anomaly-based NIDS comprises three main stages: packet capturing, feature extraction and anomaly detection. When a packet arrives, the NIDS uses packet capturing libraries (e.g., Scapy and tshark) to capture it. The packets are passed on to the feature extractor, which extracts a wide range of aggregate features about the packets. To allow fast extraction, common feature extractors [36,45] compute statistics available from the packet headers, such as payload size and arrival time, without inspecting the payload.
The extracted features form the input to the anomaly detector. The output of the anomaly detector is often a number indicating its normality, which we will refer to as the anomaly score of the packet. During training, the NIDS calculates a suitable threshold based on anomaly scores of benign data with a predefined heuristic. For example, using the maximum anomaly score or three standard deviations away from the mean. During execution, the NIDS classifies any packets with an anomaly score above the threshold as malicious.
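The training-time threshold heuristics and the execution-time decision rule described above can be sketched as follows (a minimal Python sketch; the synthetic benign scores are an assumption for illustration):

```python
import numpy as np

def fit_threshold(benign_scores, heuristic="three_sigma"):
    """Pick a detection threshold from benign anomaly scores using one of
    the predefined heuristics described above."""
    if heuristic == "max":
        return float(np.max(benign_scores))
    # default: mean plus three standard deviations
    return float(np.mean(benign_scores) + 3 * np.std(benign_scores))

def classify(score, threshold):
    """During execution: anything above the threshold is flagged malicious."""
    return "malicious" if score > threshold else "benign"

rng = np.random.default_rng(3)
benign_scores = rng.normal(loc=1.0, scale=0.1, size=10_000)
t = fit_threshold(benign_scores)          # roughly 1.3 for these scores
print(classify(0.95, t), classify(2.5, t))  # benign malicious
```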
Adversarial Attacks bypassing NIDS
Existing adversarial attacks in the NIDS domain typically alter the input in the feature space, i.e., they assume a white-box scenario and apply adversarial attacks designed for images (e.g., FGSM [16], JSMA [40], the C&W attack [12], and DeepFool [37]) to generate adversarial network features. However, adversarial network features alone have no practical significance, as the features cannot be used to conduct any malicious network attack. Hence, problem-space attacks [41] are needed to make adversarial attacks practical.
A handful of studies have considered adversarial attacks that modify the raw packets instead of the network features in the NIDS domain. Overall, there are two main approaches to packet-level modification. One approach is to predefine a set of packet-level mutations at various levels, for example fragmenting or delaying packets, then randomly apply the mutations and test whether they bypass detection [21,23]. These methods require extensive trial and error and provide little to no theoretical guidance or insight. A more formal approach formulates the adversarial attack as a bi-level optimisation problem. These attacks first generate realistic adversarial features with Generative Adversarial Networks [20] or Manifold Approximation [28]. Next, the packets are modified to minimise the distance between the adversarial and malicious features. However, the bi-level formulation complicates the search space and makes it difficult for the search algorithm to find a solution.
Adversarial Defences
Adversarial attacks have urged researchers to develop countermeasures to detect or mitigate the effect of adversarial examples. Multiple defence mechanisms and paradigms have been proposed targeting classifiers in CV. However, most of them can be broken by adaptive adversaries that have knowledge of the defence mechanisms in place [7,11]. The most effective and promising defence mechanisms are adversarial training [32] and adversarial detection [33,51]. Adversarial training aims to smooth out discontinuities in the feature space to correctly classify the adversarial examples, which requires retraining the model. In contrast, adversarial detection recognises that adversarial examples are synthetically created and contain different characteristics than clean examples. Hence, adversarially perturbed examples can be effectively detected.
Most adversarial defences are evaluated in the CV domain and on classification algorithms. To the best of our knowledge, we could not find any adversarial defence explicitly designed for NIDS, possibly due to the lack of practical adversarial attacks that are available in the first place. Furthermore, CV and NIDS have drastically different feature space structures, and CV often uses supervised learning algorithms, whereas NIDS uses unsupervised algorithms. Hence, it is an open question whether existing adversarial defences designed for CV can be directly deployed in the NIDS domain.
DISCUSSION
6.1 Evasiveness and Maliciousness Tradeoff
Liuer Mihou is a transfer-based attack that leverages adversarial transferability across ML/DL algorithms. Adversarial transferability has been studied mainly with regards to supervised, multi-class classification [1,38,39], but for unsupervised binary classification problems such as NIDS, it is a mostly untouched topic.
From our experiment results in Section 4.1, we have empirically shown that adversarial transferability across anomaly detection based NIDS largely depends on the similarity of the decision boundaries and the relative threshold values of the target and surrogate NIDSes. A similar decision boundary ensures the packet-level modifications on the surrogate have the same effect on the target, and a lower threshold value ensures the surrogate forms a stricter benign profile. Since knowing the actual detection model is infeasible in practice, the similarity of the decision boundaries cannot be increased. Therefore, to increase evasiveness, the attacker has to rely on lowering the threshold value of the surrogate model.
Analysis of the replayed adversarial traffic shows that the replayed packets' anomaly scores will be slightly higher than those of the theoretically generated adversarial packets, due to inherent propagation and processing delays that cause the packets to arrive at slightly different times. Therefore, the surrogate NIDS's threshold should be set low to create a buffer zone for these inherent delays.
For the adversarial and replayed attack to be evasive, the surrogate threshold is encouraged to be low. However, the surrogate threshold represents how similar the packet is to the normal traffic, and lowering the surrogate threshold will inevitably lower the maliciousness of the adversarial traffic. For example, our adversarial HTTP Flooding did not significantly increase the target device's response time, indicating that the threshold value of the surrogate model is too low to preserve the maliciousness of HTTP Flooding attacks. Hence, adversarial attacks on NIDS face a fundamental trade-off between evasiveness and maliciousness that can be adjusted via the surrogate threshold. It is crucial to find a balance between them to conduct successful adversarial attacks.
Weakness of Adversarial Defence
We have evaluated two plug-and-play adversarial defence methods: Feature Squeezing [51] and Mag-Net [33]. These defences were designed initially for classification algorithms, and the results from our experiment in Section 4.3 have shown that both methods are unsuitable in the NIDS domain.
There are two main research challenges for applying adversarial detectors from CV to NIDS. First, the design principle of adversarial detectors relies on the fact that adversarial features are synthetically created and have small distributional differences from clean, natural input. However, packet-level attacks such as Liuer Mihou modify the packets directly rather than the features, and feature-level changes do not have to be minimal as long as the maliciousness of the original attack is preserved. Hence, packet-level attacks make large changes to the input features while keeping a distribution as realistic as benign traffic, and adversarial detectors that intentionally ignore small perturbations, such as Feature Squeezing, fail to detect the adversarial examples. Second, only benign traffic is available to train the adversarial detectors under a realistic NIDS threat model. Therefore, the adversarial detector cannot distinguish between adversarial and malicious traffic and classifies all attacks as adversarial. In essence, the adversarial detector becomes another NIDS, which is redundant. Some adversarial detectors, such as Mag-Net, attempt to remove adversarial perturbations. Since the detector cannot distinguish whether a perturbation comes from malicious or adversarial traffic, it removes both types of perturbations and makes all inputs look benign, which helps the attack evade detection.
Due to the limitations of adversarial detectors in the NIDS setting, new forms of adversarial defence have to be developed. Our experiment results in Section 4.1 have shown that Liuer Mihou relies on adversarial transferability to be successful, which requires training a suitable surrogate NIDS to mimic the target NIDS. Therefore, to block adversarial transferability, the defender can introduce randomness into the network by stochastically changing the NIDS structure [44], forcing the attacker to generate universal adversarial examples against a wide range of detection algorithms.
Limitations and Future Work
We evaluate Liuer Mihou under a grey-box scenario where the attacker is assumed to have complete knowledge of the feature extractor. Although these assumptions are justified and aligned with previous works, future work could investigate the performance of Liuer Mihou under a black-box threat model with a surrogate feature extractor.
The number of mutation operations we have defined is limited and rather simple. Future studies could expand the mutation operations, such as fragmentation and reordering, to generate more evasive adversarial examples. Moreover, the redundant packets were set to have the same type as malicious packets with randomly generated payloads. For attacks that depend heavily on the payload content, poorly crafted packets may result in unexpected behaviour of the adversarial traffic. Future studies can investigate ways to design redundant packets that are more evasive and investigate the effect of mutation operations on the extracted features.
We have only evaluated the maliciousness of Liuer Mihou on a limited number of network attacks and IoT devices. Future work can extend the set of network attacks to include Man-in-the-Middle, brute force, and fuzzing. The targets can also include a broader range of IoT devices, such as security cameras, smart lightbulbs, and smart TVs.
Another limitation of our work is that we have evaluated Liuer Mihou in a rather simple IoT testbed that assumes no packets are dropped and devices are not congested. Future work could consider evaluating our attack under complicated network environments where high packet loss rate and processing delay are observed.
CONCLUSION
We have proposed a novel, practical adversarial attack targeted at NIDS called Liuer Mihou. The attack leverages adversarial transferability to train a surrogate NIDS to mimic the decision boundary of the target NIDS. The adversarial packets are generated by finding a set of mutation operations that minimises the anomaly score produced by the surrogate NIDS, which is transferable to the target NIDS.
We have evaluated the evasiveness of the adversarial traffic against four ML-based algorithms (SOM, RRCF, LOF, and OCSVM) and the state-of-the-art IoT NIDS, Kitsune. The results of our experiments show that Liuer Mihou is highly evasive if the surrogate NIDS has a similar decision boundary and a relatively low threshold compared to the target NIDS. The maliciousness of our attack is largely affected by the boundaries of the mutation operations and the surrogate threshold, which leads to the trade-off between evasiveness and maliciousness; hence, both should be set carefully to ensure the original maliciousness is retained. Finally, we demonstrated that adversarial detection defences such as Feature Squeezing and Mag-Net cannot defend against Liuer Mihou: they are unsuitable for NIDS because no malicious or adversarial data is available during training, and because Liuer Mihou can create a significant difference in feature space compared to the original attack.
Our work provides a solid theoretical foundation for generating transfer-based practical adversarial traffic against NIDS and provides insightful discussion on adversarial transferability and defences in the NIDS domain.
A EXPERIMENT HYPERPARAMETERS
We have used parameters shown in Table A1 in our experiments.
B AUXILIARY EXPERIMENTS
In this section, we provide the results of our auxiliary experiments.
Our auxiliary experiments include optimisation with PSO, finding optimal combinations of search algorithm and payload assignment, experiments with Traffic Manipulator [20], and performance of common outlier detection algorithms in NIDS.
B.1 PSO Optimisation
We first conducted experiments to measure the performance of Liuer Mihou using vanilla PSO. We found that PSO causes the particles' positions to stagnate and move only around their neighbourhoods, reducing the algorithm's exploration capability so that it finds only locally optimal solutions. The stagnating particle problem is well known and is caused by a particle's current position being identical to its personal best and neighbourhood best positions. Under such circumstances, the cognitive and social terms are close to zero, and after a few iterations the inertia weight tends to 0, resulting in a stagnating particle [15].
To overcome this effect, we combine DE with PSO. In each iteration, each particle has a 50% chance of moving according to PSO and a 50% chance of moving according to DE. As a result, the social term of the velocity changes stochastically in each iteration, which reduces the probability of stagnating particles. Our experiments in the next section objectively compare the performance of pure PSO, pure DE and PSO-DE.
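As a concrete illustration, one iteration of the hybrid update could be sketched as below. This is our own minimal reconstruction, not the paper's implementation: the hyperparameter names `mutation_factor`, `cross_p` and `mutate_prob` follow Table A1, while the PSO coefficients `w`, `c1`, `c2` are conventional defaults we assume.

```python
import random

def pso_de_step(positions, velocities, pbest, gbest, cost,
                w=0.7, c1=1.5, c2=1.5,
                mutation_factor=0.8, cross_p=0.7, mutate_prob=0.5):
    """One iteration of a hybrid PSO-DE update (illustrative sketch)."""
    dim = len(positions[0])
    proposals = []
    for i, x in enumerate(positions):
        if random.random() < mutate_prob:
            # DE move (rand/1/bin): perturb using three other random particles.
            a, b, c = random.sample(
                [p for j, p in enumerate(positions) if j != i], 3)
            trial = [a[d] + mutation_factor * (b[d] - c[d])
                     if random.random() < cross_p else x[d]
                     for d in range(dim)]
        else:
            # PSO move: inertia + cognitive (pbest) + social (gbest) terms.
            for d in range(dim):
                velocities[i][d] = (w * velocities[i][d]
                                    + c1 * random.random() * (pbest[i][d] - x[d])
                                    + c2 * random.random() * (gbest[d] - x[d]))
            trial = [x[d] + velocities[i][d] for d in range(dim)]
        proposals.append(trial)
    # Greedy selection: a particle only moves if the proposal is no worse.
    return [new if cost(new) <= cost(old) else old
            for old, new in zip(positions, proposals)]
```

Because the leader used by the social term can come from either update rule, the swarm is less likely to settle into the stagnating configuration described above.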
B.2 Optimal Combination Search
We wish to know which combination of payload assignment (Random Assignment (RA), Seeded Assignment (SA), and Uniform Assignment (UA)) and search algorithm (DE, PSO, PSO-DE) yields the best solution to our optimisation problem (Equation 1).
We have used three metrics to objectively measure the performance of each combination of search algorithm and payload/arrival assignment:

Reduction Percentage (RP). RP measures the average reduction in anomaly score for each malicious packet. A higher reduction percentage means the search algorithm can find adversarial packets with a lower cost value.

Packet Impact (PI). PI measures the effect of adversarial mutation on the malicious packet as well as on subsequent attack packets. Because of the sequential relationships between Kitsune's features, lowering one packet's anomaly score also reduces the anomaly scores of the subsequent packets, until the scores eventually climb back above the threshold. The number of subsequent attack packets that an adversarial packet pushes below the threshold is defined as the impact of that packet, and the metric reports the average packet impact over all malicious packets in an attack file. A high packet impact indicates that fewer malicious packets need to be modified overall.

Percentage Change (PC). PC measures the number of packets that Liuer Mihou has to modify to make all attack packets bypass detection. We did not explicitly minimise the number of modified packets in the attack formulation, so this metric mainly serves as a tie-breaker when models have similar RP and PI.

Figure A2 shows the RP, PI, and PC for all three attacks. Across all attacks, the assignment method does not significantly affect PI and RP for a given search algorithm; however, RA has a significantly higher PC than the other assignment methods. Among the search algorithms, PSO has the worst RP and PI, while PSO-DE and DE perform similarly. Overall, UA with the PSO-DE algorithm performs best thanks to its high RP, high PI, and low PC across all attacks.
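Under our reading of the definitions above, the three metrics could be computed roughly as follows. Function and argument names are ours, and the exact accounting (for instance, whether a modified packet counts towards its own impact) is an assumption.

```python
def reduction_percentage(orig_scores, adv_scores):
    # RP: average relative drop in anomaly score per malicious packet.
    drops = [(o - a) / o for o, a in zip(orig_scores, adv_scores) if o > 0]
    return sum(drops) / len(drops)

def packet_impact(adv_scores, threshold, modified_idx):
    # PI: for each modified packet, count how many packets from that
    # point on stay below the threshold; report the average.
    impacts = []
    for i in modified_idx:
        j = i
        while j < len(adv_scores) and adv_scores[j] < threshold:
            j += 1
        impacts.append(j - i)
    return sum(impacts) / len(impacts)

def percentage_change(n_modified, n_total):
    # PC: fraction of attack packets that had to be modified.
    return n_modified / n_total
```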
An interesting finding from our results is that RA has similar RP and PI to the other assignment methods but a significantly higher PC. To better understand this phenomenon, we compared the history of the particles' positions during the same optimisation problem between RA and UA with PSO in Figure A1.
With RA, the particles move near their local neighbourhood but never converge to a single point, leaving much of the search space unexplored. On the other hand, particles with UA searched a larger area and converged to a point within their local neighbourhood. The stagnation of the particles reveals a weakness of RA: the lack of a total order between any pair of positions. To illustrate, consider an extremely simplified scenario involving two particles p1 and p2 within the same neighbourhood, with cost values f(p1) = 0.3 and f(p2) = 0.5 at iteration t. Under such circumstances, the PSO algorithm will move p2 towards p1. In the next iteration, t + 1, the cost values may be completely different due to the non-deterministic nature of RA, say, f(p1) = 0.5 and f(p2) = 0.2. The PSO algorithm will now move p1 towards p2. This process may repeat several times, resulting in particles moving around in circles and never leaving their local neighbourhood.
Despite the low exploration capability of RA, it still finds a "good enough" solution compared to the other assignments. This, too, is due to the stochastic nature of RA: when the particles move within a local neighbourhood, they repeatedly sample the distribution in search of the lowest cost value. Since multiple optimal solutions exist, the search algorithm will eventually find an optimal solution, albeit one with a high number of crafted packets.
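The leader-flipping behaviour can be reproduced with a toy simulation in which re-evaluating the same position returns a different cost each time, as happens under RA. The cost function below is a hypothetical stand-in, not the paper's objective.

```python
import random

def ra_cost(position):
    # Hypothetical stand-in: under Random Assignment, evaluating the
    # same position draws a fresh payload, hence a fresh cost value.
    return abs(position) + random.random()

random.seed(1)
p1, p2 = 0.2, 0.5  # fixed positions of two particles
leader, flips = None, 0
for _ in range(200):
    current = "p1" if ra_cost(p1) < ra_cost(p2) else "p2"
    if leader is not None and current != leader:
        flips += 1  # the apparent leader changed between evaluations
    leader = current
print("leader flips:", flips)
```

Even though p1 is strictly closer to the optimum, the noisy evaluations repeatedly swap which particle looks best, so the swarm keeps reversing direction instead of converging.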
B.3 Traffic Manipulator Experiments
We have experimented with Traffic Manipulator (TM), using the open-source implementation of TM downloaded from GitHub [19]. Following the instructions on the GitHub page but using our own test data, we obtained the results for the Port Scan (PS), OS & Service Detection (OD) and HTTP Flooding (HF) attacks shown in Table A2.
The results show that after TM's manipulation, the malicious packets are actually less evasive. We suspect this might be due to our poor choice of mimic features, since we naively chose 1000 random benign features as the mimic set. The authors claim that using a GAN to generate mimic features yields better results; however, the code for the GAN is not available on GitHub.
B.4 Outlier Detection Algorithms Evaluation
We have conducted experiments to find suitable outlier detection algorithms for the transferability analysis in Section 4.1. Table A3 shows the performance metrics of the different outlier detection algorithms. We used the sklearn package to create the models with default parameters for all algorithms and, where applicable, set the upper bound on the fraction of training errors to 0.001.
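A minimal sketch of this setup, assuming the 0.001 bound maps onto `nu` for OCSVM and `contamination` for the other detectors, and using synthetic Gaussian features in place of the real traffic features:

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.neighbors import LocalOutlierFactor
from sklearn.ensemble import IsolationForest
from sklearn.covariance import EllipticEnvelope

def build_detectors(bound=0.001):
    """Detectors with sklearn defaults; the training-error bound of 0.001
    maps onto `nu` for OCSVM and `contamination` for the rest (our
    interpretation of the setup)."""
    return {
        "OCSVM": OneClassSVM(nu=bound),
        "LOF": LocalOutlierFactor(contamination=bound, novelty=True),
        "IF": IsolationForest(contamination=bound, random_state=0),
        "EE": EllipticEnvelope(contamination=bound, random_state=0),
    }

# Synthetic stand-ins for benign and malicious feature vectors.
rng = np.random.default_rng(0)
X_benign = rng.normal(0.0, 1.0, size=(500, 4))
X_attack = rng.normal(8.0, 1.0, size=(50, 4))

metrics = {}
for name, model in build_detectors().items():
    model.fit(X_benign)                                  # train on benign only
    tnr = float((model.predict(X_benign) == 1).mean())   # +1 = inlier
    mdr = float((model.predict(X_attack) == -1).mean())  # -1 = outlier
    metrics[name] = (tnr, mdr)
```

Note that sklearn's one-class models return +1 for inliers and -1 for outliers, so TNR and MDR follow directly from the prediction signs.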
Figure 2: Illustration of cost values for the three assignment methods, Random (RA), Seeded (SA) and Uniform (UA). The y-axis represents the cost function; all other variables are held constant.
Algorithm 1: The PSO-DE algorithm. Data: a packet. Result: the mutation operation in ℳ that generates the minimum cost in Equation (1). 1: Define the bounded mutation space Ψ; 2: Randomly initialise the particle positions x and velocities v of the population within Ψ; 3: for each of max_iter iterations do
Figure 3: The IoT testbed used in our experiments.

We have conducted comprehensive experiments on finding the optimal combination of search algorithm and payload assignment strategy and have found that UA with PSO-DE performs best overall. Details of the experiments are provided in Appendix B.2.
Figure 4: Comparison of Kitsune's anomaly scores between Original, Adversarial and Replayed traffic for Port Scan (PS), OS Detection (OD) and HTTP Flooding (HF). The anomaly scores are normalised with respect to the threshold (red line).

Metrics. Semantic metrics compare the severity of the adversarial traffic on the target system to that of the original, unmodified attack. Different network attacks have different goals, so the semantic metrics depend largely on the network attack. For DoS attacks, we measure the Round-Trip Time (RTT) and calculate the Relative Round-trip Delay (RRD) of the device under flooding attacks compared to the normal environment. For probing attacks, we compare the Relative Ports Scanned (RPS) and Relative Time Delay (RTD) between the adversarial and unmodified attacks.
Figure 5: Anomaly scores of malicious traffic for the surrogate (orange) and various outlier detection algorithms (blue).
Figure 6: How Liuer Mihou handles sudden spikes. Left: original traffic with a sudden spike (red dot) above the threshold (red line). Right: the spike is reduced by injecting redundant packets (grey dots), but is still above the threshold.
Figure A1: Particle positions during the search algorithm. Light blue points are visited positions of the particles, red points are the positions at the last iteration, and black points are the locations of the current and previous global best positions: (a) RA with the PSO search algorithm; (b) UA with PSO.

Figure A2: Summary of experimental results: rows represent the three attacks and columns represent the three model metrics.
Metrics. Performance metrics measure the accuracy of the NIDS under clean traffic. Following the conventions in the security literature, benign examples are referred to as negatives and malicious examples as positives. Performance metrics include the True Negative Rate (TNR), which measures the ratio of packets in benign traffic correctly identified as benign, and the Malicious Detection Rate (MDR), which measures the ratio of packets classified as malicious in the malicious traffic. Note that, by Definition 2.1.2, not all packets in malicious traffic are malicious: it may also contain benign packets. Since we do not have ground-truth labels for each malicious packet, we use MDR instead of the True Positive Rate. A high MDR and TNR indicate that the NIDS can effectively separate malicious and benign traffic.
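Both performance metrics reduce to simple ratios over per-packet verdicts; a sketch with our own helper names, where `True` means a packet was flagged as malicious:

```python
def true_negative_rate(benign_flags):
    # TNR: share of benign-traffic packets NOT flagged as malicious.
    return (len(benign_flags) - sum(benign_flags)) / len(benign_flags)

def malicious_detection_rate(malicious_flags):
    # MDR: share of packets in the malicious capture flagged malicious.
    # Note: the denominator is all packets in the capture, since
    # per-packet ground truth is unavailable.
    return sum(malicious_flags) / len(malicious_flags)
```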
Table 1: Adversarial detection and evasion rates of the adversarial traffic generated with Liuer Mihou against various NIDS.

| Attack | ML      | TNR   | MDR  | ADR  | RDR  | AER  | RER   |
|--------|---------|-------|------|------|------|------|-------|
| PS     | Kitsune | 1.000 | 0.74 | 0.00 | 0.00 | 1.00 | 1.00  |
| PS     | SOM     | 0.993 | 0.89 | 0.05 | 0.15 | 0.94 | 0.83  |
| PS     | LOF     | 0.999 | 0.98 | 0.98 | 0.98 | 0.00 | 0.00  |
| PS     | RRCF    | 0.992 | 0.93 | 0.71 | 0.71 | 0.24 | 0.23  |
| PS     | OCSVM   | 0.999 | 0.99 | 0.98 | 0.99 | 0.01 | 0.00  |
| OD     | Kitsune | 1.000 | 0.41 | 0.00 | 0.25 | 1.00 | 0.39  |
| OD     | SOM     | 0.993 | 0.66 | 0.29 | 0.53 | 0.55 | 0.19  |
| OD     | LOF     | 0.999 | 0.95 | 0.95 | 0.97 | 0.00 | -0.01 |
| OD     | RRCF    | 0.995 | 0.64 | 0.46 | 0.55 | 0.28 | 0.15  |
| OD     | OCSVM   | 0.999 | 0.84 | 0.83 | 0.88 | 0.00 | -0.05 |
| HF     | Kitsune | 1.000 | 1.00 | 0.00 | 0.33 | 1.00 | 0.67  |
| HF     | SOM     | 0.993 | 1.00 | 0.91 | 0.77 | 0.09 | 0.23  |
| HF     | LOF     | 0.999 | 1.00 | 1.00 | 1.00 | 0.00 | 0.00  |
| HF     | RRCF    | 0.978 | 1.00 | 1.00 | 1.00 | 0.00 | 0.00  |
| HF     | OCSVM   | 0.999 | 1.00 | 1.00 | 1.00 | 0.00 | 0.00  |
cause a slight increase in the adversarial packet's anomaly scores, potentially making the attack detectable. Nevertheless, the magnitude of the anomaly scores in both adversarial and replayed attacks has been reduced significantly, as illustrated in Figure 4.
Table 2: Comparison of Euclidean Distance (ED) and threshold values between the surrogate NIDS and different outlier detection algorithms.

| Attack | Algorithm | ED     | Algorithm threshold | Surrogate threshold | Difference |
|--------|-----------|--------|---------------------|---------------------|------------|
| PS     | Kitsune   | 0.1416 | 0.2498              | 0.0684              | -0.1814    |
| PS     | SOM       | 0.1961 | 0.1696              | 0.0684              | -0.1011    |
| PS     | LOF       | 0.1780 | 0.0221              | 0.0684              | 0.0463     |
| PS     | RRCF      | 0.3990 | 0.0143              | 0.0684              | 0.0541     |
| PS     | OCSVM     | 0.4184 | 0.0214              | 0.0684              | 0.0470     |
| OD     | Kitsune   | 0.1376 | 0.2398              | 0.0711              | -0.1687    |
| OD     | SOM       | 0.1468 | 0.1663              | 0.0711              | -0.0952    |
| OD     | LOF       | 0.1132 | 0.0217              | 0.0711              | 0.0494     |
| OD     | RRCF      | 0.1598 | 0.0080              | 0.0711              | 0.0632     |
| OD     | OCSVM     | 0.4417 | 0.1605              | 0.0711              | -0.0893    |
| HF     | Kitsune   | 0.1210 | 0.0096              | 0.0005              | -0.0091    |
| HF     | SOM       | 0.1527 | 0.0160              | 0.0005              | -0.0155    |
| HF     | LOF       | 0.1489 | 0.0018              | 0.0005              | -0.0013    |
| HF     | OCSVM     | 0.5405 | 0.0293              | 0.0005              | -0.0288    |
| HF     | RRCF      | 0.4329 | 0.0059              | 0.0005              | -0.0053    |
Table 3: The evasiveness of Liuer Mihou against Kitsune on the Kitsune dataset [36] compared to the AER of Traffic Manipulator (TM).

| Attacks           | MDR   | AER-LM | AER-TM |
|-------------------|-------|--------|--------|
| Active Wiretap    | 0.921 | 0.980  | N/A    |
| ARP MITM          | 0.798 | 0.512  | N/A    |
| Fuzzing           | 0.912 | 0.905  | 0.955  |
| Mirai             | 0.882 | 0.823  | 0.721  |
| OS                | 0.990 | 0.110  | N/A    |
| SSDP Flooding     | 1.000 | 0.791  | 0.532  |
| SSL Renegotiation | 0.987 | 0.450  | N/A    |
| SYN DoS           | 0.379 | 0.974  | N/A    |
| Video Injection   | 0.984 | 0.461  | N/A    |
Table 4: Comparison between original and adversarial HTTP Flooding attacks.

| Traffic Type   | RTT (ms) | RRD    |
|----------------|----------|--------|
| Normal         | 6.602    | 1.000  |
| Original HF    | 173.393  | 26.264 |
| Liuer Mihou HF | 10.303   | 1.561  |
Table 5: Comparison between original and adversarial scanning attacks.

| Traffic        | Ports Scanned | RPS   | Time (s) | RTD   |
|----------------|---------------|-------|----------|-------|
| Original OD    | 155           | 1     | 1,654.55 | 1.00  |
| Liuer Mihou OD | 155           | 1     | 1,792.52 | 1.08  |
| Original PS    | 155           | 1     | 1.99     | 1.00  |
| Liuer Mihou PS | 150           | 0.968 | 19.99    | 10.04 |
unmodified attack with the malicious metrics defined in Section 3.3.3.
Table 6: Detection results of Kitsune with Feature Squeezing.

| Traffic | FS: Clean | FS: Adv. | Kitsune: Benign | Kitsune: Malicious | Total  |
|---------|-----------|----------|-----------------|--------------------|--------|
| Benign  | 14400     | 0        | 14400           | 0                  | 14400  |
| PS mal  | 1975      | 167      | 565             | 1577               | 2142   |
| PS adv  | 2142      | 0        | 2142            | 0                  | 2142   |
| PS rep  | 2252      | 0        | 2252            | 0                  | 2252   |
| OD mal  | 27075     | 949      | 16651           | 11373              | 28024  |
| OD adv  | 28074     | 0        | 28074           | 0                  | 28074  |
| OD rep  | 33571     | 0        | 33571           | 8313               | 33571  |
| HF mal  | 33462     | 607305   | 935             | 639832             | 640767 |
| HF adv  | 37440     | 0        | 37440           | 0                  | 37440  |
| HF rep  | 36653     | 0        | 36653           | 12089              | 36653  |
Table 7: Detection results of Kitsune with Mag-Net.

| Traffic | Mag-Net: Clean | Mag-Net: Adv. | Kitsune: Benign | Kitsune: Malicious | Total  |
|---------|----------------|---------------|-----------------|--------------------|--------|
| Benign  | 14400          | 0             | 14400           | 0                  | 14400  |
| PS mal  | 126            | 2016          | 2142            | 0                  | 2142   |
| PS adv  | 750            | 1392          | 2142            | 0                  | 2142   |
| PS rep  | 702            | 1550          | 2252            | 0                  | 2252   |
| OD mal  | 7400           | 20624         | 27789           | 235                | 28024  |
| OD adv  | 9662           | 18412         | 28074           | 0                  | 28074  |
| OD rep  | 7674           | 25897         | 33571           | 0                  | 33571  |
| HF mal  | 322            | 640445        | 640118          | 649                | 640767 |
| HF adv  | 690            | 36750         | 37440           | 0                  | 37440  |
| HF rep  | 728            | 35925         | 36653           | 0                  | 36653  |
Table A1: Liuer Mihou's hyperparameters used in all of the experiments.

| Variable             | Value  | Description                                          |
|----------------------|--------|------------------------------------------------------|
| n_particles          | 20     | Population size of the search algorithm              |
| iterations           | 30     | Maximum number of evolutions of the search algorithm |
| mutation_factor      | 0.8    | Used in differential evolution                       |
| cross_p              | 0.7    | Used in differential evolution                       |
| mutate_prob          | 0.5    | Probability to apply differential evolution instead of PSO |
| max_time_window      | 1 s    | Upper bound for packet delay                         |
| max_packet_size      | 1514 B | Upper bound for packet size                          |
| max_craft_pkt        | 5      | Upper bound for packet injection                     |
| mutation probability | 0.5    | Probability of updating with PSO                     |
Table A2: Evasiveness of Traffic Manipulator (TM) on our dataset.

| Attack | MDR   | ADR   | AER    |
|--------|-------|-------|--------|
| PS     | 0.807 | 0.947 | -0.174 |
| OD     | 0.389 | 0.925 | -1.376 |
| HF     | 0.980 | 0.991 | -0.011 |
Table A3: Performance of the different NIDSes.

| Attack | Algorithm | TNR   | MDR   |
|--------|-----------|-------|-------|
| PS     | Kitsune   | 1.000 | 0.736 |
| PS     | SOM       | 0.993 | 0.894 |
| PS     | LOF       | 0.999 | 0.982 |
| PS     | RRCF      | 0.992 | 0.929 |
| PS     | OCSVM     | 0.999 | 0.986 |
| PS     | IF        | 0.999 | 0.000 |
| PS     | EE        | 1.000 | 0.000 |
| OD     | Kitsune   | 1.000 | 0.406 |
| OD     | SOM       | 0.993 | 0.656 |
| OD     | LOF       | 0.999 | 0.954 |
| OD     | RRCF      | 0.995 | 0.642 |
| OD     | OCSVM     | 0.999 | 0.835 |
| OD     | IF        | 0.999 | 0.000 |
| OD     | EE        | 1.000 | 0.000 |
| HF     | Kitsune   | 1.000 | 0.999 |
| HF     | SOM       | 0.993 | 0.999 |
| HF     | LOF       | 0.999 | 1.000 |
| HF     | RRCF      | 0.978 | 1.000 |
| HF     | OCSVM     | 0.999 | 1.000 |
| HF     | IF        | 0.999 | 0.000 |
| HF     | EE        | 1.000 | 0.000 |

The results show that IF and EE have a TNR close to 1 but an MDR of 0 for all three attacks, which suggests that IF and EE classify all traffic as benign and are therefore not suitable for our dataset.

C SUPPLEMENTARY MATERIAL

C.1 Number of Packets
Table A4: The total number of malicious, adversarial and replay packets for each attack.

| Attack | Malicious | Adversarial | Replay |
|--------|-----------|-------------|--------|
| PS     | 2142      | 2142        | 2252   |
| OD     | 28024     | 28074       | 33571  |
| HF     | 640767    | 37440       | 36653  |

C.2 Symbols and Equations used in Metrics
Table A5: Summary of metrics used in experiments.

| Metric Name                      | Formula                   | Type        |
|----------------------------------|---------------------------|-------------|
| True Negative Rate (TNR)         | (N_ben − P_ben) / N_ben   | Performance |
| Malicious Detection Rate (MDR)   | P_mal / N_mal             | Performance |
| Adversarial Detection Rate (ADR) | P_adv / N_adv             | Evasion     |
| Replay Detection Rate (RDR)      | P_rep / N_rep             | Evasion     |
| Adversarial Evasion Rate (AER)   | (MDR − ADR) / MDR         | Evasion     |
| Replay Evasion Rate (RER)        | (MDR − RDR) / MDR         | Evasion     |
| Relative Round-trip Delay (RRD)  | T_adv / T_orig            | Semantic    |
| Relative Ports Scanned (RPS)     | p_adv / p_orig            | Semantic    |
| Relative Time Delay (RTD)        | t_adv / t_orig            | Semantic    |
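The two relative evasion metrics (AER and RER) share one formula; a small helper (our own naming) makes the computation explicit:

```python
def evasion_rate(mdr, detection_rate):
    # AER and RER from Table A5: relative drop in detection rate
    # compared to the unmodified attack. Pass ADR for AER, RDR for RER.
    return (mdr - detection_rate) / mdr

# Example: MDR 0.74 and ADR 0.00 give an AER of 1.0,
# matching the PS/Kitsune row of Table 1.
print(evasion_rate(0.74, 0.00))  # -> 1.0
```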
Table A6: Summary of symbols used in metrics.

| Symbol         | Meaning                                             |
|----------------|-----------------------------------------------------|
| P_ben          | # of positive packets detected in benign packets    |
| N_ben          | Total number of benign packets                      |
| P_mal          | # of positive packets detected in malicious packets |
| N_mal          | Total number of malicious packets                   |
| P_adv          | # of positive packets detected in adversarial packets |
| N_adv          | Total number of adversarial packets                 |
| P_rep          | # of positive packets detected in replay packets    |
| N_rep          | Total number of replayed packets                    |
| T_orig, T_adv  | Response time of original and adversarial attack    |
| p_orig, p_adv  | Ports detected by original and adversarial attacks  |
| t_orig, t_adv  | Time taken for original and adversarial attacks     |
Liuer Mihou, also known as the six-eared macaque, is one of the antagonists in the classic Chinese novel Journey to the West and is best known for imitating Sun Wukong, the monkey king.
Other common attacks, such as ARP poisoning and ARP host discovery, were also conducted, but both our victim and surrogate NIDS failed to detect them, so they are ignored in our experiments.
REFERENCES

George A Adam, Petr Smirnov, David Duvenaud, Benjamin Haibe-Kains, and Anna Goldenberg. 2018. Stochastic combinatorial ensembles for defending against adversarial examples. arXiv preprint arXiv:1808.06645.

Mohammed Ali Al-Garadi, Amr Mohamed, Abdulla Khalid Al-Ali, Xiaojiang Du, Ihsan Ali, and Mohsen Guizani. 2020. A survey of machine and deep learning methods for internet of things (IoT) security. IEEE Communications Surveys & Tutorials 22, 3 (2020), 1646-1685.

Anonymous. 2020. IoT intrusion detection dataset. To be disclosed later.

Appneta. 2020. Tcpreplay - Pcap editing and replaying utilities. https://tcpreplay.appneta.com/. Accessed: 2020-9-15.

Daniel Arp, Erwin Quiring, Feargus Pendlebury, Alexander Warnecke, Fabio Pierazzi, Christian Wressnegger, Lorenzo Cavallaro, and Konrad Rieck. 2020. Dos and Don'ts of Machine Learning in Computer Security. arXiv preprint arXiv:2010.09470.

Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420.

Matthew Bartos, Abhiram Mullapudi, and Sara Troutman. 2019. Implementation of the Robust Random Cut Forest algorithm for anomaly detection on streams. Journal of Open Source Software 4, 35 (2019), 1336.

Christian Blum and Andrea Roli. 2003. Metaheuristics in combinatorial optimization: Overview and conceptual comparison. ACM Computing Surveys (CSUR) 35, 3 (2003), 268-308.

Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao. 2020. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv preprint arXiv:2004.10934.

Nicholas Carlini and David Wagner. 2017. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. 3-14.

Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 39-57.

Joseph Clements, Yuzhe Yang, Ankur Sharma, Hongxin Hu, and Yingjie Lao. 2019. Rallying adversarial techniques against deep learning for network security. arXiv preprint arXiv:1903.11688.

Marco Dorigo, Mauro Birattari, and Thomas Stutzle. 2006. Ant colony optimization. IEEE Computational Intelligence Magazine 1, 4 (2006), 28-39.

Andries Petrus Engelbrecht. 2013. Particle swarm optimization: Global best or local best? In 2013 BRICS Congress on Computational Intelligence and 11th Brazilian Congress on Computational Intelligence. IEEE, 124-135.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.

Google. 2020. Intro to Autoencoders. https://www.tensorflow.org/tutorials/generative/autoencoder#third_example_anomaly_detection. Accessed: 2020-9-15.

Sudipto Guha, Nina Mishra, Gourav Roy, and Okke Schrijvers. 2016. Robust random cut forest based anomaly detection on streams. In International Conference on Machine Learning. PMLR, 2712-2721.

Dongqi Han. 2020. Traffic Manipulator. https://github.com/dongtsi/TrafficManipulator/tree/99807048c61dea548c55bc2720ea31369bc90fd1. Last accessed: 2021-11-10.

Dongqi Han, Zhiliang Wang, Ying Zhong, Wenqi Chen, Jiahai Yang, Shuqiang Lu, Xingang Shi, and Xia Yin. 2020. Practical Traffic-space Adversarial Attacks on Learning-based NIDSs. arXiv preprint arXiv:2005.07519.

Mohammad J Hashemi, Greg Cusack, and Eric Keller. 2019. Towards Evaluation of NIDSs in Adversarial Setting. In Proceedings of the 3rd ACM CoNEXT Workshop on Big DAta, Machine Learning and Artificial Intelligence for Data Communication Networks. 14-21.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Identity mappings in deep residual networks. In European Conference on Computer Vision. Springer, 630-645.

Ivan Homoliak, Martin Teknos, Martín Ochoa, Dominik Breitenbacher, Saeid Hosseini, and Petr Hanacek. 2018. Improving network intrusion detection classifiers by non-payload-based exploit-independent obfuscations: An adversarial approach. arXiv preprint arXiv:1805.02684.

Weiwei Hu and Ying Tan. 2017. Generating adversarial malware examples for black-box attacks based on GAN. arXiv preprint arXiv:1702.05983.

Olakunle Ibitoye, Omair Shafiq, and Ashraf Matrawy. 2019. Analyzing adversarial attacks against deep learning for intrusion detection in IoT networks. In 2019 IEEE Global Communications Conference (GLOBECOM). IEEE, 1-6.

James Kennedy and Russell Eberhart. 1995. Particle swarm optimization. In Proceedings of ICNN'95 - International Conference on Neural Networks, Vol. 4. IEEE, 1942-1948.

Teuvo Kohonen and Erkki Oja. 1998. Visual feature analysis by the self-organising maps. Neural Computing & Applications 7, 3 (1998), 273-286.

Aditya Kuppa, Slawomir Grzonkowski, Muhammad Rizwan Asghar, and Nhien-An Le-Khac. 2019. Black Box Attacks on Deep Anomaly Detectors. In Proceedings of the 14th International Conference on Availability, Reliability and Security (ARES 2019), Canterbury, UK, August 26-29, 2019. ACM, 21:1-21:10. https://doi.org/10.1145/3339252.3339266

Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2016. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533.

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature 521, 7553 (2015), 436-444.

Zilong Lin, Yong Shi, and Zhi Xue. 2018. IDSGAN: Generative adversarial networks for attack generation against intrusion detection. arXiv preprint arXiv:1809.02077.

Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.

Dongyu Meng and Hao Chen. 2017. MagNet: a two-pronged defense against adversarial examples. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 135-147.

Seyedali Mirjalili, Seyed Mohammad Mirjalili, and Andrew Lewis. 2014. Grey wolf optimizer. Advances in Engineering Software 69 (2014), 46-61.

Yisroel Mirsky. 2020. Kitsune-py. https://github.com/ymirsky/Kitsune-py. Accessed: 2020-9-15.

Yisroel Mirsky, Tomer Doitshman, Yuval Elovici, and Asaf Shabtai. 2018. Kitsune: an ensemble of autoencoders for online network intrusion detection. arXiv preprint arXiv:1802.09089.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. 2016. DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2574-2582.

Tianyu Pang, Kun Xu, Chao Du, Ning Chen, and Jun Zhu. 2019. Improving adversarial robustness via promoting ensemble diversity. In International Conference on Machine Learning. PMLR, 4970-4979.

Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. 2016. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277.

Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. 2016. The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 372-387.

Fabio Pierazzi, Feargus Pendlebury, Jacopo Cortellazzi, and Lorenzo Cavallaro. 2020. Intriguing properties of adversarial ML attacks in the problem space. In 2020 IEEE Symposium on Security and Privacy (SP). IEEE, 1332-1349.

Aritran Piplai, Sai Sree Laya Chukkapalli, and Anupam Joshi. 2020. NAttack! Adversarial Attacks to bypass a GAN based classifier trained to detect Network intrusion. arXiv preprint arXiv:2002.08527.

Haşim Sak, Andrew Senior, and Françoise Beaufays. 2014. Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128.

Sailik Sengupta, Tathagata Chakraborti, and Subbarao Kambhampati. 2019. MTDeep: boosting the security of deep neural nets against adversarial attacks with moving target defense. In International Conference on Decision and Game Theory for Security. Springer, 479-491.

Iman Sharafaldin, Arash Habibi Lashkari, and Ali A Ghorbani. 2018. Toward generating a new intrusion detection dataset and intrusion traffic characterization. In ICISSP. 108-116.

Robin Sommer and Vern Paxson. 2010. Outside the closed world: On using machine learning for network intrusion detection. In 2010 IEEE Symposium on Security and Privacy (SP). IEEE, 305-316.

Rainer Storn and Kenneth Price. 1997. Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization 11, 4 (1997), 341-359.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.

Trevillie. 2018. MagNet. https://github.com/Trevillie/MagNet. Last accessed: 2021-9-23.

Giuseppe Vettigli. 2018. MiniSom: minimalistic and NumPy-based implementation of the Self Organizing Map. https://github.com/JustGlowing/minisom/

Weilin Xu, David Evans, and Yanjun Qi. 2017. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155.
|
[
"https://github.com/dongtsi/",
"https://github.com/ymirsky/Kitsune-py.",
"https://github.com/Trevillie/MagNet.",
"https://github.com/JustGlowing/"
] |
[
"The UAVid Dataset for Video Semantic Segmentation",
"The UAVid Dataset for Video Semantic Segmentation"
] |
[
"Ye Lyu ",
"George Vosselman ",
"Guisong Xia ",
"Alper Yilmaz ",
"Michael Ying Yang "
] |
[] |
[] |
Video semantic segmentation has been one of the research focuses in computer vision recently. It serves as a perception foundation for many fields, such as robotics and autonomous driving. The fast development of semantic segmentation owes enormously to large scale datasets, especially for deep learning related methods. Currently, there already exist several semantic segmentation datasets for complex urban scenes, such as the Cityscapes and CamVid datasets. They have been the standard datasets for comparison among semantic segmentation methods. In this paper, we introduce a new high resolution UAV video semantic segmentation dataset as a complement, UAVid. Our UAV dataset consists of 30 video sequences capturing high resolution images. In total, 300 images have been densely labelled with 8 classes for the urban scene understanding task. Our dataset brings out new challenges. We provide several deep learning baseline methods, among which the proposed novel Multi-Scale-Dilation net performs the best via multi-scale feature extraction. We have also explored the usability of the sequence data by leveraging on the CRF model in both spatial and temporal domains.
| null |
[
"https://arxiv.org/pdf/1810.10438v1.pdf"
] | 53,084,841 |
1810.10438
|
58a6eb3584b2f5df2f25d39a218904d510cae516
|
The UAVid Dataset for Video Semantic Segmentation
Ye Lyu
George Vosselman
Guisong Xia
Alper Yilmaz
Michael Ying Yang
The UAVid Dataset for Video Semantic Segmentation
Video semantic segmentation has been one of the research focuses in computer vision recently. It serves as a perception foundation for many fields, such as robotics and autonomous driving. The fast development of semantic segmentation owes enormously to large scale datasets, especially for deep learning related methods. Currently, there already exist several semantic segmentation datasets for complex urban scenes, such as the Cityscapes and CamVid datasets. They have been the standard datasets for comparison among semantic segmentation methods. In this paper, we introduce a new high resolution UAV video semantic segmentation dataset as a complement, UAVid. Our UAV dataset consists of 30 video sequences capturing high resolution images. In total, 300 images have been densely labelled with 8 classes for the urban scene understanding task. Our dataset brings out new challenges. We provide several deep learning baseline methods, among which the proposed novel Multi-Scale-Dilation net performs the best via multi-scale feature extraction. We have also explored the usability of the sequence data by leveraging on the CRF model in both spatial and temporal domains.
I. INTRODUCTION
Visual scene understanding has been advancing in recent years, and it serves as a perception foundation for many fields such as robotics and autonomous driving. The most effective and successful methods for scene understanding tasks adopt deep learning as their cornerstone, as it can distil high level semantic knowledge from the training data. However, the drawback is that deep learning requires a tremendous number of samples for training to make it learn useful knowledge instead of noise, especially for real world applications. Semantic segmentation, as part of scene understanding, is to assign labels to each pixel in the image. To make the best of the deep learning method, a large number of densely labelled images are required. At present, there are only several public semantic segmentation datasets available, which focus only on certain applications. MS COCO [1] provides a semantic segmentation dataset containing common object recognition in common scenes, and its semantic labelling task focuses on person, car, animal and different stuffs. The Pascal VOC dataset [2] also provides objects like bus, car, cow and dog for the semantic segmentation task. Other semantic segmentation datasets are designed for street scene object recognition. Their target objects include pedestrians, cars, roads, lanes, traffic lights, trees and other street scene related objects. Notably, CamVid [3] provides continuously labelled driving frames, which can be used for temporal consistency evaluation. The Highway Driving dataset [4] provides 30Hz labels that are even denser in the temporal domain, and it is designed for semantic video segmentation of driving scenes. The Daimler Urban Segmentation dataset [5] is also a video dataset for street scene understanding, but its labels are sparser in the temporal domain. The Cityscapes dataset [6] focuses more on data variation as it is much larger in the number of labelled frames, which are collected from 50 cities, making it closer to real world complexity.
Each frame is much larger in size compared with CamVid. The newly published Berkeley Deep Drive dataset [7] has even more image labels with medium image size across multiple street scenes. The KITTI Vision Benchmark Suite [8] also provides images of medium size for the task. To help learning models to generalize well across different scenes, ADE20K dataset [9] contributes as it spans more diverse scenes, and objects from much more different categories are labelled. ADE20K dataset brings more variability and complexity for general object representations in images. For remote sensing community, aerial image dataset is provided for ISPRS 2D semantic labelling contest [10]. All datasets above have had great impacts on the development of current state-of-the-art semantic segmentation methods. Dynamic scene understanding is another interesting topic. There are several video datasets for moving foreground objects segmentation, such as Video Segmentation Benchmark(VSB100) [11], [12], Freiburg-Berkeley Motion Segmentation dataset(MoSeg) [13], [14] and Densely Annotated VIdeo Segmentation dataset(DAVIS) [15]. In these datasets, foreground objects are labelled densely in both spatial and temporal domain. The challenge for continuous foreground segmentation is that the prediction across highly correlated frames should be consistent. Segmenting foreground objects of interest with consistency is difficult, but useful for surveillance and monitoring.
At present, most of the modern visual semantic segmentation tasks use information acquired on the ground. However, another data acquisition platform is more and more utilized, which is the unmanned aerial vehicle (UAV). Compact and lightweight UAVs are a trend for future data acquisition. UAVs make image retrieval over a large area cheaper and more convenient, which allows quick access to useful information around a certain area. Distinguished from collecting images by satellites, UAVs capture images from the sky with a flexible flying schedule and higher resolution, bringing the possibility to monitor and analyze the landscape at a specific location and time swiftly. These abilities make UAVs an effective data collection means for various applications.
The inherently fundamental applications for UAVs are surveillance [16], [17] and monitoring [18] in the target area. They have already been used for smart farming [19], precision agriculture [20] and weed monitoring [21]. To make such systems more intelligent, they could rely on techniques like semantic segmentation and video object segmentation. In this aspect, the UAV is a great platform to combine both of the two tasks. These two visual understanding tasks could also be the main foundations for higher level smart applications. As the data from UAVs has its own specialties, semantic segmentation and video object segmentation tasks using UAV data deserve more attention. There are existing UAV datasets for detection and behaviour analysis [22], but to the best of our knowledge, public datasets for UAV video semantic segmentation do not exist. In this paper, a new high resolution UAV video semantic segmentation dataset, UAVid, is introduced, which covers semantic segmentation and video object segmentation as a video semantic segmentation task. In total, 300 images from 30 video sequences are densely labelled with 8 object classes. All the labels are acquired with our in-house video labeller tool. To test the usability of our dataset, several typical deep neural networks (DNNs) designed for image semantic segmentation together with CRF based video semantic segmentation methods are evaluated as baselines. In addition, we also show that our novel multi-scale-dilation net model is useful to deal with multi-scale problems for UAV images.
II. DATASET
Designing an UAV video dataset requires careful thought about the data acquisition strategy, UAV flying protocol and object classes selection for annotation. The whole process is designed considering the usefulness and effectiveness for UAV video semantic segmentation research.
A. Data Specification
Our data acquisition and annotation methodology is designed for UAV video semantic segmentation in complex scenes, featuring on both static and moving object recognition. To capture data that contributes the most towards researches on UAV scene understanding, the following features for the dataset are taken into consideration.
• High resolution. We adopt 4K resolution video recording mode with a safe flying height of 30 to 50 meters. In this setting, it is visually clear enough to differentiate most of the objects, and objects that are horizontally far away can also be detected. In addition, it is even possible to detect humans that are not too far away.
• Consecutive labelling. Our dataset is designed for video semantic segmentation, so it is preferred to label images in sequence, where prediction stability can be evaluated. As it is too expensive to label densely in temporal space, we label 10 images with a 5-second interval for each sequence.
• Complex and dynamic scenes with diverse objects. Our dataset aims at achieving real world complexity, where there are both static and moving objects. Scenes near streets are chosen for the UAVid dataset as they are complex enough with more dynamic human activities. A variety of objects appear in the scene such as cars, pedestrians, buildings, roads, vegetation, billboards, light poles, traffic lights and so on. We fly UAVs with slant view along the streets or across different street blocks to acquire such scenes.
• Data variation. In total, 30 small UAV video sequences are captured in 30 different places to bring variance to the dataset, preventing learning algorithms from overfitting. Data acquisition is done in good weather conditions with sufficient illumination. We believe that data acquired in dark environments or other weather conditions like snow or rain requires special processing techniques, which are not the focus of our dataset.
B. Class Definition and Statistical Analysis
To fully label all types of objects in the street scene in a 4K UAV image is very expensive. As a result, only the most common and representative types of objects are labelled for our current version of the dataset. In total, 8 classes are deliberately selected for video semantic segmentation: building, road, tree, low vegetation, static car, moving car, human and clutter. Example instances from different classes are shown in Fig. 2. We deliberately divide the car class into moving car and static car classes. Moving car is a special class designed for moving object segmentation. Other classes can be inferred from their appearance and context, while the moving car class may need additional temporal information in order to be separated properly from the static car class. Achieving high accuracy for both static and moving car classes is one possible research goal for our dataset. The number of pixels for each class is reported in Fig. 3, which clearly shows the unbalanced pixel number distribution of the different classes. Most of the pixels are from classes like building, tree, clutter, road and low vegetation, and fewer pixels are from the moving car and static car classes, which are both fewer than 2% of the total pixels. For the human class, it is almost zero, fewer than 0.2% of the total pixels. A smaller pixel number is not necessarily the result of fewer instances, but of the size of each instance. A single building can take more than 10k pixels while a human instance in the image may only take fewer than 100 pixels. Normally, classes with too small pixel numbers are ignored in both training and evaluation for the semantic segmentation task [6]. But we believe humans and cars are important classes that should be kept in street scenes rather than being ignored.
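The per-class statistics behind Fig. 3 amount to a simple pixel count over all label masks. The sketch below uses random stand-in masks instead of real UAVid label maps, so the resulting numbers are illustrative only:

```python
import numpy as np

# 8 UAVid classes: building, road, tree, low vegetation,
# static car, moving car, human, clutter (ids 0..7).
NUM_CLASSES = 8

# Random stand-ins for real label maps (small size for speed).
rng = np.random.default_rng(0)
masks = [rng.integers(0, NUM_CLASSES, size=(270, 480)) for _ in range(3)]

counts = np.zeros(NUM_CLASSES, dtype=np.int64)
for m in masks:
    counts += np.bincount(m.ravel(), minlength=NUM_CLASSES)

ratios = counts / counts.sum()  # fraction of all labelled pixels per class
```

On the real annotations, `ratios` would reveal the imbalance described above (e.g. car classes below 2%, human below 0.2%).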
C. Annotation Method
We provide densely labelled fine annotations for high resolution UAV images. All the labels are acquired with our own video labeller tool. Pixel level, super-pixel level and polygon level annotation methods are provided for users. For super-pixel annotation, we adopt SLIC method [23] to achieve super-pixel segmentation with 4 different scales, which can be useful for objects with fuzzy boundaries like trees. Polygon annotation is used for regular shape annotation like buildings, while pixel level annotation serves as a basic annotation method. Our tool also provides video play functionality around certain frames to help inspecting whether certain objects are moving or not. As there might be overlapping objects, we label the overlapping pixels to be the class that is closer to the camera.
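The super-pixel annotation mode can be illustrated as follows: one click assigns a class to every pixel of a super-pixel. `paint_superpixel` and the `UNLABELLED` value are hypothetical helpers; in practice the segment map would come from SLIC [23], while here it is a toy array:

```python
import numpy as np

UNLABELLED = 255  # hypothetical "not yet annotated" value

def paint_superpixel(labels, segments, seg_id, class_id):
    """Assign class_id to every pixel of super-pixel seg_id (one click)."""
    out = labels.copy()
    out[segments == seg_id] = class_id
    return out

# Toy 2x4 segment map with two super-pixels (ids 0 and 1).
segments = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1]])
labels = np.full(segments.shape, UNLABELLED)
labels = paint_superpixel(labels, segments, seg_id=1, class_id=3)
```

This is why super-pixels help with fuzzy boundaries such as trees: the annotator labels regions instead of tracing pixels.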
D. Dataset Splits
The whole 30 densely labelled video sequences are divided into training, validation and test splits. We do not split the data completely randomly, but in a way that makes each split to be representative enough for the variability of different scenes. All three splits should contain all classes. Our data is split at sequence level, and each sequence comes from a different scene place. Following this scheme, we get 15 training sequences(150 labelled images) and 5 validation sequences(50 labelled images) for training and validation splits respectively, whose annotations will be made publicly available. The test split consists of the left 10 sequences(100 labelled images), whose labels are withheld for benchmarking purposes. The ratios among training, validation and test splits are 3:1:2.
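The sequence-level split can be sketched as below; the sequence names are placeholders, and the actual assignment of scenes to splits is curated for representativeness rather than positional:

```python
def split_sequences(seq_ids, n_train=15, n_val=5, n_test=10):
    """Sequence-level split: 15/5/10 sequences, i.e. a 3:1:2 ratio."""
    assert len(seq_ids) == n_train + n_val + n_test
    train = seq_ids[:n_train]
    val = seq_ids[n_train:n_train + n_val]
    test = seq_ids[n_train + n_val:]
    return train, val, test

# 30 sequences, 10 labelled images each.
train, val, test = split_sequences([f"seq{i:02d}" for i in range(30)])
```

Splitting at sequence level (not image level) avoids near-duplicate frames leaking between splits.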
III. VIDEO SEMANTIC LABELLING
The task for UAVid dataset is to predict per-pixel semantic labelling for the UAV video sequences. The original video file for each sequence is provided together with the labelled images.
A. Tasks and Metrics
The semantic labelling performance is assessed based on the standard IoU metric [2]. The goal for this task is to achieve as high an IoU score as possible. For the UAVid dataset, the clutter class has a relatively large pixel number ratio and consists of meaningful objects, so it is taken as one class for both training and evaluation rather than being ignored.
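A minimal sketch of the per-class IoU computation from a confusion matrix (our own helper functions for illustration, not the official evaluation code):

```python
import numpy as np

def confusion_matrix(gt, pred, num_classes):
    """Accumulate a num_classes x num_classes confusion matrix."""
    idx = gt.astype(np.int64) * num_classes + pred.astype(np.int64)
    return np.bincount(idx.ravel(), minlength=num_classes ** 2).reshape(
        num_classes, num_classes)

def per_class_iou(cm):
    """IoU = TP / (TP + FP + FN) for each class; rows are ground truth."""
    tp = np.diag(cm).astype(np.float64)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    return tp / np.maximum(tp + fp + fn, 1)
```

The mean IoU reported in Tab. I is then simply the average of the per-class scores.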
B. Networks for Baselines
To test the usability of our UAVid dataset for semantic labelling task, we have evaluated the performance of several deep learning models for single image prediction. Although static car and moving car cannot be differentiated by their appearance from only one image, it is still possible to predict based on their context. We start with 3 typical deep fully convolutional neural networks, they are FCN-8s [24], Dilation net [25] and U-Net [26]. FCN-8s [24] has been a good baseline candidate for semantic segmentation. It is a giant model with strong and effective feature extraction ability, but yet simple in structure. It takes a series of simple 3x3 convolutional layers to form the main parts for high level semantic information extraction. This simplicity in structure also makes FCN-8s popular and widely used for semantic segmentation.
Dilation net [25] has similar front end structure with FCN-8s, but it removes last two pooling layers in VGG16. Instead, convolutions in all following layers from conv5 block are dilated by a factor of 2 due to the ablated pooling layers. Dilation net also applies a multi-scale context aggregation module in the end, which expands the receptive field to boost the performance for prediction. The module is achieved by using a series of dilated convolutional layers, whose dilation rate gradually expands as the layer goes deeper.
U-Net [26] is a typical symmetric encoder-decoder network originally designed for segmentation on medical images. The encoder extracts features, which are gradually decoded through the decoder. The features from each convolutional block in the encoder are concatenated to the corresponding convolutional block in the decoder to gradually acquire features of higher and higher resolution for prediction. U-Net is also simple in structure but good at preserving object boundaries.
C. Multi-Scale-Dilation Net
For a high resolution image captured by UAV in slant view, size of objects in different horizontal distances can dramatically vary. Such large scale variation in an UAV image can affect the accuracy for prediction. In a network, each output pixel in the final prediction layer has a fixed receptive field, which is formed by pixels in the original image that can affect the final prediction of that output pixel. When the objects are too small, the neural network may learn the noise from the background. When the objects are too big, the model may not acquire enough information to infer the label correctly. This is a long standing notorious problem in computer vision. To reduce such large scale variation effect, a novel multi-scale-dilation net (MS-Dilation net) is proposed in this paper.
One way to expand the receptive field of a network is to use dilated convolution. Dilated convolution can be implemented in different ways, one of which is to leverage on the space to batch operation (S2B) and the batch to space operation (B2S), which are provided in the Tensorflow API. The space to batch operation outputs a copy of the input tensor where values from the height and width dimensions are moved to the batch dimension. The batch to space operation does the inverse. Applying a standard 2D convolution to the image after S2B is the same as applying a dilated convolution to the original image. A single dilated convolution can be performed as S2B -> convolution -> B2S. This implementation of dilated convolution is efficient when there is a cascade of dilated convolutions, where the intermediate S2B and B2S operations cancel out. For instance, 2 consecutive dilated convolutions with the same dilation rate can be performed as S2B -> convolution -> convolution -> B2S.
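A minimal NumPy sketch of the two rearrangement operations (ignoring the zero padding that the Tensorflow ops also support); the function names mirror the ops but the implementation here is our own:

```python
import numpy as np

def space_to_batch(x, block=2):
    """Move spatial values to the batch dim; x: (N, H, W), H and W divisible by block."""
    n, h, w = x.shape
    x = x.reshape(n, h // block, block, w // block, block)
    x = x.transpose(2, 4, 0, 1, 3)  # (bh, bw, N, H/block, W/block)
    return x.reshape(block * block * n, h // block, w // block)

def batch_to_space(x, block=2):
    """Inverse of space_to_batch."""
    bn, h, w = x.shape
    n = bn // (block * block)
    x = x.reshape(block, block, n, h, w).transpose(2, 3, 0, 4, 1)
    return x.reshape(n, h * block, w * block)
```

Pixels at the same offset modulo `block` land in the same batch element, which is exactly why a standard convolution on the rearranged tensor behaves as a dilated convolution on the original.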
By utilizing space to batch operation and batch to space operation, semantic segmentation can be done in different scales. In total, three streams are created for three scales as shown in Fig. 4. For each stream, a modified FCN-8s is used as the main structure, where the depth for each convolutional block is reduced due to the memory limitation. Here, filter depth is sacrificed for more scales. To reduce detail loss in feature extraction, the pooling layer in the fifth convolutional block is removed to keep a smaller receptive field. Instead, features with larger receptive field from other streams are concatenated to higher resolution features through skip connection in conv7 layers. Note that these skip connections need batch to space operation to retain spatial and batch number alignment. In this way, each stream handles feature extraction in its own scale and features from larger scales are aggregated to boost prediction for higher resolution streams.
Multiple scales may also be achieved by down sampling images directly [27]. However, there are 3 advantages for our multi-scale processing. First, every pixel is assigned to one batch in space to batch operation and all the labelled pixels shall be used for each scale with no waste. Second, there is strict alignment between image and label pairs in each scale as there is no mixture of image pixels or mixture of label pixels. Finally, the concatenated features in the conv7 layer are also strictly aligned.
For each scale, corresponding ground truth labels can also be generated through space to batch operation in the same way as the generation for input images in different streams. With ground truth labels for each scale, deeply supervised training can be done. The losses in three scales are all cross entropy loss. The loss in stream1 is the target loss while the losses in stream2 and stream3 are auxiliary losses, which we call the deep supervision losses. The final loss to be optimized is the weighted mean of the three losses, shown in the equation below. m1, m2, m3 are numbers of pixels of an image in each stream. n is batch index and t is pixel index. p is target probability distribution of a pixel, while q is the predicted probability distribution.
CE_1 = \frac{1}{m_1} \sum_{t=1}^{m_1} -p_t \log(q_t) \quad (1)

CE_2 = \frac{1}{4 m_2} \sum_{n=1}^{4} \sum_{t=1}^{m_2} -p_t^n \log(q_t^n) \quad (2)

CE_3 = \frac{1}{16 m_3} \sum_{n=1}^{16} \sum_{t=1}^{m_3} -p_t^n \log(q_t^n) \quad (3)

Loss = \frac{w_1 \times CE_1 + w_2 \times CE_2 + w_3 \times CE_3}{w_1 + w_2 + w_3} \quad (4)
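As a numeric sanity check, the weighted deep-supervision loss can be sketched as follows. The stream weights 1.8, 0.8 and 0.4 are the ones reported in Section IV-A; `cross_entropy` and `total_loss` are illustrative helper names:

```python
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    """Mean cross entropy; p, q are (num_pixels, num_classes) distributions."""
    return float(np.mean(np.sum(-p * np.log(q + eps), axis=1)))

def total_loss(ce1, ce2, ce3, w1=1.8, w2=0.8, w3=0.4):
    """Weighted mean of the per-stream losses, as in Eq. (4)."""
    return (w1 * ce1 + w2 * ce2 + w3 * ce3) / (w1 + w2 + w3)
```

With a perfect prediction the per-stream cross entropy vanishes, and equal per-stream losses pass through the weighted mean unchanged.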
It is also interesting to note that every layer becomes a dilated version for stream2 and stream3, especially the pooling layer and the transposed convolutional layer, which turn into a dilated pooling layer and a dilated transposed convolutional layer respectively. Compared to layers in stream1, layers in stream2 are dilated by a rate of 2 and layers in stream3 are dilated by a rate of 4. These 3 streams together form the MS-Dilation net.

Fig. 4. Structure of the proposed Multi-Scale-Dilation network. Three scales of images are achieved by the Space to Batch operation with rate 2. Standard convolutions in stream2 and stream3 are equivalent to dilated convolutions in stream1. The main structure for each stream is FCN-8s [24], which could be replaced by any other network. Features are aggregated at the conv7 layer for better prediction on finer scales.
D. Fine-tune Pre-trained Networks
Due to the limited size of our UAVid dataset, training from scratch may not be enough for the networks to learn diverse features for better label prediction. Pre-training a network has been proved to be very useful for various benchmarks [28], [29], [30], [31], which boosts the performance by utilizing more data from other dataset. To reduce the effect of limited training samples, we also explore how much pretraining a network can boost the score for UAVid semantic labelling task. We pre-train all the networks with cityscapes dataset [6], which comprises many more images for training.
E. Video Semantic Segmentation
For video semantic labelling task, it is ideal to output prediction consistently for the same objects observed in multiple different images. Taking advantage of temporal information effectively is valuable for video sequence label prediction. Normally, deep neural networks trained on individual images cannot provide completely consistent predictions spanning several frames. However, different frames provide observations from different viewing positions, through which multiple clues can be collected for object prediction. To utilize temporal information in UAVid dataset, we adopt feature space optimization(FSO) [32] method for sequence data prediction. It smooths the final label prediction for the whole sequence by applying 3D CRF covering both spatial and temporal domain. It is the optical flows and tracks in the method that link the images in temporal domain.
IV. EXPERIMENTS
Our experiments are divided into 3 parts. Firstly, we compare semantic segmentation results by training deep neural networks from scratch. These results serve as the basic baselines. Secondly, we analyse how pre-trained models can be useful for the UAVid semantic labelling task, and we fine-tune deep neural networks that are pre-trained on the cityscapes dataset [6]. Finally, we explore the influence of spatial-temporal regularization by using video sequence data for semantic video segmentation.
It should be noted that the resolution of our UAV images is quite high. The size of each image is 4096×2160 or 3840×2160, which requires too much GPU memory for intermediate feature storage in deep neural networks. As a result, we clip each UAV image into 9 evenly distributed smaller overlapped images that cover the whole image for training. Each clipped image is of size 2048×1024. We keep such a moderate image size in order to reduce the ratio between zero padding area and valid image area. Bigger image size also resembles larger batch size if each pixel is taken as a training sample.
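The 9 evenly distributed overlapping crops can be computed as below. This is a plausible reconstruction of the clipping grid from the stated sizes, not the exact code used:

```python
import numpy as np

def tile_offsets(img_w, img_h, tile_w=2048, tile_h=1024, nx=3, ny=3):
    """Top-left corners of an nx x ny grid of overlapping tiles covering the image."""
    xs = np.linspace(0, img_w - tile_w, nx).round().astype(int)
    ys = np.linspace(0, img_h - tile_h, ny).round().astype(int)
    return [(x, y) for y in ys for x in xs]
```

For a 4096x2160 frame this yields a 3x3 grid whose tiles overlap vertically (1024-pixel tiles over 2160 rows) and abut horizontally, covering every pixel.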
A. Train from Scratch
To have a fair comparison among different deep neural networks, we re-implement all the networks with Tensorflow [33], and all networks are trained with a Nvidia Titan X GPU. To accommodate the networks into 12G of GPU memory, the depth of some layers in Dilation net, U-Net and MS-Dilation net is reduced to maximally fit into the memory. The model configuration detail of different networks is shown in Fig. 5 in the appendix. The neural networks share similar hyper-parameters for training from scratch. All models are trained with the Adam optimizer for 27K iterations (20 epochs). The base learning rate is set to 10^-4, exponentially decaying to 10^-7. Weight decay for all weights in convolutional kernels is set to 10^-5. Training is done with one image per batch. For data augmentation in training, we apply random left and right flips. We also apply a series of color augmentations, including random hue, random contrast, random brightness and random saturation operations.
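The exponential learning rate schedule can be sketched as below; this is a reconstruction from the stated endpoints (10^-4 decaying to 10^-7 over 27K iterations), not the exact Tensorflow schedule used:

```python
def exp_decay_lr(step, total_steps=27000, lr_start=1e-4, lr_end=1e-7):
    """Exponential decay from lr_start at step 0 to lr_end at total_steps."""
    return lr_start * (lr_end / lr_start) ** (step / total_steps)
```

The rate decays by a constant factor per step, so it drops three orders of magnitude smoothly over the run.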
Deep supervision losses are used for our MS-Dilation net. The loss weights for three streams are 1.8, 0.8 and 0.4 respectively. The loss weights for stream2 and stream3 are set smaller than stream1 as the main goal is to minimize the loss in stream1. For Dilation net, basic context aggregation module is used and initialized as it is in [25]. All networks are trained end-to-end and their mean IoU scores are reported in percentage as shown in Tab. I.
For the four networks, they are all better at discriminating building, road and tree classes, achieving IoU scores higher than 50%. The scores for car, vegetation and clutter classes are relatively lower. All four networks completely fail to discriminate human class. Normally, classes with larger pixel number have relatively higher IoU scores. However, IoU score for moving car class is much higher than static car class even though the two classes have similar pixel number. The reason may be that static cars may appear in various context like parking lot, garage, side walk or partially blocked under the trees, while moving cars are normally running in the middle of road with very clear view.
Our model achieves the best mean IoU score and the best IoU score for most of the classes among the four networks. It shows the effectiveness of multi-scale feature extraction.
B. Fine-tune Pre-trained Models
For fine-tuning pre-trained networks, all the networks are pre-trained with the cityscapes dataset [6]. Finely annotated data from both the training and validation splits are used, that is, 3,450 densely labelled images in total. Hyper-parameters and data augmentation are set the same as they are in section IV-A, except that the iteration number is set to 52K. Next, all the networks are fine-tuned with data from the UAVid dataset. As there is still large heterogeneity between these two datasets, all layers are trained for all networks. We only initialize the feature extraction parts of the networks with pre-trained models, while the prediction parts are initialized the same as training from scratch. The learning rate is set to 10^-5, exponentially decaying to 10^-7, for FCN-8s, and 10^-4, exponentially decaying to 10^-7, for the other 3 networks, as they easily get stuck at a local minimum with an initial learning rate of 10^-5 during training. The rest of the hyper-parameters are set the same as training from scratch. The performance is also shown in Tab. I.
To find out whether deep supervision losses are important, we have fine-tuned the MS-Dilation net with 3 different deep supervision plans. For the first plan, we fine-tune the MS-Dilation net without deep supervision losses for 30 epochs by setting the loss weights to 0 in stream2 and stream3. For the second plan, we fine-tune the MS-Dilation net with deep supervision losses for 30 epochs. For the final plan, we fine-tune the MS-Dilation net with deep supervision losses for 20 epochs and without deep supervision losses for another 10 epochs. The IoU scores for the three plans are shown in Tab. II. As shown, the best mean IoU score is achieved by the third plan. The better result for MS-Dilation net+PRT in Tab. I is achieved by fine-tuning for 20 epochs without deep supervision losses after fine-tuning for 20 epochs with deep supervision losses.
Clearly, deep supervision losses are very important for MS-Dilation net. However, neither purely fine-tuning the MS-Dilation net with deep supervision losses nor without achieves the best score. It is the combination of these two fine-tuning processes that brings the best score. Deep supervision losses are important as they can guide the multiscale feature learning process, but the network needs to be further fine-tuned without deep supervision losses to get the best multi-scale filters for prediction.
By fine-tuning the pre-trained models, the performance boost is huge for all networks across all classes except human class. The networks still struggle to differentiate human class. Nevertheless, the improvement is evident for MS-Dilation net with 8% improvement. Decoupling the filters with different scales can be very beneficial when objects appear in large scale difference.
C. Video Semantic Segmentation
For video semantic segmentation, we apply the methods used in feature space optimization (FSO) [32]. As FSO processes a block of images simultaneously, 5 consecutive frames with a 15-frame interval (0.5s-0.7s gap) are extracted from the provided video files, which form a block spanning 2s to 3s, and the test image is located at the center of each block. The gap between consecutive frames is not set too big so as to get good flow extraction. It would be better to have a longer sequence to gain longer temporal regularization, but due to memory limitation, it is not possible to support more than 5 images in 30G of memory without sacrificing the image size.
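The frame indices of one FSO block can be computed as below, assuming a fixed 15-frame interval centered on the test frame:

```python
def block_frame_indices(center_frame, n_frames=5, interval=15):
    """Indices of an n_frames block centered on the labelled test frame."""
    half = (n_frames - 1) // 2
    return [center_frame + (k - half) * interval for k in range(n_frames)]
```

At roughly 24-30 fps, the 60-frame span between the first and last index corresponds to the stated 2s to 3s block.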
FSO processing in each block requires several ingredients. The contour strength for each image is calculated according to [34]. The unary for each image is set as the softmax layer output from each fine-tuned network. Forward flows and backward flows are calculated according to [35], [36]. As the computation speed for optical flow at the original image scale is extremely low, the images to be processed are downsized by 8 times in both width and height, and the final flows at the original scale are obtained through bicubic interpolation and magnification. Then, point trajectories can be calculated according to [37] with the forward and backward flows. Finally, a dense 3D CRF is applied after feature space optimization as described in [32].
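Rescaling the downsized flows back to the original resolution requires magnifying the flow vectors by the resize factor as well, so displacements stay in pixel units of the full-resolution image. The sketch below uses nearest-neighbour upsampling for a dependency-free illustration instead of the bicubic interpolation used in the paper:

```python
import numpy as np

def upscale_flow_nearest(flow, scale=8):
    """Upsample a (H, W, 2) flow field spatially and rescale its vectors."""
    up = np.kron(flow, np.ones((scale, scale, 1)))  # nearest-neighbour upsample
    return up * scale  # convert displacements to full-resolution pixel units
```

Omitting the final multiplication is a common pitfall: the interpolated field would then under-estimate every displacement by the resize factor.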
The IoU scores for the FSO results with unaries from the different fine-tuned networks are reported in Tab. I. For each model, the mean IoU score and the IoU score of each individual class improve, except for the human and moving car classes. FSO favors classes whose instances cover more image pixels: the IoU improvement is smaller for classes with small instances such as static car, and the score drops for the moving car and human classes. The human class IoU score for the MS-Dilation net drops by a large margin, nearly 4%.
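For reference, the per-class IoU used throughout Tab. I can be computed from a confusion matrix as sketched below (an illustrative implementation, not the benchmark's evaluation code):

```python
import numpy as np

def per_class_iou(pred, gt, n_classes):
    """Per-class IoU = TP / (TP + FP + FN) from flattened label maps.

    A confusion matrix is built with np.bincount; row = ground truth
    class, column = predicted class."""
    mask = (gt >= 0) & (gt < n_classes)
    cm = np.bincount(n_classes * gt[mask] + pred[mask],
                     minlength=n_classes ** 2).reshape(n_classes, n_classes)
    inter = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    return inter / np.maximum(union, 1)
```

The mean IoU reported in Tab. I is then simply the mean of this vector over the eight classes.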
V. CONCLUSION AND OUTLOOK
In this paper, we present a new UAVid dataset to advance the development of video semantic segmentation. It captures complex street scenes in slanted view with very high resolution videos. Classes for the video semantic labelling task have been defined and labelled. The usability of the UAVid dataset has been demonstrated with several deep convolutional neural networks, among which the proposed Multi-Scale-Dilation net performs best thanks to its multi-scale feature extraction. It has also been shown that pre-training the network is beneficial for all classes in the UAVid semantic labelling task. In the future, we will continue to collect new UAV video data, which will be labelled densely in time. We will extend the labelling from the current classes to more classes, including window, door, balcony, etc. The benchmark, together with our labelling tool, will be published online.
Fig. 1. Example images and labels from the UAVid dataset. First row: images captured by UAV. Second row: corresponding ground truth labels. Third row: prediction results of the MS-Dilation net+PRT+FSO model as in Tab. I.

Fig. 2. Example instances from different classes. The first row shows the cropped instances; the second row shows the corresponding labels. From left to right: building, road, static car, tree, low vegetation, human and moving car.

Fig. 3. Pixel number histogram.
1 University of Twente, The Netherlands. {y.lyu, george.vosselman, michael.yang}@utwente.nl
2 Wuhan University, China. [email protected]
3 Ohio State University, USA. [email protected]
* Corresponding Author.
TABLE I. IoU scores for different models. PRT stands for pre-train and FSO stands for feature space optimization [32]. IoU scores are reported in percentage and best results are shown in bold.

| Model | Building | Tree | Clutter | Road | Low Vegetation | Static Car | Moving Car | Human | Mean IoU |
|---|---|---|---|---|---|---|---|---|---|
| FCN-8s | 64.3 | 63.8 | 33.5 | 57.6 | 28.1 | 8.4 | 29.1 | 0.0 | 35.6 |
| Dilation Net | 72.8 | 66.9 | 38.5 | 62.4 | 34.4 | 1.2 | 36.8 | 0.0 | 39.1 |
| U-Net | 70.7 | 67.2 | 36.1 | 61.9 | 32.8 | 11.2 | 47.5 | 0.0 | 40.9 |
| MS-Dilation Net (ours) | 74.3 | 68.1 | 40.3 | 63.5 | 35.5 | 11.9 | 42.6 | 0.0 | 42.0 |
| FCN-8s+PRT | 77.4 | 72.7 | 44.0 | 63.8 | 45.0 | 19.1 | 49.5 | 0.6 | 46.5 |
| Dilation Net+PRT | 79.8 | 73.6 | 44.5 | 64.4 | 44.6 | 24.1 | 53.6 | 0.0 | 48.1 |
| U-Net+PRT | 77.5 | 73.3 | 44.8 | 64.2 | 42.3 | 25.8 | **57.8** | 0.0 | 48.2 |
| MS-Dilation Net (ours)+PRT | 79.7 | 74.6 | 44.9 | 65.9 | 46.1 | 21.8 | 57.2 | **8.0** | 49.8 |
| FCN-8s+PRT+FSO | 78.6 | 73.3 | 45.3 | 64.7 | 46.0 | 19.7 | 49.8 | 0.1 | 47.2 |
| Dilation Net+PRT+FSO | 80.7 | 74.0 | 45.4 | 65.1 | 45.5 | 24.5 | 53.6 | 0.0 | 48.6 |
| U-Net+PRT+FSO | 79.0 | 73.8 | **46.4** | 65.3 | 43.5 | **26.8** | 56.6 | 0.0 | 48.9 |
| MS-Dilation Net (ours)+PRT+FSO | **80.9** | **75.5** | 46.3 | **66.7** | **47.9** | 22.3 | 56.9 | 4.2 | **50.1** |

TABLE II. IoU scores for different deep supervision plans. "w" stands for with and "w/o" stands for without.

| Method | Building | Tree | Clutter | Road | Low Vegetation | Static Car | Moving Car | Human | Mean IoU |
|---|---|---|---|---|---|---|---|---|---|
| fine-tune w/o deep supervision | 78.5 | 72.2 | 44.0 | 65.3 | 43.5 | 17.4 | 51.5 | 1.2 | 46.7 |
| fine-tune w deep supervision | 79.2 | 72.5 | 44.8 | 64.6 | 44.3 | 17.0 | 52.8 | 3.4 | 47.3 |
| fine-tune w+w/o deep supervision | 79.4 | 73.1 | 43.7 | 65.5 | 45.3 | 21.3 | 55.8 | 6.3 | 48.8 |
B. Big Image Clipping

Fig. 6. Image clipping illustration. The original images are of size 4096×2160 or 3840×2160, which is too big for training deep neural networks. Instead, the whole image is clipped into 9 evenly distributed smaller overlapped images for training, each of size 2048×1024.
REFERENCES

[1] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common objects in context," in ECCV, 2014, pp. 740-755.
[2] M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, "The Pascal visual object classes challenge: A retrospective," IJCV, vol. 111, no. 1, pp. 98-136, Jan. 2015.
[3] G. J. Brostow, J. Shotton, J. Fauqueur, and R. Cipolla, "Segmentation and recognition using structure from motion point clouds," in ECCV, 2008, pp. 44-57.
[4] B. Kim, J. Yim, and J. Kim, "Highway driving dataset for semantic video segmentation," in BMVC.
[5] T. Scharwächter, M. Enzweiler, U. Franke, and S. Roth, "Efficient multi-cue scene segmentation," in GCPR, 2013, pp. 435-445.
[6] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, "The Cityscapes dataset for semantic urban scene understanding," in CVPR, 2016.
[7] F. Yu, W. Xian, Y. Chen, F. Liu, M. Liao, V. Madhavan, and T. Darrell, "BDD100K: A diverse driving video database with scalable annotation tooling," arXiv preprint arXiv:1805.04687, 2018.
[8] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," International Journal of Robotics Research, vol. 32, no. 11, pp. 1231-1237, 2013.
[9] B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba, "Scene parsing through ADE20K dataset," in CVPR, 2017.
[10] F. Rottensteiner, G. Sohn, M. Gerke, J. Wegner, U. Breitkopf, and J. Jung, "Results of the ISPRS benchmark on urban object detection and 3D building reconstruction," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 93, pp. 256-271, 2014.
[11] F. Galasso, N. Shankar Nagaraja, T. Jimenez Cardenas, T. Brox, and B. Schiele, "A unified video segmentation benchmark: Annotation, metrics and analysis," in ICCV, 2013, pp. 3527-3534.
[12] P. Sundberg, T. Brox, M. Maire, P. Arbeláez, and J. Malik, "Occlusion boundary detection and figure/ground assignment from optical flow," in CVPR, 2011, pp. 2233-2240.
[13] T. Brox and J. Malik, "Object segmentation by long term analysis of point trajectories," in ECCV, 2010, pp. 282-295.
[14] P. Ochs, J. Malik, and T. Brox, "Segmentation of moving objects by long term video analysis," PAMI, vol. 36, no. 6, pp. 1187-1200, 2014.
[15] F. Perazzi, J. Pont-Tuset, B. McWilliams, L. Van Gool, M. Gross, and A. Sorkine-Hornung, "A benchmark dataset and evaluation methodology for video object segmentation," in CVPR, 2016.
[16] E. Semsch, M. Jakob, D. Pavlicek, and M. Pechoucek, "Autonomous UAV surveillance in complex urban environments," in WI-IAT, 2009, pp. 82-85.
[17] D. Perez, I. Maza, F. Caballero, D. Scarlatti, E. Casado, and A. Ollero, "A ground control station for a multi-UAV surveillance system," Journal of Intelligent & Robotic Systems, vol. 69, no. 1-4, pp. 119-130, 2013.
[18] H. Xiang and L. Tian, "Development of a low-cost agricultural remote sensing system based on an autonomous unmanned aerial vehicle (UAV)," Biosystems Engineering, vol. 108, no. 2, pp. 174-190, 2011.
[19] P. Lottes, R. Khanna, J. Pfeifer, R. Siegwart, and C. Stachniss, "UAV-based crop and weed classification for smart farming," in ICRA, 2017, pp. 3024-3031.
[20] N. Chebrolu, T. Läbe, and C. Stachniss, "Robust long-term registration of UAV images of crop fields for precision agriculture," IEEE Robotics and Automation Letters, vol. 3, no. 4, 2018.
[21] A. Milioto, P. Lottes, and C. Stachniss, "Real-time blob-wise sugar beets vs weeds classification for monitoring fields using convolutional neural networks," ISPRS Annals, vol. 4, p. 41, 2017.
[22] A. Robicquet, A. Sadeghian, A. Alahi, and S. Savarese, "Learning social etiquette: Human trajectory understanding in crowded scenes," in ECCV, 2016, pp. 549-565.
[23] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk, "SLIC superpixels compared to state-of-the-art superpixel methods," PAMI, vol. 34, no. 11, pp. 2274-2282, 2012.
[24] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in CVPR, June 2015.
[25] F. Yu and V. Koltun, "Multi-scale context aggregation by dilated convolutions," in ICLR, 2016.
[26] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in MICCAI, 2015, pp. 234-241.
[27] E. H. Adelson, C. H. Anderson, J. R. Bergen, P. J. Burt, and J. M. Ogden, "Pyramid methods in image processing," RCA Engineer, vol. 29, no. 6, pp. 33-41, 1984.
[28] Y. Liu, B. Fan, L. Wang, J. Bai, S. Xiang, and C. Pan, "Semantic labeling in very high resolution images via a self-cascaded convolutional neural network," ISPRS Journal of Photogrammetry and Remote Sensing, 2017.
[29] S. Caelles, K.-K. Maninis, J. Pont-Tuset, L. Leal-Taixé, D. Cremers, and L. Van Gool, "One-shot video object segmentation," in CVPR, 2017.
[30] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, "Encoder-decoder with atrous separable convolution for semantic image segmentation," arXiv preprint arXiv:1802.02611, 2018.
[31] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, "Pyramid scene parsing network," in CVPR, 2017, pp. 2881-2890.
[32] A. Kundu, V. Vineet, and V. Koltun, "Feature space optimization for semantic video segmentation," in CVPR, 2016, pp. 3168-3175.
[33] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al., "TensorFlow: A system for large-scale machine learning," in OSDI, vol. 16, 2016, pp. 265-283.
[34] P. Dollár and C. L. Zitnick, "Fast edge detection using structured forests," PAMI, vol. 37, no. 8, pp. 1558-1570, 2015.
[35] T. Brox, A. Bruhn, N. Papenberg, and J. Weickert, "High accuracy optical flow estimation based on a theory for warping," in ECCV, 2004, pp. 25-36.
[36] T. Brox and J. Malik, "Large displacement optical flow: Descriptor matching in variational motion estimation," PAMI, vol. 33, no. 3, pp. 500-513, 2011.
[37] N. Sundaram, T. Brox, and K. Keutzer, "Dense point trajectories by GPU-accelerated large displacement optical flow," in ECCV, 2010, pp. 438-451.
---
title: Travelling fluid waves in Einstein's gravity
author: Z. Haba ([email protected])
affiliation: Institute of Theoretical Physics, University of Wroclaw, Plac Maxa Borna 9, 50-204 Wroclaw, Poland
abstract: We assume that the energy-momentum and the scale factor a in the conformally flat metric are functions of the plane wave phase ωτ − qx, where τ is the conformal time. We obtain exact solutions of Einstein equations with various sources (including the inflaton).
doi: 10.1142/s0219887820500152
pdf: https://arxiv.org/pdf/1906.01978v3.pdf
corpusid: 202537382
arxivid: 1906.01978
pdfsha: 5c5a925eeb81a8f44e95ba43d33b246e81e04760
---
Travelling fluid waves in Einstein's gravity

Z. Haba ([email protected])
Institute of Theoretical Physics, University of Wroclaw, Plac Maxa Borna 9, 50-204 Wroclaw, Poland

5 Jun 2019 (June 6, 2019)

Abstract. We assume that the energy-momentum and the scale factor a in the conformally flat metric are functions of the plane wave phase ωτ − qx, where τ is the conformal time. We obtain exact solutions of Einstein equations with various sources (including the inflaton).
Introduction
It is believed that the universe is homogeneous and isotropic on the large scale, although it is not clear how big this scale is. Actual observations show substantial departures from isotropy and homogeneity resulting from the presence of voids, domain walls, peculiar velocities, the dark flow and the great attractor [1]. A departure from homogeneity appears not only in galaxy observations but also leaves its marks in the CMB spectrum [2]. It may be that some averaging is necessary for an application of the homogeneous Einstein equations [3]; such averaging may change the assumptions of the ΛCDM model [4].

The assumption of isotropy and homogeneity allows one to derive explicit solutions of the Einstein equations; in fact, any symmetry leads to a simplification [5]. Inhomogeneous cosmological models with spherical symmetry have been studied extensively (see the review in [6]). Anisotropic cosmologies are usually discussed in relation to the Bianchi-type models [7].

In this paper we assume that the universe geometry is determined by the energy-momentum of a fluid moving like a plane wave in a direction q, i.e., that it depends on ξ = ωτ − qx, where τ is the conformal time. We consider conformally flat metrics (the FLRW metrics belong to this class [8]); then the scale factor a also depends on ξ. We have not encountered such an assumption in the literature (see, however, [9]). If q/ω is very small then we return to the ΛCDM model, with no conflict with large-scale astronomical observations. We think that such a travelling-wave solution is interesting in its own right; it can be used in more complex models where anisotropic sources are added to the standard homogeneous isotropic ones.
The plan of the paper is the following. In Sec. 2 we derive the Einstein equations for an arbitrary scale factor a(x) in the conformally flat four-dimensional metric. Then we assume that the dependence of a on x is of the form a(ωτ − qx) and discuss solutions of the Einstein equations under this assumption. We obtain some exact solutions (including periodic ones) in Sec. 3. In Sec. 4 we look for travelling waves in the Einstein-Klein-Gordon system. In Sec. 5 we make the slow-roll approximation in order to derive solutions of the Einstein-Klein-Gordon system (with an inflaton) in an explicit form.
Travelling waves for conformally flat metrics
We consider the conformally flat metric in four space-time dimensions
$$ds^2 = a(x)^2\,(d\tau^2 - dx^2). \tag{1}$$
Then the 00 component of the Einstein tensor is [12]
$$G_{00} = 3\,(a^{-1}\partial_\tau a)^2 + (a^{-1}\nabla a)^2 - 2a^{-1}\triangle a, \tag{2}$$
and the 00 component of the Einstein equations reads (where $G$ is the Newton constant)
$$G_{00} = 8\pi G\,T_{00}. \tag{3}$$
For an ideal fluid
$$T_{\mu\nu} = (\rho + p)\,u_\mu u_\nu - g_{\mu\nu}\,p \tag{4}$$
(where $u^\mu u_\mu = 1$) the conservation law $(T^{0\mu})_{;\mu} = 0$ in the comoving frame (where the spatial velocities $u^j$ vanish) reads
$$\partial_\tau(a^{-2}\rho) + 6a^{-3}\partial_\tau a\,\rho - a^{-3}\partial_\tau a\,(\rho - 3p) = 0, \tag{5}$$
where $\rho$ and $p$ may depend on $x$. If the equation of state of the fluid is
$$p = w\rho, \tag{6}$$
then from the conservation law (5) of the energy-momentum we obtain
$$\rho = \rho_0\, a^{-3-3w}. \tag{7}$$
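Eq. (7) can be verified numerically: for an arbitrary smooth $a(\tau)$, the density $\rho = \rho_0 a^{-3-3w}$ with $p = w\rho$ annihilates the left-hand side of the conservation law (5). A finite-difference sketch with illustrative parameter values (not from the paper):

```python
import numpy as np

# Check that rho = rho0 * a**(-3 - 3w) with p = w*rho solves the
# conservation law (5) for an arbitrary smooth scale factor a(tau).
w, rho0 = 1.0 / 3.0, 2.0                    # radiation; arbitrary rho0
tau = np.linspace(1.0, 2.0, 20001)
a = 1.0 + 0.3 * np.sin(tau)                 # arbitrary smooth test profile
rho = rho0 * a ** (-3.0 - 3.0 * w)
p = w * rho
da = np.gradient(a, tau)
lhs = (np.gradient(a ** -2 * rho, tau)      # d/dtau (a^-2 rho)
       + 6.0 * a ** -3 * da * rho
       - a ** -3 * da * (rho - 3.0 * p))
residual = np.max(np.abs(lhs[1:-1]))        # interior points only
```

The residual is at the level of the finite-difference error, confirming the algebraic cancellation $(-5-3w) + 6 - (1-3w) = 0$ of the $a^{-6-3w}\partial_\tau a$ coefficients.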
The fluid (4) with the equation of state (6) spreads with the acoustic velocity $\sqrt{w}$ (if $w \ge 0$), which for massless particles is just $1/\sqrt{3}$. From the kinetic theory of (ordinary) matter one can derive the bound $0 \le w \le \frac{1}{3}$. The inflaton equation takes the form
$$a^{-2}\big(\partial_\tau^2\phi - \triangle\phi\big) + 2a^{-3}\big(\partial_\tau a\,\partial_\tau\phi - \nabla a\cdot\nabla\phi\big) + U'(\phi) = 0. \tag{8}$$
We assume that the scale factor $a$ is of the form
$$a(\tau, x) = a(\xi) = a(\omega\tau - qx). \tag{9}$$
Inserting the metric (1) with the ansatz (9) into eq. (3) gives
$$(3\omega^2 + q^2)\,a^{-2}\Big(\frac{da}{d\xi}\Big)^2 - 2q^2 a^{-1}\frac{d^2 a}{d\xi^2} = 8\pi G\, a^2 \rho(a). \tag{10}$$
If $q = 0$ and $\omega = 1$ then we obtain from eq. (10) the standard Friedmann equation. If $q \neq 0$ then eq. (10) follows from the Lagrangian
$$L = q^2\Big(\frac{da}{d\xi}\Big)^2 a^{-1-\frac{3\omega^2}{q^2}} - 8\pi G\int da\,\rho(a)\,a^{2-\frac{3\omega^2}{q^2}} \tag{11}$$
with the Hamiltonian
$$H = q^2\Big(\frac{da}{d\xi}\Big)^2 a^{-1-\frac{3\omega^2}{q^2}} + 8\pi G\int da\,\rho(a)\,a^{2-\frac{3\omega^2}{q^2}} = \frac{p^2}{4q^2}\,a^{1+\frac{3\omega^2}{q^2}} + 8\pi G\int da\,\rho(a)\,a^{2-\frac{3\omega^2}{q^2}}. \tag{12}$$
The Hamiltonian form of eq. (10) is relevant for the discussion of the stability of the model in a quantum theory [10]. For some aspects of the dynamical system (10) it is useful to represent it in first-order form. Let $\frac{da}{d\xi} = u$; then
$$(3\omega^2 + q^2)\,a^{-2}u^2 - 2q^2 a^{-1}\frac{du}{d\xi} = 8\pi G\,a^2\rho(a), \tag{13}$$
or, expressing the $\xi$-derivatives through $a$, we obtain an equivalent form of eq. (13):
$$2q^2\frac{du}{da} = (3\omega^2 + q^2)\,a^{-1}u - 8\pi G\,u^{-1}a^3\rho(a). \tag{14}$$
It can be seen that eq. (14) does not satisfy the Lipschitz condition at a = 0, u = 0. Hence, there may be some problems with the existence and uniqueness of the solution.
From the $\xi$-independence of the Hamiltonian it follows that the energy is a constant of motion. Using this constant of motion we can reduce eq. (10) to a first-order equation. Eq. (10) is solved with the initial conditions $a(\xi = 0) = a_0$ and $\frac{da}{d\xi}(\xi = 0) = u_0$. Let $K = u^2$; then eq. (14) can be expressed as
$$\frac{dK}{da} - a^{-1}\Big(1 + \frac{3\omega^2}{q^2}\Big)K = -8\pi G\,q^{-2}a^3\rho. \tag{15}$$
Eq. (15) can be integrated, leading to the energy formula
$$\Big(\frac{da}{d\xi}\Big)^2 = K = a^{1+\frac{3\omega^2}{q^2}}\,k_0 - 8\pi G\,q^{-2}\,a^{1+\frac{3\omega^2}{q^2}}\int_{a_0}^{a} b^{\,2-\frac{3\omega^2}{q^2}}\,\rho(b)\,db, \tag{16}$$
where we introduced $k_0 \ge 0$ related to $u_0 = \pm a_0^{\frac{1}{2}+\frac{3\omega^2}{2q^2}}\sqrt{k_0}$. Taking the square root of $K$ we obtain an equation for $a$:
$$\int_{a_0}^{a} db\; b^{-\frac{1}{2}-\frac{3\omega^2}{2q^2}}\left[k_0 - 8\pi G\,q^{-2}\int_{a_0}^{b} b'^{\,2-\frac{3\omega^2}{q^2}}\,\rho(b')\,db'\right]^{-\frac{1}{2}} = \pm\xi, \tag{17}$$
where both signs on the rhs should be considered as possible solutions of eq. (10). If $\rho = 0$ then $a = a_0 = const$ is a solution of eq. (10) with $u_0 = 0$. However, there are also non-trivial solutions
$$a^{\frac{1}{2}-\frac{3\omega^2}{2q^2}} = a_0^{\frac{1}{2}-\frac{3\omega^2}{2q^2}} \pm \sqrt{k_0}\,\Big(\frac{1}{2}-\frac{3\omega^2}{2q^2}\Big)\xi. \tag{18}$$
If $\frac{\omega^2}{q^2} < \frac{1}{3}$ then $a$ is defined for $\xi \ge 0$ with the positive sign. If $\frac{\omega^2}{q^2} > \frac{1}{3}$ then $a$ is defined with the minus sign for $\xi \ge 0$ (there is no solution with the initial conditions $a_0 = 0$ and $u_0 = 0$). We can define the solution for negative $\xi$ by inverting the signs in eq. (18). If $\frac{\omega^2}{q^2} < \frac{1}{3}$ then (besides the solution $a = a_0 = 0$ with the zero initial conditions $a_0 = u_0 = 0$) there is a solution defined for any $\xi$ (with zero initial conditions):
$$a^{\frac{1}{2}-\frac{3\omega^2}{2q^2}} = \sqrt{k_0}\,\Big(\frac{1}{2}-\frac{3\omega^2}{2q^2}\Big)|\xi|. \tag{19}$$
The non-uniqueness of the solution with zero initial conditions is a consequence of the violation of the Lipschitz condition for eq. (14). Note that $\frac{\omega}{q}$ for the plane wave is the phase velocity $|\frac{dx}{d\tau}|$. The relevance of $\frac{\omega^2}{q^2} < \frac{1}{3}$ will reappear in subsequent discussions in this paper.
If we have only one fluid then there is only one integral in the denominator of (17),
$$\int_{a_0}^{a} db\; b^{-1-3w-\frac{3\omega^2}{q^2}}. \tag{20}$$
It is convergent for small $b$ if
$$-w > \frac{\omega^2}{q^2}. \tag{21}$$
If the condition (21) is not satisfied then we must assume $a_0 > 0$. If there are more fluids then the criterion for convergence has to be applied to each of them. From the Hamiltonian (12) it follows that the energy is conserved and the second-order differential system can be reduced to first-order form. We prefer to discuss the equivalent mechanical system (16). Eq. (16) can be expressed as
$$K = E - V, \tag{22}$$
where $K$ is the kinetic energy of a particle of mass $m = 2$ with conserved total energy $E = 0$, moving in a potential $V(a)$ with $a \ge 0$, which under the assumption (7) for $\rho$ is
$$V(a) = -k_0\,a^{1+\frac{3\omega^2}{q^2}} + \frac{8\pi G\rho_0}{q^2}\Big(3w+\frac{3\omega^2}{q^2}\Big)^{-1}\Big[a_0^{-3w-\frac{3\omega^2}{q^2}}\,a^{1+\frac{3\omega^2}{q^2}} - a^{1-3w}\Big]. \tag{23}$$
The motion is possible only if $V \le 0$. The solution is determined by the integral (17), which is now expressed as
$$\int_{a_0}^{a} \frac{db}{\sqrt{-V(b)}} = \pm\xi. \tag{24}$$
If we include many fluids in $V$ then we have a sum of terms with various $\rho_{0i}$ and $w_i$ (i.e., a sum of potentials $a^{-1-3w_i}$). Such a dynamical system (on the positive axis) has been discussed in cosmology by many authors [13][14][15]. However, their cosmological model is different from the one described by eq. (10) (or the equivalent (16); the mechanical system (16) is the same as the one discussed by those authors, but now we have different parameters and different powers of $a$). It is known (since Lemaitre [16]) that if the powers of $a$ in $V(a)$ are integers (less than 4, or reducible to less than 4) then the integrals (24) can be expressed by elementary functions or elliptic functions. First, we describe the potentials (23) in general; then we restrict ourselves to a discussion of the solutions expressed by elementary functions. The value of $a_0 > 0$ has no physical meaning; it just rescales $\tau$ and $x$. So, if $a_0 > 0$ then without losing generality we may choose $a_0 = 1$.
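For a general $\rho(a)$ the quadrature (17)/(24) must be evaluated numerically. As a sanity check with illustrative parameters (our own choice: $\rho = 0$, $\omega^2/q^2 = 2/3$, $k_0 = 1$, $a_0 = 1$), a cumulative trapezoidal rule reproduces the closed-form vacuum solution (18):

```python
import numpy as np

# Numerical quadrature of (24) for rho = 0, where -V(a) = k0 * a**(1+s)
# with s = 3*omega^2/q^2, and the closed form (18) is available.
s, k0, a0 = 2.0, 1.0, 1.0                           # omega^2/q^2 = 2/3
a = np.linspace(a0, 4.0, 20001)
integrand = a ** (-(1.0 + s) / 2.0) / np.sqrt(k0)   # 1/sqrt(-V(a))
dxi = (integrand[1:] + integrand[:-1]) / 2.0 * np.diff(a)
xi = np.concatenate(([0.0], np.cumsum(dxi)))        # xi(a), trapezoids
# closed form (18), minus branch since omega^2/q^2 > 1/3:
xi_exact = 2.0 / np.sqrt(k0) * (a0 ** -0.5 - a ** -0.5)
err = np.max(np.abs(xi - xi_exact))
```

The same loop with a nonzero $\rho_0$ and the inner integral of (17) tabulated gives $\xi(a)$ for the fluid cases discussed below.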
Let us assume first that
$$w + \frac{\omega^2}{q^2} > 0 \tag{25}$$
(then $a_0$ cannot be zero). If
$$-k_0 + \frac{8\pi G\rho_0}{q^2}\Big(3w+\frac{3\omega^2}{q^2}\Big)^{-1} < 0, \tag{26}$$
then both the potential $a^{1-3w}$ as well as $a^{1+\frac{3\omega^2}{q^2}}$ enter $V(a)$ with a negative sign. Hence the motion takes place on the interval $[1, \infty)$. If we let $a = \infty$ in eq. (24) and $\frac{\omega^2}{q^2} > \frac{1}{3}$, then the integral $\int_{a_0}^{\infty} da\,(-V(a))^{-\frac{1}{2}} = \pm\xi_{ex}$ is finite. Hence $a$ reaches infinity at finite $\xi_{ex}$. Note that $\rho$ and $|p|$ tend to zero if $1 < -3w < 3$, and they tend to infinity ("Big Rip" [11]) if $-3w > 3$. There will be no explosion if $-w < \frac{\omega^2}{q^2} < \frac{1}{3}$. If eq. (25) is satisfied but
$$-k_0 + \frac{8\pi G\rho_0}{q^2}\Big(3w+\frac{3\omega^2}{q^2}\Big)^{-1} > 0, \tag{27}$$
then the positive potential $a^{1+\frac{3\omega^2}{q^2}}$ overcomes the negative potential $a^{1-3w}$ at a certain $a_{ex}$ such that
$$-k_0 + \frac{8\pi G\rho_0}{q^2}\Big(3w+\frac{3\omega^2}{q^2}\Big)^{-1} = \frac{8\pi G\rho_0}{q^2}\Big(3w+\frac{3\omega^2}{q^2}\Big)^{-1} a_{ex}^{-3w-\frac{3\omega^2}{q^2}}. \tag{28}$$
The motion is possible only on the interval $[1, a_{ex}]$ (the kinetic energy $K = 0$ at $a_{ex}$). So the motion begins with the velocity $\sqrt{k_0}$ and ends with zero velocity at $a_{ex}$. If we choose $k_0 = 0$ then the interval $[1, a_{ex}]$ shrinks to $[1,1]$ (no motion). In the limiting case $\frac{\omega^2}{q^2} = -w$ the potential $V$ is
$$V(a) = -\Big(k_0 - \frac{8\pi G\rho_0}{q^2}\ln a\Big)a^{1-3w}. \tag{29}$$
The motion is possible until $a_c$, where $V$ reaches 0. This is the second turning point (the first is at $a = 0$). Summarizing: if the inequality (25) is satisfied and $\frac{\omega^2}{q^2} > \frac{1}{3}$ then $a = \infty$ at finite $\xi$. If $-w < \frac{\omega^2}{q^2} < \frac{1}{3}$ then the motion takes place on the interval $[1, \infty)$ without explosion.
If the inequality (21) is satisfied then the $a^{1+\frac{3\omega^2}{q^2}}$ potential is negative, whereas $a^{1-3w}$ is positive and dominating at large $a$. We may assume $a_0 = 0$. In such a case the potential has two turning points $a_c$ such that $V(a_c) = 0$. One of them is $a_c = 0$; the second turning point is determined by the solution of the equation
$$-k_0 = \frac{8\pi G\rho_0}{q^2}\Big(3w+\frac{3\omega^2}{q^2}\Big)^{-1} a_c^{-\frac{3\omega^2}{q^2}-3w}. \tag{30}$$
We have a bouncing solution on the interval $[0, a_c]$ with the period $T$ (it can be infinite if the integral is divergent):
$$T = 2\int_0^{a_c} db\; b^{-\frac{1}{2}-\frac{3\omega^2}{2q^2}}\left[k_0 - \frac{8\pi G\rho_0}{q^2}\Big(-3w-\frac{3\omega^2}{q^2}\Big)^{-1} b^{-3w-\frac{3\omega^2}{q^2}}\right]^{-\frac{1}{2}}.$$
If we have more fluids then the potential $V(a)$ is a sum of terms with various $w_i$ and $\rho_{0i}$. We need $a_0 > 0$ if the inequality (25) is satisfied for any of these fluids (then there is no bouncing solution). In general, if $\rho_{0i} > 0$ then we shall have solutions with the properties listed above, depending on the conditions (21)-(30) satisfied by the particular fluid (possibly with a finite explosion time).
Elementary solutions
It is well known that some solutions of the differential equation (22)-(23) can be expressed by elementary functions [13][14][15][16] by means of a calculation of the integral (24) (possibly with a change of variables). They appear in standard cosmological models with a different meaning of the parameters [13]. In the particular case $\frac{\omega^2}{q^2} = \frac{1}{3}$ our eqs. (22)-(23) (with $E = 0$) coincide with eq. (1) of Harrison [13] (set $\kappa = 0$) with his parameters $\nu$, $\lambda$ and $C_\nu$ related to the parameters in eq. (23) as follows:
$$\nu = w + \frac{1}{3}, \qquad C_\nu = \frac{8\pi G\rho_0\, q^{-2}}{3w+1}, \tag{31}$$
$$\lambda = 3k_0 - 3C_\nu\, a_0^{-3w-1} \tag{32}$$
(we set $a_0 = 1$ if $a_0 > 0$). Without repeating Harrison's calculation of the integral (24), let us quote his solutions. If $\lambda > 0$, $C_\nu > 0$,
$$a^{3\nu} = \frac{3C_\nu}{\lambda}\,\sinh^2\Big(\frac{3\nu}{2}\sqrt{\frac{\lambda}{3}}\,(\xi+\xi_0)\Big), \tag{33}$$
where $\xi_0$ is determined from the initial condition for $a$. If $\lambda < 0$,
$$a^{3\nu} = \frac{3C_\nu}{|\lambda|}\,\sin^2\Big(\frac{3\nu}{2}\sqrt{\frac{|\lambda|}{3}}\,(\xi+\xi_0)\Big). \tag{34}$$
In the rest of this section we calculate the integral (24) in some special cases (beyond the ones of [13]) when the powers of $a$ in the potential (23) are integers (there may be other exact solutions of eqs. (23)-(24) which could be obtained by a change of variables, e.g., as in eq. (33), but we do not consider them here). If $w = -\frac{1}{3}$ (cosmic string) and $\frac{\omega^2}{q^2} = \frac{1}{3}$ then the potential (23) has a singular denominator; in this case the potential has the form (29) and the corresponding integral (24) cannot be expressed by an elementary function. The case $w = -\frac{1}{3}$ (cosmological string), $\frac{\omega^2}{q^2} = \frac{2}{3}$ can be solved explicitly. Let
$$\alpha^2 = 8\pi G\rho_0\, q^{-2}, \tag{35}$$
$$\beta = k_0 - \alpha^2. \tag{36}$$
Then for $\beta > 0$ we have a solution which explodes at finite $\xi$. If $\beta < 0$ then
$$a = \frac{4\alpha^2}{|\beta|}\; e^{\alpha(\xi+\xi_0)}\big(1 + e^{\alpha(\xi+\xi_0)}\big)^{-2}. \tag{37}$$
The next possibility is $w = -\frac{1}{3}$ and $\frac{\omega^2}{q^2} = 1$. Let
$$\sigma^2 = \frac{1}{2}\alpha^2 - k_0. \tag{38}$$
If $\sigma^2 < 0$ then $a$ explodes at finite $\xi$. For $\sigma^2 > 0$ we obtain the solution
$$a = 4\alpha^2\, e^{\frac{1}{2}\alpha(\xi+\xi_0)}\Big(\frac{\alpha^2}{\sigma^2} + e^{\alpha(\xi+\xi_0)}\Big)^{-1}. \tag{39}$$
The case $w = -\frac{1}{3}$ and $\omega = 0$ is also explicitly soluble:
$$a = \frac{k_0}{2\alpha^2}\big(1 - \sin(\alpha(qx+\xi_0))\big). \tag{40}$$
It gives a periodic static space structure. As the next example let $w = 0$ (dust); then
$$-V(a) = \Big(k_0 - \frac{8\pi G\rho_0}{3\omega^2}\Big)a^{1+\frac{3\omega^2}{q^2}} + \frac{8\pi G\rho_0}{3\omega^2}\,a. \tag{41}$$
If $\frac{\omega^2}{q^2} = \frac{1}{3}$ then the integral (24) can be expressed by an elementary function. Let
$$\gamma^2 = \alpha^2 - k_0. \tag{42}$$
If $\gamma^2 < 0$ then
$$\ln\Big[2\sqrt{|\gamma^2|\big(|\gamma^2|a^2+\alpha^2 a\big)} + 2|\gamma^2|a + \alpha^2\Big] = \sqrt{|\gamma^2|}\;(\xi+\xi_0). \tag{43}$$
If $\gamma^2 > 0$,
$$a = \frac{\alpha^2}{2\gamma^2}\big(1 - \sin(\gamma(\xi+\xi_0))\big). \tag{44}$$
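Solution (44) can be checked directly against the energy equation $(da/d\xi)^2 = -\gamma^2 a^2 + \alpha^2 a$ that follows from (41)-(42) for $w = 0$, $\omega^2/q^2 = 1/3$. A numerical spot check with illustrative values ($\alpha^2 = 2$, $k_0 = 1$; our choice, not from the paper):

```python
import numpy as np

# Check the dust solution (44),
#   a = alpha^2/(2*gamma^2) * (1 - sin(gamma*(xi + xi0))),
# against (da/dxi)^2 = -gamma^2*a^2 + alpha^2*a, eqs. (41)-(42).
alpha2, k0, xi0 = 2.0, 1.0, 0.3
gamma2 = alpha2 - k0                         # eq. (42)
gamma = np.sqrt(gamma2)
xi = np.linspace(0.0, 5.0, 1001)
a = alpha2 / (2.0 * gamma2) * (1.0 - np.sin(gamma * (xi + xi0)))
dadxi = -alpha2 / (2.0 * gamma) * np.cos(gamma * (xi + xi0))   # analytic da/dxi
residual = np.max(np.abs(dadxi ** 2 - (-gamma2 * a ** 2 + alpha2 * a)))
```

Both sides reduce to $\frac{\alpha^4}{4\gamma^2}(1-\sin)(1+\sin)$, so the residual is at rounding level.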
Finally, let $w = 1$ (stiff matter [17]):
$$-V(a) = \Big(k_0 - \frac{8\pi G\rho_0}{3q^2+3\omega^2}\Big)a^{1+\frac{3\omega^2}{q^2}} + \frac{8\pi G\rho_0}{3q^2+3\omega^2}\,a^{-2}. \tag{45}$$
The solution is an elementary function if $w = 1$ and $\frac{\omega^2}{q^2} = \frac{1}{3}$. Let
$$\delta^2 = \frac{1}{4}\alpha^2 - k_0. \tag{46}$$
If $\delta^2 < 0$ then
$$a^2 = \frac{\alpha^2}{4\sqrt{-\delta^2}}\,\sinh\big(\pm 2\sqrt{-\delta^2}\,(\xi+\xi_0)\big). \tag{47}$$
If $\delta^2 > 0$ then
$$a^2 = \frac{\alpha^2}{4\sqrt{\delta^2}}\,\sin\big(\pm 2\sqrt{\delta^2}\,(\xi+\xi_0)\big). \tag{48}$$
The choice of the signs and of $\xi_0$ is such that the positivity of $a^2$ and the initial conditions are satisfied. We could still consider a sum of the fluids discussed so far; then the potentials are sums of $a^n$ with integer $n$, and the solutions can again be expressed by elementary functions. For some other values of $w$ and $\frac{\omega^2}{q^2}$ such that the potential has integer powers of $a$, the solutions can be expressed by elliptic functions [16][14][15].
Einstein-Klein-Gordon equations
In the standard inflationary scheme [18] the scalar field is introduced in order to generate accelerated expansion. Let us first consider this problem using the ideal fluids (4). From eqs. (10) and (16) we can calculate the "acceleration"
$$a^{3w}\frac{d^2a}{d\xi^2} = 4\pi G\rho_0\,\frac{1-3w}{3wq^2+3\omega^2} + a^{3w+\frac{3\omega^2}{q^2}}\;\frac{3\omega^2+q^2}{2q^2}\left[k_0 - \frac{8\pi G\rho_0}{3wq^2+3\omega^2}\,a_0^{-3w-\frac{3\omega^2}{q^2}}\right]. \tag{49}$$
If $w + \frac{\omega^2}{q^2} < 0$ then the acceleration is positive for small $a$, because the second term on the rhs is positive and dominating; it becomes negative for large $a$, because the first term is negative and the second decreases to zero for large $a$. If $w + \frac{\omega^2}{q^2} > 0$ then the sign of the acceleration at large $a$ is determined by the sign of the second term on the rhs; this sign depends on $\frac{\rho_0}{k_0}$. For small $a$ the sign of the acceleration is positive if $1-3w > 0$ and negative if $1-3w < 0$.
In the rest of this section we discuss a scalar field minimally coupled to gravity. We assume $\phi(x) = f(\xi)$; then eq. (8) takes the form
$$2a^{-3}\partial_\xi a\,(\omega^2-q^2)\,\partial_\xi f = -a^{-2}(\omega^2-q^2)\,\partial_\xi^2 f - U'(f). \tag{50}$$
It can be written in first-order form:
$$\frac{df}{d\xi} = X, \tag{51}$$
$$2a^{-3}\partial_\xi a\,(\omega^2-q^2)\,X + U'(f) = -a^{-2}(\omega^2-q^2)\frac{dX}{d\xi}. \tag{52}$$
We have to supplement these equations with the equation (10) for $a$, which depends on
$$\rho = T^{0}_{\ 0} = \frac{1}{2}a^{-2}(\omega^2+q^2)(\partial_\xi f)^2 + U(f). \tag{53}$$
The first question concerning the dynamical system (10), (50)-(52) is the presence of stationary points $\frac{da}{d\xi} = 0$, $\frac{df}{d\xi} = 0$, $\frac{dX}{d\xi} = 0$. This is possible if $U(f_c) = 0$. An expansion $f = f_c + x_q$ around the stationary point gives the oscillator equation for $x_q$ (with a real frequency if $U''(f_c)(\omega^2-q^2)^{-1} > 0$):
$$\frac{d^2x_q}{d\xi^2} + U''(f_c)\,(\omega^2-q^2)^{-1}a_0^2\,x_q = 0, \tag{54}$$
and if $a = a_0 + a_q$ then
$$-q^2\,\partial_\xi^2 a_q = 4\pi G\,a_0^3\,U'(f_c)\,x_q. \tag{55}$$
So, a oscillates around its stationary point a 0 . In general, in eqs. (10),(50)-(52) we have a four-dimensional dynamical system. It is known that the necessary condition for the stability of the inflaton equation (52) is
(\omega^2-q^2)^{-1}\,f\,U'(f) > 0. (56)
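The oscillatory behaviour implied by eq. (54) is easy to verify numerically. The sketch below (Python, with a hypothetical value Omega = 2.0 standing in for the combination U''(f_c) a_0^2 (ω^2 − q^2)^{-1} > 0) integrates eq. (54) with a fourth-order Runge-Kutta step and checks that x_q returns to its initial state after one full period:

```python
import math

def rk4_step(f, y, h):
    """One classical RK4 step for a 2-component first-order system."""
    k1 = f(y)
    k2 = f([y[i] + 0.5*h*k1[i] for i in range(2)])
    k3 = f([y[i] + 0.5*h*k2[i] for i in range(2)])
    k4 = f([y[i] + h*k3[i] for i in range(2)])
    return [y[i] + h/6.0*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

Omega = 2.0                                  # hypothetical Omega^2 = U''(f_c) a_0^2/(omega^2 - q^2)
rhs = lambda y: [y[1], -Omega**2 * y[0]]     # eq. (54): x'' = -Omega^2 x

y = [1.0, 0.0]                               # x_q(0) = 1, x_q'(0) = 0
period = 2.0*math.pi/Omega
n = 10000
h = period/n
for _ in range(n):
    y = rk4_step(rhs, y, h)

# after one period the perturbation returns to its initial state: pure oscillation
assert abs(y[0] - 1.0) < 1e-6 and abs(y[1]) < 1e-6
```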
However, it would be difficult to establish general results on a(ξ) without some approximations or numerical simulations. In the next section we make the slow-roll approximation. If U = 0 then eq.(52) can be solved explicitly:
\partial_\xi \phi = \kappa\, a^{-2}. (57)
Then
\rho_s = \rho_{0s}\, a^{-6}. (58)
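A one-line consistency check (Python, with arbitrary illustrative values for ω^2, q^2 and κ): inserting the U = 0 solution ∂_ξφ = κ a^{-2} of eq. (57) into the energy density (53) indeed gives ρ_s ∝ a^{-6}:

```python
omega2, q2, kappa = 2.0, 0.5, 0.7   # hypothetical omega^2, q^2 and kappa

def rho_s(a):
    dphi = kappa * a**-2                          # eq. (57), valid for U = 0
    return 0.5 * a**-2 * (omega2 + q2) * dphi**2  # energy density, eq. (53) with U = 0

# rho_s scales as a**-6, i.e. "stiff matter" with w = 1
assert abs(rho_s(2.0) / rho_s(1.0) - 2.0**-6) < 1e-12
assert abs(rho_s(3.0) * 3.0**6 / rho_s(1.0) - 1.0) < 1e-12
```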
Hence, w = 1 in eq.(7) ("stiff matter" [17]). We have found an elementary solution of eqs.(22)-(23) with w = 1 and ω^2/q^2 = 1/3 for one fluid. We can insert the density (58) in eq.(23) with two fluids:
V(a) = -k_0\,a^{1+\frac{3\omega^2}{q^2}} + \frac{8\pi G\rho_0}{q^2}\left(3w+\frac{3\omega^2}{q^2}\right)^{-1} a_0^{-3w-\frac{3\omega^2}{q^2}}\left(a^{1+\frac{3\omega^2}{q^2}} - a^{1-3w}\right) + \frac{8\pi G\rho_{0s}}{q^2}\left(3+\frac{3\omega^2}{q^2}\right)^{-1} a_0^{-3-\frac{3\omega^2}{q^2}}\left(a^{1+\frac{3\omega^2}{q^2}} - a^{-2}\right). (59)
If w < 1 then the Klein-Gordon contribution dominates at small a, but the behaviour at large a does not depend on the contribution of this "stiff matter". The integral (24) can be calculated explicitly if ω^2/q^2 = 1/3 and w = 1/3 (relativistic matter).
Let
\delta_s^2 = \frac{1}{2}\alpha^2 + \frac{1}{4}\alpha_s^2 - k_0, (60)
\alpha_s^2 = 8\pi G\rho_{0s}\, q^{-2}. (61)
If δ_s^2 > 0 then the integral (24) gives the solution
a^2 = \frac{\alpha^2}{4\delta_s^2}\left[1 - \sqrt{1 + \frac{2\alpha_s^2\delta_s^2}{\alpha^4}}\,\sin\big(\pm 2\delta_s(\xi+\xi_0)\big)\right]. (62)
If δ_s^2 < 0 then \sin \to \sinh.
Slow-roll approximation
We neglect the second order derivatives of a in eq.(10) and of φ in eq.(8), and moreover we set
\rho \simeq U (63)
in eq.(10).
In such a case eqs.(50) and (10) reduce to
2a^{-3}\,\partial_\xi a\,(\omega^2-q^2)\,\partial_\xi f = -U'(f), (64)
(3\omega^2+q^2)\,a^{-2}\left(\frac{da}{d\xi}\right)^{2} = 8\pi G\,a^{2}\,U. (65)
We require that the terms with the second order derivatives in eqs. (10) and (50) are negligible in comparison with the terms with the first order derivatives
(3\omega^2+q^2)\,a^{-2}\left(\frac{da}{d\xi}\right)^{2} \gg 2q^2\,a^{-1}\left|\frac{d^2 a}{d\xi^2}\right| (66)
and
2a^{-3}\left|\partial_\xi a\,(\omega^2-q^2)\,\partial_\xi f\right| \gg a^{-2}\left|(\omega^2-q^2)\,\partial_\xi^2 f\right|. (67)
Using eqs.(64)-(65) we obtain that the conditions (66)-(67) are satisfied if ǫ ≪ 1, η ≪ 1 and
\frac{q^2}{3\omega^2+q^2} \ll 1. (68)
We can then obtain from eqs.(64)-(65) the solution a(φ):
-8\pi G\int d\phi\,\frac{U}{U'} = \frac{1}{2}\,\frac{3\omega^2+q^2}{\omega^2-q^2}\,\ln\frac{a}{a_0}. (69)
As an example let
U = \frac{g}{4}\left(\phi^2 - \frac{\mu^2}{g}\right)^{2}; (70)
then
-2\pi G\,\phi^2 + 4\pi G\,\frac{\mu^2}{g}\,\ln\phi = \frac{3\omega^2+q^2}{\omega^2-q^2}\,\ln\frac{a}{a_0}. (71)
For small φ we have U ≃ μ^4/(4g). Hence, from eq.(65),
a \simeq -\frac{\omega}{H_0\,\xi}
with a certain H_0. Then ∂_t a = ω a^{-1} ∂_ξ a; hence a^{-1}∂_t a ≃ H_0 and we obtain the exponential expansion. For larger φ^2, close to μ^2/g, the slow-roll approximation is no longer reliable because ǫ and η become large.
Consider next
U = g\,\exp(\lambda\phi). (72)
Then from eq.(69)
-\frac{16\pi G}{\lambda}\,\phi = \frac{3\omega^2+q^2}{\omega^2-q^2}\,\ln a. (73)
Inserting in eq.(65),
(3\omega^2+q^2)\left(\frac{da}{d\xi}\right)^{2} = 8\pi G\,a^{\,4-\frac{\lambda^2}{16\pi G}\,\frac{3\omega^2+q^2}{\omega^2-q^2}}. (74)
Hence, we obtain a power-law expansion, as expected in the exponential model:
a \simeq |\xi|^{\alpha}, (75)
where
\alpha = \left(-1 + \frac{\lambda^2}{16\pi G}\,\frac{3\omega^2+q^2}{\omega^2-q^2}\right)^{-1}. (76)
These results are reliable for small λ, since ǫ = λ^2/(16πG) and η = 2ǫ.
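These slow-roll statements can be checked directly. The sketch below (Python) evaluates ǫ = (1/16πG)(U'/U)^2 and η = (1/8πG)U''/U, the definitions used in the text, by numerical differentiation for the exponential potential (72), with arbitrary illustrative values of g, λ, G and φ; it confirms ǫ = λ^2/(16πG) and η = 2ǫ:

```python
import math

G, g, lam, phi = 1.0, 0.3, 0.2, 1.7    # arbitrary illustrative values

def U(p):                               # exponential potential, eq. (72)
    return g * math.exp(lam * p)

h = 1e-4                                # step for central differences
Up  = (U(phi + h) - U(phi - h)) / (2.0*h)          # numerical U'
Upp = (U(phi + h) - 2.0*U(phi) + U(phi - h)) / h**2  # numerical U''

eps = (Up / U(phi))**2 / (16.0*math.pi*G)   # eps = (1/16 pi G)(U'/U)^2
eta = (Upp / U(phi)) / (8.0*math.pi*G)      # eta = (1/8 pi G) U''/U

assert abs(eps - lam**2/(16.0*math.pi*G)) < 1e-8
assert abs(eta - 2.0*eps) < 1e-7            # eta = 2*eps, as stated above
```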
Summary and outlook
Our intuition about the cosmological evolution in Einstein gravity is to a large extent based on exact solutions for the isotropic and homogeneous universe. Besides the Bianchi class of homogeneous anisotropic metrics, little is known about anisotropic and inhomogeneous evolutions. We have discussed equations which, in contradistinction to the standard ones, depend on the phase variable ωτ − qx. The equations reduce to the standard Friedmann equations when ω → 1 and q → 0. However, the dynamics depends in a non-trivial way on ω and q. There may be no limit q → 0 of these solutions, although there is only a minor lack of isotropy when q is small. The anisotropic solutions could be useful to test some hypotheses on non-homogeneous and anisotropic cosmologies. They could also be applied to the description of anisotropically expanding inhomogeneous domains inside the universe. We have obtained some static periodic solutions of Einstein equations corresponding to ω = 0; we did not expect that Einstein equations admit such a periodic spatial structure. The inhomogeneous solutions found in this paper should be studied with respect to their classical stability, as well as their stability under quantum and thermal fluctuations, if they are to be applied in the early universe. An application to the question of bubble formation [19][18] in an inhomogeneous universe then looks interesting.
The metric of the model (2)-(3) is related to the Minkowski metric by a conformal transformation. For this reason we assume the dependence of a on the Lorentz covariant variable ξ = ωτ − qx. Then, in the model (2)-(3), the 00 component of Einstein equations reads as in eq.(10) (where q^2 = \mathbf{q}^2).

Here ǫ and η denote the standard slow-roll parameters,
\epsilon = \frac{1}{16\pi G}\left(\frac{U'}{U}\right)^{2}, \qquad \eta = \frac{1}{8\pi G}\,\frac{U''}{U},
and the slow-roll solution a(φ) of eqs.(64)-(65) satisfies -8\pi G\int d\phi\,U/U' = \frac{1}{2}\,\frac{3\omega^2+q^2}{\omega^2-q^2}\,\ln\frac{a}{a_0}, as in eq.(69).
Acknowledgements Useful discussions with Andrzej Borowiec and Jurek Kowalski-Glikman are gratefully acknowledged.
M. A. Strauss, arXiv:astro-ph/9807199
U. Seljak and Lam Hui, Global expansion in an inhomogeneous universe, in ASP Conference Series, Vol. 88 (1996)
V. N. Lukash and V. A. Rubakov, Physics-Uspekhi 51, 283 (2008), arXiv:0807.1635
S. DeDeo, R. R. Caldwell and P. J. Steinhardt, Phys. Rev. D 67, 103509 (2003)
A. Kashlinsky, F. Atrio-Barandela, D. Kocevski and H. Ebeling, Astrophys. J. 686, L49 (2008)
H. K. Eriksen et al., Astrophys. J. 605, 14 (2004)
D. J. Schwarz, C. J. Copi, D. Huterer and G. D. Starkman, Class. Quant. Grav. 33, 184001 (2016)
G. F. R. Ellis, J. Phys.: Conference Series 189, 012011 (2009)
T. Buchert, Class. Quant. Grav. 28, 164007 (2011)
H. Stephani, D. Kramer, M. MacCallum, C. Hoenselaers and E. Herlt, Exact Solutions of Einstein's Field Equations, Cambridge University Press, 2003
K. Bolejko, M. N. Celerier and A. Krasinski, Class. Quant. Grav. 28, 164002 (2011)
O. Akarsu, S. Kumar, S. Sharma and L. Tedesco, arXiv:1905.06949
L. Infeld and A. Schild, Phys. Rev. 68, 250 (1945)
G. U. Varieschi, Gen. Relativ. Gravit. 42, 929 (2010)
P. J. E. Peebles, arXiv:astro-ph/991035
A. T. Mithani and A. Vilenkin, JCAP 01 (2012) 028, arXiv:1110.4096
S. Nojiri, S. D. Odintsov and S. Tsujikawa, Phys. Rev. D 71, 063004 (2005)
M. P. Dabrowski, J. Garecki and D. B. Blaschke, Annalen Phys. 18, 13 (2008)
E. R. Harrison, Mon. Not. R. Astron. Soc. 137, 69 (1967)
R. Coquereaux and A. Grossmann, Annals Phys. 143, 296 (1982)
M. P. Dabrowski, Annals Phys. 248, 199 (1996)
M. P. Dabrowski and T. Stachowiak, Annals Phys. 321, 771 (2006)
G. Lemaitre, Ann. Soc. Sci. Bruxelles A 53, 51 (1933)
P.-H. Chavanis, Phys. Rev. D 92, 103004 (2015)
A. Linde, Phys. Lett. B 129, 177 (1983)
S. K. Blau, E. L. Guendelman and A. H. Guth, Phys. Rev. D 35, 1747 (1987)
DOI: 10.1111/j.1365-2966.2010.17274.x
External forward shock origin of high energy emission for three GRBs detected by Fermi

P. Kumar (Department of Astronomy, University of Texas at Austin, Austin, TX 78712, USA)
R. Barniol Duran (Department of Astronomy and Department of Physics, University of Texas at Austin, Austin, TX 78712, USA)

Mon. Not. R. Astron. Soc., 30 July 2010. Accepted 2010 June 30; received 2010 June 29; in original form 2009 October 27. arXiv:0910.5726v2 [astro-ph.HE] (MN LaTeX style file v2.2)

Keywords: radiation mechanisms: non-thermal - methods: numerical, analytical - gamma-ray burst: individual: 080916C, 090510, 090902B

Abstract. We analyze the >100 MeV data for 3 GRBs detected by the Fermi satellite (GRBs 080916C, 090510, 090902B) and find that these photons were generated via synchrotron emission in the external forward shock. We arrive at this conclusion by four different methods, as follows. (1) We check the light curve and spectral behavior of the >100 MeV data, and late time X-ray and optical data, and find them consistent with the so-called closure relations for the external forward shock radiation. (2) We calculate the expected external forward shock synchrotron flux at 100 MeV, which is essentially a function of the total energy in the burst alone, and it matches the observed flux value. (3) We determine the external forward shock model parameters using the >100 MeV data (a very large phase space of parameters is allowed by the high energy data alone), and for each point in the allowed parameter space we calculate the expected X-ray and optical fluxes at late times (hours to days after the burst) and find these to be in good agreement with the observed data for the entire parameter space allowed by the >100 MeV data. (4) We calculate the external forward shock model parameters using only the late time X-ray, optical and radio data, and from these estimate the expected flux at >100 MeV at the end of the sub-MeV burst (and at subsequent times), and find that to be entirely consistent with the high energy data obtained by Fermi/LAT. The ability of a simple external forward shock, with two empirical parameters (total burst energy and energy in electrons) and two free parameters (circum-stellar density and energy in magnetic fields), to fit the entire data from the end of the burst (1-50 s) to about a week, covering more than eight decades in photon frequency (>10^2 MeV, X-ray, optical and radio), provides compelling confirmation of the external forward shock synchrotron origin of the >100 MeV radiation from these Fermi GRBs. Moreover, the parameters determined in points (3) and (4) show that the magnetic field required in these GRBs is consistent with a shock-compressed magnetic field in the circum-stellar medium with pre-shock values of a few tens of micro-Gauss.
INTRODUCTION
The Fermi Satellite has opened a new and sensitive window in the study of GRBs (gamma-ray bursts); for a general review of GRBs see Gehrels, Ramirez-Ruiz & Fox (2009), Mészáros (2006), Piran (2004), Woosley & Bloom (2006), Zhang (2007). So far, in about one year of operation, Fermi has detected 12 GRBs with photons of energies >100 MeV. The >10^2 MeV emission of most bursts detected by the LAT (Large Area Telescope; energy coverage 20 MeV to >300 GeV) instrument aboard the Fermi satellite shows two very interesting features (Omodei et al. 2009): (1) the first >100 MeV photon arrives later than the first lower energy photon (≲1 MeV) detected by GBM (Gamma-ray Burst Monitor); (2) the >100 MeV emission lasts for a much longer time compared to the burst duration in the sub-MeV band (the light curve in the sub-MeV band declines very rapidly).

⋆ E-mail: [email protected], [email protected]
There are many possible >100 MeV photon generation mechanisms proposed in the context of GRBs; see Gupta & Zhang (2007) and Fan & Piran (2008) for a review. Shortly after the observations of GRB 080916C (Abdo et al. 2009a), we proposed a simple idea: the >100 MeV photons in GRB 080916C are produced via synchrotron emission in the external forward shock (Kumar & Barniol Duran 2009). This proposal naturally explains the observed delay in the peak of the light curve for >100 MeV photons (it corresponds to the deceleration time-scale of the relativistic ejecta) and also the long lasting >100 MeV emission, which corresponds to the power-law decay nature of the external forward shock (ES) emission (the ES model was first proposed by Rees & Mészáros 1992, Mészáros & Rees 1993, Paczyński & Rhoads 1993; for a comprehensive review of the ES model see, e.g., Piran 2004, and references therein). Following our initial analysis of GRB 080916C, a number of groups have provided evidence for the external forward shock origin of Fermi/LAT observations (Gao et al. 2009; De Pasquale et al. 2010).

Table 1. The main quantities used in our analysis for these 3 GRBs. β_LAT is the spectral index for the >100 MeV data, p is the power-law index for the energy distribution of injected electrons, i.e. dn/dγ ∝ γ^{-p}, z is the redshift, d_{L28} is the luminosity distance in units of 10^{28} cm, t_{GRB} is the approximate burst duration in the Fermi/GBM band, and E_{γ,iso} is the isotropic equivalent of the energy observed in γ-rays in the 10 keV-10 GeV band for GRB 080916C and GRB 090902B, and in the 10 keV-30 GeV band for GRB 090510. Data taken from Abdo et al. (2009a, 2009b, 2009c), De Pasquale et al. (2010).
In this paper we analyze the >100 MeV emission of GRB 090510 and GRB 090902B in detail, and discuss the main results of our prior calculation for GRB 080916C (Kumar & Barniol Duran 2009), to show that the high energy radiation for all three bursts arose in the external forward shock via the synchrotron process. These three bursts (one short and two long GRBs) are selected in this work because the high energy data for these bursts have been published by the Fermi team, and because they have good afterglow follow-up observations in the X-ray and optical bands (and also the radio band for GRB 090902B). This allows for a thorough analysis of data covering more than a factor of 10^8 in frequency and >10^4 in time, to piece together the high energy photon generation mechanism and cross check it in multiple different ways.
In the next section ( §2) we provide a simple analysis of the LAT spectrum and light curve for these three bursts to show that the data are consistent with the external forward shock model. This analysis consists of verifying whether the temporal decay index and the spectral index satisfy the relation expected for the ES emission (closure relation), and comparing the observed flux in the LAT band with the prediction of the ES model (according to this model the high energy flux is a function of blast wave energy, independent of the unknown circum-stellar medium density, and extremely weakly dependent on the energy fraction in magnetic fields).
We describe in §3 how the >100 MeV data alone can be used to theoretically estimate the emission at late times (t > ∼ a few hours) in the X-ray and optical bands within the framework of the external forward shock model, and that for these three bursts the expected flux according to the ES model is in agreement with the observed data in these bands.
Moreover, if we determine the ES parameters (ǫe, ǫB, n, and E) 1 using only the late time X-ray and optical fluxes (and radio data), we can predict the flux at >100 MeV at any time after the deceleration time for the GRB relativistic outflow. We show in §3 that this predicted flux at > 10 2 MeV is consistent with the value observed by the Fermi satellite for the bursts analyzed in this paper.
1 ǫ_e and ǫ_B are the energy fractions in electrons and in the magnetic field for the shocked fluid, n is the number density of protons in the burst circum-stellar medium, and E is the kinetic energy in the ES blast wave.

These exercises and results show that the high energy emission is due to the external shock, as discussed in §3. We also describe in §3 that the magnetic field in the shocked fluid (responsible for the generation of >100 MeV photons as well as the late time X-ray and optical photons via the synchrotron mechanism) is consistent with the shock compression of a circum-stellar magnetic field of a few tens of micro-Gauss. It is important to point out that we do not consider in this work the prompt sub-MeV emission mechanism for GRBs, which is well known to have a separate and distinct origin, as evidenced by the very rapid decay of the sub-MeV flux observed by Swift and Fermi/GBM (the flux in the sub-MeV band drops off with time as ∼t^{-3} or faster, as opposed to the ∼t^{-1} decline observed in the LAT band). Nor do we investigate the emission process for photons in the LAT band during the prompt burst phase.
ES MODEL AND THE >100 MEV EMISSION FROM GRBS: SIMPLE ARGUMENTS
In this work we consider 3 GRBs detected by Fermi/LAT in the >10^2 MeV band: GRB 080916C (Abdo et al. 2009a), GRB 090510 (Abdo et al. 2009b, De Pasquale et al. 2010) and GRB 090902B (Abdo et al. 2009c). These bursts show the "generic" features observed in the >100 MeV emission of most Fermi GRBs mentioned above, and they are the only three bursts for which we have optical, X-ray and Fermi data available. Some basic information for these 3 GRBs is summarized in Table 1. The synchrotron process in the ES model predicts a relationship between the temporal decay index (α) of the light curve and the energy spectral index (β): the so-called closure relations. These relations serve as a quick check of whether or not the observed radiation is being produced in the external shock. In this paper we use the convention f(ν, t) ∝ ν^{-β} t^{-α}.
Since the Fermi/LAT band detects very high energy photons (≳10^2 MeV), it is reasonable to assume that this band lies above all the synchrotron characteristic frequencies (assuming that the emission process is synchrotron). In this case the spectrum should be ∝ ν^{-p/2} (Sari, Piran & Narayan 1998), where p is the power-law index of the injected electrons' energy distribution, and according to the external forward shock model (see, e.g., Panaitescu & Kumar 2000) the light curve should decay as ∝ t^{-(3p-2)/4}, giving the following closure relation: α = (3β − 1)/2. Using the data in Table 1 we find that all three bursts satisfy this closure relation (Table 2), which encourages us to continue our diagnosis of the >100 MeV emission in the context of the ES model.
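The closure-relation check is simple arithmetic; the snippet below (Python) encodes it. Since the Table 1 entries are not reproduced here, p = 2.4 is used as an illustrative stand-in for a measured electron index:

```python
def closure_alpha(beta):
    """Expected temporal decay index when nu lies above all synchrotron
    characteristic frequencies, with f(nu, t) ~ nu**-beta * t**-alpha."""
    return (3.0*beta - 1.0) / 2.0

p = 2.4                       # illustrative electron index (dn/dgamma ~ gamma**-p)
beta = p / 2.0                # spectral index: spectrum ~ nu**(-p/2)
alpha = closure_alpha(beta)   # should equal (3p - 2)/4 for the ES model
assert abs(alpha - (3.0*p - 2.0)/4.0) < 1e-12
```

A burst is consistent with the ES interpretation when its observed (α, β) pair satisfies closure_alpha(β) ≈ α within the measurement errors.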
Table 2. Comparison between the temporal decay index (α_ES) expected for the external forward shock model and the observed decay index (α_obs); these values are equal to within the 1-σ error bars. The ES flux calculated at time t is compared to the observed value at the same time; these two values are also consistent, further lending support to the ES origin of the >100 MeV emission. The fluxes are calculated using eq. (1), the data in Table 1, and setting the isotropic kinetic energy in the ES to E = E_{γ,iso}, which gives a lower limit on E; most likely E = few × E_{γ,iso}, and we find that using E ∼ 3 × E_{γ,iso} the fluxes match the observed values very well. Also, for this calculation, ǫ_B = 10^{-5}, ǫ_e = 0.25, p = 2.4 and Y < 1. Data are obtained from the same references as in Table 1. The theoretically calculated flux would be larger if ǫ_e > 0.25; GRB afterglow data for 8 well studied bursts suggest that 0.2 < ǫ_e ≲ 0.8 (Panaitescu & Kumar 2001).

We next check to see if the predicted magnitude of the synchrotron flux in the ES is consistent with the observed values. This calculation would seem very uncertain at first, but we note that the predicted external forward shock synchrotron flux, at a frequency larger than all characteristic frequencies of the synchrotron emission, is independent of the circum-stellar medium (CSM) density, n, and is extremely weakly dependent on the fraction of the energy of the shocked gas in the magnetic field, ǫ_B, which is a highly uncertain parameter for the ES model. The density falls off as ∝ R^{-s}, where R is the distance from the center of the explosion; s = 0 corresponds to a constant CSM and s = 2 corresponds to a CSM carved out by the progenitor star's wind. The flux is given by (see e.g. Kumar 2000, Panaitescu & Kumar 2000):
f_\nu = 0.2\,{\rm mJy}\; E_{55}^{(p+2)/4}\,\epsilon_e^{\,p-1}\,\epsilon_{B,-2}^{(p-2)/4}\,t_1^{-(3p-2)/4}\,\nu_8^{-p/2}\,(1+Y)^{-1}\,(1+z)^{(p+2)/4}\,d_{L28}^{-2}, (1)
where ǫ_e is the fraction of energy of the shocked gas in electrons, t_1 = t/10 s is the time since the beginning of the explosion in the observer frame (in units of 10 s), ν_8 is the photon energy in units of 100 MeV, E_55 = E/10^{55} erg is the scaled isotropic kinetic energy in the ES, Y is the Compton Y-parameter, z is the redshift and d_{L28} is the luminosity distance to the burst (in units of 10^{28} cm). Using the values of Table 1, we can predict the expected flux at 100 MeV from the ES and compare it to the observed value at the same time.
We show in Table 2 that the observed high energy flux is consistent with the theoretically expected values for all three busts. The fact that these bursts satisfy the closure relation, and that the observed > 10 2 MeV flux is consistent with theoretical expectations, suggests that the high energy emission detected by Fermi/LAT from GRBs is produced via synchrotron emission in the ES. In the next section we carry out a more detailed analysis that includes all the available data from these bursts during the "afterglow" phase, i.e. after the emission in the sub-MeV band has ended (or fallen below Fermi/GBM threshold).
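Eq. (1) is straightforward to evaluate; the sketch below (Python) implements it with ǫ_B = 10^{-5}, ǫ_e = 0.25 and p = 2.4 as quoted above, while the burst energy, redshift and distance are hypothetical placeholders rather than the actual Table 1 entries:

```python
def flux_es_mJy(E55, eps_e, eps_Bm2, t1, nu8, Y, z, dL28, p):
    """Eq. (1): ES synchrotron flux (mJy) above all characteristic frequencies."""
    return (0.2 * E55**((p + 2.0)/4.0) * eps_e**(p - 1.0)
            * eps_Bm2**((p - 2.0)/4.0) * t1**(-(3.0*p - 2.0)/4.0)
            * nu8**(-p/2.0) / (1.0 + Y)
            * (1.0 + z)**((p + 2.0)/4.0) * dL28**-2)

# hypothetical burst: E = 1e55 erg, z = 1.8, dL = 4e28 cm, at 100 MeV (nu8 = 1);
# note eps_Bm2 = eps_B/1e-2, so eps_B = 1e-5 gives eps_Bm2 = 1e-3
f50  = flux_es_mJy(1.0, 0.25, 1e-3, 5.0,  1.0, 0.5, 1.8, 4.0, 2.4)  # t = 50 s
f500 = flux_es_mJy(1.0, 0.25, 1e-3, 50.0, 1.0, 0.5, 1.8, 4.0, 2.4)  # t = 500 s

assert f50 > 0.0
assert abs(f500/f50 - 10.0**(-1.3)) < 1e-9   # decays as t**-(3p-2)/4 = t**-1.3
```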
DETAILED SYNTHESIS OF ALL AVAILABLE DATA AND THE EXTERNAL FORWARD SHOCK MODEL
The simple arguments presented in the last section provide tantalizing evidence that the high energy photons from the three bursts considered in this paper are synchrotron photons produced in the external forward shock. We present a more detailed analysis in this section where we consider all available data for the three bursts after the end of the emission in the Fermi/GBM band, i.e. for t > tGRB, where tGRB is the "burst duration" provided in Table 1. The data we consider consist of > 10 2 MeV emission observed by Fermi/LAT and AGILE/GRID, X-ray data from Swift/XRT, optical data from Swift/UVOT and various ground based observatories, and radio data from Westerbork in the case of GRB 090902B.
The main idea is to use the >10^2 MeV data to constrain the ES parameters (ǫ_e, ǫ_B, n and E), which, as we shall see, allow for a large hyper-surface in this space, and for each of the points in the allowed 4-D parameter space calculate the flux in the X-ray, optical and radio bands from the external forward shock at those times where data in one of these bands are available for comparison with the observed value. It would be tempting to think that such an exercise cannot be very illuminating, as the ES flux calculated at any given time in these bands would have a large uncertainty reflecting the large volume of the sub-space of the 4-D parameter space allowed by the >10^2 MeV data alone. This, however, turns out to be incorrect: the afterglow flux generated by the ES in the X-ray and optical bands (before the time of jet break) is almost uniquely determined from the high-energy photon flux; the entire sub-space of the 4-D space allowed by the >10^2 MeV data projects to an extremely small region (almost to a point) as far as the emission at any frequency larger than ∼ν_i is concerned; ν_i is the synchrotron frequency corresponding to the minimum energy of injected electrons (electrons just behind the shock front), which we also refer to as the synchrotron injection frequency. Therefore, we can compare the ES model predictions of the flux in the X-ray and optical bands with the observed data, and either rule out the ES origin for high energy photons or confirm it 3.
We also carry out this exercise in the reverse direction, i.e. find the sub-space of 4-D parameter space allowed by the late time (t > ∼ 1day) X-ray, optical, and radio data, and then calculate the expected >10 2 MeV flux at early times for this allowed subspace for comparison with the observed Fermi/LAT data. This reverse direction exercise is not equivalent to the one described in the preceding paragraph since the 4-D sub-space allowed by the >10 2 MeV data and that by the late time X-ray and optical data are in general quite different (of course they have common points whenever early highenergy and late low energy emissions arise from the same ES).
The input physics in all of these calculations consists of the following main ingredients: synchrotron frequency and flux (see Rybicki & Lightman 1979 for detailed formulae; a convenient summary of the relevant equations can also be found in Kumar & Narayan 2009), the Blandford-McKee self-similar solution for the ES (Blandford & McKee 1976), electron cooling due to synchrotron and synchrotron self-Compton radiation (the Klein-Nishina reduction to the cross-section is very important to incorporate for all three bursts, for at least a fraction of the 4-D parameter space), and the emergent synchrotron spectrum as described in e.g. Sari, Piran & Narayan (1998). Although the calculations we present in the following sections can be carried out analytically (e.g. Kumar & McMahon 2008), this is somewhat tedious, and so we have coded all the relevant physics in a program and use that for finding the allowed part of the 4-D parameter space and for comparing the results of the theoretical calculation with the observed data. Numerical codes also have the advantage that they enable us to make fewer assumptions and approximations. Nevertheless, we present a few analytical estimates to give the reader a flavor of the calculations involved.
We analyze the data for each of the three bursts individually in the following three sub-sections in reverse chronological order.
GRB 090902B
The Fermi/LAT and GBM observations of this burst can be found in Abdo et al. (2009c). The X-ray data for this GRB started at about half a day after the trigger time. The spectrum in the 0.3-10 keV X-ray band was found to be β_x = 0.9 ± 0.1, and the light curve decayed as α_x = 1.30 ± 0.04 (Pandey et al. 2010). The optical observations by Swift/UVOT started at the same time (Swenson & Stratta 2009) and show α_opt ∼ 1.2. ROTSE also detected the optical afterglow starting at ∼1.4 hours, and its decay is consistent with the UVOT decay (Pandey et al. 2009). The Faulkes Telescope North also observed the afterglow at about 21 hours after the burst using the R filter (Guidorzi et al. 2009). There is a radio detection available at about 1.3 days after the burst, with a flux of ∼111 µJy at 4.8 GHz (van der Horst et al. 2009).
The late time afterglow data obtained by Swift/XRT show that the X-ray band, 0.3-10 keV (ν_x), should lie between ν_i (the synchrotron injection frequency) and ν_c (the synchrotron frequency corresponding to the electrons' energy for which the radiative loss time-scale equals the dynamical time; we also refer to it as the synchrotron cooling frequency). This is because ν_x > ν_i, otherwise the light curve would be rising with time instead of the observed decline. Moreover, if ν_c < ν_x, then p = 2β_x ∼ 1.8 ± 0.2, and in that case f_{ν_x}(t) ∝ t^{-(3p+10)/16} = t^{-0.96±0.04}, which is inconsistent with the observed decline of the X-ray light curve (for decay indices for values of p < 2 see table 1 in Zhang & Mészáros 2004). Thus, ν_i < ν_x < ν_c, so that β_x = (p − 1)/2, or p ∼ 2.8 ± 0.2.
Next we determine if the X-ray data are consistent with a constant density circum-stellar medium or a wind-like medium. For s = 0 (s = 2) the expected temporal decay index of the X-ray light curve is α = 3(p − 1)/4 = 1.35 ± 0.15 (α = (3p − 1)/4 = 1.85 ± 0.15). Thus, a constant density circum-stellar medium is favored for this GRB.
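The s = 0 versus s = 2 comparison above is reproduced by the following check (Python), using p = 2.8 ± 0.2 and the observed α_x = 1.30 ± 0.04 from the text:

```python
p, dp = 2.8, 0.2                 # electron index from beta_x = (p-1)/2
alpha_s0 = 3.0*(p - 1.0)/4.0     # expected decay, constant-density CSM (s = 0)
alpha_s2 = (3.0*p - 1.0)/4.0     # expected decay, wind-like CSM (s = 2)
assert abs(alpha_s0 - 1.35) < 1e-12 and abs(alpha_s2 - 1.85) < 1e-12

alpha_obs, dalpha = 1.30, 0.04   # observed X-ray decay index
err = 3.0*dp/4.0                 # 1-sigma error propagated from dp
assert abs(alpha_s0 - alpha_obs) < err + dalpha   # s = 0: consistent
assert abs(alpha_s2 - alpha_obs) > err + dalpha   # s = 2: ruled out
```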
The XRT flux at 1keV at t = 12.5hr was reported to be 0.4µJy (Pandey et al. 2010). Extrapolating this flux to the optical band using the observed values of αx and βx we find the flux at 21hr to be within a factor ∼ 3 of the ∼ 15µJy flux reported by Guidorzi et al. (2009). Thus, the emissions in the optical and the X-ray bands arise in the same source (ES) with νi below the optical band; also, if the optical band were below νi, then the optical light curve would be increasing with time, which is not observed. Moreover, the optical and X-ray data together provide a more accurate determination of the spectral index to be 0.69 ± 0.06 or p = 2.38 ± 0.12 which is consistent with the p-value for the high-energy data at t > tGRB (see Table 1).
If the >10^2 MeV emission is produced in the external forward shock then we should be able to show that the early high energy γ-ray flux is consistent with the late time X-ray and optical data. We first show this approximately using analytical calculations, and then present results obtained by a more accurate numerical calculation in our figures.
The observed flux at 100 MeV and t = 50 s can be extrapolated to half a day to estimate the flux at 1 keV. This requires knowledge of where ν_c lies at this time. It can be shown that ν_c ∼ 100 MeV at 50 s in order that the flux at 100 keV does not exceed the observed flux limit (see subsection below). Therefore ν_c ∝ t^{-1/2} gives ν_c ∼ 3 MeV at 12.5 hr, and thus the expected flux at 1 keV is ∼0.5 µJy, which agrees with the observed value. Therefore, we can conclude that the >100 MeV, X-ray and optical photons were all produced by the same source, and we suggest that this source must be the external forward shock, as already determined for the X-ray and optical data.
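This extrapolation can be reproduced step by step (Python sketch; all inputs are the values quoted in the text: 220 nJy at 100 MeV and ν_c ∼ 100 MeV at t = 50 s, ν_c ∝ t^{-1/2}, and p = 2.4 from the combined X-ray/optical spectral index):

```python
p = 2.4                                   # from the X-ray/optical spectral index
t1, t2 = 50.0, 12.5*3600.0                # seconds
f1_nJy = 220.0                            # flux at 100 MeV at t1 (eq. 4)
nuc1_MeV = 100.0                          # cooling frequency at t1

# above nu_c the flux decays as t**-(3p-2)/4
f_100MeV_t2 = f1_nJy * (t2/t1)**(-(3.0*p - 2.0)/4.0)
nuc2_MeV = nuc1_MeV * (t2/t1)**-0.5       # nu_c ~ t**-1/2, ~3 MeV at 12.5 hr

# walk down the spectrum: nu**(-p/2) above nu_c, nu**(-(p-1)/2) below it
f_nuc = f_100MeV_t2 * (100.0/nuc2_MeV)**(p/2.0)
f_1keV_uJy = f_nuc * (nuc2_MeV*1.0e3)**((p - 1.0)/2.0) / 1.0e3   # nJy -> uJy

assert 2.0 < nuc2_MeV < 4.0
assert 0.3 < f_1keV_uJy < 0.8             # ~0.5 uJy vs the observed ~0.4 uJy
```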
We now turn to determining the ES parameter space for this burst. We can determine this space using both the forward-direction and reverse-direction approaches. We first list the constraints on the ES model, then give a few analytical estimates using the equations in, for example, Panaitescu & Kumar (2002), and then present the results of our detailed numerical calculations.
Forward direction
In this subsection we only use the early high-energy emission to constrain the ES parameter space. The constraints at t = 50 s are: (i) the flux at 100 MeV should agree with the observed value (see Table 2) within the error bar of 10%, (ii) νc < 100 MeV at 50 s, for consistency with the observed spectrum, (iii) the flux at 100 keV should be smaller than 0.04 mJy (a factor of 10 below the observed value), so that the ES emission does not prevent the Fermi/GBM light curve from decaying steeply after 25 seconds, and (iv) Y < 50, so that the energy going into the second inverse Compton scattering is not excessive.
The first 3 conditions give the following 3 equations at t = 50 s. The cooling frequency is given by (Panaitescu & Kumar 2002)
$\nu_c \sim 6\,\mathrm{eV}\; E_{55}^{-1/2}\, n^{-1}\, \epsilon_{B,-2}^{-3/2}\, (1+Y)^{-2} < 100\,\mathrm{MeV}$. (2)
The flux at 100 keV, which is between νi and νc as discussed above, is (Panaitescu & Kumar 2002)
$f_{100\,\mathrm{keV}} \sim 53\,\mathrm{mJy}\; E_{55}^{1.35}\, n^{0.5}\, \epsilon_{B,-2}^{0.85}\, \epsilon_{e,-1}^{1.4} < 0.04\,\mathrm{mJy}$. (3)
And lastly, using (1), the flux at 100 MeV, which we assume is above νc, is
$f_{100\,\mathrm{MeV}} \sim 1\times10^{-4}\,\mathrm{mJy}\; E_{55}^{1.1}\, \epsilon_{B,-2}^{0.1}\, \epsilon_{e,-1}^{1.4} = 220\,\mathrm{nJy}$. (4)
Solving for n from (3) and for ǫe from (4), and substituting into (2), we find that at 50 s νc ≳ 50 MeV. The injection frequency can also be estimated at t = 50 s; it is given by
$\nu_i \sim 8\,\mathrm{keV}\; E_{55}^{1/2}\, \epsilon_{B,-2}^{1/2}\, \epsilon_{e,-1}^{2}$, (5)
and using (4), one finds $\nu_i \sim 25\,\mathrm{keV}\; E_{55}^{-1.07}\, \epsilon_{B,-2}^{0.36}$, which gives νi ∼ 2 keV for ǫB ∼ 10^{-5}. These values of νi and νc are consistent with the values obtained with detailed numerical calculations and reported in the Fig. 1 caption.
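As a sanity check on the estimate just quoted, the combination νi ∼ 25 keV E55^{-1.07} ǫ_{B,-2}^{0.36} can be evaluated numerically for ǫB ∼ 10^{-5}:

```python
# Injection frequency from eq. (5) after eliminating epsilon_e with eq. (4).
E55 = 1.0
eps_B = 1e-5
nu_i = 25.0 * E55 ** -1.07 * (eps_B / 1e-2) ** 0.36   # keV
print(nu_i)   # ~2.1 keV, matching the "~2 keV" in the text
```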
Using (2), we can find a lower limit on ǫB, which is given by
$\epsilon_B \gtrsim 1\times10^{-7}\; n^{-2/3}\, E_{55}^{-1/3}\, (1+Y)^{-4/3}$. (6)
Also, we can solve for ǫe using (4) and substitute that into (3) to obtain an upper limit on ǫB, which is
$\epsilon_B \lesssim 3\times10^{-7}\; n^{-2/3}\, E_{55}^{-1/3}\, (1+Y)^{4/3}$. (7)
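The bracket defined by (6) and (7) is narrow. A sketch evaluating both limits, with the n^{-2/3} E^{-1/3} scalings and taking Y = 0 for simplicity, shows the factor-of-3 allowed range and the n^{-2/3} trend seen in Fig. 1:

```python
# Lower and upper limits on epsilon_B from eqs. (6) and (7).
def eps_B_lo(n, E55=1.0, Y=0.0):
    # Lower limit, from requiring nu_c < 100 MeV at 50 s (eq. 6).
    return 1e-7 * n ** (-2.0 / 3.0) * E55 ** (-1.0 / 3.0) * (1 + Y) ** (-4.0 / 3.0)

def eps_B_hi(n, E55=1.0, Y=0.0):
    # Upper limit, from the 100 keV flux constraint (eq. 7).
    return 3e-7 * n ** (-2.0 / 3.0) * E55 ** (-1.0 / 3.0) * (1 + Y) ** (4.0 / 3.0)

print(eps_B_lo(1.0), eps_B_hi(1.0))    # 1e-7 to 3e-7 for n = 1 cm^-3
print(eps_B_lo(8.0) / eps_B_lo(1.0))   # 0.25: the n^(-2/3) dependence of Fig. 1
```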
Note that these estimates are consistent with the numerical results presented in Fig. 1; we also recover the ǫB ∝ n^{-2/3} dependence shown in the figure. Moreover, with these parameters we can predict the fluxes at late times. The X-ray and optical bands lie between νi and νc at ∼1 day (see above). The first X-ray data point is at 12.5 hr, and the theoretically expected flux at 1 keV at this time is given by
$f_{1\,\mathrm{keV}} \sim 1\,\mathrm{mJy}\; E_{55}^{1.35}\, n^{0.5}\, \epsilon_{B,-2}^{0.85}\, \epsilon_{e,-1}^{1.4}$, (8)
and the optical flux at ∼7.5 × 10^4 s is
$f_{2\,\mathrm{eV}} \sim 47\,\mathrm{mJy}\; E_{55}^{1.35}\, n^{0.5}\, \epsilon_{B,-2}^{0.85}\, \epsilon_{e,-1}^{1.4}$. (9)
We can use (3) to find an upper limit for the X-ray and optical fluxes. In addition, we can find ǫe using (4), and use (6) to find a lower limit for these fluxes. We find that
$0.5\,\mu\mathrm{Jy} \lesssim f_{1\,\mathrm{keV}} \lesssim 0.8\,\mu\mathrm{Jy}$ (10)
for the X-ray flux at 12.5 hr, and
$25\,\mu\mathrm{Jy} \lesssim f_{2\,\mathrm{eV}} \lesssim 36\,\mu\mathrm{Jy}$ (11)
for the optical flux at ∼7.5 × 10^4 s. These estimates agree very well with the observed values of 0.4 µJy (Pandey et al. 2010) and 15 µJy (Guidorzi et al. 2009) at the respective bands and times. We note that, although inverse Compton cooling is very important at late times, the X-ray band lies below νc, and therefore the X-ray and optical fluxes are unaffected by inverse Compton cooling.

Next, we present the results obtained by detailed numerical calculations. We use the same constraints described at the beginning of this subsection to determine the parameter space allowed by the high-energy early data. It is worth noting that in our numerical calculations throughout the paper we make no assumption regarding the ordering of the characteristic frequencies, nor the location of the observed bands with respect to them. The projection of the sub-space of the 4-D parameter space allowed by the high-energy data onto the ǫB–n plane is shown in Figure 1, and some of the other ES parameters are presented in the Fig. 1 caption. It is clear that there is a large sub-space consistent with the LAT data, and also that the magnetic field needed for the synchrotron source is consistent with a shock-compressed CSM magnetic field of strength ≲30 µG. For each point in the 4-D space allowed by the >10^2 MeV data we calculate the X-ray and optical fluxes at late times. In spite of the fact that the 4-D sub-space allowed by the LAT data is very large (Fig. 1), the expected X-ray and optical fluxes at late times lie in a narrow range, as shown by the two diagonal bands in Figure 2; the width of these bands has been artificially increased by a factor of 2 to reflect the approximate treatment of the radial structure of the blast wave, and also to include the effect of the blast wave's spherical curvature on the ES emission (see, e.g., Appendix A of Panaitescu & Kumar 2000; both of these effects together contribute roughly a factor of 2).
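The comparison of the predicted bands (10) and (11) with the observed fluxes can be sketched as follows; the factor-of-2 widening mimics the band broadening described in the text:

```python
# Check that the observed fluxes fall inside the predicted bands of eqs. (10)
# and (11) once those bands are widened by the factor of 2 discussed in the text.
bands = {
    "1 keV @ 12.5 hr (uJy)": ((0.5, 0.8), 0.4),    # (predicted range, observed)
    "2 eV @ 7.5e4 s (uJy)":  ((25.0, 36.0), 15.0),
}
for label, ((lo, hi), obs) in bands.items():
    inside = (lo / 2.0) <= obs <= (hi * 2.0)
    print(label, inside)   # True for both bands
```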
We see that the observed X-ray and optical light curves lie within the theoretically calculated bands (Fig. 2). This result strongly supports the ES model for the origin of the >10^2 MeV photons.
Figure 1. We determine the sub-space of the 4-D parameter space (for the external forward shock model with s = 0) allowed by the high-energy data for GRB 090902B at t = 50 s, as described in §3.1. The projection of the allowed sub-space onto the ǫB–n plane is shown in this figure (dots); the discrete points reflect the numerical resolution of our calculation. We also plot the expected ǫB for a shock-compressed CSM magnetic field of 5 and 30 µG as the green and blue lines, respectively; for a CSM field of strength B0, the value of ǫB downstream of the shock front resulting from the shock-compressed CSM field is ≈ B0²/(2πn m_p c²), where n m_p is the CSM mass density and c is the speed of light. Note that no magnetic-field amplification is needed, other than shock compression of a CSM magnetic field of ∼30 µG, to produce the >100 MeV photons. The synchrotron injection and cooling frequencies at t = 50 s for the sub-space of the 4-D parameter space allowed by the high-energy data are 100 eV ≲ νi ≲ 3 keV and 30 MeV ≲ νc ≲ 100 MeV, respectively; the Lorentz factor of the blast wave at t = 50 s lies between 330 and 1500, and 10^{55} erg ≲ E ≲ 3 × 10^{55} erg. Note that at 0.5 day νi would be below the optical band and νc > 1 MeV, and these values are consistent with the X-ray spectrum and the X-ray and optical decay indices at this time.

We note that the above-mentioned extrapolation from early-time, high-energy data to a late-time, low-energy flux prediction was carried out for a CSM with s = 0. We have also carried out the same calculation assuming a wind medium (s = 2), and in that case the expected flux at late times is smaller than the observed values by a factor of 20 or more; this conclusion is drawn by comparing the late optical and X-ray fluxes predicted at a single time with the observations at that same time, i.e. without making use of the temporal decay indices observed in these bands. We pointed out above that the late-time afterglow data for this burst are consistent with a uniform-density medium, but not with an s = 2 medium. Thus, there is good agreement between the late-time afterglow data and the early >10^2 MeV data in regards to the properties of the CSM; the two methods explore the CSM density at different radii.
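The shock-compressed ǫB quoted in the Fig. 1 caption is easy to evaluate. A sketch in cgs units for an illustrative 30 µG field and n = 1 cm^{-3} (both values assumed here for the example):

```python
import math

# epsilon_B from a shock-compressed CSM field: eps_B ≈ B0^2 / (2 pi n m_p c^2),
# as given in the Fig. 1 caption (cgs units throughout).
m_p = 1.6726e-24        # proton mass, g
c = 2.9979e10           # speed of light, cm/s
B0 = 30e-6              # illustrative 30 micro-Gauss CSM field, G
n = 1.0                 # illustrative CSM number density, cm^-3
eps_B = B0 ** 2 / (2.0 * math.pi * n * m_p * c ** 2)
print(eps_B)            # ~1e-7, inside the allowed band of Fig. 1
```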
Reverse direction
We carry out the above-mentioned exercise in the reverse direction as well, i.e. we determine the ES parameter space using only the late-time X-ray, optical and radio data, and use these parameters to determine the flux at 10^2 MeV at early times, when the Fermi/LAT observations were made. The constraints on the ES model parameters at late times are the following: (i) the X-ray and optical fluxes at 12.5 hr and 7.5 × 10^4 s, respectively, should match the ES flux at these bands and times, and (ii) the radio flux at 1.3 d should be consistent with the observed value. We first show some analytical estimates and then turn to more detailed numerical calculations.

Figure 2. The optical and X-ray fluxes of GRB 090902B predicted at late times using only the high-energy data at 50 s (assuming synchrotron emission from the external forward shock) are shown in the right half of this figure, and the predicted flux values are compared with the observed data (discrete points with error bars). The width of the region between the green (magenta) lines indicates the uncertainty in the theoretically predicted X-ray (optical) fluxes (the width is set by the error in the measurement of the 100 MeV flux at 50 s, and by the error in the calculation of the external forward shock flux due to the approximations made; both contribute roughly equally to the uncertainty in the predicted flux at late times). The LAT (X-ray) data, red (black) circles, are from Abdo et al. 2009c (Evans et al. 2007) and were converted to flux density at 100 MeV (1 keV) using the average spectral index provided in the text. Optical fluxes are from Swenson et al. 2009 (square) and Guidorzi et al. 2009 (triangle) and were converted to flux density using 16.4 mag ≈ 1 mJy. The blue dashed line shows schematically the light curve observed by Fermi/GBM. The predicted value for the radio flux at one day has a very large range (not shown), but is consistent with the observed value.
Constraint (i) is simply equation (8) set equal to the observed value of 0.4 µJy at 12.5 hr. For the analytical estimates presented here it is not necessary to use the optical flux at late times, since both the optical and X-ray bands lie between νi and νc, so they provide identical constraints. Constraint (ii), assuming that the radio frequency is below νi, gives
$f_{4.8\,\mathrm{GHz}} \sim 19\,\mathrm{mJy}\; E_{55}^{5/6}\, n^{1/2}\, \epsilon_{B,-2}^{1/3}\, \epsilon_{e,-1}^{-2/3}$. (12)
Solving for ǫe in the last equation and substituting it into constraint (i) gives an equation for ǫB, which is
$\epsilon_B = 6\times10^{-8}\; n^{-1}\, E_{55}^{-2}$. (13)
This estimate is consistent with the numerical result presented in Fig. 3. Moreover, the ǫB ∝ n^{-1} dependence is exactly what is found numerically (and agrees very well with the shock-compressed CSM field prediction).
We can now predict the high-energy flux at 100 MeV and early time using the ES parameters determined using late time afterglow data in X-ray and radio bands. We use (1) at t = 50 s, substituting ǫe from (12) and n from (13), and find that the flux should be
$f_{100\,\mathrm{MeV}} \sim 200\,\mathrm{nJy}\; E_{55}^{3/4}\, \epsilon_{B,-5}^{-1/4}$, (14)
in agreement with the observed value at t = 50 s. We now turn to our numerical results.
Using the same set of constraints presented at the beginning of this subsection, we perform our numerical calculations to determine the ES parameter space allowed by the late-time (t ≳ 0.5 d) X-ray, optical and radio data, and use that information to "predict" the 100 MeV flux at early times (t ≲ 10^3 s). The numerical results of this exercise, for an s = 0 CSM, are also in good agreement with the Fermi/LAT data, as shown in Figure 4. Moreover, the flux from the ES at t = 50 s and 100 keV is found to be much smaller than the flux observed by Fermi/GBM (Fig. 4, left panel), which is very reassuring, because otherwise it would be in serious conflict with the steep decline of the light curve observed in the sub-MeV band; this also shows that the sub-MeV and GeV radiations are produced by two different sources.
We note that the range of values for ǫB allowed by the late-time radio, optical and X-ray afterglow data is entirely consistent with a shock-compressed circum-stellar magnetic field of strength < 30 µG (see Fig. 3). We also point out that the afterglow flux depends on the magnetic field B, and B² ∝ ǫB n; therefore, there is a degeneracy between ǫB and n that makes it very difficult to determine n uniquely.
GRB 090510
The Fermi/LAT and GBM observations of this burst are described in Abdo et al. (2009b) and De Pasquale et al. (2010). This short burst has very early X-ray and optical data, starting only 100 s after the burst. The X-ray spectral index is βx = 0.57 ± 0.08 (Grupe & Hoverstein 2009). The temporal decay index is αx,1 = 0.74 ± 0.03 during the initial ∼10^3 s, and subsequently the decay steepens to αx,2 = 2.18 ± 0.10 with a break at tx = 1.43^{+0.09}_{-0.15} ks. The optical data show αopt,1 = −0.5^{+0.11}_{-0.13} and αopt,2 = 1.13^{+0.11}_{-0.10} with a break at topt = 1.58^{+0.46}_{-0.37} ks (De Pasquale et al. 2010). In the context of the ES model (also considered by Gao et al. 2009 and De Pasquale et al. 2010 for this specific burst), the data suggest that νx < νc, because in this case βx = (p − 1)/2, so p = 2.14 ± 0.16, and the temporal decay index (for s = 0) is αx = 3(p − 1)/4 = 0.86 ± 0.12, consistent with the observed value of αx,1. If instead we take νx > νc, then βx = p/2, so p = 1.14 ± 0.16, and the temporal decay index should have been αx = (3p + 10)/16 = 0.84 ± 0.03 (since p < 2), which is also consistent with the observed temporal decay; however, the expected optical light-curve index for this value of p is αopt = −(p + 2)/(8p − 8) = −2.8, which is inconsistent with the observed value of αopt,1 (see next paragraph). The X-ray afterglow data also show that the medium in the vicinity of the burst must have been of constant density. This is because, for an s = 2 medium, the expected temporal decay of the X-ray flux, when νx < νc, is ∝ t^{-(3p-1)/4} = t^{-1.36}, much steeper than the observed decline of t^{-0.74}, while for s = 0 the expected decline is consistent with observations (Gao et al. 2009).

Figure 3. We determine the sub-space of the 4-D parameter space (for the external forward shock model with s = 0) allowed by the late-time (t > 0.5 day) X-ray, optical and radio data for GRB 090902B, as described in §3.1. The projection of the allowed sub-space onto the ǫB–n plane at t = 0.5 day is shown in this figure (dots). We also plot the expected ǫB for a shock-compressed CSM magnetic field of 2 and 30 µG as the green and blue lines, respectively; for a CSM field of strength B0, the value of ǫB downstream of the shock front resulting from the shock-compressed CSM field is ≈ B0²/(2πn m_p c²), where n m_p is the CSM mass density and c is the speed of light. Note that no magnetic-field amplification is needed, other than shock compression of a CSM magnetic field of strength ≲30 µG, to produce the late-time X-ray, optical and radio data. We arrived at this same conclusion from the modeling of the early-time >100 MeV radiation alone (see Fig. 1).
Given that the break in the optical light curve and that in the X-ray light curve occur at the same time, i.e. tx = topt, it is unlikely that the emission in these two bands comes from two different, unrelated sources. Thus, it is natural to attribute both the optical and X-ray emission to the external forward shock. The fact that the optical light curve rises during the first ∼0.5 hr as t^{1/2} means that νopt < νi during this period (Panaitescu & Kumar 2000), where νopt is the optical band. The break seen in both light curves can be attributed to a jet break. The X-ray light-curve decay of t^{-2.2} for t > 1.4 × 10^3 s agrees very well with the expected post-jet-break light curve ∝ t^{-p} = t^{-2.12±0.14} (Rhoads 1999), and suggests a jet opening angle of ∼1° (Sari, Piran & Halpern 1999). The reason that αopt,2 is not as steep can be understood as follows. At the time of the jet break the optical band is below νi, and therefore the light curve decays as ∝ t^{-1/3} instead of ∝ t^{-p} (Rhoads 1999). At later times, when νi, which is decreasing rapidly, crosses the optical band, the optical light curve transitions slowly from ∝ t^{-1/3} to ∝ t^{-p}; that is why αopt,2 is not as large as αx,2. The timescale for this transition can be long or short depending on how far above γi the asymptotic distribution n(γ) ∝ γ^{-p} is attained. This interpretation is supported by the results of our numerical calculation shown in Figure 5: we obtain νi ∼ 500 eV at 100 s, which should cross the optical band at ∼4000 s, a factor of ∼3 larger than the observed time of the jet break. This idea can be tested with optical data at much later times, which should show the light curve slowly steepening to the asymptotic ∝ t^{-p} behavior. Moreover, the optical spectrum before the break in the light curve (t < topt) should be consistent with ν^{1/3}.
Is it possible that the rise of the optical light curve might be due to the onset of the ES, while the initial X-ray emission (until the break at ∼1.4 ks) and the gamma-ray photons are from the "internal shock" mechanism (De Pasquale et al. 2010)? This seems unlikely, given that the CSM density required for the deceleration time of the GRB jet to be ∼10^3 s (topt) is extremely low, as can be seen from the following equation:
$n = \dfrac{3E(1+z)^3}{32\pi c^5 m_p \Gamma^8 t_{\rm peak}^3}$, (15)
where mp is the proton mass, c is the speed of light, Γ is the initial Lorentz factor of the GRB jet, t_peak is the time at which the peak of the light curve is observed, and E is the isotropic energy in the ES. For GRB 090510, Γ ≳ 10^3 was determined using γγ-opacity arguments (Abdo et al. 2009b), a limit applicable to the scenario proposed by De Pasquale et al. (2010), in which the MeV and GeV photons are produced in the same source. We take t_peak ∼ 10^3 s and E ∼ Eγ,iso and find that we need a CSM density of n ≈ 10^{-9} E_{53} Γ_3^{-8} t_{peak,3}^{-3} cm^{-3}, which is much smaller than the mean density of the Universe at this redshift, and therefore unphysical. Even though the CSM density depends strongly on Γ, the upper limit on the density provided above cannot be increased by more than a factor of ∼10, since the error in the determination of Γ is much less than a factor of 2 (Abdo et al. 2009b). Thus, the possibility that the peak of the optical light curve at ∼10^3 s is due to the deceleration of the GRB jet seems very unlikely. We note that in the scenario we present in this paper the >100 MeV emission observed by Fermi/LAT and the lower-energy (≲1 MeV) emission observed by Fermi/GBM are produced by two different sources; therefore, the pair-production argument cannot be used to constrain Γ. However, in this scenario the deceleration time of the GRB jet is ≲1 s, which means that the peak of the optical light curve at ∼10^3 s cannot correspond to the deceleration time.
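Equation (15) can be evaluated numerically to confirm the unphysically low density quoted above. This sketch assumes z ≈ 0.9 for GRB 090510 (a value not stated in this section), along with the fiducial E = 10^53 erg, Γ = 10^3 and t_peak = 10^3 s:

```python
import math

# CSM density required for the jet deceleration time to equal t_peak, eq. (15).
m_p = 1.6726e-24          # proton mass, g
c = 2.9979e10             # speed of light, cm/s
E = 1e53                  # isotropic ES energy, erg
z = 0.9                   # assumed redshift of GRB 090510
Gamma = 1e3               # initial jet Lorentz factor
t_peak = 1e3              # s
n = 3.0 * E * (1 + z) ** 3 / (32.0 * math.pi * c ** 5 * m_p * Gamma ** 8 * t_peak ** 3)
print(n)   # ~5e-10 cm^-3: of order the 1e-9 scaling quoted, and unphysically low
```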
We conclude that the available data suggest that the optical and X-ray photons come from the same source (the ES). We now consider whether the observed >100 MeV emission is also consistent with the ES model. We first use the observed data to show that the >100 MeV, X-ray and optical emission are produced by the ES, then we provide some analytical estimates of the ES model parameters, and later show our detailed numerical results in the figures.
AGILE/GRID reported a photon count rate in the 30 MeV–30 GeV band of 1.5 × 10^{-3} cm^{-2} s^{-1} at 10 s, with the light curve declining as t^{-1.3±0.15} (Giuliani et al. 2010). Therefore, the photon flux at 100 s in this band is estimated to be ∼7.5 × 10^{-5} cm^{-2} s^{-1} (a single power-law decline of the flux in the Fermi/LAT band from ∼1 s to 200 s has also been reported). The Swift/XRT reported a photon flux of 0.07 cm^{-2} s^{-1} in the 0.3–10 keV band at 100 s. Using the spectrum reported in the Swift/XRT band (Grupe & Hoverstein 2009), which is entirely consistent with the spectrum found in the AGILE/GRID band (Giuliani et al. 2010), to extrapolate the observed photon count in the XRT band to the GRID band, we find an expected photon flux at 100 s in the 30 MeV–30 GeV band of 7.9 × 10^{-5} cm^{-2} s^{-1}, consistent with the flux observed by AGILE.

Figure 4. Using the X-ray, optical and radio data of GRB 090902B at late times (right panel) we constrain the external forward shock parameters, and then use these parameters to predict the 100 MeV flux at early times (left panel). The region between the red lines shows the range of the predicted flux at 100 MeV; note the remarkably narrow range for the predicted 100 MeV flux in spite of the large spread of the allowed ES parameters shown in Fig. 3. The blue point (left panel) indicates the flux at 100 keV and 50 s that we expect from the ES model; note that the ES flux at 100 keV falls well below the observed Fermi/GBM flux, shown schematically by the dashed line in the left panel, and that is why the GBM light curve undergoes a rapid decline with time (∼ t^{-3}) at the end of the prompt burst phase. The radio flux is taken from van der Horst et al. 2009. All other data are the same as in Figure 2.
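The AGILE extrapolation in this paragraph is a single power-law scaling:

```python
# Photon flux in the 30 MeV-30 GeV band scaled from 10 s to 100 s with t^-1.3.
f_10s = 1.5e-3                          # photons cm^-2 s^-1 at t = 10 s (Giuliani et al. 2010)
f_100s = f_10s * (100.0 / 10.0) ** -1.3
print(f_100s)                           # ~7.5e-5 cm^-2 s^-1, as quoted in the text
```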
The peak of the optical light curve was observed at ∼1000 s with a value of ∼100 µJy, and the X-ray flux at 1000 s and ∼4 keV was 2.2 µJy (De Pasquale et al. 2010). Since we attribute the peak of the optical light curve to the crossing of νi through the optical band, the peak of the optical light curve determines the synchrotron flux at the peak of the spectrum. Therefore, using the X-ray flux at 1000 s and the X-ray spectrum, we can extrapolate back to the optical band (2 eV); we find a flux of 170 µJy, which is consistent, to within a factor of better than 2, with the observed optical value at this time. Therefore, we can conclude that the >100 MeV, X-ray and optical emissions are all produced by the same source, and that source must be the external forward shock, as that is known to produce long-lasting radiation in the X-ray and optical bands with a well-known closure relation between α and β, which is observed in GRB 090510 in all energy bands.
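The back-extrapolation of the X-ray flux to the optical band is a single spectral scaling; a minimal sketch using βx = 0.57:

```python
# Extrapolate the 4 keV flux at 1000 s down to the optical band (2 eV)
# with f_nu ∝ nu^(-beta_x), beta_x = 0.57 (Grupe & Hoverstein 2009).
f_4keV = 2.2            # uJy at 1000 s
beta_x = 0.57
f_2eV = f_4keV * (4000.0 / 2.0) ** beta_x   # frequency ratio 4 keV / 2 eV = 2000
print(f_2eV)            # ~170 uJy, within a factor < 2 of the observed ~100 uJy peak
```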
Using the data in the LAT, XRT and optical bands, we can determine the ES parameters for GRB 090510. The following observational constraints must be satisfied by the allowed ES parameters: (i) the flux at 100 MeV and 100 s should equal the observed value (Table 2), (ii) νc < 100 MeV at 100 s, (iii) the X-ray flux at 1000 s and ∼4 keV should equal the observed value of 2.2 µJy (De Pasquale et al. 2010), and (iv) the flux at the peak of the synchrotron spectrum should be ∼100 µJy (De Pasquale et al. 2010). This last constraint arises because the optical flux peaks when νi passes through the optical band, and therefore the peak synchrotron flux should equal the measured peak optical flux; it should be noted that, for s = 0, the peak synchrotron flux in the ES model does not change with time as long as the shock front moves at a relativistic speed.
We present some analytical estimates for the ES parameters before showing our detailed numerical results. The ES flux at 100 MeV and t = 100 s, assuming that 100 MeV is above νc, is given by (1):
$f_{100\,\mathrm{MeV}} \sim 2.4\times10^{-6}\,\mathrm{mJy}\; E_{53}^{1.1}\, \epsilon_{B,-2}^{0.1}\, \epsilon_{e,-1}^{1.4} = 14\,\mathrm{nJy}$, (16)
which is constraint (i). The flux at 4 keV and 1000 s, assuming that it lies between νi and νc, is given by
$f_{4\,\mathrm{keV}} \sim 3\,\mathrm{mJy}\; E_{53}^{1.35}\, n^{0.5}\, \epsilon_{B,-2}^{0.85}\, \epsilon_{e,-1}^{1.4} = 2.2\,\mu\mathrm{Jy}$, (17)
which is constraint (iii). And lastly, constraint (iv) is that the peak synchrotron flux should equal the flux at the peak of the optical light curve, i.e.,
$f_p \sim 12\,\mathrm{mJy}\; E_{53}\, n^{1/2}\, \epsilon_{B,-2}^{1/2} = 100\,\mu\mathrm{Jy}$. (18)
Just as was done for GRB 090902B, constraint (ii) gives a lower limit on ǫB, which for this GRB is not very useful. Instead, we can solve for ǫe from (16) and substitute it into (17), which gives
$\epsilon_B = 1\times10^{-6}\; E_{53}^{-1/3}\, n^{-2/3}\, (1+Y)^{-4/3}$, (19)
consistent with the numerical calculation presented in Fig. 5. Also, with this last expression and using (18) we find that the CSM density for this GRB is
$n \sim 0.3\,\mathrm{cm^{-3}}\; (1+Y)^{4}\, E_{53}^{-5}$, (20)
which is also consistent with the fact that we only find numerical solutions with CSM densities lower than ∼0.1 cm^{-3}. For the ES parameters of this burst, the cooling frequency at 100 s can be estimated as
$\nu_c \sim 76\,\mathrm{eV}\; E_{53}^{-1/2}\, n^{-1}\, \epsilon_{B,-2}^{-3/2}\, (1+Y)^{-2}$, (21)
and substituting n from (18) gives $\nu_c \sim 1\,\mathrm{MeV}\; E_{53}^{3/2}\, \epsilon_{B,-2}^{-1/2}\, (1+Y)^{-2}$. Thus, for ǫB ∼ 10^{-5} we find νc ∼ 30 MeV. The injection frequency at 100 s is given by
$\nu_i \sim 240\,\mathrm{eV}\; E_{53}^{1/2}\, \epsilon_{B,-2}^{1/2}\, \epsilon_{e,-1}^{2}$, (22)
and substituting ǫe from (16) gives νi of a few hundred eV for ǫB ∼ 10^{-5}. These values of νi and νc are consistent with the values obtained with detailed numerical calculations and reported in the Fig. 5 caption.

Figure 5. We determine the sub-space of the 4-D parameter space (for the external forward shock with s = 0) allowed by the data for GRB 090510 at t = 50 s. We show the projection of the allowed sub-space onto the ǫB–n plane in this figure (dots); the region agrees with the expected ǫB from a shock-compressed CSM magnetic field of ≲30 µG (the green and blue lines show 10 µG and 30 µG, respectively). The other parameters of the ES solution at this time are: the Lorentz factor of the blast wave is between 260 and 970, 0.1 < ǫe < 0.7, and 10^{53} erg ≲ E ≲ 4 × 10^{53} erg. At t = 100 s, we also find Y < 4, νi ∼ 500 eV, and νc ∼ 40 MeV.
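The νc estimate above can be checked numerically, taking ǫB = 10^{-5}, E53 = 1 and neglecting inverse Compton cooling (Y = 0):

```python
# nu_c ~ 1 MeV E53^(3/2) (eps_B / 1e-2)^(-1/2) (1 + Y)^(-2), from the text.
E53, eps_B, Y = 1.0, 1e-5, 0.0
nu_c = 1.0 * E53 ** 1.5 * (eps_B / 1e-2) ** -0.5 * (1 + Y) ** -2   # MeV
print(nu_c)   # ~32 MeV, i.e. the "~30 MeV" quoted in the text
```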
The detailed numerical results of the parameter search can be found in Figure 5; the sub-space of the 4-D parameter space allowed by the data for GRB 090510 is projected onto the 2-D ǫB–n plane, which is a convenient way of visualizing the allowed sub-space. Note that all the available data for GRB 090510 can be fitted by the ES model, and that the value of n allowed by the data is less than 0.1 cm^{-3}, in keeping with the low density expected in the neighborhood of short bursts. Moreover, ǫB for the entire allowed part of the 4-D sub-space is small, and its magnitude is consistent with what one would expect for a CSM magnetic field of strength ≲30 µG that is shock-compressed by the blast wave (Fig. 5). The ES model provides a consistent fit to the data from the optical to the >10^2 MeV bands, as can be clearly seen in Figure 6. The ES parameters found for this GRB are given in the Fig. 5 caption.
GRB 080916C
The Fermi/LAT and GBM observations for this burst have been presented in Abdo et al. (2009a). For this burst, the optical and X-ray observations started about 1 d after the burst, and both bands are consistent with fν(t) ∝ ν^{-0.5±0.3} t^{-1.3±0.1} (Greiner et al. 2009).
The fact that the optical light curve decays as t^{-1.3} means that νi is below the optical band at 1 day, because if νi were above the optical band the light curve would rise as ∝ t^{1/2} (as in the case of GRB 090510). Moreover, the shallow spectral index in the Swift/XRT band (βx < 1) suggests that νc > 10 keV at 1 day. The X-ray and optical data together yield a spectral index of 0.65 ± 0.03, and therefore p = 2.3 ± 0.06, which is consistent with the Fermi/LAT spectrum (see Table 1). The value of p can be used to calculate the time dependence of the light curve, which is found to be t^{-0.98} (t^{-1.48±0.05}) for an s = 0 (s = 2) CSM. Thus, an s = 2 CSM is preferred by the late-time optical and X-ray afterglow data (Kumar & Barniol Duran 2009; Gao et al. 2009; Zou, Fan & Piran 2009).
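The decay indices quoted for the two CSM profiles follow from p = 2.3; a quick check:

```python
# Expected X-ray/optical decay indices for p = 2.3 (band between nu_i and nu_c).
p = 2.3
alpha_s0 = 3.0 * (p - 1.0) / 4.0   # constant-density medium (s = 0)
alpha_s2 = (3.0 * p - 1.0) / 4.0   # wind medium (s = 2)
print(alpha_s0, alpha_s2)          # 0.975 and 1.475, i.e. t^-0.98 and t^-1.48 above
```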
Using the early >100 MeV data only, we determine the ES model parameters. With these parameters we can then predict the X-ray and optical fluxes at late times, i.e. the forward-direction approach. The constraints that should be satisfied are: (i) the ES flux at 100 MeV and 150 s should match the observed value (Table 2), (ii) νc < 100 MeV, to be consistent with the observed spectrum, and (iii) the ES flux at 150 s should be smaller than the observed value, to allow the 100 keV flux to decay rapidly as observed. These constraints are the same as those presented for GRB 090902B, and the analytical approach is the same as the one presented in §3.1, so we omit the details here. The ES parameters obtained numerically can be found in fig. 2 of Kumar & Barniol Duran (2009). With these parameters the X-ray and optical fluxes at late times can be calculated, and we find them in excellent agreement with the observations (Figure 7).
It is important to note that this extrapolation from early-time, high-energy data to a late-time, low-energy flux prediction was carried out for a circum-stellar medium with s ∼ 2. We have also carried out the same calculation for a uniform-density medium (s = 0), and in this case the theoretically calculated flux at late times is larger than the observed values by a factor of ∼5 or more; this discrepancy is much larger than the error in the flux calculation. We pointed out above that the late-time afterglow data for this burst are consistent with an s = 2 medium but not an s = 0 medium. Thus, there is good agreement between the late-time afterglow data and the early >10^2 MeV data, which explore very different radii, in regards to the density stratification of the CSM.
We have carried out the exercise in the "reverse direction" as well. Using only the late X-ray and optical data, we determine the ES parameters. The observational constraints that need to be satisfied are: (i) the ES fluxes at X-ray and optical energies at 1 d should match the observed values, (ii) we should have the ordering νi < νopt < νX < νc, to be consistent with the observed spectrum, (iii) the ES flux at 150 s should be smaller than the observed value, to allow the 100 keV flux to decay rapidly as observed, and (iv) the Lorentz factor of the ejecta should be ≳60 at 1 d, since we do not want Γ to be too small at the beginning of the burst, which would contradict estimates made at early times (Greiner et al. 2009). Since the analytical approach is very similar to the one for GRB 090902B, we omit it here; the only difference is that it must be done for a wind-like medium, since the data for this GRB prefer it. The ES parameters can be found numerically, and with these parameters we predict the >100 MeV flux at early times. This predicted flux agrees with the Fermi/LAT observations, as shown in fig. 3 of Kumar & Barniol Duran (2009).
DISCUSSION AND CONCLUSION
The Fermi satellite has detected 12 GRBs with >100 MeV emission in about one year of operation. In this paper we have analyzed the >100 MeV emission of three of them, two long GRBs (090902B and 080916C) and one short burst (GRB 090510), and find that the data for all three bursts are consistent with synchrotron emission in the external forward shock. This idea was initially proposed in our previous work on GRB 080916C (Kumar & Barniol Duran 2009), shortly after the publication of that burst's data by Abdo et al. (2009a). Now there are three GRBs for which high-energy data have been published, and for all of them we have presented here multiple lines of evidence that the >100 MeV photons, subsequent to the prompt GRB phase, were generated in the external forward shock. The reason that high-energy photons are detected from only a small fraction of the GRBs observed by Fermi is likely that the high-energy flux from the external forward shock has a strong dependence on the GRB jet Lorentz factor, and therefore very bright bursts with large Lorentz factors are the only ones detected by Fermi/LAT (this was pointed out by Kumar & Barniol Duran 2009, who also suggested that there should be no difference between long and short bursts as far as the >100 MeV emission is concerned; the high-energy flux is only a function of burst energy and time, eq. 1).

Figure 6. Shown in this figure are data for GRB 090510 obtained by Fermi/LAT (>100 MeV), Swift/XRT (X-ray) and Swift/UVOT (optical), and a fit to all these data by the external forward shock model (solid lines). The jet break seen in the X-ray has been modeled with a power law, ∝ t^{-p}; the optical light curve after the jet break should show a shallower decay, ∝ t^{-1/3}, because at this time νopt < νi, but it then slowly evolves to an asymptotic decay ∝ t^{-p} at later times (Rhoads 1999). The LAT (X-ray) data are from De Pasquale et al. 2009 (Evans et al. 2007) and have been converted to flux density at 100 MeV (1 keV) using the average spectral index mentioned in the text (§3.2). The optical data (squares) are from De Pasquale et al. (2010). Triangles mark upper limits in the X-ray and optical light curves.

Figure 7. The optical and X-ray fluxes of GRB 080916C predicted at late times using only the high-energy data at 150 s (assuming synchrotron emission from the external forward shock) are shown in the right half of this figure, and the predicted flux values are compared with the observed data (discrete points with error bars). The width of the region between the green (magenta) lines indicates the uncertainty in the theoretically calculated X-ray (optical) fluxes. The LAT (Abdo et al. 2009a) and X-ray fluxes (Evans et al. 2007, 2009) at 100 MeV and 1 keV, respectively, have been converted to mJy in the same way as for Figure 2. Optical fluxes (squares) are from Greiner et al. 2009 (triangles are upper limits). The GBM flux at 100 keV (blue filled circles) is taken from Abdo et al. 2009a. The thin dashed lines connecting the LAT and GBM data are only to guide the eye.
We have analyzed the data in four different ways, and all of them lead to the same conclusion regarding the origin of >100 MeV photons. First, we verified that the temporal decay index for the >100 MeV light curve and the spectral index are consistent with the closure relation expected for synchrotron emission in the external forward shock. Second, we calculated the expected magnitude of the synchrotron flux at 100 MeV according to the external forward shock model and find it to be consistent with the observed value. Third, using the >100 MeV data only, we determined the external shock parameters, and from these parameters we predict the X-ray and optical fluxes at late times and find that these predicted fluxes are consistent with the observed values within the uncertainty of our calculations, i.e. a factor of two (see figs. 2, 7). And lastly, using the late time X-ray, optical and radio fluxes (which the GRB community has long believed to be produced in the external forward shock) we determine the external shock parameters, and using these parameters we predict the expected >100 MeV flux at early times and find the flux to be in agreement with the observed value (see fig. 4). The fact that the >100 MeV emission and the lower energy (≲1 MeV) emission are produced by two different sources at two different locations suggests that we should be cautious when using the highest observed photon energy and pair-production arguments to determine the Lorentz factor of the GRB jet.
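The first of these checks can be made concrete with a short script (ours, not from the paper). For an adiabatic external forward shock observed above all characteristic synchrotron frequencies, f_ν ∝ t^−(3p−2)/4 ν^−p/2, so the temporal index α and spectral index β must satisfy the closure relation α = (3β − 1)/2, independent of the density profile of the ambient medium. The function names below are ours:

```python
# Closure-relation check for synchrotron emission from an adiabatic
# external forward shock when the observing band lies above all
# characteristic frequencies (nu > max(nu_i, nu_c)):
#   f_nu ∝ t^(-(3p-2)/4) * nu^(-p/2)
# so alpha = (3p-2)/4, beta = p/2, and alpha = (3*beta - 1)/2.

def alpha_from_p(p):
    """Temporal decay index of the >100 MeV light curve for nu > nu_c."""
    return (3.0 * p - 2.0) / 4.0

def beta_from_p(p):
    """Energy spectral index (f_nu ∝ nu^-beta) for nu > nu_c."""
    return p / 2.0

def closure_residual(alpha, beta):
    """Zero when the ES closure relation alpha = (3*beta - 1)/2 holds."""
    return alpha - (3.0 * beta - 1.0) / 2.0

p = 2.4  # electron index inferred for all three Fermi bursts
print("alpha =", alpha_from_p(p))   # compare with the observed ~1.2-1.5
print("beta  =", beta_from_p(p))
print("closure residual =", closure_residual(alpha_from_p(p), beta_from_p(p)))
```

For p = 2.4 this gives α = 1.3 and β = 1.2, squarely within the observed decay indices listed in the table below for the three bursts.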
We point out that the external shocks for these bursts were nearly adiabatic, i.e. radiative losses are small. The evidence for this comes from two different observations: (1) the late time X-ray spectrum lies in the adiabatic regime; (2) a radiative shock at early times (close to the deceleration time) would produce emission in the 10-100 keV band far in excess of the observed limits. We find that a radiative shock is not needed to explain the temporal decay index of the >100 MeV light curve, as has been suggested, provided that the observing band is above all synchrotron characteristic frequencies.
We find that the magnetic field required in the external forward shock for the observed high and low energy emissions for these three bursts is consistent with shock-compressed magnetic field in the CSM; the magnetic field in the CSM -before shock compression -should be on the order of a few tens of micro-Gauss (see figs. 1, 3 and 5). For these three bursts, at least, no magnetic dynamo is needed to operate behind the shock front to amplify the magnetic field.
The data for the short burst (GRB 090510) are consistent with the medium in the vicinity of the burst (within ∼1 pc) being uniform and with density less than 0.1 cm −3 ; the data rules out a CSM where n ∝ R −2 . On the other hand, the data for one of the two long Fermi bursts (GRB 080916C) prefers a wind like medium and the other (GRB 090902B) a uniform density medium; these conclusions are reached independently from late time afterglow data alone and from the early time high energy data projected to late time using the 4-D parameter space technique described in §3.
It is also interesting to note that the power-law index of the energy distribution of injected electrons (p) in the shocked fluid, for all three Fermi bursts analyzed in this work, is 2.4 to within the error of measurement, suggesting agreement with Fermi acceleration of particles in highly relativistic shocks (e.g. Achterberg et al. 2001), although a unique power-law index for the electrons' distribution in highly relativistic shocks is not found in all simulations. The study of high energy emission close to the deceleration time of GRB jets is likely to shed light on the onset of collisionless shocks and the particle acceleration process.
It might seem surprising that we are able to fit all data (optical, X-ray, ≲100 MeV) for these three Fermi bursts with just a few parameters for the external forward shock. This is in sharp contrast to Swift bursts, which often display a variety of puzzling (poorly understood) features in their afterglow light curves. There are two reasons that these Fermi bursts can be understood using a very simple model (external forward shock). (1) The data for the two long Fermi bursts (080916C and 090902B) are not available during the first 1/2 day, and that is precisely the time frame when complicated features (plateau, etc., e.g. Nousek et al. 2006, O'Brien et al. 2006) are seen in the X-ray afterglow light curves of Swift bursts (we note that the external forward shock model in its simplest form can't explain these features); however, the afterglow data at later times are almost invariably a smooth single (or double) power-law function that can be modeled by synchrotron emission from an external forward shock. (2) For very energetic GRBs (the three bursts we have analyzed in this paper are among the brightest bursts in their class) the progenitor star is likely to be completely destroyed, leaving behind very little material to fall back onto the compact remnant at the center to fuel continued activity and give rise to complex features during the first few hours of the X-ray afterglow light curve (Kumar, Narayan & Johnson 2008). To summarize, GRB afterglow physics was simple in the decade preceding the launch of Swift, and then things became quite complicated, and now the Fermi data might be helping to clear the fog and reveal the underlying simplicity once again.
ACKNOWLEDGMENTS
RBD thanks Massimiliano De Pasquale for clarifying some aspects of the data of GRB 090510. We also thank Alex Kann for useful discussions about the optical data. We thank the referee for a constructive report. This work has been funded in part by NSF grant AST-0909110. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester.
Figure 5. Using the observational constraints mentioned in the text (§3), one finds ν_i ∼ 250 eV E_53^−1.07 ε_{B,−5}^0.36; these values of ν_i and ν_c are consistent with the values obtained ...
Fluxes in this table are in nJy. The fluxes are calculated using the equation referred to in the text.

              α_ES           α_obs          t [s]   f^ES_100MeV(a)   f^obs_100MeV
GRB 080916C   1.30 ± 0.05    1.2 ± 0.2      150     > 16             67
GRB 090510    1.2 ± 0.2      1.38 ± 0.07    100     > 3              14
GRB 090902B   1.2 ± 0.2      ∼ 1.5          50      > 100            220
c 2010 RAS, MNRAS 000, 000-000
In addition to these four parameters, the ES model also has an extra two, which are s and p. However, these last two can be estimated fairly directly by looking at the spectrum and temporal decay indices of the light curves at different wavelengths.
3 It should be pointed out that the X-ray afterglow light curves of long-GRBs are rather complicated during the first few hours (see e.g. Nousek et al. 2006, O'Brien et al. 2006) and the ES model in its simplest form can't explain these features; however, the behavior becomes simpler and consistent with ES origin after about 1/2 day.
REFERENCES

Abdo A. et al., 2009a, Science, 323, 1688
Abdo A. et al., 2009b, preprint (arXiv:0908.1832)
Abdo A. et al., 2009c, ApJL, 706, 138
Achterberg A., Gallant Y. A., Kirk J. G., Guthmann A. W., 2001, MNRAS, 328, 393
Blandford R. D., McKee C. F., 1976, Phys. Fluids, 19, 1130
De Pasquale M. et al., 2010, ApJ, 709, 146
Evans P. A. et al., 2007, A&A, 469, 379
Evans P. A. et al., 2009, MNRAS, 397, 1177
Fan Y. Z., Piran T., 2008, Front. Phys. China, 3, 306
Gao W.-H., Mao J., Xu D., Fan Y. Z., 2009, ApJ, 706, L33
Gehrels N., Ramirez-Ruiz E., Fox D. B., 2009, ARA&A, 47, 567
Ghirlanda G., Ghisellini G., Nava L., 2010, A&A, 510, 7
Ghisellini G., Ghirlanda G., Nava L., 2010, MNRAS, 403, 926
Giuliani A. et al., 2010, ApJ, 708, L84
Greiner J. et al., 2009, A&A, 498, 89
Grupe D., Hoverstein E., 2009, GCN Circ., 9341
Guidorzi C. et al., 2009, GCN Circ., 9875
Gupta N., Zhang B., 2007, MNRAS, 380, 78
Kumar P., 2000, ApJ, 538, 125
Kumar P., Barniol Duran R., 2009, MNRAS, 400, L75
Kumar P., McMahon E., 2008, MNRAS, 384, 33
Kumar P., Narayan R., 2009, MNRAS, 395, 472
Kumar P., Narayan R., Johnson J. L., 2008, MNRAS, 388, 1729
Mészáros P., 2006, Rep. Prog. Phys., 69, 2259
Mészáros P., Rees M. J., 1993, ApJ, 405, 278
Nousek J. A. et al., 2006, ApJ, 642, 389
O'Brien P. et al., 2006, ApJ, 647, 1213
Omodei N. et al., 2009, Proceedings of the 31st Int'l Cosmic-Ray Conference, preprint (arXiv:0907.0715)
Paczyński B., Rhoads J. E., 1993, ApJ, 418, L5
Panaitescu A., Kumar P., 2000, ApJ, 543, 66
Panaitescu A., Kumar P., 2001, ApJ, 560, L49
Panaitescu A., Kumar P., 2002, ApJ, 571, 779
Pandey S. B., Zheng W., Yuan F., Akerlof C., 2009, GCN Circ., 9878
Pandey S. B. et al., 2010, ApJ, 714, 799
Piran T., 2004, Rev. Modern Phys., 76, 1143
Rees M. J., Mészáros P., 1992, MNRAS, 258, 41P
Rhoads J. E., 1999, ApJ, 525, 737
Rybicki G. B., Lightman A. P., 1979, Radiative Processes in Astrophysics. Wiley-Interscience Press, New York
Sari R., Piran T., Narayan R., 1998, ApJ, 497, L17
Sari R., Piran T., Halpern J. P., 1999, ApJ, 519, L17
Swenson C., Stratta G., 2009, GCN Circ., 9877
van der Horst A. J., Kamble A. P., Wijers R. A. M., Kouveliotou C., 2009, GCN Circ., 9883
Woosley S. E., Bloom J. S., 2006, ARA&A, 44, 507
Zhang B., 2007, Chinese J. Astron. Astrophys., 7, 1
Zhang B., Mészáros P., 2004, Int. J. Modern Phys. A, 19, 2385
Zou Y. C., Fan Y. Z., Piran T., 2009, MNRAS, 396, 1163
doi: 10.1007/s13235-019-00311-5; arXiv: 1806.08138
Short time existence for a general backward-forward parabolic system arising from Mean-Field Games
21 Jun 2018
Marco Cirant
Roberto Gianni
Paola Mannucci
Keywords: Parabolic equations; backward-forward system; Mean-Field Games; Hamilton-Jacobi; Fokker-Planck; Congestion problems.
2010 AMS Subject Classification: 35K40, 35K61, 49N90.
We study the local in time existence of a regular solution of a nonlinear parabolic backward-forward system arising from the theory of Mean-Field Games (briefly MFG). The proof is based on a contraction argument in a suitable space that takes account of the peculiar structure of the system, which involves also a coupling at the final horizon. We apply the result to obtain existence to very general MFG models, including also congestion problems.
Introduction
Let T^N = ℝ^N/ℤ^N be the N-dimensional flat torus. Denote by Q_T = T^N × (0, T). We consider the following nonlinear backward-forward parabolic system:

(1.1)  −u_t − a_{ij}(x, t) u_{x_i x_j} + F(u, m, Du, Dm, x, t) = 0,  in Q_T,
       m_t − c_{ij}(x, t) m_{x_i x_j} + G(u, m, Du, Dm, D²u, x, t) = 0,  in Q_T,
       u(x, T) = h[m(T)](x),  m(x, 0) = m₀(x),  in T^N,
where h is a regularising nonlocal term. The aim of this paper is to study the short time existence of a regular solution of system (1.1) under very general assumptions on the data. The peculiarities of the system are: 1) nonlinear backward-forward parabolic form; 2) the final condition on u depends on m through a regularising nonlocal term; 3) the coupling functions F and G can have a very general form, but F does not depend on the second derivatives of the unknowns. Because of the structure 1) and 2), classical results on forward parabolic systems cannot be directly applied and the problem of well posedness is non standard. The general structure 1)-3) of (1.1) is inspired by parabolic systems arising from the theory of Mean-Field Games (briefly MFG), where u represents the value function of a stochastic control problem and m is a density distribution of a population of identical players. In a typical MFG setting, the functions u and m satisfy the following system of two equations (called Hamilton-Jacobi-Bellman and Fokker-Planck, respectively):

(1.2)  −u_t − A_{ij} u_{ij} + H(x, t, Du, m) = 0,  in Q_T,
       m_t − ∂_{ij}(A_{ij} m) − div(m D_p H(x, t, Du, m)) = 0,  in Q_T.

We refer to Section 4 for a more detailed derivation of this system.
As for the general problem (1.1), under the assumptions stated at the beginning of the following section, the main existence theorem (Theorem 1.1) guarantees the local in time existence of a regular solution of (1.1). The solution found in Theorem 1.1 is locally unique in the sense specified in Remark 3.1. The proof of the theorem is based on a contraction procedure in a suitable space that takes into account the forward-backward structure of the system, which has a coupling also at the final horizon. We only require F and G to be bounded with respect to x, t and locally Lipschitz continuous with respect to the other entries; in addition, G is required to be globally Lipschitz continuous with respect to the entry of the second order term D²u. This is a natural assumption for the models that we have in mind (see in particular the equation for m in (1.2)). As stated in point 2), h should be a regularising function of m. Such a gain of regularity is true for example when one considers h of convolution form, or h independent of m. The gain of regularity of h is crucial in our fixed point method. Without this assumption the argument would need additional smallness of other data. For additional comments, see Remark 2.1 and Section 3.1, where it is shown that existence for arbitrarily small times T may even fail for linear problems when h is not regularizing.
Our existence result can be applied to very general MFG models. The existence of smooth solutions for systems of the form (1.2) has been explored in several works, see e.g. [5,7,13,14,15,19,21] and references therein. Existence for arbitrarily large time horizons T typically requires assumptions on the behaviour of H at infinity, that are crucial to obtain a priori estimates. Our result is for short-time horizons, but just requires enough local regularity of H: we have basically no restrictions on the behaviour of H when its entries are large. We are in particular interested in MFG models with congestion, that are particularly delicate due to the presence of a singular Hamiltonian H. Short time existence has been discussed in [10], [16] under suitable growth assumptions on H, relying on the peculiar MFG structure. For a detailed description and derivation of MFG systems, additional references and the statements of our results on congestion problems, see Section 4.
The paper is organized as follows: in Section 2 we state the assumptions and we present some preliminary results. In Section 3 we give the proof of the main theorem. We also give a counterexample for a very simple linear system where the final condition is of local (non-regularizing) type. In Section 4 we apply the result to prove short time existence of a solution to some general classes of MFGs. In the Appendix we give the proof of the classical estimate in the periodic setting, stated in Section 2, that is used extensively.
Notations: For any non-negative real number r ≥ 0 and q ≥ 1, we will denote by W^r_q(T^N) the (fractional) Sobolev-Slobodeckij space of periodic functions (see [18, p. 70] for its definition); we will denote by ‖u‖^{(r)}_{q,T^N} its norm. Note that when r ∈ ℕ, W^r_q(T^N) is a standard Sobolev space. For any positive integer m, W^{2m,m}_q(Q_T), with norm ‖u‖^{(2m)}_{q,Q_T}, will be the usual parabolic Sobolev space (see [18, p. 5]). W^{1,0}_p(Q_T), with norm ‖u‖^{(1)}_{p,Q_T}, will be the space of functions in L^p(Q_T) with weak derivatives in the x-variable in L^p(Q_T). For any real and non-integer number r > 0, C^{r,r/2}(Q_T), with norm |u|^{(r)}_{Q_T}, will be the standard parabolic Hölder space (see [18, p. 7], where the alternative notation H^{r,r/2} is used). Note that here we mean that the regularity is up to the parabolic boundary, hence, since the spatial variable varies in the torus, up to t = 0. Finally, C^{1,0}(Q_T), with norm |u|^{(1)}_{Q_T}, will be the space of continuous functions on Q_T with continuous derivatives in the x-variable, up to t = 0 as for the Hölder spaces. We denote by ‖·‖_∞ the ∞-norm. S^N denotes the space of symmetric matrices of order N.
Setting of the problem and preliminary results
In this section we state our standing assumptions and we write some useful lemmata and propositions. Throughout the paper we assume:
(A1) a_{ij}(x, t) and c_{ij}(x, t) are continuous functions on Q_T.
(A2) F(a, b, p, q, x, t) : ℝ × ℝ⁺ × ℝ^N × ℝ^N × Q_T → ℝ is such that for all M > 0 there exists L_F(M) > 0 (L_F(M) is an increasing function of M, bounded for bounded values of M) such that

|F(a₁, b₁, p₁, q₁, x, t)| ≤ L_F(M),
|F(a₁, b₁, p₁, q₁, x, t) − F(a₂, b₂, p₂, q₂, x, t)| ≤ L_F(M)(|a₁ − a₂| + |b₁ − b₂| + |p₁ − p₂| + |q₁ − q₂|),

for all |aᵢ|, |bᵢ|, |bᵢ|⁻¹, |pᵢ|, |qᵢ| ≤ 2M, i = 1, 2, and all (x, t) ∈ Q_T.

(A3) G(a, b, p, q, H, x, t) : ℝ × ℝ⁺ × ℝ^N × ℝ^N × S^N × Q_T → ℝ is such that for all M > 0 there exists L_G(M) > 0 (L_G(M) is an increasing function of M, bounded for bounded values of M) such that

|G(a₁, b₁, p₁, q₁, H₁, x, t)| ≤ L_G(M)(1 + |H₁|),
|G(a₁, b₁, p₁, q₁, H₁, x, t) − G(a₂, b₂, p₂, q₂, H₂, x, t)| ≤ L_G(M)(|a₁ − a₂| + |b₁ − b₂| + |p₁ − p₂| + |q₁ − q₂|)(1 + |H₁|) + L_G(M)|H₁ − H₂|,

for all |aᵢ|, |bᵢ|, |bᵢ|⁻¹, |pᵢ|, |qᵢ| ≤ 2M, i = 1, 2, and all Hᵢ ∈ S^N, (x, t) ∈ Q_T.

(A4) h : C¹(T^N) → C²(T^N), and there exists L_h > 0 such that |h[m₁] − h[m₂]|^{(2)}_{T^N} ≤ L_h |m₁ − m₂|^{(1)}_{T^N}.

(A5) m₀ ∈ W²_∞(T^N) and m₀ ≥ δ > 0.

Before we prove the theorem, some remarks on the assumptions and useful preliminary lemmata are in order.
Remark 2.1. First, note that (A2) and (A3) require F and G to be bounded with respect to x, t and locally Lipschitz continuous with respect to a, b, p, q. Note that for G we need a linear dependence on H, this is a natural assumption for the models we have in mind. Moreover, G is required to be globally Lipschitz continuous with respect to H, that corresponds to the entry of the second order term D 2 u.
By (A4), h should be a regularizing function of m. Such a gain of regularity holds for example when one considers h of the form h[m] = h₀(m ⋆ ψ), where h₀ is a twice differentiable function and ψ is a smoothing kernel. Another example is to consider a constant function of m, namely h[m] = u_T, where u_T ∈ C²(T^N). The gain of regularity of h is crucial in our fixed point method. Without this assumption, say if h[m](x) = h₀(m(x)), the argument would need additional smallness of other data. In this case, as we will see in Section 3.1, existence for arbitrary small times T may even fail for linear problems.
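To see quantitatively why a convolution-type h satisfies (A4), here is a small numerical sketch (ours, not from the paper). On the discrete torus, derivatives of m ⋆ ψ fall on the smooth kernel, D^k(m ⋆ ψ) = m ⋆ D^kψ, so by Young's inequality the C²-norm of h[m₁] − h[m₂] is controlled by the sup-norm of m₁ − m₂ with Lipschitz constant L_h = max_{k≤2} ‖D^kψ‖_{L¹}:

```python
# Numerical illustration of the regularising estimate (A4) for h[m] = m * psi
# on the 1D flat torus, using FFT-based periodic convolution and derivatives.
import numpy as np

n = 256
x = np.linspace(0.0, 1.0, n, endpoint=False)   # flat torus T^1
dx = 1.0 / n

psi = np.exp(np.cos(2 * np.pi * x))            # smooth periodic kernel
psi /= psi.sum() * dx                          # normalise: ||psi||_{L^1} = 1

def conv(m):                                   # periodic convolution m * psi
    return np.real(np.fft.ifft(np.fft.fft(m) * np.fft.fft(psi))) * dx

def d(f):                                      # spectral derivative on the torus
    k = 2j * np.pi * np.fft.fftfreq(n, d=dx)
    return np.real(np.fft.ifft(k * np.fft.fft(f)))

def c2_norm(f):
    return max(np.abs(f).max(), np.abs(d(f)).max(), np.abs(d(d(f))).max())

# L_h = max over k <= 2 of ||D^k psi||_{L^1}
L_h = max(np.abs(psi).sum() * dx,
          np.abs(d(psi)).sum() * dx,
          np.abs(d(d(psi))).sum() * dx)

rng = np.random.default_rng(0)
for _ in range(5):
    m1, m2 = rng.standard_normal(n), rng.standard_normal(n)  # rough data
    lhs = c2_norm(conv(m1) - conv(m2))
    rhs = L_h * np.abs(m1 - m2).max()
    assert lhs <= rhs * (1 + 1e-8)             # |h[m1]-h[m2]|_{C^2} <= L_h |m1-m2|_{C^0}
print("Lipschitz constant L_h ~", round(L_h, 3))
```

The estimate holds for arbitrarily rough m₁, m₂, which is exactly the two-derivative gain that (A4) asks of h.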
Lemma 2.2. There exists C₀ > 0 such that

(2.3)  |h[m]|^{(2)}_{T^N} ≤ L_h |m|^{(1)}_{T^N} + C₀.

Proof. Since |h[m₁] − h[m₀]|^{(2)}_{T^N} ≥ |h[m₁]|^{(2)}_{T^N} − |h[m₀]|^{(2)}_{T^N}, from (A4) and (A5) we get

|h[m₁]|^{(2)}_{T^N} ≤ L_h |m₁ − m₀|^{(1)}_{T^N} + |h[m₀]|^{(2)}_{T^N} ≤ L_h |m₁|^{(1)}_{T^N} + L_h |m₀|^{(1)}_{T^N} + |h[m₀]|^{(2)}_{T^N}.  ✷

Lemma 2.3. Let α ∈ (0, 1). For any f ∈ C^{1+α,(1+α)/2}(Q_T),

(2.4)  |f|^{(1)}_{Q_T} ≤ |f(·, 0)|^{(1)}_{T^N} + T^{α/2} |f|^{(1+α)}_{Q_T}.

Proof. Follows immediately from the definition of |f|^{(1+α)}_{Q_T}.  ✷

Lemma 2.4. Let q ≥ 2 and f ∈ L^q(Q_T) be such that ‖f‖_{q,Q_T} ≤ C. Then, for p = q/2,

(2.5)  ‖f‖_{p,Q_T} ≤ C T^{1/(2p)}.

Let f ∈ W^{2,1}_q(Q_T) be such that ‖f‖^{(2)}_{q,Q_T} ≤ C. Then, for p = q/2,

(2.6)  ‖f‖^{(2)}_{p,Q_T} ≤ C T^{1/(2p)}.
Proof. We prove (2.5); (2.6) is analogous. By the Hölder inequality applied to |f|^p, for any r > 1 (take r′ such that 1/r + 1/r′ = 1) we have

‖f‖_{p,Q_T} = (∫_{Q_T} |f|^p dx dt)^{1/p} ≤ (∫_{Q_T} |f|^{pr} dx dt)^{1/(pr)} (∫_{Q_T} dx dt)^{1/(pr′)} = ‖f‖_{pr,Q_T} (∫_{Q_T} dx dt)^{(r−1)/(pr)} = ‖f‖_{pr,Q_T} (|T^N| T)^{(r−1)/(pr)} ≤ C ‖f‖_{pr,Q_T} T^{(r−1)/(pr)}.

Choosing r such that q = rp, we have

(2.7)  ‖f‖_{p,Q_T} ≤ C T^{(q−p)/(pq)}, with q > p.

Taking r = 2, i.e. q = 2p, we have the result. ✷
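The inequality (2.7) is elementary enough to check numerically. The following sketch (ours) verifies the discrete analogue ‖f‖_p ≤ ‖f‖_{2p} |Q_T|^{1/(2p)} on random data, with |T^N| = 1 so that |Q_T| = T; the exponent p, the horizon T and the sample size are arbitrary illustrative choices:

```python
# Sanity check of the interpolation used in Lemma 2.4: on a finite-measure
# domain, ||f||_p <= ||f||_{2p} * |Q_T|^{1/(2p)} (Hoelder with r = 2).
import numpy as np

rng = np.random.default_rng(1)

def lp_norm(f, p, vol):
    # discrete L^p norm: (mean(|f|^p) * vol)^(1/p)
    return (np.mean(np.abs(f) ** p) * vol) ** (1.0 / p)

p, T = 3.0, 0.37                   # arbitrary exponent and small horizon
vol = 1.0 * T                      # |Q_T| = |T^N| * T with |T^N| = 1
f = rng.standard_normal(200_000)   # samples of an L^{2p} function on Q_T

lhs = lp_norm(f, p, vol)
rhs = lp_norm(f, 2 * p, vol) * vol ** (1.0 / (2 * p))
assert lhs <= rhs * (1 + 1e-12)
print(lhs, "<=", rhs)
```

The factor T^{1/(2p)} on the right is what makes the norm of the lower-exponent space small for short horizons, and this smallness is the engine of the contraction argument below.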
We recall now the following embedding proposition proved by R. Gianni in [9]. Observe that the constant M remains bounded for bounded values of T, while in the estimate of the Corollary at p. 342 of [18] it blows up as T tends to zero.

Proposition 2.5 (Inequality (2.21) of [9]). Let f ∈ W^{2,1}_p(Q_T). Then,

(2.8)  |f|^{(2−(N+2)/p)}_{Q_T} ≤ M(‖f‖^{(2)}_{p,Q_T} + ‖f(x, 0)‖^{(2−2/p)}_{p,T^N}),  p > (N+2)/2, p ≠ N+2,

where M remains bounded for bounded values of T.
In the following proposition we state some regularity results for linear parabolic equations in the flat torus. Such results are classical and well-known for equations on cylinders with boundary conditions (see [18]); for the convenience of the reader, we show that they hold true also for equations that are set in the domain Q_T = T^N × (0, T), and basically follow from local parabolic regularity. Let

(2.9)  Lu := u_t − Σ_{ij} a_{ij}(x, t) u_{x_i x_j} + Σ_i a_i(x, t) u_{x_i} + a(x, t) u = f(x, t) in Q_T,  u(x, 0) = u₀(x) in T^N.

H1) Suppose that the functions a_{ij}, a_i, a, f belong to C^{α,α/2}(Q_T) and u₀(x) ∈ C^{2+α}(T^N).

H2) Suppose that the functions a_{ij}, a_i, a are continuous functions in Q_T, f ∈ L^q(Q_T) and u₀(x) ∈ W^{2−2/q}_q(T^N) with q > 3/2.

Proposition 2.6. Under assumptions H1) there exists a unique solution u ∈ C^{2+α,1+α/2}(Q_T) of problem (2.9) and the following estimate holds:

(2.10)  |u|^{(α+2)}_{Q_T} ≤ C₁(|f|^{(α)}_{Q_T} + |u₀|^{(α+2)}_{T^N}),

where the constant C₁ depends only on the norms of the coefficients a_{ij}, a_i, a specified in H1), on N, α and T, and remains bounded for bounded values of T. Under assumptions H2) there exists a unique solution u ∈ W^{2,1}_q(Q_T) of problem (2.9) and the following estimate holds:

(2.11)  ‖u‖^{(2)}_{q,Q_T} ≤ C₂(‖f‖_{q,Q_T} + ‖u₀‖^{(2−2/q)}_{q,T^N}),

where the constant C₂ depends only on the norms of the coefficients a_{ij}, a_i, a specified in H2), on N, q and T, and remains bounded for bounded values of T.

Proof. The proof is given in Appendix A. ✷
The existence theorem
In this section we prove Theorem 1.1. At the end of the section we give a simple counterexample where existence may fail.
Proof of Theorem 1.1.
Step 1: Lipschitz regularization of F, G. Let K > 0 be large enough, so that

(3.12)  K ≥ max{2|m₀|^{(1)}_{T^N}, 2(L_h |m₀|^{(1)}_{T^N} + C₀), 2/δ},

where δ, L_h, C₀ are as in (A4), (A5) and (2.3). Let ϕ, φ̃ : ℝ → ℝ be globally Lipschitz functions such that ϕ(x) = x for all x ∈ [1/K, K], ϕ(x) ∈ [1/(2K), 2K] for all x ∈ ℝ, and φ̃(x) = x for all x ∈ [−K, K], φ̃(x) ∈ [−2K, 2K] for all x ∈ ℝ. Similarly, let ψ : ℝ^N → ℝ^N be a globally Lipschitz function such that ψ(p) = p for all |p| ≤ K and |ψ(p)| ≤ 2K for all p ∈ ℝ^N. We will construct a solution to (1.1) with F, G replaced by F̃, G̃ defined as follows:

F̃(a, b, p, q, x, t) = F(φ̃(a), ϕ(b), ψ(p), ψ(q), x, t),
G̃(a, b, p, q, H, x, t) = G(φ̃(a), ϕ(b), ψ(p), ψ(q), H, x, t).
Note that by (A3), G̃ satisfies

(3.13)  |G̃(a₁, b₁, p₁, q₁, H₁, x, t) − G̃(a₂, b₂, p₂, q₂, H₂, x, t)| ≤ L_G(|a₁ − a₂| + |b₁ − b₂| + |p₁ − p₂| + |q₁ − q₂|)(1 + |H₁|) + L_G |H₁ − H₂|

for all aᵢ, bᵢ, pᵢ, qᵢ, Hᵢ, x, t (possibly with a constant L_G that is larger than the one in (A3)). Moreover, again by (A3) and the fact that |φ̃|, |ϕ|, |ψ| are bounded by 2K, we have for some L(K) > 0

(3.14)  |G̃(a, b, p, q, H, x, t)| ≤ L(K)(|H| + 1)

for all a, b, p, q, H, x, t. Analogous bounds hold also for F̃ by (A2).
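A concrete choice of the truncations (ours; the paper only requires the listed properties) is plain clipping, which is globally 1-Lipschitz, equals the identity on the prescribed set, and takes values in the slightly larger set:

```python
# Explicit truncations phi, phi~, psi from Step 1 realised by clipping.
import numpy as np

K = 4.0

def phi(x):        # phi: identity on [1/K, K], range within [1/(2K), 2K]
    return np.clip(x, 1.0 / K, K)

def phi_tilde(x):  # phi~: identity on [-K, K], range within [-2K, 2K]
    return np.clip(x, -K, K)

def psi(p):        # psi: identity for |p| <= K, |psi(p)| <= 2K (radial clip)
    p = np.asarray(p, dtype=float)
    r = np.linalg.norm(p)
    return p if r <= K else p * (K / r)

# identity on the prescribed sets
assert phi(1.7) == 1.7 and phi_tilde(-3.0) == -3.0
assert np.allclose(psi([1.0, 2.0]), [1.0, 2.0])
# range constraints
assert 1 / (2 * K) <= phi(-5.0) <= 2 * K and abs(phi_tilde(100.0)) <= 2 * K
assert np.linalg.norm(psi([30.0, 40.0])) <= 2 * K
# global Lipschitz constant 1 (spot check on random pairs)
x, y = np.random.default_rng(2).standard_normal((2, 1000)) * 10
assert np.all(np.abs(phi(x) - phi(y)) <= np.abs(x - y) + 1e-12)
print("truncations verified for K =", K)
```

Any globally Lipschitz maps with these identity and range properties would do; clipping is simply the most economical choice.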
Step 2: fixed point set-up. Let us define the space (see also Remark 3.3)

X^T_M = {(u, m) : u ∈ W^{2,1}_p(Q_T) ∩ C^{1,0}(Q_T), m ∈ C^{1,0}(Q_T), ‖u‖^{(2)}_{p,Q_T} + |u|^{(1)}_{Q_T} + |m|^{(1)}_{Q_T} ≤ M, p > N + 2}.
Define now the operator T on X^T_M in the following way: T(û, m̂) = (u, m), where (u, m) is the solution of the following problems:

(3.15)  m_t − c_{ij}(x, t) m_{x_i x_j} + G̃(û, m̂, Dû, Dm̂, D²û, x, t) = 0, in Q_T,  m(x, 0) = m₀(x), in T^N;

(3.16)  −u_t − a_{ij}(x, t) u_{x_i x_j} + F̃(û, m, Dû, Dm, x, t) = 0, in Q_T,  u(x, T) = h(m(x, T), x), in T^N.

We aim at showing that T is a contraction on X^T_M for suitable M and small T.
Step 3: T maps X^T_M into itself, that is, T(û, m̂) = (u, m) ∈ X^T_M for any (û, m̂) ∈ X^T_M. Denote by u*(x, T − t) := u(x, t). The couple of functions (m, u*) solves (3.15) and

(3.17)  u*_t − a_{ij}(x, T − t) u*_{x_i x_j} + F̃(û(x, T − t), m(x, T − t), Dû(x, T − t), Dm(x, T − t), x, T − t) = 0, in Q_T,  u*(x, 0) = h(m(T, x), x), in T^N.
The initial condition for u*(x, 0) depends on m(T, x), which is well defined thanks to the regularity of m(t, x) obtained below. Note that problem (3.15) is well posed, namely there exists a unique solution m(x, t) such that m ∈ W^{2,1}_p(Q_T) (see [18, Theorem 9.1 p. 341]). Moreover, since G̃ satisfies (3.14) and ‖û‖^{(2)}_{p,Q_T} ≤ M, the term G̃(û, m̂, Dû, Dm̂, D²û, x, t) satisfies ‖G̃‖_{p,Q_T} ≤ L(K)(M + 1). We can therefore apply Proposition 2.6 to (3.15) to get

(3.18)  ‖m‖^{(2)}_{p,Q_T} ≤ C(L(K)(M + 1) + ‖m₀‖^{(2−2/p)}_{p,T^N}) ≤ C(M)

(in what follows, we will not make the dependence of constants on K explicit). Hence, by the embedding (2.8), we have the following inequality:

(3.19)  |m|^{(2−(N+2)/p)}_{Q_T} ≤ C(‖m‖^{(2)}_{p,Q_T} + ‖m₀‖^{(2−2/p)}_{p,T^N}),  p > (N+2)/2, p ≠ N+2,

where C is bounded for bounded values of T. Hence, from (3.18) and (A5),

(3.20)  |m|^{(2−(N+2)/p)}_{Q_T} ≤ C(M),  p > (N+2)/2, p ≠ N+2.

Since p > N+2, i.e. 2 − (N+2)/p > 1, (2.4) easily yields

(3.21)  |m|^{(1)}_{Q_T} ≤ |m₀|^{(1)}_{T^N} + T^{1/2−(N+2)/(2p)} C(M).

In particular, note that, from (3.20), the trace m(x, T) is well defined and

(3.22)  |m(x, T)|^{(1)}_{T^N} ≤ C(M).

We now pass to study the well posedness of problem (3.17) and the regularity of its solution u*. From estimate (3.22) and the regularising assumptions (A4) and (2.3) on h, the initial condition u*(x, 0) = h(m(T, x), x) is well defined. In turn, when m(x, t) is assigned with the regularity found above (see (3.20)), problem (3.17) admits a solution u* by boundedness of F̃. From the initial condition for u* and (2.3),

(3.23)  |u*(x, 0)|^{(2)}_{T^N} ≤ L_h |m(x, T)|^{(1)}_{T^N} + C₀.
By (3.21),

(3.24)  |u*(x, 0)|^{(2)}_{T^N} ≤ L_h |m₀|^{(1)}_{T^N} + L_h C(M) T^{1/2−(N+2)/(2p)} + C₀ ≤ C(M).

In particular, taking into account again that T^N is bounded, for any q > 1 and some constant C we have

(3.25)  ‖u*(x, 0)‖^{(2−2/q)}_{q,T^N} ≤ C ‖u*(x, 0)‖^{(2)}_{q,T^N} ≤ C |u*(x, 0)|^{(2)}_{T^N} ≤ C(M).

We now study the regularity of u*. Since the estimate in Proposition 2.6 is valid for any q, we obtain, because of the boundedness of F̃, (3.21) and (3.25),

(3.26)  ‖u*‖^{(2)}_{q,Q_T} ≤ C(M, q), for any q.
Applying (2.6) of Lemma 2.4 we get

(3.27)  ‖u*‖^{(2)}_{p,Q_T} ≤ C(M, 2p) T^{1/(2p)}.

Hence, using again the embedding (2.8) and (3.27), we obtain

(3.28)  |u*|^{(2−(N+2)/p)}_{Q_T} ≤ C(‖u*‖^{(2)}_{p,Q_T} + ‖u*(x, 0)‖^{(2−2/p)}_{p,T^N}) ≤ C(M, p).

Therefore, using (2.4) of Lemma 2.3 and taking into account (3.28), we have

(3.29)  |u*|^{(1)}_{Q_T} ≤ |u*(x, 0)|^{(1)}_{T^N} + C(M, p) T^{1/2−(N+2)/(2p)}.

At this point, using estimate (3.24) we obtain

(3.30)  |u*|^{(1)}_{Q_T} ≤ L_h |m₀|^{(1)}_{T^N} + C₀ + C(M, p) T^{1/2−(N+2)/(2p)}.
Now we can easily see that (3.27) and (3.30), together with (3.21), allow us to prove that T maps X^T_M into itself. Indeed,

(3.31)  ‖u*‖^{(2)}_{p,Q_T} + |u*|^{(1)}_{Q_T} + |m|^{(1)}_{Q_T} ≤ C(M, 2p) T^{1/(2p)} + (L_h + 1)|m₀|^{(1)}_{T^N} + C₀ + C₁(M, p) T^{1/2−(N+2)/(2p)}.

At this point we choose M₁ := 3((L_h + 1)|m₀|^{(1)}_{T^N} + C₀) and we take T sufficiently small that C(M₁, 2p) T^{1/(2p)} ≤ M₁/3 and C₁(M₁, p) T^{1/2−(N+2)/(2p)} ≤ M₁/3, thus obtaining

‖u*‖^{(2)}_{p,Q_T} + |u*|^{(1)}_{Q_T} + |m|^{(1)}_{Q_T} ≤ M₁,

that is, T : X^T_{M₁} → X^T_{M₁} for all T sufficiently small.
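The smallness-of-T mechanism behind Steps 3 and 4 can be seen numerically on a toy problem. The sketch below is entirely ours: the equations, grid sizes and the smoothing used for the final datum are illustrative choices, not the paper's. It iterates the analogue of the map T for a linear 1D backward-forward system on the torus: given u, solve m forward from m₀; then solve u backward from a mollified final datum playing the role of the regularising h[m(T)]. For a short horizon the sup-norm distance between successive iterates decays geometrically, i.e. the map is a contraction:

```python
# Toy backward-forward system on the 1D torus (our construction):
#   m_t = nu m_xx - u,        m(0) = m0            (forward)
#  -u_t = nu u_xx - m,        u(T) = smooth(m(T))  (backward)
# Picard iteration: u^{k+1} = (backward solve) applied to m[u^k].
import numpy as np

n, nu, T, nt = 64, 0.05, 0.05, 25
dx, dt = 1.0 / n, T / nt
assert nu * dt / dx**2 <= 0.5          # explicit Euler stability

x = np.arange(n) * dx
m0 = 1.0 + 0.5 * np.cos(2 * np.pi * x)

def lap(w):                             # periodic discrete Laplacian
    return (np.roll(w, 1) - 2 * w + np.roll(w, -1)) / dx**2

def smooth(w):                          # stand-in for the regularising h[m(T)]
    for _ in range(10):
        w = 0.25 * np.roll(w, 1) + 0.5 * w + 0.25 * np.roll(w, -1)
    return w

def heat(src, init):
    """Solve w_t = nu w_xx + src(t) by explicit Euler; return all time levels."""
    w = np.empty((nt + 1, n))
    w[0] = init
    for k in range(nt):
        w[k + 1] = w[k] + dt * (nu * lap(w[k]) + src[k])
    return w

u = np.zeros((nt + 1, n))               # initial guess
errs = []
for _ in range(8):
    m = heat(-u, m0)                    # forward equation for m
    v = heat(-m[::-1], smooth(m[-1]))   # backward equation, time reversed
    u_new = v[::-1]                     # u(t) = v(T - t)
    errs.append(np.abs(u_new - u).max())
    u = u_new

print("successive sup-norm gaps:", ["%.1e" % e for e in errs])
```

With T = 0.05 the gaps shrink by a large factor at every sweep; increasing T, or removing the smoothing of the final datum (cf. Section 3.1), degrades or destroys the contraction.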
Step 4: T : X^T_{M₁} → X^T_{M₁} is a contraction operator. Let (ûᵢ, m̂ᵢ) ∈ X^T_{M₁}, i = 1, 2, and denote T(û₁, m̂₁) =: (u₁, m₁), T(û₂, m̂₂) =: (u₂, m₂). We have to prove that

‖u₁ − u₂‖^{(2)}_{p,Q_T} + |u₁ − u₂|^{(1)}_{Q_T} + |m₁ − m₂|^{(1)}_{Q_T} ≤ γ (‖û₁ − û₂‖^{(2)}_{p,Q_T} + |û₁ − û₂|^{(1)}_{Q_T} + |m̂₁ − m̂₂|^{(1)}_{Q_T})

with γ < 1 for T sufficiently small. Set Û := û₁ − û₂, M̂ := m̂₁ − m̂₂, M := m₁ − m₂ and U*(x, t) := u₁(x, T − t) − u₂(x, T − t). Then U* solves

(3.32)  U*_t − a_{ij}(x, t) U*_{x_i x_j} + F̃(û₁(x, T − t), m₁(x, T − t), Dû₁(x, T − t), Dm₁(x, T − t), x, T − t) − F̃(û₂(x, T − t), m₂(x, T − t), Dû₂(x, T − t), Dm₂(x, T − t), x, T − t) = 0, in Q_T,  U*(x, 0) = h(m₁(T, x), x) − h(m₂(T, x), x), in T^N.
Since ûᵢ and m̂ᵢ, i = 1, 2, belong to X^T_{M₁}, we follow the same procedure as in Step 3. First, note that G̃ satisfies (3.13), so

(3.33)  ‖G̃(û₁, m̂₁, Dû₁, Dm̂₁, D²û₁) − G̃(û₂, m̂₂, Dû₂, Dm̂₂, D²û₂)‖_{p,Q_T} ≤ L_G (|Û|^{(1)}_{Q_T} + |M̂|^{(1)}_{Q_T})(1 + ‖D²û₁‖_{p,Q_T}) + L_G ‖Û‖^{(2)}_{p,Q_T} ≤ C (|Û|^{(1)}_{Q_T} + ‖Û‖^{(2)}_{p,Q_T} + |M̂|^{(1)}_{Q_T}).

Therefore, by Proposition 2.6 and M(x, 0) = 0,

(3.34)  ‖M‖^{(2)}_{p,Q_T} ≤ C (|Û|^{(1)}_{Q_T} + ‖Û‖^{(2)}_{p,Q_T} + |M̂|^{(1)}_{Q_T}).

Hence, from (3.34), (2.4) and the embedding (2.8),

(3.35)  |M|^{(1)}_{Q_T} ≤ C(p) (|Û|^{(1)}_{Q_T} + ‖Û‖^{(2)}_{p,Q_T} + |M̂|^{(1)}_{Q_T}) T^{1/2−(N+2)/(2p)}.
As far as U* is concerned, from Proposition 2.6,

(3.36)  ‖U*‖^{(2)}_{q,Q_T} ≤ C(q)(‖Û‖^{(1)}_{q,Q_T} + ‖M‖^{(1)}_{q,Q_T} + ‖U*(x, 0)‖^{(2−2/q)}_{q,T^N}) for any q.

Note that, from assumption (A4) and using the boundedness of T^N, for some constant C we have

(3.37)  ‖U*(x, 0)‖^{(2−2/q)}_{q,T^N} ≤ C ‖U*(x, 0)‖^{(2)}_{q,T^N} = C ‖h[m₁(T)] − h[m₂(T)]‖^{(2)}_{q,T^N} ≤ C |h[m₁(T)] − h[m₂(T)]|^{(2)}_{T^N} ≤ C L_h |M(T, x)|^{(1)}_{T^N}.

Hence, from (3.37), (3.36) becomes

(3.38)  ‖U*‖^{(2)}_{q,Q_T} ≤ C(q)(‖Û‖^{(1)}_{q,Q_T} + ‖M‖^{(1)}_{q,Q_T} + |M(T, x)|^{(1)}_{T^N}).

Then, in view of (3.35) and the boundedness of Q_T,

(3.39)  ‖U*‖^{(2)}_{q,Q_T} ≤ C(q)(|Û|^{(1)}_{Q_T} + ‖Û‖^{(2)}_{p,Q_T} + |M̂|^{(1)}_{Q_T}), for any q.
From (2.6) of Lemma 2.4 we obtain

(3.40)  ‖U*‖^{(2)}_{p,Q_T} ≤ C(p)(|Û|^{(1)}_{Q_T} + ‖Û‖^{(2)}_{p,Q_T} + |M̂|^{(1)}_{Q_T}) T^{1/(2p)}.

From the embedding result (2.8) (see also (3.29)), we have

(3.41)  |U*|^{(1)}_{Q_T} ≤ |U*|^{(2−(N+2)/p)}_{Q_T} ≤ C₁(p)(‖U*‖^{(2)}_{p,Q_T} + ‖U*(x, 0)‖^{(2−2/p)}_{p,T^N}) ≤ C(p)(|Û|^{(1)}_{Q_T} + ‖Û‖^{(2)}_{p,Q_T} + |M̂|^{(1)}_{Q_T})(T^{1/(2p)} + T^{1/2−(N+2)/(2p)}),

where the last inequality comes from (3.40), (3.37) and (3.35). At this point, taking into account (3.40), (3.41) and (3.35), for T sufficiently small we have proved that the operator T is a contraction. The fixed point (u*(T − t), m) is a solution to (1.1) with F, G replaced by F̃, G̃, with the required regularity.
Step 5: back to the initial problem. Note that $\tilde F$ and $\tilde G$ coincide with $F$ and $G$ respectively whenever the fixed point $(u, m)$ satisfies $m(x,t) \in [1/K, K]$, $u(x,t) \in [-K, K]$ and $|Du(x,t)|, |Dm(x,t)| \le K$ on $Q_T$. This is true if $T$ is sufficiently small. Indeed, by (3.30) and the choice (3.12) of $K$ one has

$|u|^{(1)}_{Q_T} = |u^*|^{(1)}_{Q_T} \le L_h|m_0|^{(1)}_{\mathbb{T}^N} + C_0 + C(M,p)\,T^{\frac12-\frac{N+2}{2p}} \le K,$

while by (3.21),

$|m|^{(1)}_{Q_T} \le |m_0|^{(1)}_{\mathbb{T}^N} + T^{\frac12-\frac{N+2}{2p}}C(M) \le K.$
Finally, by (3.20) and (A4),
$\min_{Q_T} m \ge \min_{\mathbb{T}^N} m(x,0) - |m|^{(2-\frac{N+2}{p})}_{Q_T}\,T^{\frac12-\frac{N+2}{2p}} \ge \delta - C(M)\,T^{\frac12-\frac{N+2}{2p}} \ge \frac{1}{K},$
that yields the desired result. ✷

Remark 3.1. From (3.18), $m \in W^{2,1}_p(Q_T)$ with $p > N+2$. Hence the function found above is locally unique in the following sense: for any $M > 0$ sufficiently large, there exists $T_M$ such that for any $T < T_M$ there exists a unique solution $(u, m) \in X^T_M$ of (1.1).

Remark 3.2. If we assume also that $a_{ij}$, $c_{ij}$, $F$ and $G$ are Hölder continuous with respect to $x, t$, if $m_0 \in C^{2+\alpha}(\mathbb{T}^N)$ and $h$ takes its values in $C^{2+\alpha}(\mathbb{T}^N)$, then the solution of Theorem 1.1 will belong to $C^{2+\alpha,(2+\alpha)/2}(Q_T)$.

Remark 3.3. In the definition of the space $X^T_M$ we take $u$ belonging both to $W^{2,1}_p(Q_T)$ and $C^{1,0}(Q_T)$. This may appear unnecessary, since $W^{2,1}_p(Q_T)$ is continuously embedded in $C^{1,0}(Q_T)$. The crucial point is that this embedding depends on $T$; to rule out this dependence one has to make the initial datum explicit (see in particular (2.8)). Therefore, to simplify the argument a bit, we preferred to control the two norms of $u$ separately.
Non-regularizing h: a counterexample to short-time existence
As mentioned in Remark 2.1, it is crucial for our fixed point method that $h$, in the final condition $u(x,T) = h[m(T)]$, be a regularising function of $m$. We will show in the sequel that without this assumption, existence may fail even for linear problems and for arbitrarily small times $T$. Let us consider the following linear parabolic backward-forward system, with $\alpha \in \mathbb{R}$ to be chosen:
(3.42) $-u_t - \Delta u = 0$ in $\mathbb{T}^N\times(0,T)$; $\quad m_t - \Delta m = \Delta u$ in $\mathbb{T}^N\times(0,T)$; $\quad u(x,T) = \alpha\, m(x,T)$, $m(x,0) = m_0(x)$ in $\mathbb{T}^N$.
Here, $h[m](x) = \alpha m(x)$, and clearly $h[m]$ has the same regularity as $m$; thus, $h$ does not satisfy (A4). We claim that for all $\alpha < -2$, there exist smooth initial data $m_0$ and a sequence $T_k \to 0$ such that (3.42) is not solvable on $[0, T_k]$.
Suppose that, for some T > 0, there exists a solution (u, m) to (3.42). Let λ k and φ k (x), k ≥ 0 be the eigenvalues and eigenfunctions of the Laplace operator on T N , i.e.
$-\Delta\phi_k = \lambda_k\phi_k$, $\phi_k \in C^\infty(\mathbb{T}^N)$, $k \ge 0$. Let $m_k(t) = \int_{\mathbb{T}^N} m(x,t)\phi_k(x)\,dx$, $u_k(t) = \int_{\mathbb{T}^N} u(x,t)\phi_k(x)\,dx$, and $m_{0k} = \int_{\mathbb{T}^N} m_0(x)\phi_k(x)\,dx$. We can represent $(u, m)$ by $m(x,t) = \sum_{k=0}^{+\infty} m_k(t)\phi_k(x)$, $u(x,t) = \sum_{k=0}^{+\infty} u_k(t)\phi_k(x)$,
and m k and u k satisfy
(3.43) $-u_k' + \lambda_k u_k = 0$ in $(0,T)$; $\quad m_k' + \lambda_k m_k = -\lambda_k u_k$ in $(0,T)$; $\quad u_k(T) = \alpha\, m_k(T)$, $m_k(0) = m_{0k}$.
We will suppose that the coefficients of the initial datum satisfy $m_{0k} \ne 0$ for all $k$ (this is compatible with smoothness of $m_0$ as soon as $m_{0k}$ vanishes sufficiently fast as $k \to \infty$).
Differentiating the second equation and taking into account the first one in (3.43), we get
(3.44) $m_k'' - \lambda_k^2 m_k = 0$ in $(0,T)$, $\quad m_k(0) = m_{0k}$, $\quad \lambda_k(\alpha+1)\,m_k(T) = -m_k'(T)$.
Solving (3.44) we obtain that
(3.45) $m_k(t) = A_k\sinh(\lambda_k t) + B_k\cosh(\lambda_k t)$ for some $A_k, B_k \in \mathbb{R}$, where $B_k = m_{0k} \ne 0$.
If $(\alpha+1)\sinh(\lambda_k T) + \cosh(\lambda_k T) \ne 0$, then $A_k$ is uniquely determined, i.e.
$A_k = -B_k\,\dfrac{(\alpha+1)\cosh(\lambda_k T) + \sinh(\lambda_k T)}{(\alpha+1)\sinh(\lambda_k T) + \cosh(\lambda_k T)}.$
Note that if α < −2, (α + 1) sinh(λ k T ) + cosh(λ k T ) vanishes for positive values of T , and in particular when T coincides with some
$T_k := \dfrac{1}{\lambda_k}\tanh^{-1}\left(-\dfrac{1}{\alpha+1}\right).$
In that case we reach a contradiction: since $(\alpha+1)\cosh(\lambda_k T_k) + \sinh(\lambda_k T_k) \ne 0$ while the denominator vanishes, $A_k$ cannot be determined. Since (3.45) then cannot match the boundary conditions, $m$ cannot exist. Finally, by the fact that $\lambda_k \to +\infty$ as $k \to +\infty$, we have $T_k \to 0$; hence a short-time existence result (as stated in Theorem 1.1) cannot hold: for any $T$ there exists $T_k \in (0, T]$ such that the $k$-th problem does not admit a solution on $[0, T_k]$, and for this reason problem (3.42) cannot be solved.
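For concreteness, the critical horizons $T_k$ can be computed numerically. The sketch below takes $\alpha = -3$ and, as an illustrative assumption, the eigenvalues $\lambda_k = (2\pi k)^2$ of $-\Delta$ on the one-dimensional torus $\mathbb{R}/\mathbb{Z}$ (any sequence $\lambda_k \to \infty$ yields the same conclusion, $T_k \to 0$):

```python
import math

alpha = -3.0
# c = tanh^{-1}(-1/(alpha+1)) = atanh(1/2) > 0 since alpha < -2.
c = math.atanh(-1.0 / (alpha + 1.0))

def T_crit(k):
    """Critical time T_k = c / lambda_k with lambda_k = (2*pi*k)^2 (illustrative)."""
    lam = (2.0 * math.pi * k) ** 2
    return c / lam

# At T = T_1 the denominator (alpha+1)*sinh(lam*T) + cosh(lam*T) of A_k vanishes.
lam1 = (2.0 * math.pi) ** 2
T1 = T_crit(1)
residual = (alpha + 1.0) * math.sinh(lam1 * T1) + math.cosh(lam1 * T1)
```

The residual is zero up to rounding, confirming that $A_k$ is undetermined exactly at $T_k$, while `T_crit(k)` decreases to 0.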
Remark 3.4. Note that without the regularising assumption on h the existence argument of Theorem 1.1 would work supposing additional smallness of some data. For example, one could consider the equation m t − ∆m = ǫ∆u (in a system like (3.42)) with ǫ sufficiently small, or final datum u(x, T ) = α m(x, T ) with |α| and m 0 (x) suitably small. This is coherent with the previous non-existence counterexample where α < −2.
Some parabolic systems arising in the theory of Mean-Field Games
Mean-Field Games (MFG) were introduced simultaneously by Lasry and Lions [20], [21], [22] and Huang et al. [17] to describe Nash equilibria in games with a very large number of identical agents. A general form of an MFG system can be derived as follows. Consider a given population density distribution $m(x,t)$. A typical agent in the game wants to minimize his own cost by controlling his state $X$, which is driven by a stochastic differential equation of the form
(4.46) dX s = −v s ds + Σ(X s , s)dB s ∀s > 0,
where v s is the control, B s is a Brownian motion and Σ(·, ·) is a positive matrix. The cost is given by
$\mathbb{E}_{X_0}\left[\int_0^T L(X_s, s, v_s, m(X_s, s))\,ds + h[m(T)](X_T)\right],$
where $L$ is some Lagrangian function and $h$ is the final cost, defined as a functional of $m(\cdot,T)$ and of the state $X_T$ at the final horizon $T$ of the game. Assume that all the data are periodic in the $x$-variable. Formally, the dynamic programming principle leads to a Hamilton-Jacobi-Bellman equation for the value function of the agent $u(x,t) = \mathbb{E}_x\left[\int_t^T L\,ds + h[m(T)](X_T)\right]$; that is, $u$ solves

(4.47) $-u_t - A_{ij}u_{ij} + H(x,t,Du,m) = 0$ in $\mathbb{T}^N\times(0,T)$, $\quad u(x,T) = h[m(T)](x)$ in $\mathbb{T}^N$,
where $A(x,t) = \frac12\Sigma\Sigma^T(x,t)$ and the Hamiltonian $H$ is the Legendre transform of $L$ with respect to the $v$-variable, i.e.

$H(x,t,p,m) = \sup_{v\in\mathbb{R}^N}\{p\cdot v - L(x,t,v,m)\}.$
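The Legendre transform can be checked numerically on a simple example. For the quadratic Lagrangian $L(v) = |v|^2/2$ one has $H(p) = \sup_v(p\cdot v - |v|^2/2) = |p|^2/2$. The sketch below approximates the supremum by a maximum over a grid of velocities (the grid bounds and resolution are assumptions of the illustration):

```python
# Numerical Legendre transform in one dimension: H(p) = sup_v ( p*v - L(v) ).
def legendre(L, p, v_min=-10.0, v_max=10.0, n=200001):
    h = (v_max - v_min) / (n - 1)
    return max(p * (v_min + i * h) - L(v_min + i * h) for i in range(n))

L_quad = lambda v: 0.5 * v * v  # L(v) = |v|^2 / 2
# For this L the exact transform is H(p) = |p|^2 / 2, attained at v = p.
H_vals = [legendre(L_quad, p) for p in (-2.0, 0.0, 1.0, 3.0)]
```

For convex $L$ the maximizing grid point approximates the optimal velocity, which is the feedback control discussed next.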
Moreover, the optimal control $v^*_s$ of the agent is given in feedback form by

$v^*(x,s) \in \operatorname{argmax}_v\{Du(x,s)\cdot v - L(x,s,v,m(x,s))\}.$

In an equilibrium situation, since all agents are identical, the distribution of the population should coincide with the distribution of all the agents when they play optimally. Hence, the density of the law of every single agent should satisfy the following Fokker-Planck equation:

(4.48) $m_t - \partial_{ij}(A_{ij}m) - \operatorname{div}(m\,D_pH(x,t,Du,m)) = 0$ in $\mathbb{T}^N\times(0,T)$, $\quad m(x,0) = m_0(x)$ in $\mathbb{T}^N$,
where $m_0$ is the density of the initial distribution of the agents ((4.48) can be derived by plugging $v^*$ into (4.46) and using Itô's formula). In the notation of (1.1) this yields

(4.49) $G = -(\partial_{x_ix_j}A_{ij})m - 2(\partial_{x_i}A_{ij})m_{x_j} - D_pH\cdot Dm - m\left(H_{x_ip_i} + H_{p_ip_j}u_{x_ix_j} + H_{mp_i}m_{x_i}\right),$

where the dependence of $H$, $A$, $m$ and their derivatives on $(m, Du, x, t)$ and $(x, t)$, respectively, has been omitted for brevity.
A general short-time existence result for (4.47)-(4.48) reads as follows.
Theorem 4.1. Suppose that A ∈ C 2 (Q T ), h satisfies (A4), m 0 satisfies (A5) and
• H is continuous with respect to x, t, p, m,
• H, ∂ p i H, ∂ 2 x i p i H, ∂ 2 p i p j H, ∂ 2 mp i H are locally Lipschitz continuous functions with respect to p, m ∈ R N × R + , uniformly in x, t ∈ Q T .
Then, there exists $\bar T > 0$ such that for all $T \in (0, \bar T]$ the system (4.47)-(4.48) admits a regular solution $u, m \in W^{2,1}_q(Q_T)$ with $q > N+2$.
Proof. In view of (4.49) and the standing assumptions, it suffices to apply Theorem 1.1. ✷
A typical case in the MFG literature is when L has split dependence with respect to m and v, that is,
L(x, t, v, m) = L 0 (x, t, v) + f (x, t, m).
The existence of smooth solutions in this case has been explored in several works, see e.g. [5,7,13,14,15] and references therein. Existence for arbitrary time horizon T typically requires assumptions on the behaviour of H at infinity, that are crucial to obtain a priori estimates. As stated in the Introduction, our result is for short-time horizons, but no assumptions on the behaviour at infinity of H are required. Note finally that C 2 regularity of H is crucial for uniqueness in short-time, while for large T uniqueness may fail in general even when H is smooth (see [3,4,6,7]).
Congestion problems
A class of MFG problems that attracted an increasing interest during the last few years is the so-called congestion case, namely when
L(x, t, v, m) = m α L 1 (v) + f (x, t, m),
where α > 0 and L 1 is a convex function. The term m α penalizes L 1 (v) when m is large, so agents prefer to move at low speed in congested areas. On the other hand, as soon as the environment density m approaches zero, an agent can increase his own velocity without increasing significantly his cost. The parameter α can be then regarded as the strength of congestion. The difficulties in this problem are mainly caused by the singular term m α . It has been firstly discussed by Lions [19], and has been subsequently addressed in a series of papers. In [8,11,12] the stationary case is treated. As for the time-dependent problem, short-time existence of weak solutions, under some restrictions on α and H, has been proved in [16]. A general result of existence of weak solutions for arbitrary time horizon T is discussed in [1]. So far, smoothness of solutions has been verified in the short-time regimes only in [10]. All the mentioned works do rely on the MFG structure of (4.47)-(4.48). Here, we just exploit standard regularizing properties of the diffusion, and propose a general existence result for (4.47)-(4.48) that requires very mild local (regularity) assumptions on the nonlinearity H. The key tool is the standard contraction mapping theorem, that has already been explored in the MFG setting in [7] and [2] in more particular cases.
Here, $H(x,t,p,m) = m^\alpha H_1(p/m^\alpha) - f(x,t,m)$, where $H_1$ is the Legendre transform of $L_1$, so

$F(x,t,u,m,Du,Dm) = m^\alpha H_1\!\left(\frac{Du}{m^\alpha}\right) - f(x,t,m),$

$G(x,t,u,m,Du,D^2u,Dm) = -(\partial_{x_ix_j}A_{ij})m - 2(\partial_{x_i}A_{ij})m_{x_j} - D_pH_1\cdot Dm - m^{1-\alpha}(H_1)_{p_ip_j}u_{x_ix_j} + \alpha m^{-\alpha}(H_1)_{p_ip_j}u_{x_j}m_{x_i}.$
The MFG system then takes the form

(4.50) $-u_t - A_{ij}u_{ij} + m^\alpha H_1(Du/m^\alpha) = f(x,t,m)$ in $\mathbb{T}^N\times(0,T)$; $\quad m_t - \partial_{ij}(A_{ij}m) - \operatorname{div}(m\,D_pH_1(Du/m^\alpha)) = 0$ in $\mathbb{T}^N\times(0,T)$; $\quad u(x,T) = h[m(T)](x)$, $m(x,0) = m_0(x)$ in $\mathbb{T}^N$.
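As a quick numerical sanity check of the congestion structure (not part of the original argument): for $L_1(v) = |v|^2/2$ one has $H_1(q) = |q|^2/2$, and $\sup_v\left(p\cdot v - m^\alpha L_1(v)\right) = m^\alpha H_1(p/m^\alpha)$. The sketch below verifies this identity on a grid; the grid and the sample values of $(p, m, \alpha)$ are assumptions of the illustration:

```python
def sup_transform(p, m, alpha, v_min=-20.0, v_max=20.0, n=200001):
    """Approximate sup_v ( p*v - m^alpha * v^2/2 ) over a velocity grid."""
    h = (v_max - v_min) / (n - 1)
    best = float("-inf")
    for i in range(n):
        v = v_min + i * h
        best = max(best, p * v - (m ** alpha) * 0.5 * v * v)
    return best

def exact(p, m, alpha):
    """Exact value predicted by H(p, m) = m^alpha * H_1(p / m^alpha), H_1(q) = q^2/2."""
    return (m ** alpha) * 0.5 * (p / m ** alpha) ** 2

err = max(abs(sup_transform(p, m, a) - exact(p, m, a))
          for (p, m, a) in [(2.0, 0.5, 1.0), (-3.0, 2.0, 0.5)])
```

Small $m$ (congested-to-empty regions) visibly rescales the optimal velocity $v^* = p/m^\alpha$, matching the modelling discussion above.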
A corollary of Theorem 4.1 thus reads:

Corollary 4.2. Suppose that $A \in C^2(Q_T)$, $h$ satisfies (A4), $m_0$ satisfies (A5) and
• f is continuous with respect to x, t, m and locally Lipschitz continuous with respect to m,
• H 1 is continuous and has second derivatives that are locally Lipschitz continuous.
Then, there exists $\bar T > 0$ such that for all $T \in (0, \bar T]$ the system (4.50) admits a solution $u, m \in W^{2,1}_q(Q_T)$ with $q > N+2$.
A Appendix
In this final appendix we prove Proposition 2.6 of Section 2.
Proof of Proposition 2.6. We write the proof for the existence of a solution in the class $C^{2+\alpha,1+\alpha/2}(Q_T)$ and for the estimate (2.10). In a similar way one obtains the existence in $W^{2,1}_q(Q_T)$ and the proof of (2.11). Recall that the problem on $\mathbb{T}^N\times[0,T]$ is equivalent to the same problem with 1-periodic data in the $x$-variable in $\mathbb{R}^N\times[0,T]$, namely with all the data satisfying $w(x+z,t) = w(x,t)$ for all $z \in \mathbb{Z}^N$. As far as the existence of a smooth solution of problem (2.9) is concerned, it is sufficient to apply Theorem 5.1, p. 320 of [18]. Since the solution of such a Cauchy problem is unique, it must be periodic in the $x$-variable. Note also that $\operatorname{dist}(R^N_1, \complement R^N_2) = 1$, with $R^N_1$, $R^N_2$ as in (A.51). We take advantage of local parabolic estimates, which allow us to obtain an a priori estimate regardless of the lateral boundary conditions, which are unknown to us.
In particular, using the local estimate (10.5) p. 352 of [18] with Ω ′ = R N 1 and Ω ′′ := R N 2 , (note that in our case S ′′ is empty) we have
(A.52) $|u|^{(\alpha+2)}_{R^N_1\times[0,T^*]} \le C_1\left(|f|^{(\alpha)}_{R^N_2\times[0,T^*]} + |u_0|^{(\alpha+2)}_{R^N_2}\right) + C_2\,|u|_{R^N_2\times[0,T^*]},$
where T * < T , C 1 and C 2 depend on N , T , and the modulus of Hölder continuity of the coefficients of the operator. It is now crucial to observe that Hölder norms on
$m_t - \partial_{ij}(A_{ij}m) - \operatorname{div}(m\,D_pH(x,t,Du,m)) = 0$ in $Q_T$; $u(x,T) = h[m(T)](x)$, $m(x,0) = m_0(x)$ in $\mathbb{T}^N$, where $A(x,t) = \frac12\Sigma\Sigma^T(x,t)$ and the Hamiltonian $H$ is the Legendre transform of some Lagrangian function $L$, i.e. $H(x,t,p,m) = \sup_{v\in\mathbb{R}^N}\{p\cdot v - L(x,t,v,m)\}$.
Theorem 1.1. Under the assumptions (A1)-(A5) there exists $\bar T > 0$ such that for all $T \in (0, \bar T]$ the problem (1.1) has a solution $u, m \in W^{2,1}_p(Q_T)$ with $p > N+2$ satisfying the equations in (1.1) a.e.
Set $U := u_1 - u_2$, $M := m_1 - m_2$, $\hat U := \hat u_1 - \hat u_2$, and $\hat M := \hat m_1 - \hat m_2$. Denoting $U^*(x, T-t) := U(x,t)$ and taking into account (3.15)-(3.16) and (3.17), $M$ solves
$$M_t - c_{ij}(x,t)M_{x_ix_j} + G(\hat u_1,\hat m_1,D\hat u_1,D\hat m_1,D^2\hat u_1,x,t) - G(\hat u_2,\hat m_2,D\hat u_2,D\hat m_2,D^2\hat u_2,x,t) = 0 \quad \text{in } Q_T, \qquad M(x,0) = 0 \quad \text{in } \mathbb{T}^N.$$
so that $v^*(x,s) \in \operatorname{argmax}_v\{Du(x,s)\cdot v - L(x,s,v,m(x,s))\}$. Typically, one assumes $L$ to be convex in the $v$-entry. In this case, $H$ is strictly convex in the $p$-entry, and $v^*(x,s)$ can be uniquely determined by $v^*(x,s) = D_pH(x,s,Du(x,s),m(x,s))$.
The coupled system of nonlinear parabolic PDEs (4.47)-(4.48) with backward-forward structure is of the form (1.1) with $F(x,t,u,m,Du,Dm) = H(x,t,Du,m)$ and $G(x,t,u,m,Du,D^2u,Dm)$ given by (4.49).
Now we prove estimate (2.10). Let $R^N_1 := [-1,1]^N$ and $R^N_2 := [-2,2]^N$. Clearly

(A.51) $[0,1]^N \subset R^N_1 \subset R^N_2 \subset \mathbb{R}^N$,
$R^N_1\times[0,T^*]$, $R^N_2\times[0,T^*]$ and $\mathbb{T}^N\times[0,T^*]$ coincide by periodicity of $u$, $f$, $u_0$ in the $x$-variable and the inclusions (A.51). Moreover, $C_2|u|_{\mathbb{T}^N\times[0,T^*]} \le C_2\left(|u_0|_{\mathbb{T}^N} + T^*|u_t|_{\mathbb{T}^N\times[0,T^*]}\right)$. Taking $T^*$ sufficiently small we can write

(A.54) $|u|^{(\alpha+2)}_{\mathbb{T}^N\times[0,T^*]} \le C_3\left(|f|^{(\alpha)}_{\mathbb{T}^N\times[0,T^*]} + |u_0|^{(\alpha+2)}_{\mathbb{T}^N}\right),$

where $C_3$ depends on the coefficients of the equation, on $N$, $T$, $\alpha$, and $T^*$ does not depend on $u_0$. We can iterate the estimate (A.54) to cover the whole interval $[0,T]$ in $[T/T^*]+1$ steps, thus obtaining (2.10). The proof of (2.11) is completely analogous: one has to exploit the local estimate in $W^{2,1}_p$ of [18], eq. (10.12), p. 355. Note that since $R^N_1$ and $R^N_2$ consist of finitely many copies of $[0,1]^N$, norms on $W^{2m,m}_p(R^N_i\times(0,T))$, $i = 1,2$, are multiples (depending on $N$) of those on $W^{2m,m}_p(\mathbb{T}^N\times(0,T))$.
✷
Acknowledgments. The first and third authors are members of GNAMPA-INdAM, and were partially supported by the research project of the University of Padova "Mean-Field Games and Nonlinear PDEs" and by the Fondazione CaRiPaRo Project "Nonlinear Partial Differential Equations: Asymptotic Problems and Mean-Field Games".
[1] Y. Achdou, A. Porretta, Mean field games with congestion, preprint arXiv:1706.08252 (2017).
[2] D.M. Ambrose, Strong solutions for time-dependent mean field games with non-separable Hamiltonians, J. Math. Pures Appl. 9(113) (2018), 141-154.
[3] M. Bardi, M. Cirant, Uniqueness of solutions in MFG with several populations and Neumann conditions, in: P. Cardaliaguet, A. Porretta, F. Salvarani (eds.), PDE Models for Multi-agent Phenomena, Springer INdAM Series (2018).
[4] M. Bardi, M. Fischer, On non-uniqueness and uniqueness of solutions in finite-horizon mean field games, to appear in ESAIM: Control Optim. Calc. Var. (2018).
[5] P. Cardaliaguet, Notes on Mean Field Games, unpublished notes (2013), https://www.ceremade.dauphine.fr/cardalia/index.html.
[6] M. Cirant, On the existence of oscillating solutions in non-monotone mean-field games, preprint arXiv:1711.08047 (2017).
[7] M. Cirant, D. Tonon, Time-dependent focusing Mean-Field Games: the sub-critical case, to appear in J. Dynam. Differential Equations, DOI:10.1007/s10884-018-9667-x (2018).
[8] D. Evangelista, D. Gomes, On the existence of solutions for stationary mean-field games with congestion, to appear in J. Dynam. Differential Equations, DOI:10.1007/s10884-017-9615-1 (2018).
[9] R. Gianni, Global existence of a classical solution for a large class of free boundary problems, NoDEA Nonlinear Differential Equations Appl. 2 (1995), no. 3, 291-321.
[10] D.A. Gomes, V.K. Voskanyan, Short-time existence of solutions for mean-field games with congestion, J. London Math. Soc. (2) 92 (2015), no. 3, 778-799.
[11] D. Gomes, L. Nurbekyan, M. Prazeres, Explicit solutions of one-dimensional, first-order, stationary mean-field games with congestion, in: 2016 IEEE 55th Conference on Decision and Control (CDC 2016), pp. 4534-4539 (2016).
[12] D. Gomes, H. Mitake, Existence for stationary mean-field games with congestion and quadratic Hamiltonians, NoDEA Nonlinear Differential Equations Appl. 22 (2015), no. 6, 1897-1910.
[13] D. Gomes, E. Pimentel, H. Sanchez-Morgado, Time-dependent mean-field games in the superquadratic case, to appear in ESAIM: Control Optim. Calc. Var. (2017).
[14] D. Gomes, E. Pimentel, H. Sanchez-Morgado, Time-dependent mean-field games in the sub-quadratic case, Comm. Partial Differential Equations 40 (2015), no. 1, 40-76.
[15] D. Gomes, E. Pimentel, V. Voskanyan, Regularity Theory for Mean-Field Game Systems (2016).
[16] P.J. Graber, Weak solutions for mean field games with congestion, preprint arXiv:1503.04733 (2015).
[17] M. Huang, R.P. Malhamé, P.E. Caines, Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle, Commun. Inf. Syst. 6 (2006), 221-251.
[18] O.A. Ladyzhenskaya, V.A. Solonnikov, N.N. Uraltseva, Linear and Quasilinear Equations of Parabolic Type, Translations of Mathematical Monographs 23, AMS, Providence, R.I., 1968.
[19] P.-L. Lions, Collège de France course on mean-field games, 2007-2011.
[20] J.-M. Lasry, P.-L. Lions, Jeux à champ moyen. I. Le cas stationnaire, C. R. Math. Acad. Sci. Paris 343 (2006), 619-625.
[21] J.-M. Lasry, P.-L. Lions, Jeux à champ moyen. II. Horizon fini et contrôle optimal, C. R. Math. Acad. Sci. Paris 343 (2006), 679-684.
[22] J.-M. Lasry, P.-L. Lions, Mean field games, Japan. J. Math. (N.S.) 2 (2007), 229-260.
EFFECTIVE FORCING WITH CANTOR MANIFOLDS

Takayuki Kihara

arXiv:1702.02630, 8 Feb 2017

Abstract. A set A of integers is called total if there is an algorithm which, given an enumeration of A, enumerates the complement of A, and called cototal if there is an algorithm which, given an enumeration of the complement of A, enumerates A. Many variants of totality and cototality have been studied in computability theory. In this note, by an effective forcing construction with strongly infinite dimensional Cantor manifolds, which can be viewed as an effectivization of Zapletal's "half-Cohen" forcing (i.e., the forcing with Henderson compacta), we construct a set of integers whose enumeration degree is cototal, almost total, but neither cylinder-cototal nor telograph-cototal.

An e-degree is total (cototal) if it contains a total (cototal) set. Note that the notion of totality can be defined via a total function on ω. We consider the following notions for a function g : ω → ω and b ∈ ω:

Graph(g) = {⟨n, m⟩ : g(n) = m},
Cylinder(g) = {σ ∈ ω^{<ω} : σ ≺ g},
Graph_b(g) = {2⟨n, m⟩ : g(n) = m} ∪ {2⟨n, m⟩ + 1 : m ≥ b and g(n) ≠ m}.

Here, ⟨·,·⟩ is the standard pairing function, that is, an effective bijection between ω² and ω, and we write σ ≺ g if σ is an initial segment of g, that is, σ(n) = g(n) for n < |σ|. As usual, every finite string on ω is identified with a natural number via an effective bijection between ω^{<ω} and ω.
Introduction
A set A ⊆ ω is total if the complement of A is enumeration reducible to A, and cototal if A is enumeration reducible to the complement of A. Recently, it is found that the notion of cototality naturally arises in symbolic dynamics, group theory, graph theory, etc. For instance, Jeandel [5] showed that the set of words that appear in a minimal subshift is cototal, and Jeandel [5] and Thomas-Williams [15] showed that the set of non-identity words in a finitely generated simple group is cototal. Motivated by these examples, the notion of cototality has received increasing attention in computability theory. For a thorough treatment of totality and cototality, see Andrews et al. [1].
In this note, to distinguish various notions of totality/cototality, we will utilize the notion of a strongly infinite dimensional Cantor manifold, which means a strongly infinite dimensional compactum with no weakly infinite dimensional partition. The notions of a Cantor manifold and strong infinite dimensionality were introduced by Urysohn and Alexandrov, respectively. These notions have been extensively studied in topological dimension theory [3,17].
Our construction can be naturally viewed as an effective forcing construction via the forcing P_Z with strongly infinite dimensional Cantor submanifolds of the Hilbert cube. From this viewpoint, our forcing P_Z is related to the forcing with Henderson compacta, recently introduced by Zapletal [18,19] (see also [12,13]). Here, a compactum is (strongly) Henderson if it is (strongly) infinite dimensional, and all of its subcompacta are either zero-dimensional or (strongly) infinite dimensional.
However, we shall emphasize a striking difference between Zapletal's work and ours. The notion of a Cantor manifold does not appear in Zapletal's article [19]. Indeed, as pointed out by Zapletal himself, the forcing with uncountable dimensional subcompacta of the Hilbert cube is already a half-Cohen forcing. This means that the notion of a Cantor manifold is not necessary at all if we only aim at obtaining a half-Cohen forcing. On the contrary, to achieve our purpose, we will make crucial use of the property being a Cantor manifold.
Preliminaries
In this note, we will use notions from computability theory [11,10] and infinite dimensional topology [3,17].
2.1. Computability Theory. As usual, the terminology "computably enumerable" is abbreviated as "c.e." An axiom is a c.e. set of pairs (n, D) of a natural number n ∈ ω and (a canonical index of) a finite set D ⊆ ω. A set A ⊆ ω is enumeration reducible to B ⊆ ω (written as A ≤ e B) if there is an axiom Ψ such that n ∈ A if and only if there is D ⊆ B such that (n, D) ∈ Ψ. In this case, we write Ψ(B) = A, that is, Ψ is thought of as a function from P(ω) to P(ω), and thus, Ψ is often called an enumeration operator. The notion of enumeration reducibility induces an equivalence relation ≡ e , and each ≡ e -equivalence class is called an enumeration degree (or simply an e-degree). The e-degrees form an upper semilattice (D e , ≤, ∨).
Let G be either Graph, Cylinder, or Graph₀. Then an e-degree is total if and only if it contains G(f) for some total function f : ω → ω. We now consider the following variants of cototality:
(1) An e-degree is graph-cototal if it contains Graph(f)^∁ for some function f : ω → ω.
(2) An e-degree is cylinder-cototal if it contains Cylinder(f)^∁ for some function f : ω → ω.
(3) An e-degree is telograph-cototal if it contains Graph_b(f) for some function f : ω → ω and b ∈ ω.
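These encodings can be made concrete on finite fragments of a function, given as a Python list g = [g(0), ..., g(k−1)]. In the sketch below, the Cantor pairing and the tuple coding of strings are illustrative choices of "effective bijection":

```python
def pair(n, m):
    """Cantor pairing: an effective bijection between pairs and naturals."""
    return (n + m) * (n + m + 1) // 2 + m

def graph(g):
    """Graph(g) restricted to the finite fragment g."""
    return {pair(n, g[n]) for n in range(len(g))}

def cylinder(g):
    """Initial segments of g, represented as tuples."""
    return {tuple(g[:k]) for k in range(len(g) + 1)}

def telograph(g, b, m_bound):
    """Graph_b(g): g(n) = m coded by even numbers, and the negative
    information g(n) != m (for b <= m < m_bound) coded by odd numbers."""
    out = set()
    for n in range(len(g)):
        out.add(2 * pair(n, g[n]))
        for m in range(b, m_bound):
            if g[n] != m:
                out.add(2 * pair(n, m) + 1)
    return out
```

The even/odd split is what makes Graph_b(g) carry both positive and (bounded) negative information about g, the feature that separates telograph-cototality from the other notions.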
We have the following implications (in particular, total ⇒ graph-cototal). All of the above implications are strict [7], and all of the above properties imply cototality [1]. An e-degree a is almost total if for any e-degree b, either b ≤ a or a ∨ b is total. Clearly, totality implies almost totality. Although the notion of almost totality has proved quite useful in various contexts (see [9,4,8]), its relationship with cototality has not been studied yet.
Continuous degrees.
Fix a standard open basis (B e ) e∈ω of the Hilbert cube [0, 1] ω . For an oracle z ⊆ ω, we say that a set P ⊆ [0, 1] ω is Π 0 1 (z) if there is a z-c.e. set W ⊆ ω such that P = [0, 1] ω \ e∈W B e . We identify each point x ∈ [0, 1] ω with its coded neighborhood basis:
Nbase(x) = {e ∈ ω : x ∈ B_e}. We write B_D = ⋂_{d∈D} B_d. Note that D ⊆ Nbase(x) if and only if x ∈ B_D.
An enumeration degree a is called a continuous degree if a contains Nbase(x) for some x ∈ [0, 1] ω . The notion of a continuous degree is introduced by Miller [9], and it has turned out to be a very useful notion (see [2,4,8]).
Fact 1 ([9, 1]). A continuous degree is almost total, and cototal.
2.2. Infinite Dimensional Topology. In this note, all spaces are assumed to be separable and metrizable. A compactum is a compact metric space, and a continuum is a connected compactum.
Observation 2 (see Section 4). Any nonempty open subset of a continuum is not zero-dimensional.
A space S is an absolute extensor for X if for any closed set P ⊆ X , every continuous function f : P → S can be extended to a continuous function g : X → S. A space X is at most n-dimensional if the n-sphere S n is an absolute extensor for X . A space X is countable dimensional if it is a countable union of finite dimensional subspaces.
To introduce the notion of strong infinite dimensionality, we first recall the Eilenberg-Otto characterization of topological dimension. Given a disjoint pair (A, B) of closed subsets of X, we say that L is a partition of X between A and B if there is a disjoint pair (U, V) of open sets in X such that A ⊆ U, B ⊆ V, and L = X \ (U ∪ V). An essential sequence in X is a sequence (A_i, B_i)_{i≤n} of pairs of disjoint closed sets in X such that for any sequence (L_i)_{i≤n}, if L_i is a partition of X between A_i and B_i, then ⋂_i L_i ≠ ∅. Then, a space X is at most n-dimensional if and only if X has no essential sequence of length n + 1.
A space X is strongly infinite dimensional (abbreviated as SID) if X has an essential sequence of length ω. No SID space is countable dimensional. Note that the Hilbert cube [0, 1] ω is SID.
A space X is hereditarily strongly infinite dimensional (abbreviated as HSID) if X is SID, and any subspace of X is either zero-dimensional or SID. We also say that X is strongly Henderson if X is SID, and any compact subspace of X is either zero-dimensional or SID. Clearly, every HSID space is strongly Henderson.
A space X is SID-Cantor manifold if X is compact, SID, and for any disjoint pair of nonempty open sets U, V ⊆ X , X \ (U ∪ V ) is SID. In other words, every partition of X between nonempty sets is SID.
Main Theorem
For an ordinal α, let L α be the α-th rank of Gödel's constructible universe. By ω CK α , we denote the α-th ordinal which is admissible or the limit of admissible ordinals. In this note, we will show the following:
Theorem 3. There is a continuous degree d ∈ L_{ω^{CK}_ω + 1} \ L_{ω^{CK}_ω} such that the following holds:
(1) d is not cylinder-cototal.
(2) For any a ≤ d, if a is telograph-cototal, then a ∈ L_{ω^{CK}_ω}.

Corollary 4. There is a set of integers whose enumeration degree is cototal, almost total, but neither cylinder-cototal nor telograph-cototal.
3.1. Technical Tools. To prove Theorem 3, we need to effectivize a few facts in infinite dimensional topology. It is known that every SID compact metrizable space has an HSID compact subspace. The construction in [14] (see also van Mill [17, We shall also extract an effective content from the SID-version of the Cantor Manifold Theorem [16], which asserts that every SID compactum contains a SID-Cantor manifold. Let O z be a Π 1 1 -complete subset of ω relative to an oracle z, that is, the hyperjump of z.
Proposition 6 (see Section 4). Every strongly Henderson Π 0 1 (z) subset of [0, 1] ω contains a Π 0 1 (O z ) SID-Cantor manifold. As a corollary, if a strongly Henderson compactum H ⊆ [0, 1] ω has a Π 0 1code in L ω CK α , then there is an SID-Cantor manifold C ⊆ H which has a Π 0 1 -code in L ω CK α +1 . The reason for the need of the hyperjump in Proposition 6 is due to noneffectivity of the notion of an absolute extensor 1 . Because the Lebesgue covering dimension and the Eilenberg-Otto characterization admit a quite effective treatment, if we could prove the Cantor Manifold Theorem only using the notions of a covering or an Eilenberg-Otto separation 2 , we would obtain a far more effective version of Proposition 6. However, to the best of our knowledge, any proof employs the Borsuk Homotopy Extension Theorem (or its variant), which requires us to deal with extendability of functions.
3.2. Proof of Main Theorem. We now prove Theorem 3 by applying Proposition 6. Hereafter, we write O n for the n-th hyperjump.
Proof of Theorem 3. We construct a decreasing sequence (P s ) s∈ω of SID Π 0 1 (O α(s) ) subsets of [0, 1] ω , where α(s) ∈ ω, such that z ∈ ⋂ s P s satisfies the following requirements:
P e,k : z ≠ Ψ e (O k ),
N e,b : (∀f ) [Ψ e (z) = Graph b (f ) =⇒ (∃n) f ≤ T O n ],
M d,e : (∀g) [Ψ d (z) = Cylinder(g) ∁ and Ψ e ∘ Ψ d (z) = z =⇒ (∃n) g ≤ T O n ].
Here, (Ψ e ) e∈ω is a list of enumeration operators, and ≤ T is Turing reducibility. We first check that these requirements ensure the desired property. It is clear that the P-requirements ensure that z is not arithmetical. Assume that Graph b (f ) ≤ e z for some f and b. By the N -requirements, f is arithmetical, and so is Graph b (f ), since Graph b (f ) ≤ e Graph(f ). Similarly, if Cylinder(g) ∁ ≡ e z for some g, then by the M-requirements, g is arithmetical, and so is Cylinder(g) ∁ . However, this is impossible since the P-requirements ensure that z is not arithmetical, as mentioned above. Thus, we have Cylinder(g) ∁ ≢ e z for any g.
We now start the construction. We begin with the strongly Henderson Π 0 1 set P 0 ⊆ [0, 1] ω from Proposition 5. Inductively assume that we have constructed a strongly Henderson Π 0 1 (O α(s) ) set P s ⊆ P 0 . At the beginning of stage s, choose a Π 0 1 (O α(s)+1 ) SID-Cantor manifold Q s ⊆ P s by using Proposition 6. Note that Q s is also strongly Henderson since Q s is a closed subspace of P s .
3.2.1. P-strategy. If s = 3⟨e, k⟩, we proceed with the following simple diagonalization strategy. Assume that Ψ e (O k ) determines a point y ∈ Q s . Since Q s has at least two points, for a sufficiently small open neighborhood U of y, Q s \ U is nonempty. Thus, by Observation 2, Q s \ U is not zero-dimensional. Now, Q s is strongly Henderson, and so is P s+1 := Q s \ U . Then, the requirement P e,k is clearly fulfilled.
3.2.2. N -strategy. If s = 3⟨e, b⟩ + 1, then for any n ∈ ω and c > b, consider the following:
U n c = ⋃{Q s ∩ B D : (∃m ∈ [b, c)) ⟨2⟨n, m⟩ + 1, D⟩ ∈ Ψ e }, V n c = ⋃{Q s ∩ B D : (∀m ∈ [b, c)) ⟨2⟨n, m⟩, D⟩ ∈ Ψ e }.
Clearly, U n c and V n c are c.e. open in Q s for any n and c. Hereafter by ψ z we denote a function such that Ψ z e = Graph b (ψ z ) if it exists. For instance, z ∈ U n c ensures that ψ z (n) ∈ [b, c), and z ∈ V n c ensures that ψ z (n) ∉ [b, c). Without loss of generality, we can always assume that ⟨2⟨n, m⟩ + 1, D⟩ ∈ Ψ e implies ⟨2⟨n, k⟩, D⟩ ∈ Ψ e for all k ≠ m.
Case A1. There are c, n such that U n c ∩ V n c is nonempty. By regularity, there is a nonempty open ball B whose closure is included in U n c ∩ V n c . By Observation 2, Q s ∩ B (and hence its closure in Q s ) is not zero-dimensional. Hence, the closure of Q s ∩ B is SID since Q s is strongly Henderson. Then, define P s+1 to be the closure of Q s ∩ B and go to stage s + 1.
If Case A1 is applied, we have z ∈ P s+1 ⊆ U n c ∩ V n c . However, z ∈ U n c ∩ V n c ensures that both ψ z (n) ∈ [b, c) and ψ z (n) ∉ [b, c) whenever ψ z (n) is defined. This means that ψ z (n) is ill-defined on z ∈ U n c ∩ V n c . Hence, N e,b is satisfied.

Case A2. There are n and c such that both Q s ∩ U n c and Q s ∩ V n c are nonempty. Note that Q s ∩ U n c and Q s ∩ V n c are disjoint since Case A1 is not applied. Since Q s is a SID-Cantor manifold, P s+1 := Q s \ (U n c ∪ V n c ) is SID. Then go to stage s + 1.
If Case A2 is applied at some substage n ∈ ω, then P s+1 ∩ (U n c ∪ V n c ) is empty, and therefore z ∉ U n c ∪ V n c . That is, 2⟨n, m⟩ + 1 ∉ Ψ z e for all m ∈ [b, c), and 2⟨n, m⟩ ∉ Ψ z e for some m ∈ [b, c). The former means that ψ z (n) ∉ [b, c), but the latter has to imply that ψ z (n) ∈ [b, c) whenever ψ z (n) is defined. This is impossible, and therefore ψ z (n) is undefined. Hence, N e,b is satisfied.
For each m ∈ [b, c), consider the following:

U n c,m = ⋃{Q s ∩ B D : ⟨2⟨n, m⟩ + 1, D⟩ ∈ Ψ e }.

Note that z ∈ U n c,m ensures that ψ z (n) = m.
Clearly, U n c = ⋃ b≤m<c U n c,m . Now consider the following set:
J n c = {m ∈ [b, c) : Q s ∩ U n c,m ≠ ∅}.

The condition m ∈ J n c indicates that Q s has an element z such that ψ z (n) = m whenever ψ z is defined.
Case A3. There are n and c such that Q s ∩ V n c is empty (this indicates that ψ z (n) ∉ [b, c) is never ensured for any z ∈ Q s ), and J n c is not a singleton. If J n c is empty, put P s+1 = Q s , and go to stage s + 1. If J n c has at least two elements, consider
F = Q s \ ⋃ m∈[b,c) U n c,m .

We claim that F disconnects Q s . To see this, note that (Q s ∩ U n c,m ) b≤m<c is pairwise disjoint. Otherwise, Q s ∩ U n c,m ∩ U n c,k is nonempty for some b ≤ m < k < c. Note that U n c,m ⊆ U n m+1 , and by our assumption on Ψ e , U n c,k ⊆ V n m+1 . This implies that Q s ∩ U n m+1 ∩ V n m+1 is nonempty. Then, however, Case A1 must be applied. Now, choose m ∈ J n c , and then Q s \ F is written as the union of the disjoint open sets Q s ∩ U n c,m and Q s ∩ ⋃ k≠m U n c,k , that is, F disconnects Q s . Then, as in the previous argument, since Q s is a SID-Cantor manifold, P s+1 := Q s ∩ F is SID. Then go to stage s + 1.
If Case A3 is applied, since P s+1 ⊆ Q s does not intersect with V n c , for any z ∈ P s+1 , we have ψ z (n) ∈ [b, c) whenever ψ z is defined. If J n c is empty, then Q s ∩ U n c,m is empty, and in particular, z ∉ U n c,m , for all m ∈ [b, c). If J n c has at least two elements, our construction ensures that P s+1 ∩ ⋃ m U n c,m is empty, and in particular, z ∉ U n c,m for all m ∈ [b, c). This clearly implies that ψ z (n) ∉ [b, c) in both cases. Thus, ψ z has to be undefined, and therefore N e,b is satisfied.
Let I s be the set of all n ∈ ω such that Q s ∩ U n c is empty for any c. Note that for any n ∈ I s and z ∈ Q s , ψ z (n) ∈ [b, ∞) is never ensured. Then for any n ∈ I s and i < b, consider the following c.e. open set in Q s :
W n i = ⋃{Q s ∩ B D : (∀j < b)[j ≠ i =⇒ ⟨2⟨n, j⟩, D⟩ ∈ Ψ e ]}.

Note that z ∈ W n i ensures that ψ z (n) ≠ j for any j ∈ b \ {i}.

Case A4. There are n ∈ I s and i, j with i ≠ j such that W n i ∩ W n j is nonempty. As in Case A1, there is a nonempty open ball C whose closure is included in W n i ∩ W n j . Then, define P s+1 to be the closure of Q s ∩ C, which is SID by Observation 2 since Q s is strongly Henderson, and go to stage s + 1.
If Case A4 is applied, we have z ∈ P s+1 ⊆ W n i ∩ W n j . Note that ψ z (n) is undefined on z ∈ W n i ∩ W n j , because ψ z (n) ≠ k for any k < b, whereas ψ z (n) ∉ [b, ∞) by n ∈ I s . Hence, N e,b is satisfied.
For each n ∈ I s , consider the following:
K n = {i < b : Q s ∩ W n i ≠ ∅}.

Case A5. There is n ∈ I s (i.e., Q s ∩ U n c is empty for any c) such that K n is not a singleton. If K n is empty, put P s+1 = Q s , and go to stage s + 1. If K n has at least two elements, as in Case A3, H = Q s \ ⋃ i<b W n i disconnects Q s ; otherwise Case A4 is applied. Since Q s is a SID-Cantor manifold, P s+1 := Q s ∩ H is SID. Then go to stage s + 1.
If Case A5 is applied, we have n ∈ I s , which forces that for any z ∈ P s+1 , ψ z (n) < b whenever ψ z is defined. If K n is empty, then Q s ∩ W n i is empty, and in particular, z ∉ W n i , for all i < b. If K n has at least two elements, P s+1 ∩ ⋃ i<b W n i is empty, and in particular, z ∉ W n i for all i < b. This clearly implies ψ z (n) ≥ b whenever ψ z is defined in both cases. Thus, ψ z has to be undefined, and therefore N e,b is satisfied.
Case A6. None of the above cases is applied. Since Case A2 is not applied, for any n and c, either Q s ∩ U n c or Q s ∩ V n c is empty. Therefore, if n ∉ I s , then Q s ∩ V n c is empty for some c. Since Case A3 is not applied, J n c has to be a singleton for such c. If n ∈ I s , since Case A5 is not applied, K n has to be a singleton. In other words, for any n ∈ ω, one of the following holds:
(1) There is c such that Q s ∩ V n c is empty, and J n c is a singleton.
(2) Q s ∩ U n c is empty for all c, and K n is a singleton.
Note that given a Σ 0 2 (ξ) set S ⊆ [0, 1] ω , we can decide whether S = ∅ by using ξ ′′ , the double jump of ξ. Thus, given n, by using O α(s)′′ , we can decide whether (1) or (2) holds, and define Γ(O α(s)′′ ; n) = p, where p is the unique element of J n c if (1) holds and c is the least witness, or of K n if (2) holds. Finally, we define P s+1 = Q s and then go to stage s + 1.
If Case A6 is applied, we have constructed a function Γ(O α(s)′′ ). We claim that for any z ∈ P s+1 = Q s , Γ(O α(s)′′ ) = ψ z whenever ψ z is defined. Fix z ∈ P s+1 and n ∈ ω, and assume that ψ z (n) is defined. First assume that (1) holds for n and c is the least witness of this fact. Then Γ(O α(s)′′ ; n) = p is the unique element of J n c . Since (1) holds, Q s ∩ V n c is empty, and as before, this forces
ψ z (n) ∈ [b, c). Since J n c = {p}, for all m ∈ [b, c) with m ≠ p, Q s ∩ U n c,m is empty, that is, z ∉ U n c,m and therefore ψ z (n) ≠ m. Thus we must have ψ z (n) = p, and therefore, Γ(O α(s)′′ ; n) = p = ψ z (n). Next, assume that (2) holds. Then Γ(O α(s)′′ ; n) = p is the unique element of K n . Since (2) holds, Q s ∩ U n c is empty for all c, which forces ψ z (n) < b. Since K n = {p}, for all i < b with i ≠ p, Q s ∩ W n i is empty, that is, z ∉ W n i and therefore ψ z (n) ≠ i. Thus we must have ψ z (n) = p, and therefore, Γ(O α(s)′′ ; n) = p = ψ z (n). Consequently, the requirement N e,b is fulfilled.
3.2.3. M-strategy. Now, assume that s = 3⟨d, e⟩ + 2. Hereafter we abbreviate Cylinder(g) ∁ as C g , and use Ψ and Φ instead of Ψ d and Ψ e , respectively. We can assume that each basic open ball B e is of the form ∏ i<d(e) B(q(e); 2 −r(e) ) × ∏ j≥d(e) [0, 1] j . Then we define the ε-enlargement B ε e as ∏ i<d(e) B(q(e); 2 −r(e) + ε) × ∏ j≥d(e) [0, 1] j . We can also assume that if ⟨e, D⟩ ∈ Ψ, then any distinct strings σ, τ ∈ D are incomparable.
Step 1. We first construct a sequence (E t ) t∈ω of finite sets of pairwise incomparable strings. Each node in E t is called active at t. We will also define an index e, a finite set I t of indices, and a positive rational m(t) such that we inductively ensure the following:
(IH) g ∉ [E t ] =⇒ Ψ(C g ) ∈ ⋃ d∈I t B d or Ψ(C g ) ∈ B m(t) e .

By removing ⋃ d∈I t B d ∪ B m(t) e from Q s , we will eventually ensure that if g extends no active string at t, then Ψ(C g ) ∉ P s+1 .
We first define an index e. Assume that Ψ is nontrivial, that is, Ψ(C g ) ∈ Q s for some g (otherwise put P s+1 = Q s and go to stage s + 1). Under this assumption, there is ⟨e, D⟩ ∈ Ψ such that Q s ∩ B e is nonempty, and that Q s \ B ε e has a nonempty interior for some rational ε > 0. Put m(0) = 0, I 0 = ∅ and declare that each string ρ ∈ D is active at 0, that is, E 0 = D. Given t, assume that E t has already been constructed.
Choose the lexicographically least node ρ ∈ E t which is active at t. Consider δ = min{dist(B e , B d ) : d ∈ I t }, where dist is a formal distance function. Note that we can effectively calculate the value δ. Inductively we assume that m(t) < min{δ, ε}. We now consider the situation that there is a converging Ψ-computation on [ρ] which outputs a point y such that dist(y, B m(t) e ) > 0. Then, Ψ eventually enumerates a sufficiently small neighborhood B d of y such that the formal distance between B d and B m(t) e is positive. We first consider the case that this situation holds.
Case B1. There is ⟨d, D⟩ ∈ Ψ such that D has no initial segment of ρ, and that the formal distance between B d and B m(t) e is positive. Then define m(t + 1) = m(t) and I t+1 = I t ∪ {d}. We also define E t+1 as the set of strings in D which extend a string in E t , that is,

E t+1 = {σ ∈ D : (∃τ ∈ E t ) τ ⪯ σ}.
Note that if g ∈ [D] and Ψ(C g ) converges, then we have Ψ(C g ) ∈ B d . Thus, by induction hypothesis (IH) at t, for any g ∉ [E t+1 ], either Ψ(C g ) ∈ ⋃ d∈I t+1 B d or Ψ(C g ) ∈ B m(t+1) e holds.

Case B2. Otherwise, for any ⟨d, D⟩ ∈ Ψ, if D has no initial segment of ρ, then dist(B d , B m(t) e ) = 0. Then, choose a rational q such that m(t) < q < min{δ, ε}. In this case, define m(t + 1) = q, and I t+1 = I t , and declare that ρ is not active at t + 1, that is, E t+1 = E t \ {ρ}. Note that the closure of B m(t) e is included in B m(t+1) e , and hence, for any g ∈ [ρ], Ψ(C g ) ∈ B m(t+1) e whenever Ψ(C g ) converges. Thus, by induction hypothesis (IH) at t, for any g ∉ [E t+1 ], either Ψ(C g ) ∈ ⋃ d∈I t+1 B d or Ψ(C g ) ∈ B m(t+1) e holds.
Finally, put m = sup t m(t), I = ⋃ t I t , and U = ⋃ d∈I B d . Note that the downward closure E * of E = ⋃ t E t forms a finite-branching tree, and E * is ∅ ′ -c.e., since one can decide which case holds by using ∅ ′ . Therefore the set [E * ] of all infinite paths through E * is a compact Π 0 1 (∅ ′′ ) subset of ω ω . We also note that [E * ] = ⋂ t [E t ]. Our construction ensures that, by induction,

g ∉ [E * ] =⇒ Ψ(C g ) ∈ U or Ψ(C g ) ∈ B m e .
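The tree combinatorics used here can be illustrated concretely; the particular strings below are my own toy example, not data from the construction:

```python
# Toy illustration: the downward closure of a union of finite sets of
# strings forms a tree (here, strings are tuples of integers).

def downward_closure(strings):
    T = set()
    for s in strings:
        for k in range(len(s) + 1):
            T.add(s[:k])          # every initial segment belongs to the tree
    return T

E = [(0,), (0, 1), (0, 2), (0, 1, 0)]   # a union of stages E_t
T = downward_closure(E)
assert () in T and (0,) in T            # the root and its child are in the tree
# the tree branches finitely at each level; at length 2 in this example:
assert {s for s in T if len(s) == 2} == {(0, 1), (0, 2)}
```

An infinite path through such a tree is a sequence all of whose initial segments lie in the tree, which is the reading of [E * ] above.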
We assume that P s ∩ U is nonempty; otherwise, put P s+1 = P s \ B ε e . Our construction ensures that B m e ∩U is empty because we always choose m(t) < δ. Since P s is a SID-Cantor manifold,
P s \ (B m e ∪ U ) is SID. Then there is a SID-Cantor manifold Q s ⊆ P s \ (B m e ∪ U ).
Step 2. Consider the following:

U σ = ⋃{B D : ⟨σ, D⟩ ∈ Φ}.

We write ϕ(x) = g if Φ x = C g . Then, x ∈ U σ means that ϕ(x) does not extend σ whenever it is defined. Thus, if there are incomparable strings σ, τ such that x ∉ U σ ∪ U τ , then ϕ(x) is undefined. At each substage t, check whether Q s ∩ ⋂ σ∈E t U σ is nonempty. Note that if it is true, there is x ∈ Q s such that ϕ(x) ∉ [E t ] or else ϕ(x) is undefined.
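The role of the cylinder-complement coding can be sketched as follows, assuming (as the definition of U σ suggests) that Cylinder(g) codes the initial segments of g, so that enumerating σ certifies that ϕ(x) does not extend σ; the strings below are illustrative only:

```python
# Illustration: Cylinder(g)∁ consists of the finite strings that are
# NOT initial segments of g (under the assumed coding), so a string
# enumerated into it witnesses non-extension.

def is_initial_segment(sigma, g_prefix):
    return len(sigma) <= len(g_prefix) and list(g_prefix[:len(sigma)]) == list(sigma)

g_prefix = (1, 0, 1, 1)                 # a finite prefix of some g
candidates = [(0,), (1, 1), (1, 0, 1)]
cyl_complement = [s for s in candidates
                  if not is_initial_segment(s, g_prefix)]
assert cyl_complement == [(0,), (1, 1)]   # (1, 0, 1) IS an initial segment of g
```

In particular, if two incomparable strings both fail to be enumerated, both would have to be initial segments of the same g, which is impossible; this is why ϕ(x) is undefined in that situation.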
Case C1. If Q s ∩ ⋂ σ∈E t U σ is nonempty at some substage t, then define P s+1 = Q s ∩ ⋂ σ∈E t U σ and go to stage s + 1. Note that P s+1 is a nonempty open subset of a SID-Cantor manifold Q s , and therefore, P s+1 is SID by Observation 2. In this case, for any x ∈ P s+1 , whenever ϕ(x) converges, ϕ(x) ∉ [E t ] ⊇ [E * ], and therefore Ψ(C ϕ(x) ) ∉ Q s ⊇ P s+1 . This forces that Ψ ∘ Φ(x) ≠ x for any x ∈ P s+1 , which ensures the requirement M d,e .
Case C2. Case C1 is not applied, and at least two of (V σ ) σ∈E t are nonempty at some substage t, where V σ = Q s ∩ ⋂ τ∈E t \{σ} U τ . Note that x ∈ V σ means that if ϕ(x) ∈ [E t ], then ϕ(x) must extend σ. In this case, choose σ such that V σ is nonempty, and then V ≠σ := ⋃ τ∈E t \{σ} V τ is also nonempty (where x ∈ V ≠σ means that if ϕ(x) ∈ [E t ], then ϕ(x) must extend some τ ∈ E t \ {σ}). Note that V ≠σ ⊆ U σ , and thus V σ ∩ V ≠σ ⊆ Q s ∩ ⋂ σ∈E t U σ is empty since Case C1 is not applied. Therefore, Q s \ (V σ ∪ V ≠σ ) is SID since Q s is a SID-Cantor manifold. Then define P s+1 = Q s \ (V σ ∪ V ≠σ ) and go to stage s + 1. Then for any x ∈ P s+1 , since x ∉ V σ ∪ V ≠σ , we have ϕ(x) ∉ [E t ] ⊇ [E * ] as mentioned above. This forces that Ψ ∘ Φ(x) ≠ x for any x ∈ P s+1 , which ensures the requirement M d,e .
Case C3. V σ is empty for any σ ∈ E t at some substage t. This automatically implies that ϕ(x) ∉ [E t ] for any x ∈ Q s . Put P s+1 = Q s , go to stage s + 1, and then Ψ ∘ Φ(x) ≠ x for any x ∈ P s+1 as in the above argument. Then, the requirement M d,e is fulfilled.
Case C4. Otherwise, for each t, there is a unique σ(t) ∈ E t such that Q s ⊆ V σ(t) . Note that ⋃ t σ(t) is infinite, or otherwise σ(t) is not active for almost all t. Define Γ(x) = ⋃ t σ(t). We claim that for any x ∈ Q s , if Ψ ∘ Φ(x) = x, then ϕ(x) = Γ(x). First note that if Ψ ∘ Φ(x) = x (in this case, Ψ(C ϕ(x) ) = x ∈ Q s ), we must have ϕ(x) ∈ [E * ] as mentioned in Case C1. Thus, for each t, there is a unique σ ∈ E t such that ϕ(x) extends σ, that is, x ∈ V σ . By uniqueness of σ, we must have σ = σ(t). Hence, ϕ(x) extends σ(t) for any t ∈ ω. If ⋃ t σ(t) is infinite, this concludes that ϕ(x) = Γ(x). Otherwise, σ = σ(t) for almost all t. This implies that σ is not active at some stage, and then [E * ] ∩ [σ] = ∅. This contradicts our observation that σ ≺ ϕ(x) ∈ [E * ]. Thus, our claim is verified. Put P s+1 = Q s , and go to stage s + 1. Then, by the above claim, the requirement M d,e is fulfilled.
4. Technical appendix
4.1. Proof of Observation 2. Suppose for the sake of contradiction that there is a nonempty open subset U of a continuum X which is zero-dimensional. Fix a point x ∈ U . By regularity, there is an open neighborhood V of x whose closure in X is included in U . By zero-dimensionality of U , there is an open set B ⊆ X such that x ∈ B ⊆ V and U ∩ ∂B = ∅. However, ∂B is included in the closure of V , and hence in U ; therefore, U ∩ ∂B = ∂B = ∅. Hence, the empty set separates X , which contradicts our assumption that X is a continuum.

4.2. Proof of Proposition 5. Let d be a computable metric on the Hilbert cube. We say that A ⊆ [0, 1] ω is d-computable if the function d A : x → inf y∈A d(x, y) is computable. In other words, the closure of A is computable as a closed set. A partition L of Y is d-computable if there are disjoint open sets U and V in Y such that L = Y \ (U ∪ V ) and U and V are d-computable in [0, 1] ω .

Lemma 7. Let Y be a subspace of [0, 1] ω . Assume that (A, B) and (A * , B * ) are pairs of disjoint closed subsets of [0, 1] ω , and that A (B, resp.) is included in the interior of A * (B * , resp.). If S is a d-computable partition of Y between Y ∩ A * and Y ∩ B * , then one can effectively find a Π 0 1 partition T ⊆ S of [0, 1] ω between A and B.

Proof. Assume that S is of the form Y \ (U * ∪ V * ), so that Y ∩ A * ⊆ U * and Y ∩ B * ⊆ V * . Then B * ∩ U * is empty, and thus B ∩ U * is empty. Similarly, A ∩ V * is empty. Therefore, A ∪ U * and B ∪ V * are separated. Consider
U = {x ∈ [0, 1] ω : d(x, A ∪ U * ) < d(x, B ∪ V * )}, V = {x ∈ [0, 1] ω : d(x, B ∪ V * ) < d(x, A ∪ U * )},
Clearly, U and V are disjoint. Note that A∪U * and B ∪V * are d-computable since d(x, E ∪ F ) = min{d(x, E), d(x, F )}. Thus, U and V are Σ 0 1 . We also note that
A ∪ U * ⊆ U and B ∪ V * ⊆ V . Define T = [0, 1] ω \ (U ∪ V ).
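The separation step in this proof (turning the separated sets A ∪ U * and B ∪ V * into disjoint open sets by comparing distances) can be sketched on a finite example; the point sets below are my own illustration, not the paper's construction:

```python
# Finite sketch of: U = {x : d(x, E) < d(x, F)}, V = {x : d(x, F) < d(x, E)}
# for separated sets E, F, which produces disjoint open sets containing them.
import math

def dist(x, S):
    # distance from a point to a finite point set
    return min(math.dist(x, s) for s in S)

E = [(0.0, 0.0), (0.1, 0.0)]   # plays the role of A ∪ U*
F = [(1.0, 1.0), (0.9, 1.0)]   # plays the role of B ∪ V*

def in_U(x): return dist(x, E) < dist(x, F)
def in_V(x): return dist(x, F) < dist(x, E)

grid = [(i / 10, j / 10) for i in range(11) for j in range(11)]
assert all(not (in_U(x) and in_V(x)) for x in grid)   # U and V are disjoint
assert all(in_U(x) for x in E) and all(in_V(x) for x in F)
```

The complement of U ∪ V then plays the role of the partition T between the two sides, as in the definition of T above.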
We assume that a basic open set of the Hilbert cube is of the form ∏ i<n (p i , q i ) × ∏ i≥n I i such that p i and q i are rationals, and I i = [0, 1]. Fix an effective list U = (U n ) n∈ω of open sets written as finite unions of basic open sets in the Hilbert cube, so that one can decide whether U m ∩ U n = ∅.
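Deciding disjointness of such basic open sets is a finitary matter, since each constrains only finitely many coordinates by rational intervals. A minimal sketch, under this assumed representation:

```python
# A box constrains coordinates 0..n-1 by rational open intervals; all
# later coordinates are the full interval [0, 1] and impose no constraint.
from fractions import Fraction as F

def disjoint(box1, box2):
    # box = list of (p, q) rational open intervals
    for (p1, q1), (p2, q2) in zip(box1, box2):
        if q1 <= p2 or q2 <= p1:       # intervals of a common coordinate miss
            return True
    return False                        # every shared coordinate overlaps

b1 = [(F(0), F(1, 2)), (F(0), F(1))]
b2 = [(F(1, 2), F(1))]                  # constrains only coordinate 0
assert disjoint(b1, b2)                 # (0, 1/2) and (1/2, 1) are disjoint
b3 = [(F(1, 4), F(3, 4))]
assert not disjoint(b1, b3)
```

Two boxes are disjoint exactly when some jointly constrained coordinate has disjoint intervals, which is decidable from the rational endpoints.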
For each i ∈ ω define A i = π −1 i {0} and B i = π −1 i {1}.
Note that one can give an effective enumeration (U i n , V i n ) n∈ω of all disjoint pairs in U 2 such that A i ⊆ U i n and B i ⊆ V i n .

Lemma 8. Let C ⊆ [0, 1] be a computable Cantor set, let Λ be a computable subset of ω, and let j ∉ Λ. Then, there exists a uniform sequence (S i ) i∈Λ of Π 0 1 sets in [0, 1] ω such that S i is a partition of [0, 1] ω between A i and B i , and for any compact set M ⊆ ⋂ i S i , if C ⊆ π j (M ) then M is SID.
Proof. Assume Λ = {a(i)} i∈ω . Let C ⊆ [0, 1] be a computable homeomorphic copy of 2 ω , and fix a computable homeomorphism ι : C → 2 ω . Let C n be the set of all x such that ι(x) contains at least n + 1 many ones. If ι(x) is of the form 0 k(0) 10 k(1) 10 k(2) 1 . . . , we define x̂ = (k(0), k(1), . . . ). Note that x ∈ C n if and only if x̂(n) is defined. For any x ∈ [0, 1], define L n (x) as follows.
L n (x) = [0, 1] ω \ (U a(n) x̂(n) ∪ V a(n) x̂(n) ) if x ∈ C n , and L n (x) = [0, 1] ω if x ∉ C n .
Note that L n (x) is Π 0 1 uniformly relative to x. Therefore,
H n = {x = (x i ) i∈ω : x ∈ L n (x j )}.
is Π 0 1 . Let C n,m be the set of all x ∈ C n such that x̂(n) = m. For E = π −1 j [C n ], note that H n ∩ E is a partition between A a(n) ∩ E and B a(n) ∩ E. To see this, consider

U = ⋃ m∈ω U a(n) m ∩ π −1 j [C n,m ], V = ⋃ m∈ω V a(n) m ∩ π −1 j [C n,m ].

It is clear that H n ∩ E = E \ (U ∪ V ), A a(n) ∩ E ⊆ U , B a(n) ∩ E ⊆ V , and U and V are disjoint.
We claim that U and V are d-computable. Given a positive real ε < 2 −j ,
put E ε (x j ) = B ε (x j ) ∩ C n . If E ε (x j ) is empty, then d(x, U ) ≥ ε.
Assume that E ε (x j ) is nonempty. There are two cases. If [x j − ε, x j + ε) contains a point in C \ C n , then E ε (x j ) intersects with C n,m for almost all m. Choose S of the form U a(n) k such that x ∈ S and π −1 a(n) [C n ] ⊆ S. Then, clearly B ε (x) intersects with S. In this case, d(x, U ) ≤ ε. Otherwise, consider the set D of all m such that E ε (x j ) intersects with C n,m . Note that D is
finite. Then, consider U D = ⋃ m∈D U a(n) m ∩ π −1 j [C n,m ]. Clearly U D is d-computable uniformly in D. If we see d(x, U D ) > ε, then d(x, U ) ≥ ε. If we see d(x, U D ) < ε, then d(x, U ) < ε. This gives a procedure computing d(x, U ). Hence, U is d-computable. Similarly, V is d-computable. Thus, H n ∩ E is a d-computable partition of E between A a(n) ∩ E and B a(n) ∩ E.
By Lemma 7, there is a uniform sequence (S n ) n∈ω of Π 0 1 partitions of [0, 1] ω between A a(n) and B a(n) such that S n ⊆ H n . Then, define

H = ⋂ n∈ω S n .
Clearly, H is Π 0 1 . Assume that M is a compact subset of H. We claim that if C ⊆ π j [M ], then M is SID. If not, there exist partitions L n of the Hilbert cube between A a(n) and B a(n) such that M ∩ ⋂ n∈ω L n is empty. By compactness, we can assume that L n is of the form
[0, 1] ω \ (U a(n) k(n) ∪ V a(n) k(n) ). Then, for z = ι −1 (0 k(0) 10 k(1) 1 . . . ), we have L n (z) = L n . Since C ⊆ π j [M ], there is x ∈ M such that x j = z. Since M ⊆ H ⊆ H n ,
we have x ∈ L n (z) = L n . Therefore, we get M ∩ ⋂ n L n ≠ ∅, a contradiction.
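The coding x ↦ x̂ used in this proof can be made concrete; decoding a 0-1 word into the block lengths k(0), k(1), . . . is sketched below:

```python
# If ι(x) = 0^{k(0)} 1 0^{k(1)} 1 ..., then x̂ = (k(0), k(1), ...),
# and x ∈ C_n iff ι(x) has at least n+1 ones iff x̂(n) is defined.

def hat(binary_word):
    # lengths of the maximal 0-blocks preceding each 1
    ks, run = [], 0
    for bit in binary_word:
        if bit == '0':
            run += 1
        else:                # bit == '1' closes a block
            ks.append(run)
            run = 0
    return ks

w = '0010100011'            # a finite prefix of some ι(x)
assert hat(w) == [2, 1, 3, 0]
assert len(hat(w)) >= 3     # so any x with this prefix lies in C_2
```

Each additional 1 in ι(x) makes one more value x̂(n) defined, which is exactly the membership criterion for C n used above.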
Let (C i ) i∈ω be a computable collection of pairwise disjoint Cantor sets in [0, 1] such that each nondegenerate subinterval of [0, 1] contains one of the C i . Then proceed as in the usual proof (see van Mill [17, Theorem 3.13.10]).
4.3. Proof of Proposition 6. Given a space X , we say that L is a partition of X if there is a pair of disjoint nonempty open sets U, V ⊆ X such that L = X \ (U ∪ V ). Recall that an SID-Cantor manifold X is a SID compactum such that, if L is a partition of X , then L is SID. In the construction of a Cantor manifold, we will use the notion of an absolute extensor. However, it is known that there is no CW-complex S such that a compactum X is SID if and only if S is an absolute extensor for X . Thus, instead of directly constructing an SID-Cantor manifold, we will show the following lemma, and combine it with the existence of a strongly Henderson Π 0 1 compactum.

Lemma 9. For any Π 0 1 set P ⊆ [0, 1] ω , if P is at least n-dimensional, then P contains a Π 0 1 (O) subset which is not partitioned by an at most (n − 2)-dimensional closed subset.
Proof. Given a sequence of pairs of closed subsets of [0, 1] ω , note that (A i , B i ) i<n is inessential in P if and only if (∃(U i , V i ) i<n ) A i ⊆ U i , B i ⊆ V i , and ⋂ i<n (P \ (U i ∪ V i )) = ∅, where (U i , V i ) i<n ranges over all n disjoint pairs of open subsets of [0, 1] ω . By complete normality of [0, 1] ω , we can assume that U i and V i are disjoint in [0, 1] ω (see [17, Corollary 3.1.5]). Moreover, by compactness, we can assume that U i and V i are finite unions of basic open balls in [0, 1] ω . The above observation shows that inessentiality is a Σ 0 1 property. We use Ess((A i , B i ) i<n , P ) to denote a Π 0 1 formula saying that (A i , B i ) i<n is essential in P .
Since dim(P ) ≥ n, there is a sequence (A i , B i ) i<n of n disjoint pairs of closed subsets of [0, 1] ω which is essential in P . By normality of [0, 1] ω , for any i < n, there are open sets C i , D i such that C i ∩ D i = ∅, A i ⊆ C i and B i ⊆ D i . By compactness of A i and B i , we can assume that C i and D i are finite unions of basic open balls. Then,

Q = {(A i , B i ) i<n : Ess((A i ∩ C i , B i ∩ D i ) i<n , P )}

is still a nonempty Π 0 1 collection of n pairs of closed subsets of [0, 1] ω . By the low basis theorem, there is a low degree d such that Q has a d-computable element. In other words, there is a sequence (A i , B i ) i<n of n disjoint pairs of Π 0 1 (d) subsets of [0, 1] ω which is essential in P . Now, we follow the usual construction of a Cantor manifold. For A = ⋃ i<n A i ∪ B i , there is a continuous function f : A → S n−1 such that f does not admit a continuous extension over P (see [17, Theorem 3.6.5]). Given s, by using O, we can check whether f admits a continuous extension over A ∪ (P s \ B e ). Then, proceed as in the usual construction (see [17, Theorem 3.7.8]).
4.4. Remarks. It is natural to ask whether one can replace O in Lemma 9 with an arithmetical oracle or an arbitrary PA degree. Here, a set of integers has a PA degree if it computes a complete consistent extension of Peano Arithmetic (or ZFC). At this time, we do not have an answer. Here we only observe that O cannot be replaced with ∅: we show that we have no way of obtaining even a connected Π 0 1 subset of a given Π 0 1 set of positive dimension.
Observation 10. The following are equivalent for x ∈ 2 ω :
(1) x has a PA degree.
(2) Every non-zero-dimensional Π 0 1 subset of [0, 1] ω has a nondegenerate Π 0 1 (x) subcontinuum.
(3) Every non-zero-dimensional Π 0 1 subset of [0, 1] 2 has a nondegenerate Π 0 1 (x) subcontinuum.

Proof. To see (1)⇒(2), let P be a Π 0 1 subset of [0, 1] ω of positive dimension. Then, let S be the set of all pairs (a, b) ∈ P 2 not separated by a clopen set in P , that is, such that there is no pair of open sets U, V with P ⊆ U ∪ V , U ∩ V ∩ P = ∅, a ∈ U and b ∈ V . By compactness of P , we can assume that U and V range over finite unions of basic open balls in the Hilbert cube. Moreover, by normality, we can assume that U ∩ V ∩ P = ∅. Therefore, it is easy to check that S is a nonempty Π 0 1 set. Moreover, we know that there is (c, d) ∈ S such that c ≠ d. Choose disjoint computable closed balls B c and B d such that c ∈ B c and d ∈ B d . Thus, given a PA degree x, S ∩ (B c × B d ) has an x-computable element (a, b). Now, we claim that the quasi-component C a containing a in P is Π 0 1 (x). Let W be an effective enumeration of all U such that P ⊆ U ∪ V and U ∩ V ∩ P = ∅. As in the above argument, if S is a clopen set in P , then there is U ∈ W such that S ∩ P = U ∩ P . This shows that C a = P ∩ ⋂{U ∈ W : a ∈ U }, which is Π 0 1 (x). Since P is a compactum, C a is indeed a connected component. Note that C a is nondegenerate since a, b ∈ C a and a ≠ b.
For (3)⇒(1), consider [0, 1] × P , where P is a Π 0 1 subset of a Cantor ternary set such that all elements of P have PA degrees. Clearly, [0, 1] × P is one-dimensional, but any connected subset Q is included in [0, 1] × {z} for some z ∈ P . Assume that Q is Π 0 1 (x). Note that the projection π[Q] = {z : (∃y ∈ [0, 1]) (y, z) ∈ Q} of Q is still Π 0 1 (x). Thus, π[Q] is a Π 0 1 (x) singleton. Then, x computes the unique element of π[Q], which has a PA degree.
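The geometry behind (3)⇒(1) can be illustrated on a finite sample: every connected piece of [0, 1] × P projects to a single point of P . The sampling and the ε-neighborhood notion of connectivity below are my own simplifications:

```python
# Finite sketch: [0,1] × P for a two-point P ⊆ [0,1]; connectivity is
# approximated by an ε-neighborhood graph on sample points.

def components(points, eps):
    comp, seen = [], set()
    for p in points:
        if p in seen:
            continue
        stack, cur = [p], []
        seen.add(p)
        while stack:                    # depth-first flood fill
            q = stack.pop()
            cur.append(q)
            for r in points:
                if r not in seen and abs(q[0] - r[0]) + abs(q[1] - r[1]) < eps:
                    seen.add(r)
                    stack.append(r)
        comp.append(cur)
    return comp

P = [0.0, 0.5]                          # a toy two-point "P"
pts = [(i / 10, z) for i in range(11) for z in P]
cs = components(pts, eps=0.2)
assert len(cs) == 2                     # one component per point of P
assert all(len({z for (_, z) in c}) == 1 for c in cs)   # projection is a singleton
```

The second assertion is the discrete analogue of the step "π[Q] is a singleton" in the argument above.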
The proof of (1)⇒(2) in Observation 10 is not uniform. For instance, one can easily construct a computable sequence (P e ) e∈ω of Π 0 1 sets, each with two connected components C 0 e and C 1 e , such that the e-th Turing machine halts, iff C 0 e is a singleton, iff C 1 e is not a singleton. This implies that, to solve the problem of finding a uniform procedure that, given a Π 0 1 set of positive dimension, returns its nontrivial subcontinuum, we need at least the power of the halting problem 0 ′ .
Open Questions.
We here list open problems.

Question 11. Does there exist an almost total degree which is not graph-cototal?

Question 12. Does there exist a Π 0 1 (z) HSID subset of [0, 1] ω for a hyperarithmetical z?

Question 13. Can O in Lemma 9 be replaced with an arithmetical oracle or an arbitrary PA degree?

Let A dim≥n be the hyperspace of at least n-dimensional closed subsets of the Hilbert cube equipped with the usual negative representation. By the n-dimensional Cantor manifold theorem, we mean a multi-valued function CMT n : A dim≥n ⇒ A dim≥n such that for any X, CMT n (X) is the set of all closed subsets Y ⊆ X such that Y is not partitioned by an at most (n − 2)-dimensional closed subset.

Question 14. Determine the Weihrauch degree of the n-dimensional Cantor manifold theorem.
1 Given a closed subset P of a compactum X, is the set of all f : P → S n extendible over X Borel in C(P, S n ) endowed with the compact-open topology?
2 Of course, the separation dimension and the extension dimension coincide, but we need an instance-wise correspondence: Given a function f : A → S n , how can we obtain a sequence of pairs of disjoint closed sets in X whose inessentiality in Y is equivalent to extendability of f over Y for any compact space Y such that A ⊆ Y ⊆ X?
References

[1] Uri Andrews, Hristo A. Ganchev, Rutger Kuyper, Steffen Lempp, Joseph S. Miller, Alexandra A. Soskova, and Mariya I. Soskova. On cototality and the skip operator in the enumeration degrees. Submitted.
[2] Adam R. Day and Joseph S. Miller. Randomness for non-computable measures. Trans. Amer. Math. Soc., 365(7):3575-3591, 2013.
[3] Ryszard Engelking. Dimension Theory. North-Holland Publishing Co., Amsterdam-Oxford-New York; PWN-Polish Scientific Publishers, Warsaw, 1978. North-Holland Mathematical Library, 19.
[4] Vassilios Gregoriades, Takayuki Kihara, and Keng Meng Ng. Turing degrees in Polish spaces and decomposability of Borel functions. Submitted.
[5] Emmanuel Jeandel. Enumeration in closure spaces with applications to algebra. Preprint, 2015.
[6] Takayuki Kihara. Higher randomness and lim-sup forcing within and beyond hyperarithmetic, 2016. To appear in Proceedings of the Singapore Programme on Sets and Computations.
[7] Takayuki Kihara, Steffen Lempp, Keng Meng Ng, and Arno Pauly. Some notes based on discussions in Nanyang Technological University at Singapore. In preparation, Oct. 2016.
[8] Takayuki Kihara and Arno Pauly. Point degree spectra of represented spaces. Submitted.
[9] Joseph S. Miller. Degrees of unsolvability of continuous functions. J. Symbolic Logic, 69(2):555-584, 2004.
[10] P. G. Odifreddi. Classical Recursion Theory. Vol. II, volume 143 of Studies in Logic and the Foundations of Mathematics. North-Holland Publishing Co., Amsterdam, 1999.
[11] Piergiorgio Odifreddi. Classical Recursion Theory, volume 125 of Studies in Logic and the Foundations of Mathematics. North-Holland Publishing Co., Amsterdam, 1989.
[12] R. Pol and P. Zakrzewski. On Borel mappings and σ-ideals generated by closed sets. Adv. Math., 231(2):651-663, 2012.
[13] Roman Pol. Note on Borel mappings and dimension. Topology Appl., 195:275-283, 2015.
[14] Leonard R. Rubin, R. M. Schori, and John J. Walsh. New dimension-theory techniques for constructing infinite-dimensional examples. General Topology Appl., 10(1):93-102, 1979.
[15] Simon Thomas and Jay Williams. The bi-embeddability relation for finitely generated groups II. Arch. Math. Logic, 55(3-4):385-396, 2016.
[16] L. A. Tumarkin. On Cantorian manifolds of an infinite number of dimensions. Dokl. Akad. Nauk SSSR (N.S.), 115:244-246, 1957.
[17] Jan van Mill. The infinite-dimensional topology of function spaces, volume 64 of North-Holland Mathematical Library. North-Holland Publishing Co., Amsterdam, 2001.
[18] Jindřich Zapletal. Forcing idealized, volume 174 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge, 2008.
[19] Jindřich Zapletal. Dimension theory and forcing. Topology Appl., 167:31-35, 2014.

(Takayuki Kihara) Department of Mathematics, University of California, Berkeley, United States
E-mail address: [email protected]
|
[] |
[
"Effective models of two-flavor QCD: from small towards large m q",
"Effective models of two-flavor QCD: from small towards large m q"
] |
[
"T Kähärä ",
"K Tuominen ",
"\nDepartment of Physics\nUniversity of Jyväskylä\nP.O.Box 35FIN-40014JyväskyläFinland\n",
"\nHelsinki Institute of Physics\nUniversity of Helsinki\nP.O.Box 64FIN-00014Finland\n"
] |
[
"Department of Physics\nUniversity of Jyväskylä\nP.O.Box 35FIN-40014JyväskyläFinland",
"Helsinki Institute of Physics\nUniversity of Helsinki\nP.O.Box 64FIN-00014Finland"
] |
[] |
We study effective models of chiral fields and the Polyakov loop expected to describe the dynamics responsible for the phase structure of two-flavor QCD. We consider the chiral sector described using either the linear sigma model or the Nambu-Jona-Lasinio model and study how these models, at the mean-field level when coupled with the Polyakov loop, behave as a function of increasing bare quark (or pion) mass. We find qualitatively similar behaviors for the linear sigma model and the Nambu-Jona-Lasinio model and, relating to existing lattice data, show that one cannot conclusively decide which of the two approximate symmetries drives the phase transitions near the physical point. * Electronic address: [email protected] † Electronic address: [email protected]
|
10.1103/physrevd.80.114022
|
[
"https://arxiv.org/pdf/0906.0890v1.pdf"
] | 119,090,667 |
0906.0890
|
fbcee426fbeea3b52fbbc4be1fa78184517aba03
|
Effective models of two-flavor QCD: from small towards large m q
T Kähärä
K Tuominen
Department of Physics
University of Jyväskylä
P.O.Box 35FIN-40014JyväskyläFinland
Helsinki Institute of Physics
University of Helsinki
P.O.Box 64FIN-00014Finland
Effective models of two-flavor QCD: from small towards large m q
We study effective models of chiral fields and the Polyakov loop expected to describe the dynamics responsible for the phase structure of two-flavor QCD. We consider the chiral sector described using either the linear sigma model or the Nambu-Jona-Lasinio model and study how these models, at the mean-field level when coupled with the Polyakov loop, behave as a function of increasing bare quark (or pion) mass. We find qualitatively similar behaviors for the linear sigma model and the Nambu-Jona-Lasinio model and, relating to existing lattice data, show that one cannot conclusively decide which of the two approximate symmetries drives the phase transitions near the physical point. * Electronic address: [email protected] † Electronic address: [email protected]
I. INTRODUCTION
It has been proposed that one can understand the interplay between chiral symmetry restoration and center symmetry breaking, and their implications for the phase transitions of QCD at finite temperature and density, by treating chiral symmetry as almost exact and center symmetry breaking as a consequence of chiral symmetry restoration and interactions [1]. This picture allows one to explain, in very transparent terms, why deconfinement and chiral symmetry restoration coincide in the theory with fundamental quarks [2] while for adjoint quarks the two phase transitions are widely separated [3]. For adjoint quarks at finite chemical potential this model framework also predicts interesting multicritical phenomena, see [4]. For QCD the standard viewpoint, supported by the physical spectrum, is that in the two-flavor case the chiral symmetry is almost exact whereas the center symmetry is badly broken. However, one might regard this picture as too simple, since it does not take into account how rapidly the system departs from the limit of exact chiral symmetry. Consider cranking the pion mass up from its physical value. There are basically two options. First, it could be that even for modest masses chiral symmetry remains a good approximation, while the center symmetry becomes an approximate symmetry only for a very heavy pion. Alternatively, the second option is that the system departs from the chiral limit very fast, and chiral symmetry quickly becomes a poor symmetry, simultaneously allowing the center symmetry to be restored rapidly as a function of the pion mass. This latter viewpoint has been advocated in e.g. [5]. A new ingredient of our analysis is the simultaneous treatment of the chiral and Polyakov degrees of freedom within the same model setup. Furthermore, in the spirit of our earlier work [6], we consider side-by-side two different realizations of the chiral sector.
This will allow us to gain more insight into the extent to which the quantitative results of these models can be trusted.
We consider the linear sigma model and the Nambu-Jona-Lasinio model [7], both augmented with the Polyakov loop. A large body of literature has recently been devoted to these models [6,8,9,10,11,12,13]. These models provide a simple and concrete realization of the intertwining of deconfinement and chiral symmetry restoration as explained in general terms in [1]. As discussed in detail in [6], especially at finite densities the numerical output is quite sensitive to which realization of the chiral dynamics is used. Therefore the quantitative results should, when used to draw conclusions about QCD dynamics, be interpreted with some care.
In this paper, we continue the investigation started in [6]. Here our aim is to study how these models respond when the amount of explicit chiral symmetry breaking is increased. For the case of the Polyakov-linear sigma model (PLSM), we basically confirm the picture established in [5], and we find that the Polyakov-Nambu-Jona-Lasinio (PNJL) model leads to qualitatively similar results. Our main conclusion is that the PLSM and PNJL models lead to a qualitatively, and to some extent quantitatively (within 10% or less), similar phase structure at and away from the physical value of m_π. This observation strengthens the viability of these models as an effective description of the thermodynamics of two-flavor QCD at zero net-quark density.
The paper is organized as follows: in section II we briefly recall the basic definitions of the two models we consider and explain how the parameters are affected to take into account varying pion mass. Then in section III we present our main results and in section IV our conclusions and outlook.
II. THE MODELS
A. Models at the physical point mπ = 140 MeV

Let us first review the general definitions of the models we consider. We start with the description in terms of parameters fixed to correspond to the physical values reproducing the observed masses of the pion and the scalar resonance σ. The PLSM model consists of the linear sigma model, a Polyakov loop potential, and an interaction between the two. The PNJL model is similar, with the chiral part of the Lagrangian now corresponding to the NJL model. For the derivation of the grand potential we refer to the literature, e.g. [6], and merely state the result here:
$\Omega = -\frac{T}{V}\ln\mathcal{Z} = U_{\mathrm{chiral}} + U + \Omega_{\bar{q}q}\,, \qquad (1)$
where
$U_{\mathrm{chiral}} = \frac{\lambda^2}{4}\left(\sigma^2 + \boldsymbol{\pi}^2 - v^2\right)^2 - H\sigma \qquad (2)$
for the PLSM model and
$U_{\mathrm{chiral}} = \frac{(m_0 - M)^2}{2G} \qquad (3)$
for the PNJL model. The parameters in the above equations are fixed by the physical vacuum properties. For the LSM, H = f_π m_π^2, where f_π = 93 MeV and m_π = 138 MeV, and the coupling λ^2 ≈ 20 is determined by the tree-level mass $m_\sigma^2 = 2\lambda^2 f_\pi^2 + m_\pi^2$, which is set to 600 MeV. For the NJL part, on the other hand, we fix the bare quark mass to m_0 = 5.5 MeV and the coupling to G = 10.08 GeV^{-2}. Furthermore, the constituent mass M is related to the $\bar{q}q$ condensate by $M = m_0 - G\langle\bar{q}q\rangle$.
The Polyakov loop is included in both models through the mean-field potential

$\mathcal{U} \equiv U(\ell, \ell^*, T)/T^4 = -\frac{b_2(T)}{2}\,|\ell|^2 - \frac{b_3}{6}\left(\ell^3 + \ell^{*3}\right) + \frac{b_4}{4}\left(|\ell|^2\right)^2, \qquad (4)$

where

$b_2(T) = a_0 + a_1\,\frac{T_0}{T} + a_2\left(\frac{T_0}{T}\right)^2 + a_3\left(\frac{T_0}{T}\right)^3, \qquad (5)$

and the constants a_i, b_i are fixed to reproduce pure gauge theory thermodynamics with a phase transition at T_0 = 270 MeV; we adopt the values determined in [9], shown for completeness in Table I. Here ℓ is the gauge-invariant Polyakov loop in the fundamental representation, and while one could also include other loop degrees of freedom, say the adjoint or the sextet, we consider here only the mean-field potential of the fundamental loop, parametrized to describe pure gauge thermodynamics. Finally, the interaction between the Polyakov loop and the chiral degrees of freedom is given by the potential
$\Omega_{\bar{q}q} = -2N_f T \int \frac{d^3p}{(2\pi)^3} \left[ \ln\!\left(1 + 3\left(\ell + \ell^* e^{-(E-\mu)/T}\right) e^{-(E-\mu)/T} + e^{-3(E-\mu)/T}\right) + \ln\!\left(1 + 3\left(\ell^* + \ell\, e^{-(E+\mu)/T}\right) e^{-(E+\mu)/T} + e^{-3(E+\mu)/T}\right) \right]. \qquad (6)$
In the PNJL model the interaction potential (6) also has an additional vacuum term,

$-6N_f \int \frac{d^3p}{(2\pi)^3}\, E\, \theta(\Lambda^2 - |\vec{p}\,|^2), \qquad (7)$
which is controlled by the cut-off Λ. In the above equations $E = \sqrt{p^2 + M^2}$ for the PNJL model and, similarly, $E = \sqrt{p^2 + g^2\sigma^2}$ for the PLSM model; in the latter case the coupling constant g is fixed to 3.3, corresponding to a baryon mass of ∼ 1 GeV. Thermodynamics is now determined by solving the equations of motion for the mean fields,

$\frac{\partial\Omega}{\partial\sigma} = 0, \qquad \frac{\partial\Omega}{\partial\ell} = 0, \qquad \frac{\partial\Omega}{\partial\ell^*} = 0, \qquad (8)$

and the pressure is then given by evaluating the potential at the minimum: p = −Ω(T, µ). The above equations are valid at nonzero densities, but in this work we will not consider finite chemical potential and set µ = 0. This results in the additional simplification ℓ = ℓ*.
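At T = 0 and µ = 0, applying the stationarity condition (8) to the PNJL chiral sector of eqs. (3) and (7), with M as the chiral variable, gives the vacuum gap equation $M = m_0 + 6N_f G \int \frac{d^3p}{(2\pi)^3}\,\frac{M}{E}\,\theta(\Lambda^2 - |\vec p\,|^2)$. A minimal fixed-point sketch follows; the cutoff Λ belongs to Table I, which is not reproduced in the text, so the value 0.65 GeV used below is a typical two-flavor choice and should be treated as an assumption, as should the simple midpoint quadrature:

```python
import math

def gap_rhs(M, m0=0.0055, G=10.08, cutoff=0.65, Nf=2, n=400):
    """RHS of the vacuum gap equation M = m0 + 6*Nf*G * int d^3p/(2pi)^3 M/E,
    with E = sqrt(p^2 + M^2) and |p| < cutoff (assumed value); GeV units."""
    h = cutoff / n
    s = 0.0
    for i in range(n):            # midpoint rule for int_0^cutoff dp p^2 M/E
        p = (i + 0.5) * h
        s += p * p * M / math.sqrt(p * p + M * M)
    integral = s * h / (2.0 * math.pi ** 2)   # angular integration done analytically
    return m0 + 6 * Nf * G * integral

def constituent_mass(M_start=0.3, tol=1e-8, itmax=1000):
    """Solve M = gap_rhs(M) by fixed-point iteration; returns M in GeV."""
    M = M_start
    for _ in range(itmax):
        M_new = gap_rhs(M)
        if abs(M_new - M) < tol:
            return M_new
        M = M_new
    return M
```

With these inputs the iteration settles at a constituent mass of a few hundred MeV, the expected order of magnitude for the vacuum solution; the precise value tracks the assumed cutoff.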
B. PLSM model away from the physical point
In this work we want to explore the region in the models where the pion mass differs from its observed value. For this we need a consistent way of setting the model parameters in the non-physical region. In the PLSM model this can be achieved by finding relations f π (m π ), m σ (m π ) and g(m π ) consistent with the known values at the physical point and leaving m π as the only tunable parameter. In this section we present a way to do this relying on lattice data.
First we want to relate the pion mass m π and the bare quark mass m q . From [14] we get the relation
$m_\pi^2 a^2 = A_1\,(m_q a)^{\frac{1}{1+\delta}} + B\,(m_q a)^2, \qquad (9)$

for the pion mass given in terms of the lattice spacing a and m_q. In (9) the parameters A_1, B and δ are fitted to lattice data; we use the fit made in [14],

$\delta = 0.16413, \qquad A_1 = 0.82725, \qquad B = 1.88687. \qquad (10)$
The lattice spacing a is determined from the equation
$f_\pi a = 0.06672 + 0.221820\,(m_q a), \qquad (11)$
also fitted to lattice data in [14]. Now, to retain the relation that f_π = 93 MeV corresponds to m_π = 138 MeV, we shift the value of f_π by a constant C = 1.18 MeV, since we do not want to alter the fit parameters or the lattice spacing obtained in [14]. This gives us the relation
$m_q a = \left((f_\pi + C)\,a - 0.06672\right)/0.221820, \qquad (12)$
with the lattice spacing a = 0.505306 GeV^{-1}. Combining (9) and (12) gives the pion mass as a function of the pion decay constant. The sigma mass is determined through the relation
$m_\sigma = \xi\, m_\pi^2 + D, \qquad (13)$
based on [15], from which we also obtain the slope ξ = 0.00183 MeV^{-1}. Requiring that m_π = 138 MeV corresponds to m_σ = 600 MeV gives the value of the constant D = 565.15 MeV. Finally, for the coupling constant g we have the relation g f_π = M_N/3, where the nucleon mass M_N can be parametrized in the form
$M_N = M_0 + 4C_1 m_\pi^2, \qquad (14)$
as discussed in [16], where it has also been shown that a good description of the nucleon mass requires terms up to and including $O(m_\pi^4)$, obtained from chiral perturbation theory at $O(p^4)$. We truncate this fit at $O(m_\pi^2)$, which gives a fairly poor description of $M_N(m_\pi)$, but the resulting effect on g is small, at the 30% level; we will comment below on the possible effect this has on our numerical results. Also, since we want to keep the number of parameters to a minimum, this approximation seems a reasonable starting point. To retain the relation (m_π = 138 MeV, f_π = 93 MeV) ⇔ g = 3.3 we set the parameters in (14) to M_0 = 860 MeV and C_1 = 0.8 GeV^{-1}.
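The chain of fits (9), (13) and (14) uses only constants quoted in the text, so it can be collected into a short script; masses are in MeV, and the function names are ours:

```python
import math

A1, B, DELTA = 0.82725, 1.88687, 0.16413   # lattice fit parameters, eq. (10)
A_LAT = 0.505306e-3                        # lattice spacing a = 0.505306 GeV^-1 in MeV^-1

def mpi_of_mq(mq):
    """Pion mass from the bare quark mass via eq. (9); both in MeV."""
    mqa = mq * A_LAT
    mpia_sq = A1 * mqa ** (1.0 / (1.0 + DELTA)) + B * mqa ** 2
    return math.sqrt(mpia_sq) / A_LAT

def msigma_of_mpi(mpi):
    """Sigma mass via eq. (13): m_sigma = xi*m_pi^2 + D, xi = 0.00183 MeV^-1."""
    return 0.00183 * mpi ** 2 + 565.15

def g_of_mpi(mpi, fpi):
    """Quark-meson coupling from g*f_pi = M_N/3 with M_N from eq. (14)."""
    MN = 860.0 + 4.0 * 0.8e-3 * mpi ** 2   # M_0 = 860 MeV, C_1 = 0.8 GeV^-1
    return MN / (3.0 * fpi)
```

The anchor points quoted in the text are reproduced: m_q = 5.5 MeV gives a pion mass close to its physical value, m_π = 138 MeV gives m_σ = 600 MeV, and (m_π, f_π) = (138, 93) MeV gives g = 3.3.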
C. PNJL model away from the physical point
In the PLSM model we first used the lattice relation (9) to connect the pion mass to the bare quark mass. In the PNJL model the bare quark mass is a direct input parameter and the pion mass arises from the model, so they are already related. The PNJL model yields a relation for the masses that is consistent with the lattice result (9). A comparison between the two is shown in Figure 1, where the PLSM line now corresponds to the lattice formula (9). The deviation between the models is small, but grows slowly towards higher m_q, being ≈ 10% at m_q = 60 MeV and ≈ 20% at m_q = 250 MeV. Note that we obtain this consistency without tuning the coupling or the cutoff. Actually, we have checked numerically that, as a function of both G and Λ, the values in Table I give the best agreement of the pion mass with the lattice curve. Altering G or Λ from these values would increase m_π for a given m_q and thus only move m_π(m_q) further away from the corresponding lattice curve. We now have a parametrization of both models that allows us to study chiral symmetry breaking through one parameter, namely the quark mass m_q. In the PLSM model we have exchanged this parameter for the mass of the pion through a formula consistent with the lattice result, since in the PLSM case this is most convenient, the pion appearing as an explicit degree of freedom. We now turn to the investigation of explicit chiral symmetry breaking and its effect on the thermodynamics of these models.
III. EFFECT ON THERMODYNAMICS
Using the parametrization described in the previous section we can now study the relation between the pion mass and the critical temperatures of the models. There are a priori two transitions in the models: the chiral transition, due to the (approximate) restoration of chiral symmetry at finite temperature, and the deconfinement transition, encoded in the parameters of the potential for the Polyakov field. Each of these transitions can be assigned its own critical temperature. Since over most of the parameter space the transition is a crossover, the definition of a critical temperature is ambiguous. We define the critical temperature of a transition as the temperature at which the temperature derivative of the corresponding field has its peak value. This definition, however, has some problems, since a rapid increase in the field will yield a peak in the derivative regardless of the absolute value of the field at that point. To illustrate the possible ambiguities, consider Figure 2, where the Polyakov field and its temperature derivative are shown. At large m_q a double-peak structure is visible in the derivative, with a broad peak and a sharp peak (a similar effect arises also when m_q is less than 5 MeV, although this is not shown in Figure 2). Looking at the values of the field φ, the sharp peaks correspond to the discontinuous jumps in the field at large m_q, while the broader peaks mark the location of the largest continuous rise in the value of the field. It is not clear which of these rapid changes should actually be used to determine the critical temperature. Some insight into the problem can be found by looking at Figure 3, where the constituent mass and its derivative are shown. One immediately notices that at large m_q the discontinuities in the constituent mass in Figure 3 correspond to the jumps in the Polyakov field φ in Figure 2.
This suggests that the sharper peaks in the derivative of the Polyakov field are merely an effect produced by the rapidly changing constituent mass and its interaction with the Polyakov field. Moreover, since the overall behaviour of the Polyakov field at different values of m_q is not significantly altered by the appearance of these sharp peaks, the broader peak seems a better indicator of the critical temperature. This can be further justified by noting that deconfinement is characterised by φ ∼ 1, and the broader peak lies roughly at φ ∼ 1/2, the point where the system becomes dominantly deconfined, in contrast to being confined when φ ∼ 0. From here on we call the temperature associated with the broader peak the (primary) T_c and the temperature associated with the sharper peak the induced T_c. We observed a similar double-peak structure in [6]; it seems to be a feature of these effective models, but may not be present in two-flavor QCD. Note that the T_c determined from the derivative of the chiral field depends on the bare quark mass, or equivalently on m_π. For the PLSM this behavior can be affected by increasing the value of the coupling g: the larger the value of g, the slower the increase of T_c with m_π. However, since g ∝ M_N, it turns out that our simple parametrization yields a larger value for g than one would get from a more careful fit of M_N to the lattice [16]. Projected onto our analysis, this implies that while we observe the PLSM and PNJL transition temperatures to behave qualitatively similarly up to at least m_π = 800 MeV, it is plausible that quantitatively the critical temperature in the PLSM grows more rapidly with m_q than in the PNJL model. Larger values of g also lead to sharper transitions and may be responsible for the discontinuous transitions encountered in the PLSM case.
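The working definition of T_c used above, the location of the peak of the temperature derivative of a mean field, is straightforward to automate. A generic sketch on a synthetic crossover curve follows; the tanh profile is an illustrative stand-in, not actual model output:

```python
import math

def tc_from_peak(T, field):
    """T_c = temperature at which |d(field)/dT| is maximal (central differences)."""
    deriv = [(field[i + 1] - field[i - 1]) / (T[i + 1] - T[i - 1])
             for i in range(1, len(T) - 1)]
    i_max = max(range(len(deriv)), key=lambda i: abs(deriv[i]))
    return T[i_max + 1]             # deriv[i] corresponds to T[i+1]

# synthetic Polyakov-loop-like crossover rising from 0 to 1 around 220 MeV
T_grid = [150.0 + 0.5 * i for i in range(301)]                      # 150..300 MeV
phi = [0.5 * (1.0 + math.tanh((t - 220.0) / 15.0)) for t in T_grid]
Tc = tc_from_peak(T_grid, phi)      # recovers 220 MeV up to the grid spacing
```

On real model output with a double-peak derivative, as in Figure 2, one would instead locate all local maxima of the derivative and classify them by width, since the global maximum may pick the sharp induced peak rather than the broad primary one.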
Turning to the PNJL case, we first observe that the double-peak problem does not arise here, although it was observed in the PNJL model at finite µ in [6]. A related difference with respect to the PLSM case is seen by comparing Figures 3 and 5: the constituent mass behaves very differently in the two models. In the PNJL model the chiral transition softens as we go towards higher m_q, which can be seen as a broadening of the peaks in the derivative. In the PLSM case, on the other hand, the transition was observed to become sharper and finally discontinuous as m_q is ramped up. This difference can explain why the double-peak structure in the Polyakov field arises in the PLSM model but not in the PNJL model. Despite the difference in the sharpness of the chiral transition, the transition temperatures determined from the peaks of the derivatives are relatively close to each other. Let us then sketch the phase diagrams of these theories. The phase boundaries of both models are shown in Figure 6 for a range of pion masses. From previous studies we know that when m_π = 138 MeV both the chiral and Polyakov transitions occur at a temperature of 211 MeV in the PLSM model and at 229 MeV in the PNJL model. Away from the physical point these two models behave similarly as a function of the pion mass up to about m_π = 800 MeV, above which the PLSM chiral T_c keeps rising while the PNJL curve starts to level off. The Polyakov field behaviour is also similar in both models, although it seems to be more strongly coupled to the chiral sector in the PLSM case when m_π < 600 MeV. At larger m_π the Polyakov field 'decouples' from the chiral sector in both cases and its T_c seems to level off near the pure gauge value. Note that in Figure 6, for the field φ in the PLSM, the critical temperature is the primary T_c discussed above. The induced T_c follows the chiral transition curve and is not shown separately.
In addition to the PLSM and PNJL models, Figure 6 shows the critical temperatures for the bare NJL and LSM models as well as for the pure gauge potential. Comparing the NJL and pure gauge critical temperatures with the PNJL model, one can see that the critical temperatures are shifted closer together at small m_π. One can also see that the interaction induces an m_π dependence of the Polyakov field critical temperature, which is independent of m_π in the pure gauge case. Naturally, the m_π dependence of the chiral field is altered as well. At large m_π both of these effects die out as the different transitions separate: the chiral part of the PNJL approaches the NJL curve and the Polyakov field draws near to the pure gauge curve. The same can be said of the PLSM case, where the chiral and Polyakov sector critical temperatures are shifted towards each other at small m_π but separate at larger m_π, approaching the bare LSM and pure gauge curves respectively. The only difference is again the stronger interaction between the two sectors in the pion mass range 200-600 MeV.
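The approach of the Polyakov sector to its pure gauge limit can be checked directly by minimizing the potential of eqs. (4)-(5) alone at µ = 0, where the loop variable is real. Table I is not reproduced in the text, so the coefficients below are the values commonly quoted for the parametrization of [9] and should be treated as assumptions:

```python
# assumed coefficients of the polynomial Polyakov-loop potential, eqs. (4)-(5)
a0, a1, a2, a3 = 6.75, -1.95, 2.625, -7.44
b3, b4 = 0.75, 7.5
T0 = 270.0                               # pure gauge transition temperature, MeV

def u_over_t4(ell, T):
    """U/T^4 for a real loop variable ell (mu = 0, so ell = ell*)."""
    x = T0 / T
    b2 = a0 + a1 * x + a2 * x ** 2 + a3 * x ** 3
    return -0.5 * b2 * ell ** 2 - (b3 / 3.0) * ell ** 3 + 0.25 * b4 * ell ** 4

def ell_min(T, n=2400):
    """Brute-force minimum of U/T^4 over ell in [0, 1.2]."""
    grid = [1.2 * i / n for i in range(n + 1)]
    return min(grid, key=lambda e: u_over_t4(e, T))
```

Below T_0 the minimum sits at a loop value near 0 (confined), while well above T_0 it moves towards 1, reproducing the pure gauge curve that both models approach at large m_π.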
IV. CONCLUSIONS
We have considered the PLSM and PNJL models at non-physical pion masses in order to determine how these models react to explicit chiral symmetry breaking. The motivations for this study were, in general terms, to determine how well these models encode the QCD dynamics of chiral symmetry breaking and, in more specific terms, to test whether the intertwining of deconfinement and chiral symmetry restoration in these models survives away from the physical value of the pion mass. For the PLSM model this analysis was made possible by a lattice-based parametrization, which allowed us to set the model parameters in a consistent way away from the physical pion mass. In the PNJL model the pion mass arises from the model and can be changed by adjusting the bare quark mass, which is a direct input parameter of the model. As we have shown, both models lead to similar behavior for m_π(m_q) and are to this extent consistent with existing lattice data.
In section III we studied the thermodynamics, or more precisely the transition temperatures and phase diagrams, of the two models as a function of the pion mass. We found that the behaviour of the Polyakov sector is almost identical in the two models, as expected. The only real difference is the notable interaction between the chiral and Polyakov sectors at low pion masses in the PLSM model. However, the overall behaviour of the Polyakov field was not greatly affected by this interaction, and the primary Polyakov T_c agreed with the one obtained from the PNJL model. In contrast to the Polyakov sector, the chiral transition temperatures showed a different behaviour in the two models at pion masses larger than 800 MeV, but this should rather be taken as an indication that such large pion masses are beyond the validity of simple chiral models.
For pion masses above 400 MeV our analysis of the PLSM model confirms the results found in [5], and qualitatively the results obtained for the PNJL model are similar. Since such masses make the applicability of these chiral models questionable, one should concentrate on the range below m_π = 400 MeV. In this range we observe that, in qualitative agreement, both the PLSM and PNJL models imply coincident chiral symmetry breaking and deconfinement, as well as a modestly increasing critical temperature as a function of the pion mass. Within this range the models agree quantitatively within ∼ 10%, although it should be noted that the critical temperature grows more rapidly with increasing pion mass in the PLSM model than in the PNJL model. Since the quark mass can be thought of as a more basic parameter of these models than the pion mass, and since the pion mass corresponding to a given quark mass differs in these two models, it is informative to also plot T_c as a function of the quark mass. This is shown in Figure 7, and clearly confirms the result that near the chiral limit the crossover transition T_c can be determined using either the chiral or the Polyakov loop degrees of freedom. As one moves further away from the chiral limit, above m_q ∼ 100 MeV, corresponding to m_π ∼ 400 MeV (see Figure 1), the description using chiral fields becomes more uncertain. The results obtained from the Polyakov loop for T_c are similar in the two models, which is not surprising since the mean-field potential for the Polyakov loop is the same in both models and is only weakly affected by the value of the quark mass through the interactions with the chiral fields. Existing lattice data for two-flavor QCD thermodynamics at pion masses above 400 MeV [17], together with our results in this mass range, suggest that the dependence of the critical temperature on the pion mass is better described by the Polyakov loop dynamics.
In the model framework considered here this is a natural consequence of the underlying coupled dynamics. Lattice data closer to the physical value of the pion mass would be highly desirable in order to better constrain these effective models.
FIG. 1: Pion mass as a function of the bare quark mass in the PLSM and PNJL models.
FIG. 2: The Polyakov field φ and its derivative in the PLSM model for various values of m_q. The derivatives have been normalized to unity for clarity.

FIG. 3: Constituent mass and its derivative in the PLSM model for various values of m_q. The derivatives have been normalized to unity for clarity.
FIG. 4: The Polyakov field φ and its derivative in the PNJL model for various values of m_q. The derivatives have been normalized to unity for clarity.
FIG. 5: Constituent mass and its derivative in the PNJL model for various values of m_q. The derivatives have been normalized to unity for clarity.
FIG. 6: The critical temperatures of the chiral and the deconfinement transitions shown against the pion mass in both models.
FIG. 7: The critical temperatures of the chiral and the deconfinement transitions shown against the quark mass in both models.
TABLE I: The parameters used for the effective potential.
Acknowledgments

The financial support for T.K. from the Väisälä foundation is gratefully acknowledged.
[1] A. Mocsy, F. Sannino and K. Tuominen, Phys. Rev. Lett. 92, 182302 (2004) [arXiv:hep-ph/0308135]; A. Mocsy, F. Sannino and K. Tuominen, JHEP 0403, 044 (2004) [arXiv:hep-ph/0306069]; A. Mocsy, F. Sannino and K. Tuominen, Phys. Rev. Lett. 91, 092004 (2003) [arXiv:hep-ph/0301229].
[2] O. Kaczmarek et al., Phys. Rev. D 62, 034021 (2000) [arXiv:hep-lat/9908010].
[3] F. Karsch and M. Lutgemeier, Nucl. Phys. B 550, 449 (1999) [arXiv:hep-lat/9812023].
[4] F. Sannino and K. Tuominen, Phys. Rev. D 70, 034019 (2004) [arXiv:hep-ph/0403175].
[5] A. Dumitru, D. Roder and J. Ruppert, Phys. Rev. D 70, 074001 (2004) [arXiv:hep-ph/0311119].
[6] T. Kahara and K. Tuominen, Phys. Rev. D 78, 034015 (2008) [arXiv:0803.2598 [hep-ph]].
[7] Y. Nambu and G. Jona-Lasinio, Phys. Rev. 122, 345 (1961); Y. Nambu and G. Jona-Lasinio, Phys. Rev. 124, 246 (1961).
[8] K. Fukushima, Phys. Lett. B 591, 277 (2004) [arXiv:hep-ph/0310121]; K. Fukushima, Phys. Rev. D 68, 045004 (2003) [arXiv:hep-ph/0303225]; K. Fukushima, Phys. Lett. B 553, 38 (2003) [arXiv:hep-ph/0209311].
[9] C. Ratti, M. A. Thaler and W. Weise, Phys. Rev. D 73, 014019 (2006) [arXiv:hep-ph/0506234]; C. Ratti, S. Roessner and W. Weise, Phys. Lett. B 649, 57 (2007) [arXiv:hep-ph/0701091].
[10] S. Roessner, T. Hell, C. Ratti and W. Weise, Nucl. Phys. A 814, 118 (2008) [arXiv:0712.3152 [hep-ph]].
[11] M. Ciminale, G. Nardulli, M. Ruggieri and R. Gatto, Phys. Lett. B 657, 64 (2007) [arXiv:0706.4215 [hep-ph]].
[12] P. Costa, M. C. Ruivo and C. A. de Sousa, arXiv:0801.3417 [hep-ph]; P. Costa, C. A. de Sousa, M. C. Ruivo and H. Hansen, arXiv:0801.3616 [hep-ph].
[13] Y. Sakai, K. Kashiwa, H. Kouno, M. Matsuzaki and M. Yahiro, arXiv:0902.0487 [hep-ph].
[14] T. W. Chiu and T. H. Hsieh, Nucl. Phys. B 673, 217 (2003) [Nucl. Phys. Proc. Suppl. 129, 492 (2004)] [arXiv:hep-lat/0305016].
[15] T. Kunihiro, S. Muroya, A. Nakamura, C. Nonaka, M. Sekiguchi and H. Wada [SCALAR Collaboration], Phys. Rev. D 70, 034504 (2004) [arXiv:hep-ph/0310312].
[16] M. Procura, T. R. Hemmert and W. Weise, Phys. Rev. D 69, 034505 (2004) [arXiv:hep-lat/0309020].
[17] F. Karsch, E. Laermann and A. Peikert, Nucl. Phys. B 605, 579 (2001) [arXiv:hep-lat/0012023].
|
[] |
[
"Chiral effective field theory on the light front",
"Chiral effective field theory on the light front"
] |
[
"J.-F Mathiot [email protected] \nLaboratoire de Physique Corpusculaire\nUniversité Blaise Pascal\n63177Aubière CedexFrance\n",
"Valencia \nLaboratoire de Physique Corpusculaire\nUniversité Blaise Pascal\n63177Aubière CedexFrance\n",
"Spain \nLaboratoire de Physique Corpusculaire\nUniversité Blaise Pascal\n63177Aubière CedexFrance\n",
"J.-F Mathiot \nLaboratoire de Physique Corpusculaire\nUniversité Blaise Pascal\n63177Aubière CedexFrance\n"
] |
[
"Laboratoire de Physique Corpusculaire\nUniversité Blaise Pascal\n63177Aubière CedexFrance",
"Laboratoire de Physique Corpusculaire\nUniversité Blaise Pascal\n63177Aubière CedexFrance",
"Laboratoire de Physique Corpusculaire\nUniversité Blaise Pascal\n63177Aubière CedexFrance",
"Laboratoire de Physique Corpusculaire\nUniversité Blaise Pascal\n63177Aubière CedexFrance"
] |
[] |
We propose a new approach to describe baryonic structure in terms of an effective chiral Lagrangian. The state vector of a baryon is defined on the light front of general position ω·x = 0, where ω is an arbitrary light-like four vector. It is then decomposed in Fock components including an increasing number of pions. The maximal number of particles in the state vector is mapped out to the order of decomposition of the chiral effective Lagrangian to have a consistent calculation of both the state vector and the effective Lagrangian. An adequate Fock sector dependent renormalization scheme is used in order to restrict all contributions within the truncated Fock space. To illustrate our formalism, we calculate the anomalous magnetic moment of a fermion in the Yukawa model in the three-body truncation. We present perspectives opened by the use of a new regularization scheme based on the properties of fields as distributions acting on specific test functions.International Workshop on Effective Field Theories: from the pion to the upsilon
|
10.22323/1.086.0093
|
[
"https://arxiv.org/pdf/0907.2665v1.pdf"
] | 10,023,236 |
0907.2665
|
7baf624021f6e8401c3de9603567aac98caeb72b
|
Chiral effective field theory on the light front
February 2-6 2009 15 Jul 2009
J.-F Mathiot [email protected]
Laboratoire de Physique Corpusculaire
Université Blaise Pascal
63177Aubière CedexFrance
Valencia
Laboratoire de Physique Corpusculaire
Université Blaise Pascal
63177Aubière CedexFrance
Spain
Laboratoire de Physique Corpusculaire
Université Blaise Pascal
63177Aubière CedexFrance
J.-F Mathiot
Laboratoire de Physique Corpusculaire
Université Blaise Pascal
63177Aubière CedexFrance
Chiral effective field theory on the light front
February 2-6 2009 15 Jul 2009
We propose a new approach to describe baryonic structure in terms of an effective chiral Lagrangian. The state vector of a baryon is defined on the light front of general position ω·x = 0, where ω is an arbitrary light-like four vector. It is then decomposed in Fock components including an increasing number of pions. The maximal number of particles in the state vector is mapped out to the order of decomposition of the chiral effective Lagrangian to have a consistent calculation of both the state vector and the effective Lagrangian. An adequate Fock sector dependent renormalization scheme is used in order to restrict all contributions within the truncated Fock space. To illustrate our formalism, we calculate the anomalous magnetic moment of a fermion in the Yukawa model in the three-body truncation. We present perspectives opened by the use of a new regularization scheme based on the properties of fields as distributions acting on specific test functions. International Workshop on Effective Field Theories: from the pion to the upsilon
Introduction
The calculation of baryon properties within the framework of chiral perturbation theory is the subject of active theoretical developments. Since the nucleon mass is not zero in the chiral limit, all momentum scales are involved in the calculation of baryon properties (like masses or electro-weak observables) beyond tree level. This is at variance with the meson sector, for which a meaningful power expansion of any physical amplitude can be done. While there is not much freedom, thanks to chiral symmetry, in the construction of the effective Lagrangian in Chiral Perturbation Theory (CPT), L_CPT, in terms of the pion field (or more precisely in terms of the U field defined by U = e^{iτ·π/f_π}, where f_π is the pion decay constant), one should settle an appropriate approximation scheme in order to calculate baryon properties. Up to now, two main strategies have been adopted. The first one is to force the bare (and hence the physical) nucleon mass to be infinite, in Heavy Baryon Chiral Perturbation Theory [1]. In this case, by construction, an expansion in characteristic momenta can be developed. The second one is to use a specific regularization scheme [2] in order to separate contributions which exhibit a meaningful power expansion, and hide the other parts in appropriate counterterms. In both cases, however, the explicit calculation of baryon properties relies on an extra approximation, in the sense that physical amplitudes are further calculated by expanding L_CPT in a finite number of pion fields.
Moreover, it has recently been realized that the contribution of pion-nucleon resonances, like the ∆ and Roper resonances, may play an important role in the understanding of the nucleon properties at low energies [3]. These resonances are just added "by hand" in the chiral effective Lagrangian. This is also the case for the most important 2π resonances, like the σ and ρ resonances.
We would like to propose in the following a new formulation to describe baryon properties in a systematic way. Since in the chiral limit the pion mass is zero, any calculation of πN systems demands a relativistic framework to get, for instance, the right analytical properties of the physical amplitudes. The calculation of bound state systems, like a physical nucleon composed of a bare nucleon coupled to many pions, also relies on a non-perturbative eigenvalue equation. While the mass of the system can be determined in leading order from the iteration of the πN self-energy calculated in first order perturbation theory, as indicated in Fig. 1.a, this is in general not possible, in particular for πN irreducible contributions, as shown in Fig. 1.b.
Light-front chiral effective field theory
The general framework to deal with these requirements is Light Front Dynamics (LFD). A relativistic description admits some freedom in choosing the space-like hyper-surface on which the state vector is defined [4]. In the standard version of LFD, the state vector is defined on the plane t + z = 0, invariant with respect to Lorentz boosts along the z axis. Since the vacuum state in LFD coincides with the free vacuum, one can construct any physical system in terms of combinations of free fields, i.e. the state vector is decomposed in a series of Fock sectors with an increasing number of constituents. Note that the triviality of the vacuum in LFD does not preclude nonperturbative zero-mode contributions to field operators [5].
This decomposition of the state vector in a finite number of Fock components requires an effective Lagrangian which includes all possible elementary couplings between the pion and the nucleon fields. This is indeed easy to achieve in chiral perturbation theory, since each derivative of the U field involves one derivative of the pion field. In the chiral limit, the chiral effective Lagrangian of order p involves p derivatives and at least p pion fields. In order to calculate the state vector in the N-body truncation, with one fermion and (N − 1) pions, one has to include contributions with up to 2(N − 1) pion fields in the effective Lagrangian, as shown in Fig. 2. We thus should calculate the state vector in the N-body truncation with an effective Lagrangian, denoted by L^N_eff, given by
L^N_eff = L^{p=2(N−1)}_CPT . (2.1)
For a non-zero pion mass, one should extend this mapping by taking m^2_π of order p = 2, as is done in CPT. While the effective Lagrangian in Light-Front Chiral Effective Field Theory (LFχEFT) can be mapped out to the CPT Lagrangian of order p, the calculation of the state vector does not rely on any momentum decomposition. It relies only on an expansion in the number of pions in flight at a given light-front time. In other words, it relies on an expansion in the fluctuation time, τ_f, of such contributions. From general arguments, the more particles we have at a given light-front time, the smaller the fluctuation time is. At low energies, when all processes have characteristic interaction times larger than τ_f, this expansion should be meaningful.
It is interesting to illustrate the general features of LFχEFT calculations. At order N = 2, we already have to deal with irreducible contributions, as shown in Fig. 1.b. The calculation at order N = 3 explicitly incorporates contributions coming from ππ interactions, as well as all low energy πN resonances, like the ∆ or Roper resonances. Indeed, in the |ππN> Fock sector, the πN state can couple to both J = T = 3/2 as well as J = T = 1/2 states. We can therefore generate all πN resonances in the intermediate state without the need to include them explicitly, provided the effective Lagrangian has the right dynamics to generate these resonances. This is the case, by construction, in CPT.
To settle a general framework based on LFD, one has however to address three different problems: i) one has to control in a systematic way the violation of rotational invariance; ii) one needs an adequate renormalization scheme consistent with the truncation of the Fock space; iii) one should use an appropriate systematic regularization scheme which preserves all symmetries. We shall address these three problems in the following.
Covariant formulation of light-front dynamics
In the Covariant Formulation of Light-Front Dynamics (CLFD) [6], the state vector is defined on the light-front plane of general orientation ω·x = 0, where ω is an arbitrary light-like four-vector ω 2 =0. The state vector, φ Jσ ω (p), of a bound system corresponds to definite values for the mass m, the four-momentum p, and the total angular momentum J with projection σ onto the z axis in the rest frame, i.e., the state vector forms a representation of the Poincaré group. The four-dimensional angular momentum operator,Ĵ, is represented as a sum of the free and interaction parts:
Ĵ_{ρν} = Ĵ^{(0)}_{ρν} + Ĵ^{int}_{ρν} . (3.1)
From the general transformation properties of both the state vector and the LF plane, we have:
Ĵ^{int}_{ρν} φ^{Jσ}_ω(p) = L̂_{ρν}(ω) φ^{Jσ}_ω(p) , (3.2)

where

L̂_{ρν}(ω) = i [ ω_ρ ∂/∂ω^ν − ω_ν ∂/∂ω^ρ ] . (3.3)
Equation (3.2) is called the angular condition. This equation does not contain the interaction Hamiltonian, once φ satisfies the Poincaré group equations [6]. The construction of the wave functions of states with definite total angular momentum becomes therefore a purely kinematical problem. The dynamical dependence of the wave functions on the light-front plane orientation now turns into their explicit dependence on the four-vector ω. Such a separation, in a covariant way, of kinematical and dynamical transformations is a definite advantage of CLFD as compared to standard LFD on the plane t + z = 0. The eigenvalue equations for the Fock components can be obtained from the Poincaré group equation [7]

P^2 φ(p) = m^2 φ(p) . (3.4)
We now decompose the state vector of a physical system in Fock sectors. Schematically, we have:
φ(p) ≡ |1⟩ + |2⟩ + . . . + |n⟩ + . . . (3.5)
Each term on the r.h.s. denotes a state with a fixed number of particles. It is proportional to the Fock component, or many-body wave function, φ n . The spin structure of φ n is very simple, since it is purely kinematical, but it should incorporate ω-dependent components in order to fulfill the angular condition (3.2). In the Yukawa model we have, for N = 2:
Γ^{(2)}_1 = a_1 ū(k_1) u(p) , (3.6)

Γ^{(2)}_2 = ū(k_1) [ b_1 + b_2 m ω̸ / (ω·p) ] u(p) , (3.7)
since no other independent spin structures can be constructed.
Non-perturbative renormalization scheme in light-front dynamics
In order to be able to make definite predictions for physical observables, one should also define a proper renormalization scheme. This should be done with care, since the Fock decomposition of the state vector is truncated to a given order. Indeed, looking at Fig. 3 for the calculation of the fermion propagator in second order perturbation theory, one immediately realizes that the cancellation of divergences between the self-energy contribution (of order two in the Fock decomposition) and the fermion Mass Counterterm (MC) (of order one) involves two different Fock sectors [7]. This means that any MC and, more generally, any Bare Coupling Constant (BCC) should be associated with a given Fock sector. This procedure, which we call Fock Sector Dependent Renormalization, is a well defined, systematic and non-perturbative scheme [7]. The MC is determined from Eq. (3.4), while the BCC g^{(N)}_0 is determined by demanding that the ω-independent part of the two-body vertex function Γ_2 at s_2 ≡ (k_1 + k_2)^2 = m^2 be proportional to the physical coupling constant g: b_1(s_2 = m^2) ≡ g √N_1, where N_1 is the normalization of the one-body Fock component φ_1.
The anomalous magnetic moment in the Yukawa model
In order to address the calculation of a true non-perturbative system, we investigate the system composed of a fermion coupled to scalar bosons for the three-body, N = 3, Fock space truncation [8]. The system of equations one has to solve is given in Fig. 4, where the use of Fock sector dependent counterterms is shown. We use the Pauli-Villars (PV) regularization scheme, as detailed in [7]. The anomalous magnetic moment is calculated as a function of the boson PV mass, in the limit of infinite PV fermion mass. While the calculation for a boson-fermion coupling constant α = 0.2 shows nice convergence as a function of the PV mass, in the limit of large masses, the results for α = 0.5 are an indication that higher order Fock components may start to be sizeable.
Perspectives
It has been proposed recently [9, 10] to use the definition of quantum fields as operator valued distributions acting on test functions as a regularization/renormalization procedure. This construction is particularly suited for light-front calculations, since the test functions can be included from the very beginning in the definition of the Fock components, leading to a very transparent, and general, formulation [11]. Let us outline here the main steps of this formulation. The physical field ϕ(x) is defined in terms of the translation, T_x, of the original distribution Φ acting on the test function ρ:
ϕ(x) ≡ T_x Φ(ρ) = ∫ d^4y Φ(y) ρ(x − y) . (6.1)

(The system of equations displayed in Fig. 4 reads Γ^{(3)}_1 = Γ^{(3)}_1 δm^{(3)} + Γ^{(3)}_2 g^{(3)}_0, Γ^{(3)}_2 = Γ^{(3)}_1 g^{(3)}_0 + Γ^{(3)}_2 δm^{(2)} + Γ^{(3)}_3 g^{(2)}_0, and Γ^{(3)}_3 = Γ^{(3)}_2 g^{(2)}_0 + Γ^{(3)}_3 δm^{(1)}.)
We shall concentrate here on distributions singular in the UV domain. Any physical amplitude can be written in a schematic way (in one dimension for simplicity) as:
A = ∫_0^∞ dX T(X) f(X) , (6.2)
where f is the Fourier transform of the test function, and T(X) is a distribution which may lead, without any regularization procedure, to a divergent integral in the UV domain when X → ∞. In LFD, X is for instance proportional to k^2, where k is the three-momentum of any of the constituents. The choice of the test function (of compact support and with all its derivatives equal to zero at the boundary) should be done with care. In particular, one should make sure that the physical amplitudes are independent of the choice of the test function. This is achieved by using test functions which are partitions of unity, i.e. functions which are 1 everywhere except at the boundary, where they go to zero [10]. The choice of the boundary of the test function, H, should also respect scaling invariance, which tells us that the limit X → ∞ can be reached in many different ways, since in this limit η^2 X → ∞ for an arbitrary scale η^2 > 1. In order to do that, it is necessary to consider a boundary condition for which H depends on the kinematical variable X [10]. We can choose, for example: H(X) ≡ η^2 X^α . (6.3)
In the limit where α → 1 − , the maximum value of X, defined by H(X max ) = X max goes to infinity, and the test function goes to 1 in the whole kinematical domain. The extension of the distribution T (X) in the UV domain can thus be done easily using the general Lagrange formula given by
f(X) = − (X / k!) ∫_1^∞ (dt / t) (1 − t)^k ∂^{k+1}_X [ X^k f(Xt) ] , (6.4)
for any integer k ≥ 0. The physical amplitude then reads:
A = ∫_0^∞ dX [ (−1)^k / k! ] X^k ∂^{k+1}_X [ X T(X) ] ∫_{η^2 X^{α−1}}^{1} (dt / t) (1 − t)^k f(Xt) → ∫_0^∞ dX T̃_>(X) (6.5)

in the limit where α → 1^−. This defines the extension T̃_>(X) of the distribution T(X) in the UV domain. The value of k to be used depends on the degree of singularity of the initial distribution. The amplitude (6.5) is now completely finite. Note that we do not need the explicit form of the test function in the derivation of the extended distribution T̃_>(X). We only rely on its mathematical properties and on its dynamical construction.
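The k = 0 case of the Lagrange formula can be checked numerically (a sanity check added here, not part of the paper): since ∂_X f(Xt) = t f′(Xt), the 1/t in dt/t cancels and (6.4) reduces to f(X) = −X ∫_1^∞ dt f′(Xt), which equals f(X) for any f vanishing at infinity. A simple trapezoid-rule verification with f(X) = e^{−X}:

```python
import math

def f(x):
    # UV-decaying test function, f(X) -> 0 as X -> infinity
    return math.exp(-x)

def f_prime(x):
    return -math.exp(-x)

def lagrange_rhs(X, t_max=60.0, n=200_000):
    # k = 0 case of the Lagrange formula (6.4):
    #   f(X) = -X * integral_1^infinity dt f'(X t)
    # (the 1/t in dt/t cancels the factor t from d/dX f(Xt) = t f'(Xt)).
    # Plain trapezoid rule, truncating the integral at t_max.
    h = (t_max - 1.0) / n
    total = 0.5 * (f_prime(X) + f_prime(X * t_max))
    for i in range(1, n):
        total += f_prime(X * (1.0 + i * h))
    return -X * h * total

for X in (0.5, 1.0, 2.0):
    print(X, f(X), lagrange_rhs(X))
```

Both columns agree to high accuracy, confirming that the formula is an identity for test functions that decay in the UV.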
Figure 1: Iteration of the self-energy contribution in first order perturbation theory (a); irreducible contribution to the bound state equation (b).
Figure 2: General vertex including a maximum of (N − 1) pion fields in the initial and final states.
The vertex functions are related to the Fock components by Γ_n = (s_n − M^2) φ_n, where s_n = (k_1 + . . . + k_n)^2. Here the u's are free bispinors of mass m; a_1, b_1, and b_2 are scalar functions determined by the dynamics.
Figure 3: Renormalization of the fermion propagator in second order perturbation theory.

Any MC and any BCC must thus be associated with the number of particles present (or "in flight") in a given Fock sector. In other words, all MC's and BCC's must depend on the Fock sector under consideration. The original MC δm and the fermion-boson BCC g_0 should thus be extended to a whole series: δm → δm^{(i)}, g_0 → g_0^{(i)}, where i = 1, 2, . . . , N refers to the number of particles in "flight". A calculation of order N involves δm^{(1)} . . . δm^{(N)} and the corresponding BCC's g_0^{(i)}. The quantities δm^{(i)} and g_0^{(i)} are calculated by solving the systems of equations for the vertex functions in the N = 1, 2, 3 . . . approximations successively.
Figure 4: System of equations for the vertex functions in the Yukawa model for the three-body Fock space truncation. Dashed lines correspond to scalar bosons.

Figure 5: The fermion anomalous magnetic moment as a function of the Pauli-Villars boson mass µ_1 for the N = 3 Fock space truncation. We separate the contributions from the two-body (dashed line) and three-body (dotted line) vertex functions to the total result (solid line). The lines are just drawn to guide the eye. The results on the left correspond to α = 0.2, while the results on the right correspond to α = 0.5. The calculations are done with a fermion (m) and a boson (µ) mass of 1 GeV.
[1] E. Jenkins and A.V. Manohar, Phys. Lett. B255 (1991) 558.
[2] R.J. Ellis and H.B. Tang, Phys. Rev. C57 (1998) 3356; T. Becher and H. Leutwyler, Eur. Phys. J. C9 (1999) 643.
[3] E. Jenkins and A.V. Manohar, Phys. Lett. B259 (1991) 353.
[4] P.A.M. Dirac, Rev. Mod. Phys. 21 (1949) 392.
[5] T. Heinzl, S. Krusche and E. Werner, Phys. Lett. B256 (1991) 55.
[6] V.A. Karmanov, Zh. Eksp. Teor. Fiz. 71 (1976) 399 [transl.: Sov. Phys. JETP 44 (1976) 210]; J. Carbonell, B. Desplanques, V.A. Karmanov and J.-F. Mathiot, Phys. Rep. 300 (1998) 215.
[7] V.A. Karmanov, J.-F. Mathiot and A.V. Smirnov, Phys. Rev. D77 (2008) 085028.
[8] V.A. Karmanov, J.-F. Mathiot and A.V. Smirnov, in preparation.
[9] H. Epstein and V. Glaser, Ann. Inst. Henri Poincaré XIX A (1973) 211; J.M. Gracia-Bondia, Math. Phys. Anal. Geom. 6 (2003) 59.
[10] P. Grangé and E. Werner, "Quantum fields as Operator Valued Distributions and Causality", arXiv:math-ph/0612011.
[11] P. Grangé, J.-F. Mathiot, B. Mutet and E. Werner, in preparation.
|
[] |
[
"Biomedical Document Clustering and Visualization based on the Concepts of Diseases",
"Biomedical Document Clustering and Visualization based on the Concepts of Diseases"
] |
[
"Setu Shah ",
"Xiao Luo "
] |
[] |
[
"ACM Reference format"
] |
Document clustering is a text mining technique used to provide better document search and browsing in digital libraries or online corpora. A lot of research has been done on biomedical document clustering that is based on using existing ontology. But, associations and co-occurrences of the medical concepts are not well represented by using ontology. In this research, a vector representation of concepts of diseases and similarity measurement between concepts are proposed. They identify the closest concepts of diseases in the context of a corpus. Each document is represented by using the vector space model. A weight scheme is proposed to consider both local content and associations between concepts. A Self-Organizing Map is used as the document clustering algorithm. The vector projection and visualization features of SOM enable visualization and analysis of the cluster distributions and relationships on the two dimensional space. The experimental results show that the proposed document clustering framework generates meaningful clusters and facilitates visualization of the clusters based on the concepts of diseases. CCS CONCEPTS •Information systems → Clustering and classification; Document topic models;
| null |
[
"https://arxiv.org/pdf/1810.09597v1.pdf"
] | 53,038,022 |
1810.09597
|
1a253ceeaef6ebf582b04bffba62283839705a98
|
Biomedical Document Clustering and Visualization based on the Concepts of Diseases
2016. August 2017
Setu Shah
Xiao Luo
Biomedical Document Clustering and Visualization based on the Concepts of Diseases
ACM Reference format
ACM KDD Conference, Data-Driven Discovery Workshop, Halifax, NS, Canada, August 2017, 8 pages. DOI: 10.1145/nnnnnnn.nnnnnnn. Keywords: Document Clustering, Clustering Visualization, Concept Similarity Measure
Document clustering is a text mining technique used to provide better document search and browsing in digital libraries or online corpora. A lot of research has been done on biomedical document clustering that is based on using existing ontology. But, associations and co-occurrences of the medical concepts are not well represented by using ontology. In this research, a vector representation of concepts of diseases and similarity measurement between concepts are proposed. They identify the closest concepts of diseases in the context of a corpus. Each document is represented by using the vector space model. A weight scheme is proposed to consider both local content and associations between concepts. A Self-Organizing Map is used as the document clustering algorithm. The vector projection and visualization features of SOM enable visualization and analysis of the cluster distributions and relationships on the two dimensional space. The experimental results show that the proposed document clustering framework generates meaningful clusters and facilitates visualization of the clusters based on the concepts of diseases. CCS CONCEPTS •Information systems → Clustering and classification; Document topic models;
more than 10,000 articles are added to MEDLINE weekly [19]. There is a continuous need for the development of techniques to discover, search, access and share knowledge from these documents and articles. Text clustering techniques enable us to group similar text documents in an unsupervised manner.
Most of the research related to biomedical document clustering focuses on either reforming the representation of biomedical documents or improving the clustering algorithms. Biomedical document clustering is different from the general text document clustering task, because in the latter, semantic similarities between words or phrases are not usually considered. One medical concept of disease might be represented in different forms, and some medical concepts of diseases might be highly correlated. For example, 'Type 2 Diabetes' is the same concept of disease as 'Diabetes Mellitus Type 2'. 'Hypertension' might often co-occur with 'Stroke'. In order to capture the semantic similarities between words or phrases, previous research on document representation reforming [12] [18] [20] often uses existing ontology such as MeSH or WordNet to identify the semantic relationships. However, ontology doesn't reflect the co-occurrences of medical concepts. This paper focuses on biomedical document clustering based on the concepts of diseases. The proposed similarity measure between the concepts of diseases is based on the Word2vec model [13]. This similarity measure identifies the closest concepts based on co-occurrences of the concepts. The proposed concept weighting scheme is the linear combination of the TF-IDF value, which reflects the content similarity between documents, and the similarity score based on the proposed similarity measurements, which reflects the semantic similarity between documents. The unsupervised learning algorithm Self-Organizing Map (SOM) [10] has been used as the clustering technique. SOM has properties of both vector quantization and vector projection. The neurons of an SOM can be presented on a two dimensional space. By projecting the input data instances to their best matching units (BMUs) on the SOM map, the distribution of the inputs can be visualized on the two dimensional space with the U-matrix and hit histogram of the SOM.
The relationships of the clusters based on the concepts of diseases can be visualized. This clustering visualization is a beneficial feature for biomedical literature search and browsing based on concepts of diseases. The rest of the paper is organized as follows. In Section 2, related work is described. Section 3 demonstrates how the concepts of diseases are extracted by using UMLS MetaMap. Sections 4 and 5 detail the measurement of concept similarity and the weighting scheme for each concept in the document representation. Experimental settings and results are given in Section 6. Section 7 concludes this research and discusses potential future work.
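The BMU search and Gaussian-neighborhood update that an SOM performs can be sketched in a few lines of NumPy. This is a minimal illustration with simple linear decay schedules; the grid size, learning rate and radius below are arbitrary choices, not the configuration used in this paper:

```python
import numpy as np

def train_som(data, rows=8, cols=8, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal rectangular SOM: BMU search plus neighborhood update."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))
    # grid coordinates of each neuron, used by the neighborhood function
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # 1) best matching unit (BMU): the neuron closest to the input
            d2 = ((weights - x) ** 2).sum(axis=-1)
            bmu = np.unravel_index(d2.argmin(), d2.shape)
            # 2) linearly decaying learning rate and neighborhood radius
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 1e-3
            # 3) Gaussian neighborhood around the BMU on the 2-D grid
            g2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
            h = np.exp(-g2 / (2.0 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights

def bmu_of(weights, x):
    """Project an input to its best matching unit on the 2-D map."""
    d2 = ((weights - x) ** 2).sum(axis=-1)
    return np.unravel_index(d2.argmin(), d2.shape)
```

Projecting each document vector through `bmu_of` yields the hit positions from which a hit histogram is built; a U-matrix is obtained from the distances between neighboring weight vectors.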
RELATED WORK
A lot of research has been done on biomedical document clustering in past decades. Some of it focused on reforming document representations based on medical ontology or on using a weighting scheme other than TF-IDF, while other work focused on investigating various clustering algorithms.
Zhang et al. [20] reviewed three different ontology based term similarity measurements: path based [17], information content based [15], and feature based [9], and then proposed their own similarity measurement and term re-weighting scheme. The K-means algorithm is used for document clustering. Based on the results comparison, some of them are slightly worse than the word based scheme. The authors mentioned that it might be because of the limitations of the domain ontology, term extraction and sense disambiguation. Visualization of the relationships between the clusters was not included in this research.
Yoo et al. [19] used a graphical representation method to represent a set of documents based on the MeSH ontology, and proposed document clustering and summarization with this graphical representation. The document clustering and summarization model gained comparable results on clustering and also provided some visualization of the document cluster model based on the relationships of the terms. However, this visualization relies largely on the MeSH ontology instead of the document relationships themselves.
Logeswari et al. [12] proposed a concept weighting scheme based on the MeSH ontology and tri-gram extraction to extract concepts from the text corpus. The semantic relationships between tri-grams are weighted through a heuristic weight assignment of four predefined semantic relationships. The K-means clustering algorithm results show that concept based representation was better than word based representation. Visualization of the clustering results was not investigated.
Gu et al. [8] proposed a concept similarity measurement using a linear combination of multiple similarity measurements based on the MeSH ontology and local content, which includes TF-IDF weighting and co-efficient calculation between related article sets. A semi-supervised clustering algorithm was employed at the stage of document clustering. Their focus was not clustering visualization. Some research has been done on the visualization process to support biomedical literature search. Gorg et al. [7] developed a visual analytics system, named Bio-Jigsaw, using the MeSH ontology. This research demonstrated how visual analytics can be used to analyze a search query on a gene related to breast cancer. Neither document representation nor document clustering were discussed.
To the best of the authors' knowledge, this research is the first to present concepts of diseases using vectors based on the Word2vec model instead of using an ontology. The proposed similarity measurement and the concept weighting scheme are first applied to biomedical document clustering. The SOM based clustering is employed to visualize the distribution of document clusters based on the concepts of diseases.
CONCEPTS OF DISEASES EXTRACTION
In this work, the focus is on clustering biomedical documents based on the concepts of diseases that are addressed by or mentioned in the documents. To extract the concepts of diseases from the documents, Unified Medical Language System (UMLS) MetaMap is used. UMLS MetaMap [2] is a natural language processing tool that makes use of various sources such as the UMLS Metathesaurus [1] and SNOMED CT [4] to map the phrases or terms in the text to different semantic types. Figure 1 provides an example of mapping phrases to different semantic types using UMLS MetaMap. In this example, eight terms or phrases in the sentence have been mapped to six semantic types. The phrase 'Haemophilus influenzae type b meningitis' in the sentence has been identified as semantic type 'disease or syndrome' and mapped to the phrase 'Type B Hemophilus influenzae Meningitis' based on the lexicon that UMLS MetaMap uses. In this research, if a term or phrase has been mapped to the semantic types 'Disease or Syndrome' or 'Neoplastic Process', the corresponding phrase in the lexicon produced by MetaMap is extracted.
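Downstream of MetaMap, this filtering step reduces to keeping mapped phrases whose semantic type is one of the two above. The sketch below assumes the mappings have already been parsed into (preferred name, semantic type abbreviation) pairs ('dsyn' and 'neop' are the UMLS abbreviations for 'Disease or Syndrome' and 'Neoplastic Process'); the pair format itself is an assumption for illustration, not MetaMap's actual output syntax:

```python
# Keep only phrases mapped to the two disease-related semantic types.
DISEASE_TYPES = {"dsyn", "neop"}  # Disease or Syndrome, Neoplastic Process

def extract_disease_concepts(mappings):
    """Deduplicated disease concepts from (preferred_name, semtype) pairs."""
    seen, concepts = set(), []
    for preferred_name, semtype in mappings:
        name = preferred_name.lower()
        if semtype in DISEASE_TYPES and name not in seen:
            seen.add(name)
            concepts.append(name)
    return concepts

mappings = [
    ("Type B Hemophilus influenzae Meningitis", "dsyn"),
    ("Infant", "aggp"),
    ("Breast Carcinoma", "neop"),
    ("Type B Hemophilus influenzae Meningitis", "dsyn"),
]
print(extract_disease_concepts(mappings))
# -> ['type b hemophilus influenzae meningitis', 'breast carcinoma']
```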
CONCEPTS SIMILARITY MEASURE
In the biomedical literature, the same concept of a disease can be presented by different terms or combinations of words. For example, 'cancer of breast' and 'breast cancer' are two phrases that present the concept of the same disease. However, they are treated as two different concepts if the typical vector space model and TF-IDF weighting scheme are used for document representation, and the semantic similarity between them is not measured. In this research, a semantic similarity measure between different concepts of diseases is proposed. Given a total of L concepts of diseases extracted from the raw text corpus, the similarities between any two concepts are stored in the similarity matrix S, as presented in Equation 1. Each entry s_{i,j} in the matrix S represents the similarity between concepts C_i and C_j.
S_{L,L} = [ s_{1,1}  s_{1,2}  ···  s_{1,L}
            s_{2,1}  s_{2,2}  ···  s_{2,L}
              ⋮        ⋮      ⋱      ⋮
            s_{L,1}  s_{L,2}  ···  s_{L,L} ]        (1)
To calculate the similarity between two concepts, first, each word is represented by a vector (as proposed in Equation 2). This vector representation is learned by training the Word2Vec model. The Word2Vec training algorithm was developed by a team of researchers at Google led by Tomas Mikolov [13]. It is a computationally efficient algorithm to generate vectors of real numbers to present words in a given raw text corpus. These vector representations are learned using three-layer neural networks, with either a continuous bag-of-words approach or a skip-gram architecture. The vectors preserve the distances between words in the vector space, so that words that share common contexts in the raw text corpus are located in close proximity to one another. The dimension of the vector created depends on the number of neurons in the hidden layer of the neural network when training a Word2Vec model.
Word = (w_1, w_2, . . . , w_m) (2)
m: the dimension of the vector.
In this research, a trained Word2Vec model [14] created from a subset of the PubMed literature database and a subset of the PubMed Central (PMC) Open Access database is employed. These two corpora contain a large number of biomedical documents. The trained model creates 200-dimensional vectors to present the words extracted from the two corpora. The skip-gram architecture with a window size of 5 is adopted for the learning process [14].
Although some of concepts of diseases contain only one word, many of them span multiple words. In this work, if a concept of disease spans multiple words, a concept vector is generated by aggregating the vectors of all the words in the concept, as shown in Equation 3. For example, for the disease 'diabetes mellitus', the vector for 'diabetes' and the vector for 'mellitus' are aggregated by adding them together.
C = Σ_{i=1}^{M} Word_i (3)
M: the total number of words in a concept C. The similarity score between the concepts is calculated using the cosine distance between the vectors, as shown in Equation 4.
$S_{i,j} = \frac{C_i \cdot C_j}{\sqrt{\sum_{k=1}^{m} C_{i,k}^2}\,\sqrt{\sum_{k=1}^{m} C_{j,k}^2}}$    (4)
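As an illustration of Equations 2-4, the sketch below aggregates toy word vectors into a concept vector and scores two concepts with cosine similarity. The 3-dimensional vectors and variable names are illustrative stand-ins for the learned 200-dimensional Word2Vec embeddings; the paper does not provide code:

```python
import math

def concept_vector(word_vectors):
    """Aggregate word vectors into a single concept vector (Equation 3)."""
    dim = len(word_vectors[0])
    return [sum(v[k] for v in word_vectors) for k in range(dim)]

def cosine_similarity(c_i, c_j):
    """Cosine similarity between two concept vectors (Equation 4)."""
    dot = sum(a * b for a, b in zip(c_i, c_j))
    norm_i = math.sqrt(sum(a * a for a in c_i))
    norm_j = math.sqrt(sum(b * b for b in c_j))
    return dot / (norm_i * norm_j)

# toy vectors standing in for learned Word2Vec embeddings
diabetes = [0.9, 0.1, 0.3]
mellitus = [0.8, 0.2, 0.1]
concept = concept_vector([diabetes, mellitus])  # 'diabetes mellitus'
score = cosine_similarity(concept, diabetes)
```

In practice the word vectors would come from the trained Word2Vec model rather than being hard-coded; the multi-word concept scores high against its constituent word, mirroring the behavior in Table 1.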
By representing concepts as vectors and using this similarity measure, it is observed that the more strongly two diseases are associated, the higher the similarity score between them. Table 1 provides some examples of disease concepts and their top 3 closest concepts based on the similarity scores. 'Hypertension' is often associated with 'hyperlipidaemia' in the literature, so the similarity between them is higher than that between 'Hypertension' and other disease concepts.
DOCUMENT REPRESENTATION AND WEIGHTING SCHEME
In this research, the typical vector space model is used to represent a biomedical document, with each entry of the vector corresponding to a disease concept identified through the UMLS MetaMap. The proposed weight ($Weight_{C_i,d}$) given to each concept ($C_{i,d}$) is calculated as shown in Equation 5:
$Weight_{C_i,d} = \begin{cases} tf_{C_i,d} \times \log\frac{|D|}{df_{C_i}} + \sum_{j=1}^{M} S_{i,j} & \text{if } tf_{C_i,d} > 0 \\ \sum_{j=1}^{N} \frac{N-(j-1)}{N} S_{i,j} & \text{if } tf_{C_i,d} = 0 \end{cases}$    (5)

$df_{C_i}$: the number of documents in which concept $C_i$ occurs at least once
$tf_{C_i,d}$: frequency of concept $C_i$ in document $d$
$|D|$: total number of documents in the corpus
$S_{i,j}$: the similarity between $C_i$ and concept $C_j$, where both occur in document $d$; $C_j$ is the $j$-th most frequent concept in document $d$.
M: the total number of concepts in document d. N: the top N closest concepts of $C_i$; in this research, N = 3. If a concept occurs in a document, the weighting scheme uses the TF-IDF value to underline the occurrence of the concept in the local content. The term $\sum_{j=1}^{M} S_{i,j}$ is the sum of similarity scores between the occurring concept $C_{i,d}$ and the other concepts ($C_{j,d}$, $j = 1, \dots, M$) that also occur within the document. If a concept does not occur in the document, the weight is calculated as a weighted sum over the top 3 closest concepts ($C_{j,d}$, $j = 1, \dots, 3$) that appear in the document, based on the similarity scores. By using this weighting scheme, the representation captures occurrences of different surface forms of the same or similar concepts. For example, 'diabetes' occurs in one document, but 'diabetes mellitus' occurs in another. Under the traditional TF-IDF weighting scheme, the weight would be 0 for documents in which a concept does not appear. Under the proposed weighting scheme, such concepts are instead weighted based on the similarity between the concept and its closest in-document concepts. Thus, for the document that does not contain the concept 'diabetes mellitus', instead of 0, the similarity scores between 'diabetes mellitus' and other concepts that appear in the document are used.
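A minimal sketch of the weighting scheme in Equation 5 (the function name and arguments are illustrative; the paper does not provide code):

```python
import math

def concept_weight(tf, df, n_docs, sims_in_doc, top_n=3):
    """Weight of concept C_i in document d, following Equation 5.

    tf          -- frequency of C_i in d
    df          -- number of documents containing C_i at least once
    n_docs      -- |D|, total number of documents in the corpus
    sims_in_doc -- similarities S_ij between C_i and the concepts C_j
                   that occur in d
    """
    if tf > 0:
        # occurring concept: TF-IDF plus sum of in-document similarities
        return tf * math.log(n_docs / df) + sum(sims_in_doc)
    # absent concept: weighted sum over the top-N closest in-document concepts
    top = sorted(sims_in_doc, reverse=True)[:top_n]
    n = len(top)
    return sum((n - j) / n * s for j, s in enumerate(top))

w_present = concept_weight(tf=2, df=5, n_docs=100, sims_in_doc=[0.8, 0.6])
w_absent = concept_weight(tf=0, df=5, n_docs=100, sims_in_doc=[0.9, 0.6, 0.3, 0.1])
```

Using `n = len(top)` generalizes the $\frac{N-(j-1)}{N}$ factor gracefully when fewer than N concepts occur in the document; with N concepts present it reproduces Equation 5 exactly.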
CLUSTERING ALGORITHM
Self-Organizing Map (SOM) is used for document clustering visualization [11]. SOM implements a topologically ordered display of the data to facilitate understanding the structure of the input data set. It is also readily explainable and easy to visualize. The visualization of multidimensional data is one of the main application areas of SOM [10]. These features make SOM an appropriate choice as the clustering algorithm for this paper.
A basic SOM consists of M neurons located on a low-dimensional grid (usually 1- or 2-dimensional) [10]. The algorithm responsible for the formation of the SOM involves three basic steps after initialization: sampling, similarity matching, and updating. These three steps are repeated until formation of the feature map has completed. Each neuron i has a d-dimensional prototype weight vector $W_i = (W_{i1}, W_{i2}, \dots, W_{id})$. Given a d-dimensional sample data point (input vector) X, the algorithm is summarized as follows:
• Initialization:
Choose random values to initialize all the neuron weight vectors W i (0), i = 1, 2, ..., M, where M is the total number of neurons in the map.
• Sampling:
Draw a sample X from the input space with uniform probability.
• Similarity Matching:
Find the best matching unit (BMU) or winner neuron of X, denoted here by b, which is the neuron (map unit) closest to X under the criterion of minimum Euclidean distance at time step n (the n-th training iteration):

$\|X - W_b(n)\| = \min_i \|X - W_i(n)\|, \quad i = 1, 2, \dots, M$    (6)
• Updating: Adjust the weight vectors of all neurons by using the update formula 7, so that the best matching unit (BMU) and its topological neighbors are moved closer to the input vector X in the input space.
$W_i(n+1) = W_i(n) + \eta(n)\, h_{b,i}(n)\, (X - W_i(n))$    (7)
where $\eta(n)$ denotes the learning rate and $h_{b,i}(n)$ is a suitable neighborhood kernel function centered on the winner neuron. The neighborhood kernel function can be, for example, Gaussian:
$h_{b,i}(n) = \exp\left(-\frac{\|r_b - r_i\|^2}{2\sigma^2(n)}\right)$    (8)
where $r_b$ and $r_i$ denote the positions of neurons b and i on the SOM grid and $\sigma(n)$ is the width of the kernel, or neighborhood radius, at step n. $\sigma(n)$ also decreases monotonically over the steps. The initial neighborhood radius $\sigma(0)$ should be fairly wide so that the ordering of the neurons does not change discontinuously. $\sigma(0)$ can be set to a value equal to or greater than half the diameter of the map. Formula 9 gives the initial value of the neighborhood radius for a map of size a by b.
$\sigma(0) = \frac{\sqrt{a^2 + b^2}}{2}$    (9)
• Continuation: Continue with sampling until no noticeable changes in the feature map are observed or the pre-defined maximum number of iterations is reached.

The most commonly used visualization techniques for SOM are the U-matrix and the hit histogram. Figure 2 shows the U-matrix of a map trained on an input data set that has two clusters. The lighter the color of the hexagon connecting any two neurons, the smaller the distance between them. From the U-matrix, two large light regions can be visualized, one towards the left and the other to the right. These regions represent the two clusters obtained by training on the input data set. The U-matrix gives a direct visualization of the number of clusters and their distribution.

Figure 3: Hit histogram of a trained SOM

The hit histogram of the input data set on the trained map visualizes the distribution of the input data across the clusters. Each input data instance can be projected to the closest neuron on a trained SOM map. The closest neuron is called the best matching unit (BMU) of the input data instance. The hit histogram is constructed by counting the number of hits each neuron receives from the input data set. Figure 3 shows the hit histogram of an input data set on the trained SOM map. Each hexagon represents one neuron on the map. The size of the marker indicates the number of hits the neuron receives; a larger marker represents a larger number of hits on that neuron. Based on the hit histogram, most of the input data hits neurons in the left and right regions. These two regions correspond to the two clusters on the U-matrix shown in Figure 2.
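The initialization-sampling-matching-updating loop above can be sketched in plain Python as follows. The map size, learning-rate schedule, and toy data are illustrative choices for a small example, not the settings used in the paper:

```python
import math
import random

def train_som(data, rows, cols, dim, iters=1000, eta0=0.5, seed=0):
    """Minimal SOM training loop: initialization, then repeated
    sampling, similarity matching, and updating (Formulas 7-9)."""
    rng = random.Random(seed)
    # Initialization: random weight vectors for the rows*cols neurons
    weights = [[rng.random() for _ in range(dim)] for _ in range(rows * cols)]
    pos = [(i // cols, i % cols) for i in range(rows * cols)]  # grid positions
    sigma0 = math.sqrt(rows ** 2 + cols ** 2) / 2              # Formula 9
    for n in range(iters):
        x = rng.choice(data)  # Sampling: draw an input vector uniformly
        # Similarity matching: BMU = neuron with minimum Euclidean distance
        b = min(range(len(weights)),
                key=lambda i: sum((x[k] - weights[i][k]) ** 2 for k in range(dim)))
        eta = eta0 * (1 - n / iters)             # decaying learning rate
        sigma = sigma0 * (1 - n / iters) + 1e-3  # shrinking neighborhood radius
        for i, w in enumerate(weights):          # Updating (Formulas 7 and 8)
            d2 = (pos[b][0] - pos[i][0]) ** 2 + (pos[b][1] - pos[i][1]) ** 2
            h = math.exp(-d2 / (2 * sigma ** 2))  # Gaussian neighborhood kernel
            for k in range(dim):
                w[k] += eta * h * (x[k] - w[k])
    return weights

# toy input: two well-separated 2-D clusters
data = [[0.0, 0.0]] * 20 + [[1.0, 1.0]] * 20
som = train_som(data, rows=3, cols=3, dim=2, iters=500)
```

On this toy two-cluster input, the best matching units of the two cluster centres should end up in different regions of the map, which is exactly what the U-matrix and hit histogram visualize.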
EXPERIMENT SETTING AND RESULT ANALYSIS
To evaluate the proposed biomedical document clustering framework based on disease concepts, two subsets of large biomedical document collections have been used: the PubMed Central Open Access subset and the Ohsumed collection. The details of these two document collections and the corresponding clustering results and visualizations are described in the following subsections.
Datasets
The PubMed Central Open Access data set has been used by many research projects to examine tasks of biomedical literature clustering and classification [20, 21]. It is also part of the training corpus for the Word2Vec model used in this research. The Ohsumed document collection is a data set that has been used by many researchers [6, 16] for text mining. Although the Ohsumed collection includes documents that are not up to date, it is used to evaluate the robustness of the proposed document clustering framework on a data set where concepts might be presented differently from those included in the training corpus for Word2Vec.

PubMed Central Open Access (PMC-OA). The PubMed Central Open Access subset [3] contains over 1 million articles from the total collection of articles in PMC. For this research, a set of 600 articles was randomly selected from the 'A-B' subset, which includes articles from journals whose names start with the letter 'A' or 'B'. The number of selected articles from each journal is shown in Table 2.
To be consistent with the Ohsumed collection, only the content of the 'Title' and 'Abstract' sections of these documents is used. 658 unique disease concepts are identified by UMLS MetaMap. Figure 4 shows the distribution of these concepts based on the number of words in each concept.
Ohsumed Collection. The Ohsumed collection [5] used here includes the abstracts of 20,000 articles. These articles are related to cardiovascular diseases and are further categorized into 23 cardiovascular disease categories. For this research, a subset of 600 documents is randomly selected. These documents cover all 23 categories. Table 3 shows the number of documents selected from each category.
After disease concepts are identified using the UMLS MetaMap, 67 documents have no disease-related concept identified. These documents are not included in the experiments. In total, 1449 disease concepts are identified and extracted from the remaining 533 documents. Figure 4 shows the distribution of these concepts based on the number of words per concept, in comparison with the concepts extracted from PMC-OA. Although the total number of disease concepts extracted from the Ohsumed collection is higher than that extracted from PMC-OA, the distributions based on the number of words per concept are very similar. The Ohsumed collection has a slightly higher ratio of one-word concepts, whereas the PMC-OA collection has a higher ratio of concepts spanning two words.
Clustering, Visualization and Discussion
SOM has been used for document clustering after concept extraction and document representation using the proposed weighting scheme. The size of the map is 10 by 10, containing 100 neurons. The number of training iterations is set to 50,000. Figure 5 shows the clustering results for the PMC-OA document collection. Among the 658 concepts extracted using the UMLS MetaMap, 180 concepts occur in more than 1 document. Compared to the Ohsumed collection, a larger number of concepts have a document frequency greater than 1. That means the TF-IDF value in the weighting scheme has more impact on the PMC-OA data set than on the Ohsumed collection. The U-matrix of the trained SOM map shows clearer boundaries than that of the Ohsumed collection.
From the U-matrix and hit histogram, 8 clusters can be clearly identified based on the darker-colored neurons surrounding them. Cluster 5 is a large cluster of documents consisting of disease concepts like 'obesity', 'diabetes', 'hypertension', 'hyperglycemia', and so on. This cluster also includes other diseases such as 'coronary artery disease', since these diseases are highly related.
The 'coronary artery disease' might be an outcome of 'hypertension', 'hyperglycemia', or their combination. Cluster 6 contains documents discussing infections related to diseases such as 'Malaria'; other respiratory infection diseases such as 'tuberculosis' and 'bronchitis' are also included in this cluster. Cluster 8 is a smaller cluster which mainly includes chest-infection-related documents. Cluster 7 is another smaller cluster about infection concepts such as avian influenza. Clusters 6, 7, and 8 are close to each other since they are all about infections, but each is focused on a smaller area of infection. Cluster 4 contains three neurons which represent three different types of concepts. The left region is dominated by documents which discuss 'dysphagia' and similar concepts such as 'laryngospasm'. The concepts in the right half of the cluster include spinal disorders like 'stenosis', 'scoliosis', and 'spinal instability'. The top of the cluster is dominated by pain-related diseases and syndromes. Cluster 3 contains documents that are related to 'paralysis' and damage of nerves. Many of them discuss paralysis of the face, hands ('carpal tunnel syndrome'), spine ('spinal cord atrophy'), brain ('cerebral palsy'), legs ('spastic foot'), and so on. All these concepts are related and close to each other, and thus the cluster is well formed. Cluster 2 is a cluster of documents with concepts about different cardiovascular diseases such as 'hypertension', 'myocardial infarction', 'coronary artery disease', 'coronary heart disease', and 'ischemic strokes'. This cluster also contains a few documents that discuss brain strokes that arise from lesions in the brain or lead to speech disorders. More than half of the documents in this cluster discuss strokes and closely related cardiac concepts. One interesting finding is that cluster 1 contains all the documents in which the only concept identified by UMLS MetaMap is 'stroke'.
However, further analysis shows that these documents have nothing to do with 'stroke' as a disease.
This shows that the UMLS MetaMap cannot always accurately map all concepts to the semantic types. Figure 6 shows the clustering results for the Ohsumed collection based on the extracted disease concepts. Based on the document frequencies of the concepts in the Ohsumed collection, 1108 concepts out of the total 1449 occur in only one document, and in total 1414 concepts occur in fewer than five documents. That means the weights of these concepts rely heavily on the similarity measure between concepts.
Based on the original data set description, all documents are related to cardiovascular diseases. This leads to shorter distances between neurons, which is reflected by the colors of the U-matrix. By analyzing the U-matrix and hit histogram of the trained map, 8 clusters are identified. A majority of the documents in cluster 7 are about infections and infectious diseases, with half of them from the categories of bacterial infections and mycoses (C01), virus diseases (C02), and parasitic diseases (C03). The rest of the documents in this cluster discuss other infections from categories like respiratory tract diseases (C08) and digestive system diseases (C06).
There are also a few documents from immunologic diseases (C19) in this cluster. Notably, all of the documents talk about infections of different types. Cluster 1 has documents that discuss diseases of the nervous system, whereas the documents in cluster 2 discuss neoplasms, which include different types of cancers of the brain, prostate, neck, and so on. The documents in this cluster are from all the categories except virus diseases (C02) and disorders of environmental origin (C21). Cluster 3 includes documents about diseases related to hormone secretion and distribution. This cluster also includes diseases of the bones and blood, since these concepts are closely related. Cluster 4 contains documents with diseases of the ear, nose, throat, head, and surrounding areas of the face. Cluster 5 is the smallest cluster, and its documents concentrate on different types of tuberculosis and sexually transmitted diseases like AIDS, HPV, etc. Documents about 'cryptococcosis', which is often seen in patients with HIV whose immunity has been lowered, also fall in this cluster. Cluster 6 consists of documents with concepts relating to diabetes. Documents containing concepts like 'nephropathy', 'impaired glucose tolerance', and 'non-insulin dependent diabetes' are in the left half of the cluster, whereas the right half of the cluster is dominated by documents with concepts such as 'Crohn's disease', 'renal ulceration', and 'kidney stone'. Cluster 8 has documents about diseases related to different heart conditions and obstruction of the flow of blood. Since the theme of the documents in the Ohsumed collection is cardiovascular concepts, this cluster has documents from all of the categories except parasitic diseases (C03), neoplasms (C04), and digestive system diseases (C06).

Figure 6: Clustering results of Ohsumed Collection
It is worth noting that only disease concepts are extracted from both data sets and used for document clustering. While the original category labels of the Ohsumed collection might not have been assigned based on disease concepts, these labels are nevertheless used to assess the clustering results.
Overall, the proposed document clustering and visualization framework works well on both data sets, although the Ohsumed collection has many more disease concepts and the majority of them have very low document frequency. On the U-matrix, the colors of the neurons surrounding the clusters demonstrate how separated the clusters are: the darker the color, the more separated they are, meaning the clusters are more unrelated. The clusters on the U-matrix of PMC-OA appear to be more separated than those of the Ohsumed collection. One reason could be that all Ohsumed documents are related to cardiovascular diseases, so their clusters lie closer together on the U-matrix.
CONCLUSION AND FUTURE WORK
In this paper, a biomedical document clustering framework based on disease concepts is proposed. The disease concepts are identified using UMLS MetaMap. Instead of using an existing ontology to generate concept representations, the concepts are represented by vectors based on a combination of TF-IDF and Word2Vec models. The proposed similarity measure is based on the vector representations of the concepts and shows that closely associated disease concepts have higher similarity scores than others. A representation of documents that considers the local content and the semantic similarity between the concepts within the documents is used. A weighting scheme using TF-IDF combined with similarity scores between the concepts is proposed. Instead of focusing on clustering performance evaluation, clustering visualization is explored in this research. Self-Organizing Map is a clustering algorithm that provides a visualization aid for understanding the clusters and their distribution, and it is thus used in this research. The results show that clustering occurs along concepts of similar nature, of similar areas and organs of the body, and concepts which are synonymous with one another. Nearby clusters are related in most cases as well. This kind of visualization will help researchers explore related articles based on disease concepts.
Potential future work includes visualizing clusters of larger corpora by using a hierarchical clustering architecture, evaluating this visualization aid for the task of biomedical document search and extending this framework to biomedical document clustering based on concepts of symptoms and treatments.
Figure 1: Semantic mapping using UMLS MetaMap
Figure 2: U-matrix of a trained SOM. The U-matrix [10] holds all distances between neurons and their immediate neighbor neurons.
Figure 4: Distribution of the concepts based on the number of words
Table 1: Examples of concepts and the top 3 closest concepts based on the similarity scores

Concept                    Closest Concepts                    Similarity Score
hypertension               essential hypertension              0.813
                           hyperlipidaemia                     0.692
                           dyslipidemia                        0.659
dysfunction                endothelial dysfunction             0.739
                           renal dysfunction                   0.660
                           cortical dysfunction                0.639
carpal tunnel syndrome     bilateral carpal tunnel syndrome    0.970
                           cts carpal tunnel syndrome          0.957
                           carpal tunnel                       0.941
diabetes                   diabetes mellitus                   0.918
                           diabetes mellitus type ii           0.868
                           dm diabetes mellitus                0.845
cardiovascular disease     cardiac diseases                    0.8181
                           metabolic diseases                  0.8179
                           heart diseases                      0.787
Table 2: PMC-OA Data Set

Name of journal                                               # of documents
American Journal of Hypertension                              13
Augmentative and alternative communication                    2
Ancient Science of Life                                       3
Bioinformatics and biology insights                           45
Allergy and asthma proceedings                                28
BoneKEy reports                                               4
Anesthesia, essays and researches                             135
Biological trace element research                             31
Bone Marrow Research                                          1
Brain and language                                            1
American journal of physiology. Endocrinology and metabolism  11
Aphasiology                                                   3
Annals of rehabilitation medicine                             323
Table 3: Ohsumed Collection

Category                                             Label  # of documents
Bacterial Infections and Mycoses                     C01    22
Virus Diseases                                       C02    23
Parasitic Diseases                                   C03    29
Neoplasms                                            C04    26
Musculoskeletal Diseases                             C05    25
Digestive System Diseases                            C06    23
Stomatognathic Diseases                              C07    23
Respiratory Tract Diseases                           C08    24
Otorhinolaryngologic Diseases                        C09    29
Nervous System Diseases                              C10    23
Eye Diseases                                         C11    28
Urologic and Male Genital Diseases                   C12    26
Female Genital Diseases and Pregnancy Complications  C13    27
Cardiovascular Diseases                              C14    27
Hemic and Lymphatic Diseases                         C15    28
Neonatal Diseases and Abnormalities                  C16    25
Skin and Connective Tissue Diseases                  C17    28
Nutritional and Metabolic Diseases                   C18    27
Endocrine Diseases                                   C19    30
Immunologic Diseases                                 C20    26
Disorders of Environmental Origin                    C21    26
Animal Diseases                                      C22    25
Pathological Conditions, Signs and Symptoms          C23    29
[1] [n.d.]. Fact Sheet - UMLS Metathesaurus. https://www.nlm.nih.gov/pubs/factsheets/umlsmeta.html
[2] [n.d.]. MetaMap - A Tool For Recognizing UMLS Concepts in Text. https://metamap.nlm.nih.gov/
[3] [n.d.]. Open Access Subset. https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/
[4] [n.d.]. SNOMED CT. https://www.nlm.nih.gov/healthit/snomedct/
[5] [n.d.]. Text Categorization Corpora. http://disi.unitn.it/moschitti/corpora.htm
[6] Stephan Bloehdorn, Philipp Cimiano, and Andreas Hotho. 2006. Learning ontologies to improve text clustering and classification. In From Data and Information Analysis to Knowledge Engineering. Springer, 334-341.
[7] Carsten Görg, Hannah Tipney, Karin Verspoor, William A. Baumgartner Jr, K. Bretonnel Cohen, John Stasko, and Lawrence E. Hunter. 2010. Visualization and language processing for supporting analysis across the biomedical literature. In International Conference on Knowledge-Based and Intelligent Information and Engineering Systems. Springer, 420-429.
[8] Jun Gu, Wei Feng, Jia Zeng, Hiroshi Mamitsuka, and Shanfeng Zhu. 2013. Efficient semisupervised MEDLINE document clustering with MeSH-semantic and global-content constraints. IEEE Transactions on Cybernetics 43, 4 (2013), 1265-1276.
[9] Rasmus Knappe, Henrik Bulskov, and Troels Andreasen. 2007. Perspectives on ontology-based querying. International Journal of Intelligent Systems 22, 7 (2007), 739-761.
[10] Teuvo Kohonen. 1998. The self-organizing map. Neurocomputing 21, 1 (1998), 1-6.
[11] Teuvo Kohonen, Samuel Kaski, Krista Lagus, Jarkko Salojärvi, Jukka Honkela, Vesa Paatero, and Antti Saarela. 2000. Self organization of a massive document collection. IEEE Transactions on Neural Networks 11, 3 (2000), 574-585.
[12] S. Logeswari and K. Premalatha. 2013. Biomedical document clustering using ontology based concept weight. In 2013 International Conference on Computer Communication and Informatics (ICCCI). IEEE, 1-4.
[13] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems. 3111-3119.
[14] S.P.F.G.H. Moen, Tapio Salakoski, and Sophia Ananiadou. 2013. Distributional semantics resources for biomedical text processing. (2013).
[15] Philip Resnik. 1999. Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language. Journal of Artificial Intelligence Research 11 (1999), 95-130.
[16] Mondelle Simeon and Robert Hilderman. 2008. Categorical proportional difference: A feature selection method for text categorization. In Proceedings of the 7th Australasian Data Mining Conference - Volume 87. Australian Computer Society, Inc., 201-208.
[17] Zhibiao Wu and Martha Palmer. 1994. Verbs semantics and lexical selection. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 133-138.
[18] Illhoi Yoo and Xiaohua Hu. 2006. A comprehensive comparison study of document clustering for a biomedical digital library MEDLINE. In Proceedings of the 6th ACM/IEEE-CS Joint Conference on Digital Libraries. ACM, 220-229.
[19] Illhoi Yoo, Xiaohua Hu, and Il-Yeol Song. 2006. A coherent graph-based semantic clustering and summarization approach for biomedical literature and a new summarization evaluation method. In First International Workshop on Text Mining in Bioinformatics Proceedings. 84-89. https://doi.org/10.1186/1471-2105-8-s9-s4
[20] Xiaodan Zhang, Liping Jing, Xiaohua Hu, Michael Ng, and Xiaohua Zhou. 2007. A comparative study of ontology based term similarity measures on PubMed document clustering. Advances in Databases: Concepts, Systems and Applications (2007), 115-126.
[21] Shanfeng Zhu, Jia Zeng, and Hiroshi Mamitsuka. 2009. Enhancing MEDLINE document clustering by incorporating MeSH semantic similarity. Bioinformatics 25, 15 (2009), 1944-1951.
|
[] |
[
"A Physical Picture of Bispectrum Baryon Acoustic Oscillations in the Interferometric Basis",
"A Physical Picture of Bispectrum Baryon Acoustic Oscillations in the Interferometric Basis"
] |
[
"Hillary L Child \nHEP Division\nArgonne National Laboratory\n60439LemontILUSA\n\nDepartment of Physics\nUniversity of Chicago\n60637ChicagoILUSA\n",
"Zachary Slepian \nDepartment of Astronomy\nUniversity of Florida\n211 Bryant Space Sciences Center32611GainesvilleFLUSA\n\nLawrence Berkeley National Laboratory\n1 Cyclotron Road94720BerkeleyCAUSA\n\nBerkeley Center for Cosmological Physics\nUniversity of California\n94720Berkeley, BerkeleyCAUSA\n",
"† ",
"Masahiro Takada \nKavli Institute for the Physics and Mathematics of the Universe (WPI)\nUTIAS\nThe University of Tokyo\n277-8583ChibaJapan\n"
] |
[
"HEP Division\nArgonne National Laboratory\n60439LemontILUSA",
"Department of Physics\nUniversity of Chicago\n60637ChicagoILUSA",
"Department of Astronomy\nUniversity of Florida\n211 Bryant Space Sciences Center32611GainesvilleFLUSA",
"Lawrence Berkeley National Laboratory\n1 Cyclotron Road94720BerkeleyCAUSA",
"Berkeley Center for Cosmological Physics\nUniversity of California\n94720Berkeley, BerkeleyCAUSA",
"Kavli Institute for the Physics and Mathematics of the Universe (WPI)\nUTIAS\nThe University of Tokyo\n277-8583ChibaJapan"
] |
[
"MNRAS"
] |
We present a picture of the matter bispectrum in a novel "interferometric" basis designed to highlight interference of the baryon acoustic oscillations (BAO) in the power spectra composing it. Triangles where constructive interference amplifies BAO provide stronger cosmic distance constraints than triangles with destructive interference. We show that the amplitude of the BAO feature in the full cyclically summed bispectrum can be decomposed into simpler contributions from single terms or pairs of terms in the perturbation theory bispectrum, and that across large swathes of our parameter space the full BAO amplitude is described well by the amplitude of BAO in a single term. The dominant term is determined largely by the F (2) kernel of Eulerian standard perturbation theory. We present a simple physical picture of the BAO amplitude in each term; the BAO signal is strongest in triangle configurations where two wavenumbers differ by a multiple of the BAO fundamental wavelength.
| null |
[
"https://arxiv.org/pdf/1811.12396v1.pdf"
] | 119,088,960 |
1811.12396
|
8751ee7732abd2b8eadbf30e42e39de87620dbaa
|
A Physical Picture of Bispectrum Baryon Acoustic Oscillations in the Interferometric Basis
2018
Hillary L Child
HEP Division
Argonne National Laboratory
60439LemontILUSA
Department of Physics
University of Chicago
60637ChicagoILUSA
Zachary Slepian
Department of Astronomy
University of Florida
211 Bryant Space Sciences Center32611GainesvilleFLUSA
Lawrence Berkeley National Laboratory
1 Cyclotron Road94720BerkeleyCAUSA
Berkeley Center for Cosmological Physics
University of California
94720Berkeley, BerkeleyCAUSA
†
Masahiro Takada
Kavli Institute for the Physics and Mathematics of the Universe (WPI)
UTIAS
The University of Tokyo
277-8583ChibaJapan
A Physical Picture of Bispectrum Baryon Acoustic Oscillations in the Interferometric Basis
MNRAS
000 (2018). Accepted XXX. Received YYY; in original form ZZZ. Preprint 30 November 2018. Compiled using MNRAS LaTeX style file v3.0.
Keywords: dark energy - cosmological parameters - distance scale - cosmology: theory
We present a picture of the matter bispectrum in a novel "interferometric" basis designed to highlight interference of the baryon acoustic oscillations (BAO) in the power spectra composing it. Triangles where constructive interference amplifies BAO provide stronger cosmic distance constraints than triangles with destructive interference. We show that the amplitude of the BAO feature in the full cyclically summed bispectrum can be decomposed into simpler contributions from single terms or pairs of terms in the perturbation theory bispectrum, and that across large swathes of our parameter space the full BAO amplitude is described well by the amplitude of BAO in a single term. The dominant term is determined largely by the F (2) kernel of Eulerian standard perturbation theory. We present a simple physical picture of the BAO amplitude in each term; the BAO signal is strongest in triangle configurations where two wavenumbers differ by a multiple of the BAO fundamental wavelength.
INTRODUCTION
The Baryon Acoustic Oscillation (BAO) method (Blake & Glazebrook 2003; Hu & Haiman 2003; Linder 2003; Seo & Eisenstein 2003) has become a central means of pursuing the essential nature of dark energy, a mysterious substance making up roughly 72% of the present-day Universe. The BAO method uses the imprint of sound waves in the early Universe on the late-time clustering of galaxies to probe the cosmic expansion history, which through general relativity is linked to dark energy.
The BAO method has been applied to the galaxy 2-point correlation function (2PCF) and power spectrum (Ross et al. 2017; Alam et al. 2017), as well as more recently to the galaxy 3-point correlation function (3PCF) (Gaztanaga et al. 2009; Slepian et al. 2017a,b) and bispectrum (Pearson & Samushia 2018), measuring respectively 2- and 3-point clustering over random in configuration space and Fourier space. While measurements of the bispectrum and 3PCF have improved BAO constraints over those of the power spectrum alone, optimal constraints are difficult to obtain given the large number of bispectrum triangles. Bispectrum covariance matrices are often estimated from mock catalogs (Gaztanaga et al. 2009; Pearson & Samushia 2018; though other approaches exist, see Slepian et al. 2017a,b). In order to properly estimate covariance matrices, the number of mock catalogs must greatly exceed the number of triangles (Percival et al. 2014); when many triangles are used to constrain BAO, the number of mocks needed is unrealistic with present computational resources. For example, Pearson & Samushia (2018) measured the bispectrum for a large number of triangles, but noted that their error bars were limited by the number of mocks available to estimate the covariance matrix; more mocks would improve their 1.1% precision joint constraints substantially to 0.7%, a gain of more than 30%.
With better understanding of which bispectrum triangles are most sensitive to BAO, future studies could obtain better BAO constraints with a smaller set of triangles and, therefore, a smaller covariance matrix. These triangles could be identified by measuring all bispectrum triangles and their covariances, but such an approach also faces the problem of limited mock catalogs. Because fully N-body mocks cannot presently provide a good estimate of the full covariance matrix of all bispectrum triangles, we must select a set of optimal triangles for BAO constraints-without knowledge of the full covariance matrix.
Recently, our work in Child et al. (2018) proposed one technique to select bispectrum measurements that are sensitive to BAO. We highlighted that BAO in the bispectrum constructively interfere in certain triangle configurations, amplifying the BAO signal. In a short work, we measured bispectra only on triangles where the BAO signal is amplified. With this relatively small set of bispectrum measurements we found substantial improvements in BAO constraints over power spectrum measurements alone, equivalent to lengthening BOSS by roughly 30%. Our method for triangle selection greatly reduced the number of bispectrum measurements necessary to obtain such an improvement.
In detail, at leading order the bispectrum involves products of two power spectra; each power spectrum introduces an oscillatory BAO feature. When the oscillations are in phase, they amplify the BAO signal. In our earlier paper, we showed that this constructive interference increases the amplitude of BAO in the bispectrum. We introduced a new parametrization of bispectrum triangles: instead of the three triangle sides k 1 , k 2 , and k 3 , we use the length of one triangle side k 1 , the difference in length of the second from the first in units of the BAO fundamental wavelength, and the angle between them θ. We computed the BAO amplitude for a selection of triangle configurations, producing a map of the root-mean-square (RMS) amplitude as a function of the length difference and the opening angle. Using this RMS map, we can surgically identify the configurations most suitable for studies of BAO in the bispectrum.
The "interferometric basis" proposed in our earlier work promises other applications beyond improvement in BAO constraints with relatively few bispectrum measurements. First, since our method identifies the triangles that are most sensitive to BAO, it offers an approach to more efficiently investigate the independence of bispectrum information from that obtained via reconstruction Noh et al. 2009;Padmanabhan et al. 2009Padmanabhan et al. , 2012 and the covariance between the bispectrum and power spectrum. Second, our parameterization allows intuitive visualization of BAO in the bispectrum. Third, our interferometric approach is sensitive to phase shifts associated with N e f f (such as that driven by relativistic neutrinos at high redshift (Baumann et al. 2017)), spinning particles in the early Universe (Moradinezhad Dizgah et al. 2018), or relative velocities between baryons and dark matter (Dalal et al. 2010;Tseliakhovich & Hirata 2010;Yoo et al. 2011;Slepian et al. 2018).
In this paper, we investigate more fully the physics of BAO in the interferometric basis. The bispectrum is a sum of three cyclic terms, but for many triangles, the cyclic sum is dominated by only one or two of the three terms. Which term dominates is determined primarily by the F (2) kernel of Eulerian standard perturbation theory (SPT). Products of power spectra also enter the leading-order perturbation theory (PT) bispectrum, but their role in the dominance structure is secondary; instead, they introduce the oscillatory features whose interference is highlighted by our basis.
When the bispectrum is dominated by only one or two terms, the amplitude of BAO in the full bispectrum can be approximated by the BAO amplitude in the dominant term or terms. We show analytically that because BAO are a small feature in the power spectrum, the difference between the RMS amplitude computed under this approximation and the full RMS amplitude vanishes at leading order. We can therefore decompose the full RMS map into approximate "eigen-RMSes" computed from individual terms. Numerical work verifies that this decomposition successfully reproduces the primary features of the full RMS map.
To understand these features, we study the behavior of BAO in each term and pair of terms that can dominate the bispectrum. The structure of the power spectrum, in particular BAO and their envelope due to Silk damping, determines the RMS amplitude. For each triangle configuration, the BAO amplitude in each term is driven by one of four interactions between power spectra: interference, incoherence, feathering, or single power spectrum. The first, interference, can dramatically amplify BAO amplitude in certain configurations, like those used in Child et al. (2018) to constrain the BAO scale.
In general, then, the F (2) kernel determines which pairs of power spectra set the amplitude of BAO in the measured bispectrum. The remainder of the paper details this broad picture as follows. In §2, we review the interferometric basis as presented in our earlier work, and define the RMS amplitude of the ratio of physical to "no-wiggle" bispectrum we use in our analysis. §3 introduces notation used throughout. §4 shows which triangle configurations are dominated by a single term of the bispectrum cyclic sum; in these regions, the BAO signal in the bispectrum simplifies to the BAO signal in a single term, as shown analytically in §5. In §6, we numerically calculate the BAO amplitude in each term of the cyclic sum. These "eigen-RMSes" approximate the BAO amplitude in the full bispectrum in the regions where the corresponding terms dominate, and they assemble into a picture that matches the full map of BAO amplitude. §7 presents implications of our study for the reduced bispectrum, the 3PCF, and multipole expansions of the 3PCF and bispectrum. §8 concludes.
Throughout we adopt a spatially flat ΛCDM cosmology at z = 0 consistent with the WMAP-7 (Komatsu et al. 2011) parameters of the MockBOSS simulations (Sunayama et al. 2016) used in Child et al. (2018): Ω m = 0.2648, Ω b h 2 = 0.02258, n s = 0.963, σ 8 = 0.80, and h = 0.71.
INTERFEROMETRIC BASIS
Several sets of triangle parameters have been used in previous works for the isotropic bispectrum and three-point correlation function (3PCF). Sefusatti et al. (2010), Kayo et al. (2013), and Baldauf et al. (2015) considered the bispectrum for equilateral or isosceles triangles. Scoccimarro et al. (1998), Scoccimarro (2000), Bernardeau et al. (2002), Sefusatti et al. (2010), Baldauf et al. (2012), Gil-Marín et al. (2012), and Hikage et al. (2017) used one side k_1, the ratio of a second side to the first k_2/k_1, and the angle between the two θ_12; the third side can also be parameterized simply as k_3 (Pearson & Samushia 2018) or by its ratio to k_1 (Jeong & Komatsu 2009; Baldauf et al. 2015). Scoccimarro (2000) and Sefusatti et al. (2006) allowed each triangle side to be any integer multiple of the bin width ∆k ≈ 0.015 h/Mpc.
Figure 1. The triangle parameter θ is defined in equation (3) as the exterior angle between k_1 and k_2.

Like the bispectrum, the 3PCF can be parametrized using one side r_1, the ratio of a second side to the first r_2/r_1, and the opening angle θ (Kulkarni et al. 2007; Marín 2011; Marin et al. 2013). Other parameterizations use two sides r_1 and r_2. The third parameter can be the opening angle θ between them (McBride et al. 2011a,b; Guo et al. 2014, 2015), the cosine µ of the opening angle (Gaztanaga et al. 2009), a shape parameter combining the three side lengths (Peebles 1980; Jing & Börner 2004; Wang et al. 2004; Nichol et al. 2006), or a multipole expansion of the dependence on opening angle (Szapudi 2004; Pan & Szapudi 2005; Slepian et al. 2017a,b). Many studies of the anisotropic bispectrum and 3PCF, which retain information about the line of sight, have also employed a multipole basis with respect to the line of sight in redshift space (e.g. Gagrani & Samushia 2017; Yamamoto et al. 2017; Castorina & White 2018; Desjacques et al. 2018; Nan et al. 2018; Yankelevich & Porciani 2018). Slepian & Eisenstein (2018) and Sugiyama et al. (2018) use a spherical harmonic expansion of the 3PCF and bispectrum, which includes information on both the internal angle and the angles to the line of sight.
In Child et al. (2018), we introduced a new basis for bispectrum work motivated by the physics of BAO in the bispectrum. The bispectrum B involves products of linear matter power spectra P (e.g. Scoccimarro et al. 1998), as

B(k_1, k_2, k_3) = 2 P(k_1) P(k_2) F^{(2)}(k_1, k_2; \hat{k}_1 \cdot \hat{k}_2) + \mathrm{cyc.}  (1)
We refer to 2 P(k_1) P(k_2) F^{(2)}(k_1, k_2; \hat{k}_1 \cdot \hat{k}_2) as the pre-cyclic term, and to the terms denoted by cyc. as the post-cyclic terms. F^{(2)} is the Eulerian standard perturbation theory kernel that generates a second-order density field when integrated against two linear density fields. The F^{(2)} kernel depends only on two side lengths k_i and k_j and the angle between them (through the dot product \hat{k}_i \cdot \hat{k}_j):

F^{(2)}(k_i, k_j; \hat{k}_i \cdot \hat{k}_j) = \frac{5}{7} + \frac{1}{2}\left(\frac{k_i}{k_j} + \frac{k_j}{k_i}\right)(\hat{k}_i \cdot \hat{k}_j) + \frac{2}{7}(\hat{k}_i \cdot \hat{k}_j)^2.  (2)
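For concreteness, the kernel of equation (2) can be coded directly; a minimal sketch (the function name `F2` is ours):

```python
def F2(ki, kj, mu):
    """Eulerian SPT second-order kernel F^(2)(k_i, k_j; mu),
    where mu is the cosine of the angle between k_i and k_j (equation 2)."""
    return 5.0 / 7.0 + 0.5 * (ki / kj + kj / ki) * mu + (2.0 / 7.0) * mu**2

# Two equal, parallel wavevectors give F2 = 2; a side twice as long and
# antiparallel gives -0.25 (the values quoted in Section 4.3 for the
# degenerate triangle with theta = 0, delta = 0).
print(F2(0.1, 0.1, 1.0))    # ≈ 2.0
print(F2(0.2, 0.1, -1.0))   # ≈ -0.25
```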
We will refer to the middle term of equation (2), (k_i/k_j + k_j/k_i)(\hat{k}_i \cdot \hat{k}_j), as the dipole contribution to F^{(2)}. The power spectra in equation (1) have BAO features that oscillate sinusoidally. These features can thus interfere, motivating us to consider a parameterization of the bispectrum that transparently captures the phase structure. In particular, we set up our parametrization to capture the phase structure of the pre-cyclic term (first term in equation 1) in k_1 and k_2, as follows:

k_1, \qquad k_2 - k_1 = \delta \frac{\lambda_f}{2}, \qquad \cos\theta = \hat{k}_1 \cdot \hat{k}_2.  (3)
Throughout this work we assume without loss of generality that δ is positive, so k_2 > k_1. The external angle θ is shown in Figure 1. The fundamental wavelength of the BAO in Fourier space λ_f is given by

\lambda_f = \frac{2\pi}{\tilde{s}_f} \approx 0.0574\ h/\mathrm{Mpc},  (4)

where \tilde{s}_f = 109.5 Mpc/h is the effective sound horizon evaluated at a fiducial wavenumber k_f = 0.2 h/Mpc. As discussed in , the lowest-wavenumber nodes of the baryonic transfer function occur at higher wavenumber than the nodes of sin(ks). The effective sound horizon grows with k for k ≲ 0.05 h/Mpc and asymptotes to the sound horizon s for k ≳ 0.05 h/Mpc; we define λ_f according to its asymptotic value at k_f = 0.2 h/Mpc.
We use "configuration" to describe a set of triangles with fixed δ and θ over the range 0.01 ≤ k 1 /[h/Mpc] ≤ 0.2. For each configuration, we divide k 1 into 100 bins of width 1.9 × 10 −3 h/Mpc. The other two wavenumbers k 2 and k 3 are calculated from each k 1 according to equation (3). We study configurations with 0 ≤ δ ≤ 4.25 and 0 ≤ θ ≤ 1.1, with 80 points in θ and 80 in δ for a total of 6400 configurations. We sample many (δ, θ) points simply to produce well-resolved figures, but we note that configurations that are close to each other in (δ, θ) space are highly covariant.
We restrict k 1 to the range 0.01 ≤ k 1 /[h/Mpc] ≤ 0.2 and δ to be less than about 4 to capture most of the effects of BAO. For most configurations, k 1 is smaller than the other two triangle sides, but for small δ and θ/π ∼ 1, k 3 can be smaller than k 1 . In our previous paper, these configurations were not used to constrain BAO because they are subject to cosmic variance and covariant with the power spectrum (as discussed under "Simulations" in Child et al. 2018).
Below our minimum wavenumber of 0.01 h/Mpc, cosmic variance becomes significant; given the mock catalogs we use one cannot make a sufficient number of subdivisions to estimate covariances on these large scales. Of course at large scales the covariance should be dominated by the Gaussian Random Field (GRF) contribution, so a template could be used to model the covariance (e.g., as Slepian & Eisenstein 2015 does for the isotropic 3PCF and Slepian & Eisenstein 2018 for the anisotropic 3PCF).
However, even with an adequate covariance, the contribution of low-wavenumber modes to BAO constraints should be small given the small number of large-scale modes in the volume of a survey such as DESI 1 (DESI Collaboration et al. 2016). Our minimum wavenumber corresponds to a physical scale of 628 Mpc/h; DESI will have volume of order 50 [Gpc/h] 3 equivalent to a box side length of roughly 3700 Mpc/h. Thus there are of order 200 modes of wavelength 628 Mpc/h in the box, enabling measurement to about 7% precision. The contribution of lowwavenumber bispectrum modes to BAO constraints will therefore be negligible compared to the 0.1% precision DESI will achieve using power spectrum BAO at higher wavenumber. It is the case that the other two triangle sides can probe higher wavenumbers-our maximum δ studied is 4, so k 1 = 0.01 h/Mpc corresponds to at most k 2 = 2λ f + 0.01 h/Mpc = 0.125 h/Mpc and k 3 = k 1 + k 2 = 0.135 h/Mpc. These wavenumbers do access BAO scales, but nonetheless, the bispectrum error bar will not be competitive with DESI power spectrum precision as the total bispectrum error bars of such configurations will be dominated by the cosmic variance of the shortest side k 1 .
At higher wavenumbers than our maximum, even at the level of linear theory, Silk damping (Silk 1968) suppresses the BAO feature. Wavenumbers above k_NL ∼ 0.1 h/Mpc are nonlinear, so perturbation theory no longer provides an accurate model of the bispectrum at these scales (Rampf & Wong 2012). Effective field theory (EFT) models perform reasonably well up to k ∼ 0.3 h/Mpc; Carrasco et al. (2012) describes the power spectrum to the percent level for k ≲ 0.3 h/Mpc. In the case of the bispectrum (e.g. Bertolini & Solon 2016; Nadler et al. 2018; de Belsunce & Senatore 2018), the maximum wavenumber at which EFT agrees with simulations depends on configuration and cosmology, but EFT models of the real-space matter bispectrum perform well up to k ∼ 0.2 h/Mpc (Angulo et al. 2015; Baldauf et al. 2015). In redshift space, however, perturbation theory models break down at yet smaller wavenumbers, differing from bispectrum measurements at the 10% level by k = 0.1 h/Mpc (Smith et al. 2008).
Baryonic effects, which are not as yet satisfactorily modeled, also become important at wavenumbers above our maximum. For k ≳ 1 h/Mpc, hydrodynamical simulations find a 5 − 15% alteration in the power spectrum relative to dark-matter-only simulations (Chisari et al. 2018). As the bispectrum scales roughly as P^2 with P the power spectrum, this ∼10% uncertainty in the power spectrum likely translates to ∼20% in the bispectrum, which at tree level is proportional to the square of the power spectrum (see also the hierarchical ansatz of Groth & Peebles 1977). In the absence of a theoretical model for baryonic effects, the uncertainty in high-wavenumber models of the bispectrum is much too large to measure BAO to sub-percent precision.
Overall, then, the range of wavenumbers we consider is a conservative cut to isolate the regime where BAO effects are most prominent and the bispectrum is best understood. Within this range of scales, our interferometric basis identifies the configurations where constructive interference of power spectra amplifies the BAO "wiggles." To quantify the presence of BAO in each configuration, we compute the RMS amplitude of the ratio R of the bispectrum B(k 1 , δ, θ) to its no-wiggle analog B nw (k 1 , δ, θ). We have
R(k_1, \delta, \theta) = \frac{B(k_1, \delta, \theta)}{B^{\mathrm{nw}}(k_1, \delta, \theta)},  (5)
where the numerator is computed using power spectra P(k) from CAMB (Lewis et al. 2000) and the denominator using power spectra P^{nw}(k) from the fitting formula for the no-wiggle transfer function of . The variance is

A^2(\delta, \theta) \equiv \int_{0.01}^{0.2} \left[ R(k_1, \delta, \theta) - \bar{R}(\delta, \theta) \right]^2 \frac{dk_1}{[h/\mathrm{Mpc}]},  (6)

where \bar{R}(δ, θ) is the mean of R(k_1, δ, θ) on the same range, 0.01 ≤ k_1/[h/Mpc] ≤ 0.2.

Figure 2. Top: The root-mean-square amplitude A (equation 6) of the bispectrum BAO feature in triangle configurations parameterized by (δ, θ). Maxima and minima are set by the constructive and destructive interference of BAO oscillations in the bispectrum. Bottom: In many regions of the RMS map, a single term or pair of terms in the bispectrum cyclic sum (1) dominates the sum. The boundaries between these regions (white lines) correspond to changes in the behavior of the RMS map. The numerals indicate the number of terms that must be considered to accurately approximate the bispectrum.

Figure 2 shows the root-mean-square amplitude A for a selection of configurations. Throughout this work we will refer to Figure 2 as the root-mean-square (RMS) map. Our basis is a transformation of the triangle sides (k_1, k_2, k_3); the axes θ and δ of our RMS map correspond roughly to k_3 and k_2, respectively. In our basis, the wavenumbers k_2 and k_3 depend on k_1, δ, and θ as
k_2 = k_1 + \delta \lambda_f / 2, \qquad k_3 = \sqrt{k_1^2 + k_2^2 + 2 k_1 k_2 \cos\theta} = \sqrt{k_1 (1 + \cos\theta)(2 k_1 + \delta \lambda_f) + (\delta \lambda_f / 2)^2},  (7)

where the first equality for k_3 stems from the orientation of θ shown in Figure 1 and the law of cosines. Configurations with the same k_3 therefore lie along sloped curves in the (δ, θ) plane. As the power spectrum depends only on the magnitude of the wavenumber, these curves are also traces of constant P(k_3).
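The mapping of equation (7) can be sketched in code, with the closed form for k_3 checked against the law of cosines; the names `sides` and `lam_f` are ours:

```python
import math

lam_f = 2.0 * math.pi / 109.5    # fundamental BAO wavelength, ≈ 0.0574 h/Mpc (equation 4)

def sides(k1, delta, theta):
    """Map the interferometric coordinates (k1, delta, theta) to (k1, k2, k3)."""
    k2 = k1 + delta * lam_f / 2.0
    k3 = math.sqrt(k1 * (1.0 + math.cos(theta)) * (2.0 * k1 + delta * lam_f)
                   + (delta * lam_f / 2.0) ** 2)
    return k1, k2, k3

k1, k2, k3 = sides(0.1, 2.0, 0.4 * math.pi)
# k3 agrees with the law of cosines for the exterior angle theta (Figure 1):
assert abs(k3**2 - (k1**2 + k2**2 + 2.0 * k1 * k2 * math.cos(0.4 * math.pi))) < 1e-12
```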
The parameter δ was chosen to produce constructive interference in the pre-cyclic term of the bispectrum (first term in equation 1). However, constructive interference is not limited only to this single term: we expect interference as well where k_2 and k_3, or k_3 and k_1, differ by integer multiples n of the BAO wavelength λ_f. We calculate the configurations for which these conditions are satisfied. Curves where k_2 = k_1 + nλ_f are horizontal lines in the (θ, δ) plane, as shown in the left panel of Figure 3; that is,

k_2 = k_1 + n \lambda_f \quad \mathrm{where} \quad \delta/2 = n.  (8)
The curves where k_2 = k_3 + nλ_f, shown for n = 0 in Figure 3, are given by

\delta = \frac{-2 k_1^2 \cos\theta - k_1^2 - 2 n \lambda_f k_1 + n^2 \lambda_f^2}{n \lambda_f^2 + \lambda_f k_1 \cos\theta}.  (9)
We only show the n = 0 case as higher harmonics of k_2 = k_3 + nλ_f do not correspond to features in the RMS map, as discussed in §6.1.2 below. The curves where k_3 = k_1 + nλ_f, shown as dashed curves in the right panel of Figure 3, follow

\delta = \frac{2}{\lambda_f} \left[ \sqrt{k_1^2 \cos^2\theta + 2 n \lambda_f k_1 + n^2 \lambda_f^2} - k_1 (1 + \cos\theta) \right].  (10)
In general, equations (9) and (10) depend on both k_1 and θ. In the special case of the equilateral configuration, the k_1 dependence cancels; that is, when θ/π = 2/3 ≈ 0.67 and n = 0, equation (9) gives δ = 0. For these configurations, k_3 equals k_2 for all k_1. For all other configurations, however, equations (9) and (10) can only be satisfied for a single k_1. When necessary, we choose a representative value of k_1 = 0.1 h/Mpc to compute the configurations for which equations (9) and (10) hold.
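These conditions can be verified numerically; a sketch (names ours) that plugs the δ from equations (9) and (10) back into the triangle and checks the intended harmonic relation:

```python
import math

lam_f = 2.0 * math.pi / 109.5    # h/Mpc

def k3_of(k1, delta, theta):
    """k3 from the law of cosines with the exterior angle theta (equation 7)."""
    k2 = k1 + delta * lam_f / 2.0
    return math.sqrt(k1**2 + k2**2 + 2.0 * k1 * k2 * math.cos(theta))

k1, theta, n = 0.1, 0.8 * math.pi, 1

# Equation (9): delta such that k2 = k3 + n*lam_f
d9 = (-2.0 * k1**2 * math.cos(theta) - k1**2 - 2.0 * n * lam_f * k1
      + n**2 * lam_f**2) / (n * lam_f**2 + lam_f * k1 * math.cos(theta))
assert abs((k1 + d9 * lam_f / 2.0) - (k3_of(k1, d9, theta) + n * lam_f)) < 1e-9

# Equation (10): delta such that k3 = k1 + n*lam_f
d10 = (2.0 / lam_f) * (math.sqrt(k1**2 * math.cos(theta)**2
                                 + 2.0 * n * lam_f * k1 + n**2 * lam_f**2)
                       - k1 * (1.0 + math.cos(theta)))
assert abs(k3_of(k1, d10, theta) - (k1 + n * lam_f)) < 1e-9
```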
NOTATION
Here we define notation for several combinations of power spectra, F (2) kernels, and bispectra that will be used throughout.
BAO in the bispectrum come only from oscillations in the power spectrum, which we isolate as

P^{\mathrm{BAO}}_i = \frac{P(k_i)}{P^{\mathrm{nw}}(k_i)} \equiv 1 + w_i,  (11)
where w_i is defined through this equality and represents the BAO-only piece of the power spectrum. We note that w_i ≪ 1; the baryon fraction f_b in our Universe is small (f_b ≡ Ω_b/Ω_m ∼ 20%), so the BAO are a small feature in the power spectrum.

Figure 3. In our basis, k_2 depends only on δ, while k_3 varies with both δ and θ. The behavior of these two wavenumbers in the (δ, θ) basis is critical for understanding both the power spectrum and F^{(2)} kernel. As P(k) ∼ 1/k, the structure of P(k) in the δ-θ plane is similar to that of the individual wavenumbers, while F^{(2)} is a more complicated function (as shown in Figure 7). k_2 and k_3 are calculated according to equation (7) with k_1 = 0.1 h/Mpc. Dashed lines in the left panel show configurations for which k_2 = k_1 + nλ_f (equation 8) for n = 1 and 2; n = 0 coincides with the θ axis. In the right panel, dashed curves show configurations for which k_3 = k_1 + nλ_f (equation 10) for n = 0, 1, and 2. Solid curves show configurations for which k_2 = k_3; the color is red where k_i is larger than k_1, blue where k_i is smaller than k_1, and white where k_i = k_1. (Panels show k_2 and k_3; color scale is (k_i − k_1)/λ_f.)

Each term in the cyclic sum (1) is denoted by
B_{ij} = 2 P_{ij} F^{(2)}_{ij}  (12)

with

P_{ij} = P(k_i) P(k_j)  (13)

and

F^{(2)}_{ij} = F^{(2)}(k_i, k_j; \hat{k}_i \cdot \hat{k}_j).  (14)

The ratio of each term to its no-wiggle analog is

R_{ij} = \frac{B_{ij}}{B^{\mathrm{nw}}_{ij}} = \frac{P_{ij}}{P^{\mathrm{nw}}_{ij}} = P^{\mathrm{BAO}}_i P^{\mathrm{BAO}}_j,  (15)
where the second equality holds because the F (2) kernel is unaltered going from a physical to a "no-wiggle" cosmological model. The kernel stems from the Newtonian gravity solution of the equations of perturbation theory assuming an Einstein-De Sitter (matter-dominated) cosmology, and is thus independent of the input linear power spectrum. We note that all cosmological parameters of the no-wiggle model, including the matter density, are identical to the physical model. Using the definition (11) of w, we may rewrite
R_{ij} = 1 + w_i + w_j + w_i w_j \equiv 1 + w_{ij},  (16)
where the last equality defines w i j , the oscillatory piece of one term of the bispectrum (1).
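Since w_i ≪ 1, w_ij ≈ w_i + w_j, so the BAO amplitude in one term is governed by the interference of two sinusoids. A toy illustration (the pure-sine wiggle and its 0.05 amplitude are made up; real wiggles are Silk-damped):

```python
import numpy as np

lam_f = 0.0574                      # h/Mpc
k1 = np.linspace(0.01, 0.2, 2000)

def w(k, amp=0.05):
    """Toy BAO wiggle: P/P_nw - 1 modeled as a pure sinusoid."""
    return amp * np.sin(2.0 * np.pi * k / lam_f)

def rms_w12(delta):
    """RMS of the oscillatory piece w_12 of equation (16) for one configuration."""
    k2 = k1 + delta * lam_f / 2.0
    w12 = w(k1) + w(k2) + w(k1) * w(k2)
    return np.std(w12)

rms_constructive = rms_w12(0.0)   # wiggles in phase: amplitudes add
rms_destructive = rms_w12(1.0)    # offset by half a wavelength: wiggles cancel
print(rms_constructive > 5.0 * rms_destructive)   # True
```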
To refer to a ratio where one term in the sum is negligible, we use

R_{ij+jk} = \frac{B_{ij} + B_{jk}}{B^{\mathrm{nw}}_{ij} + B^{\mathrm{nw}}_{jk}}.  (17)
The sum R_12 + R_23 + R_31 is not equal to R of equation (5).

Figure 4. A term dominates (§4.1) if the median of its ratio with each of the other two terms is at least 5. In our color scheme, primary colors (red, yellow, and blue) represent single terms, while the secondary colors (orange, green, and purple) represent pairs of terms. For an equilateral configuration (δ = 0, θ/π = 0.67), all three terms are identical so none can dominate; the black region surrounds this configuration. Symbols indicate representative configurations that are discussed in detail in §4.3. The white curve shows k_2 = k_3 (equation 9), where at least two terms must be of comparable magnitude (§4.3.2).
REGIONS OF DOMINANCE
In order to understand the behavior of the BAO amplitude shown in the RMS map (Figure 2), we seek to identify configurations where the cyclic sum of the perturbation theory bispectrum (1) simplifies. That is, we ask whether there are any regions where the behavior of the full bispectrum is determined by only one or two of the three terms in the cyclic sum. Figure 4, our "dominance map," shows that many of the configurations are indeed dominated by a single term (red and blue regions), and others are dominated by two terms while the third is negligible (green and purple regions). The RMS map (Figure 2) reflects the dominance structure. The horizontal bands at θ/π ≲ 0.4 (red region, B_12 dominant) transition to sloped bands in the purple and black regions. The blue region (B_31 dominant) corresponds to a pattern of small maxima and minima in the RMS map that we call "feathering." Finally, in the green region at low δ and high θ, RMS amplitude is maximized for triangles where two wavevectors are nearly antiparallel and the third is small. The mechanisms that drive these different patterns in each region are described in detail in §6 below.
In this section, we detail the calculation and behavior of the dominance map ( Figure 4). In §4.1, we present our definitions of dominance, which require the choice of a factor f . The specific ways in which triangle geometry determines which term dominates are discussed in detail in §4.3; the dominance map is driven primarily by the behavior of the F (2) kernel reinforced by the broadband behavior of the power spectrum, as we will further detail in §4.2. In most regions of the dominance map, the maximum and minimum terms in the F (2) kernel also maximize or minimize the power spectrum; the exceptions are discussed in §4.4. In general, in the squeezed limit where one side of the triangle can be much larger than the smallest, terms including the largest wavenumber are small. In the other limit, an equilateral triangle, all three sides are similar so all functions of them are similar as well and no sides dominate. The dominance plot shows the transitions between these two regimes.
We note that we assume positive δ, and for δ > 0, no region is dominated by B 23 (yellow) or the pair of terms B 12 + B 23 (orange). When δ < 0, k 1 and k 2 interchange. This would correspond to mirroring across the θ-axis; blue would become yellow.
Definition of "Dominance" and Choice of Dominance Ratio f
We identify dominant terms by comparing the magnitudes of terms B nw i j across k 1 . The dominance structure is determined by the broadband behavior of the bispectrum terms, so we use the no-wiggle bispectrum B nw i j to fully isolate the broadband. Results are similar when the full bispectrum B i j is used instead, as BAO are small relative to the broadband.
At each (δ, θ) configuration, we calculate the ratios between each pair of terms as a function of k 1 . We then compare the medians, denoted med, of these ratios to a factor f . We use the median because it is a smooth function of our parameters δ and θ, unlike the mean, which can be skewed by large ratios between the terms at small k 1 . The median is more representative of the typical ratio across all k 1 we consider.
Dominance criterion-If B_{ij} exceeds each other term by at least a factor of f, that is, if

\mathrm{med}\left[\frac{B^{\mathrm{nw}}_{ij}}{B^{\mathrm{nw}}_{jk}}\right] > f, \qquad \mathrm{med}\left[\frac{B^{\mathrm{nw}}_{ij}}{B^{\mathrm{nw}}_{ik}}\right] > f,  (18)
we consider B_{ij} dominant.

Double dominance criterion-If two of the terms that enter the bispectrum determine its behavior while the third is relatively small, we say that two terms are double dominant. Two ratios must be within a factor of f of each other but both exceed the third by at least a factor of f. Because terms can be either positive or negative, this comparison alone is not obviously sufficient; one term could be large and positive, and the other large and negative, such that their sum is smaller than the third term. Therefore we also require the sum of the two dominant terms to exceed the third term by a factor f. Our double dominance criterion is thus a set of four conditions: B_{ij} and B_{ik} are both dominant and only B_{jk} is negligible when

\mathrm{med}\left[\frac{B^{\mathrm{nw}}_{ij}}{B^{\mathrm{nw}}_{jk}}\right] > f, \quad \mathrm{med}\left[\frac{B^{\mathrm{nw}}_{ik}}{B^{\mathrm{nw}}_{jk}}\right] > f, \quad \mathrm{med}\left[\frac{B^{\mathrm{nw}}_{ij}}{B^{\mathrm{nw}}_{ik}}\right] < f, \quad \mathrm{med}\left[\frac{B^{\mathrm{nw}}_{ij} + B^{\mathrm{nw}}_{ik}}{B^{\mathrm{nw}}_{jk}}\right] > f.  (19)
In practice, the final condition is not relevant for any configuration we test; the differences between large positive and negative terms remain much larger than the third term, for example in the region described in further detail in §4.3.4.

Figure 5. As f increases from the left panel to the right, the black and purple regions (where no single term exceeds all others by at least a factor of f) expand. That is, when the criterion for a single term to dominate is more strict, fewer configurations are dominated by a single term.
No term dominant-If the medians of all three ratios are within f of each other, then no term is dominant.
The dominance region plot is weakly dependent on the choice of the factor f , as shown in Figure 5. As the threshold for dominance rises, less of the plane is dominated by a single term; the B 12 + B 31 -dominant, B 23 + B 31 -dominant, and no-term-dominant regions encroach on the single-termdominant regions. We choose f = 5 as our standard threshold for dominance, as it is sufficiently large to separate the term that dominates the RMS amplitude. With this choice of f , the non-dominant terms are typically less than 20% of the dominant term, so the ratio of the bispectrum to the no-wiggle bispectrum (5) can be Taylor-expanded about the ratio R i j of a single term in the cyclic sum to its no-wiggle analog (as we do in §5).
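As a sketch, the single-term criterion of equation (18) might be implemented as follows (the function and test arrays are ours; magnitudes are compared via absolute values since the terms can be negative):

```python
import numpy as np

def dominant_term(B12, B23, B31, f=5.0):
    """Return 0, 1, or 2 for the index of a single dominant term
    (equation 18), or None if no term dominates. Inputs are the
    no-wiggle bispectrum terms sampled across k1."""
    terms = [np.asarray(B12), np.asarray(B23), np.asarray(B31)]
    for i, term in enumerate(terms):
        others = [t for j, t in enumerate(terms) if j != i]
        if all(np.median(np.abs(term / other)) > f for other in others):
            return i
    return None

k = np.linspace(0.01, 0.2, 100)
print(dominant_term(10.0 / k**2, 1.0 / k**2, 1.5 / k**2))   # 0 (B12 dominates)
print(dominant_term(1.0 / k**2, 1.0 / k**2, 1.0 / k**2))    # None
```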
F^{(2)} Kernel Drives Dominance Structure
As seen in Figure 6, the structure of the full dominance plot strongly resembles that of a dominance plot for F (2) alone, which itself reflects the behavior of F (2) in the δ-θ plane ( Figure 7).
Including P i j expands some regions (near their borders, P i j can move the maximum term from just under to just over 5× the next-largest term). In Figure 6, we choose f = √ 5 for the P i j and F (2) panels to agree with our choice of f = 5 for the product P i j F (2) . As discussed in §4.3 (equation 20), the P i j and F (2) dominance criteria cannot simply be multiplied together to determine dominance in B i j , but these two contributions independently illuminate the full bispectrum dominance map.
The dynamic range of F^{(2)} is larger than that of P_{ij}, so the F^{(2)} kernel determines most of the dominance map of Figure 4. The middle term of the F^{(2)} kernel, (k_i/k_j + k_j/k_i)(\hat{k}_i \cdot \hat{k}_j) in equation (2), varies the most between configurations: it can be positive or negative, and can be very large when one side is much smaller than the other (for example, surrounding δ = 0, θ/π = 1 in Figure 7). Alternatively, F^{(2)} can approach arbitrarily close to zero (black curves in Figure 7).
Regions of the Dominance Map
In this section, we step through each region of the dominance plot of Figure 4 from left to right to discuss the dominance behavior. In general, the relative magnitudes of the B 12 , B 23 , and B 31 differ across configurations due to differences in the (δ, θ) dependence of the three wavenumbers k 1 , k 2 and k 3 (given in equation (7) and Figure 3).
For each region, we discuss the behavior of the P_{ij} and F^{(2)}_{ij} that enter the bispectrum. We build up understanding of each region by first analyzing their behavior separately, then considering the implications for the full dominance plot. We take this approach because the power spectrum products behave very differently from the F^{(2)} kernels, even though dominance is determined by the median ratios of terms B^{nw}_{ij}/B^{nw}_{jk}, which are medians of products and not products of medians:
\mathrm{med}\left[\frac{B^{\mathrm{nw}}_{ij}}{B^{\mathrm{nw}}_{jk}}\right] = \mathrm{med}\left[\frac{P^{\mathrm{nw}}_{ij} F^{(2)}_{ij}}{P^{\mathrm{nw}}_{jk} F^{(2)}_{jk}}\right] \neq \mathrm{med}\left[\frac{P^{\mathrm{nw}}_{ij}}{P^{\mathrm{nw}}_{jk}}\right] \times \mathrm{med}\left[\frac{F^{(2)}_{ij}}{F^{(2)}_{jk}}\right].  (20)
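The inequality in equation (20), that a median of products is not a product of medians, is easy to see with made-up arrays:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # stand-in for P_ij / P_jk across k1
y = np.array([3.0, 1.0, 2.0])   # stand-in for F2_ij / F2_jk across k1

print(np.median(x * y))                # 3.0: median of the products [3, 2, 6]
print(np.median(x) * np.median(y))     # 4.0: product of the medians 2 * 2
```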
We note that the power spectrum is maximal at k peak ≈ 0.015 h/Mpc; above k peak , P nw (k) declines monotonically as 1/k. Since our analysis covers the range 0.01 ≤ k 1 /[h/Mpc] ≤ 0.2, it is a good approximation that in our k-range of interest the broadband power spectrum falls as P(k) ∝ 1/k. This approximation fails only in the low-δ, high-θ region where k 3 can be sufficiently small that P(k 3 ) increases with k 3 (discussed in §4.3.4 below).
Red region, B 12 dominant
When θ/π ≲ 0.5, Figure 4 shows that B_12 is the dominant term in the bispectrum cyclic sum (1). In this red region, configurations are constructive where P^BAO_1 and P^BAO_2 (defined in equation 11) are in phase, and destructive where they are out of phase. B_12 dominates because in both the F^(2) kernel and the products of power spectra P_ij, the precyclic terms are largest. F^(2)_12 is much larger than F^(2)_23 and F^(2)_31: only in F^(2)_12 is the sign of the dipole contribution (k_i/k_j + k_j/k_i) positive. The dot product of the unit vectors k̂_1 and k̂_2, which determines the sign of the dipole contribution, approaches +1 in this region, so F^(2)_12 will be the largest F^(2)_ij. For example, where δ vanishes as well as θ (in the lower left corner, × symbol), two sides, k_1 and k_2, are equal, while k_3 = 2k_1. With these side lengths and dot products, F^(2)_12 = 2 and F^(2)_31 = F^(2)_23 = −0.25: F^(2)_12 exceeds the other two F^(2)_ij by a factor of eight.

Figure 6. Dominance maps, as functions of θ/π and δ, for the power spectrum products P_ij, the kernels F^(2)_ij, and their products B_ij = P_ij × F^(2)_ij. For the first two panels, a term is dominant if the median of its ratio with each of the other terms is at least √5 (chosen for consistency with f = 5 for the product P_ij F^(2)_ij). For the third panel, a term is dominant if the median of its ratio with each of the other terms is at least 5 (as in Figure 4).

Figure 7. The F^(2) kernel drives the structure of the dominance plot. As shown in the left panel, F^(2) depends only on the angle between two sides through θ (Figure 1) and the ratio of their lengths k_j/k_i. The kernel can be positive or negative, and crosses zero (black curves). The dynamic range therefore exceeds that of the power spectrum product P_ij, which varies only by a factor of 500 across the triangles shown. The remaining three panels show the F^(2)_ij that enter the bispectrum, evaluated at k_1 = 0.1 h/Mpc. These three panels determine the behavior of the F^(2) dominance plot (middle panel of Figure 6).

The effect of the power spectrum products is to further separate the three cyclic terms. For 0 ≤ θ/π < 0.5, the triangle is obtuse (see θ in Figure 1). As θ → 0 and the triangle fully opens, k_3 approaches k_1 + k_2. For these obtuse triangles k_3 > k_2 > k_1 (see Figure 3) because k_2 always exceeds k_1. The power spectrum is monotonically decreasing, so the k_i ordering implies P(k_1) > P(k_2) > P(k_3). Thus P_12 > P_31 > P_23, reinforcing the order of the F^(2)_ij. For nonzero δ (e.g., + symbol in Figure 4), B_31 grows with δ, but B_12 remains dominant by our dominance criterion of f = 5. For large δ, k_1 is small relative to δλ_f/2. The other two wavenumbers k_2 and k_3 are both larger than k_1 (Figure 3), so P(k_3) approaches P(k_2). As a result, P_31 and P_12 are of similar magnitude (as in the purple region at low θ and high δ in the leftmost panel of Figure 6). The magnitudes of the F^(2)_12 and F^(2)_31 kernels also grow as δ increases, with the F^(2)_31 kernel approaching but remaining smaller than F^(2)_12. As δ continues to increase, B_31 comes within a factor of 10 of B_12, causing a purple region to appear in the upper left corner of the right panel of Figure 5. We note that this effect is too small to appear when the dominance criterion is f = 5, our choice in the main analysis of this work (as in Figure 4).

Around θ/π = 0.6, at most one term in the cyclic sum can be neglected. In the black region at low δ, all three terms are of comparable magnitude; in the purple region at larger δ, B_23 shrinks, but B_12 and B_31 are still large and of similar magnitude. The black region contains triangles that are nearly equilateral; triangles with θ/π = 2/3 and δ = 0 are equilateral for all k_1. Since all three sides and angles are equal, all three terms in the bispectrum are identical, and no term can dominate any other.
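The kernel values quoted above for the δ = 0, θ = 0 configuration can be checked directly from the second-order kernel F^(2) of equation (2); the helper below is a minimal sketch (the function name and sample wavenumbers are ours, not the paper's code).

```python
# Sketch of the standard second-order PT kernel F(2) from equation (2),
# checking the degenerate configuration quoted in the text: theta = 0 and
# delta = 0, so k1 = k2 and k3 = 2 k1 with k3 = -(k1 + k2).

def f2(ki, kj, mu):
    """F(2) for wavenumbers ki, kj; mu is the cosine of the angle between them."""
    return 5.0 / 7.0 + 0.5 * (ki / kj + kj / ki) * mu + (2.0 / 7.0) * mu**2

k1 = k2 = 0.1        # h/Mpc; any value works for this check
k3 = 2.0 * k1

# k1 and k2 are parallel; k3 is antiparallel to both.
print(f2(k1, k2, +1.0))  # F(2)_12 = 2.0
print(f2(k2, k3, -1.0))  # F(2)_23 = -0.25
print(f2(k3, k1, -1.0))  # F(2)_31 = -0.25
```

The eightfold ratio between the precyclic term and the other two follows immediately from these values.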
In the purple region, B_23 is negligible compared to B_12 and B_31. As δ increases along the k_2 = k_3 line shown in Figures 3 and 4, k_2 and k_3 grow larger than k_1. Since k_2 = k_3, B_12 and B_31 remain equal; their ratio will deviate little from unity. But as δ increases, both F^(2)_23 and P_23 shrink relative to the other terms. In particular, F^(2)_23 approaches zero. For large δ along k_2 = k_3, the unit vectors k̂_2 and k̂_3 are antiparallel as k_1 is relatively small, and their dot product (k̂_2 · k̂_3) becomes −1. Since k_2 = k_3, the dipole contribution (k_2/k_3 + k_3/k_2) = 2 in the F^(2)_23 kernel, and F^(2)_23 therefore vanishes. Furthermore, Figure 3 shows that at large δ, k_2 and k_3 are much larger than k_1. The product of power spectra P_23 is therefore smaller than the other two products P_12 and P_31, which both involve the much larger P(k_1). Both F^(2)_23 and P_23 shrink as δ increases, so B_23 becomes smaller than the other two terms and can be neglected. Thus B_12 and B_31 dominate the bispectrum cyclic sum.
Blue region, B 31 dominant
At the right side of Figure 4, where θ is large (circle symbol), B 31 dominates the cyclic sum. Both F (2) 31 and P 31 are large relative to the other F (2) kernels and products of power spectra.
In the F^(2) kernel, as shown in Figure 7, F^(2)_12 vanishes, and negative contributions to F^(2)_23 make it smaller than F^(2)_31. F^(2)_12 vanishes because in this region, triangles are in the squeezed limit, where k_2 ≈ −k_1. The third wavenumber k_3 approaches k_2 − k_1, meaning
$$k_3 \to \frac{\delta \lambda_f}{2}, \tag{21}$$
so k_3 is small relative to the other two wavenumbers (see Figure 3). At the same time, the dot product k̂_1 · k̂_2 approaches −1, so F^(2)_12 behaves as
$$F^{(2)}_{12} \to 1 - \frac{1}{2}\left(\frac{k_1}{k_2} + \frac{k_2}{k_1}\right) \sim 0. \tag{22}$$
In equation (22), the difference δλ_f/2 between k_1 and k_2 (equation 3) is much smaller than k_1, so k_1 ∼ k_2 and F^(2)_12 vanishes. Meanwhile, the other two terms do not vanish; k̂_2 · k̂_3 = −1 but k̂_1 · k̂_3 = +1, so F^(2)_23 and F^(2)_31 approach
$$F^{(2)}_{23} \to 1 - \frac{1}{2}\left(\frac{k_2}{k_3} + \frac{k_3}{k_2}\right), \tag{23}$$
$$F^{(2)}_{31} \to 1 + \frac{1}{2}\left(\frac{k_3}{k_1} + \frac{k_1}{k_3}\right). \tag{24}$$
The magnitudes of k_1 and k_2 are comparable, so the dipole contributions (k_2/k_3 + k_3/k_2) in F^(2)_23 (equation 23) and (k_3/k_1 + k_1/k_3) in F^(2)_31 (equation 24) are comparable; the dipole enters F^(2)_23 with a negative sign and F^(2)_31 with a positive sign, so F^(2)_31 is larger than F^(2)_23.

The P_ij reinforce the behavior of the F^(2) kernel. k_3 is small, but still large enough that P(k_3) is monotonically decreasing; for δ of a few, equation (21) is near k_peak (§4.3). In this limit k_2 = k_1 + k_3, so k_2 must be the largest of the three wavenumbers. In the power spectrum, then, P(k_2) < P(k_3), P(k_1). P_12 and P_23, which include P(k_2), are therefore smaller than P_31, which does not. The largest power spectrum term is therefore P_31, as in the F^(2) kernel, and B_31 dominates the bispectrum.
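The squeezed-limit behavior of equations (22)-(24) can likewise be verified numerically; the snippet below uses the same assumed form of the F^(2) kernel and illustrative wavenumbers, not values from the paper.

```python
# Numerical check of the squeezed-limit behavior in equations (22)-(24):
# as k3 -> 0 with k2 = k1 + k3, the kernel F(2)_12 vanishes while
# F(2)_23 and F(2)_31 grow large with opposite signs.

def f2(ki, kj, mu):
    return 5.0 / 7.0 + 0.5 * (ki / kj + kj / ki) * mu + (2.0 / 7.0) * mu**2

k1 = 0.1             # h/Mpc
k3 = 1e-4            # squeezed: k3 = delta * lambda_f / 2 is tiny
k2 = k1 + k3         # closes the triangle, with k2 nearly antiparallel to k1

# Dot products in the squeezed limit: k1.k2 -> -1, k2.k3 -> -1, k3.k1 -> +1.
print(f2(k1, k2, -1.0))  # ~ 0, equation (22)
print(f2(k2, k3, -1.0))  # large and negative, equation (23)
print(f2(k3, k1, +1.0))  # large and positive, equation (24)
```

This makes concrete why B_12 is negligible in the squeezed configurations even before the power spectrum products are considered.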
Green region, B 23 and B 31 dominant
In the lower right corner of Figure 4 around δ = 0, θ/π = 1 (star symbol), the dominant terms are B_23 and B_31. Dominance is once again driven by the F^(2) kernel; the magnitude of F^(2)_23 is more comparable to that of F^(2)_31 than it is at higher δ. Along the line where θ/π = 1, the blue region transitions to green at roughly δ ≈ 1 (square symbol in Figure 4). Here, k_3 = λ_f/2 ≈ 0.0285 h/Mpc, which is nearing the lowest k_1 in our range (k_1 = 0.01 h/Mpc). Both k_2 and k_1 therefore exceed k_3 by up to a factor of 10. As in the blue region, triangles in the green region are squeezed, so the dot products between unit wavevectors are the same as in the blue region. The kernels then behave as equations (22-24), and F^(2)_12 (equation 22) vanishes. Unlike in the blue region, however, the dipole contribution (k_i/k_j + k_j/k_i) is large enough that the constants in equations (23) and (24) become insignificant. F^(2)_23 is large and negative (k̂_2 · k̂_3 = −1). This region satisfies our double dominance criterion (19), as even the sum B_23 + B_31 exceeds the very small third term B_12; for example, by a factor of 10^5 at δ = 0.01, θ/π = 1.
Our assumption that P(k) falls monotonically with k breaks down in the green region. Because k_3 is proportional to δ (as in equation 21), k_3 becomes very small for small δ (see Figure 3). Our previous analysis assumed all wavenumbers were large enough that the power spectrum is monotonically decreasing, but in the green region, k_3 can be in the regime where P(k) increases with k. Both k_1 and k_2 are still greater than k_peak, in the regime where P(k) falls with k. For very small k_3, then, P(k_3) may be smaller than P(k_1) and P(k_2). As a result, P_12 can be the largest product of power spectra, despite the fact that k_1 and k_2 are much larger than k_3. However, the power spectrum is overshadowed by the behavior of the F^(2) kernel. Even for δ = 0.01, where P_12 can exceed the other two products of power spectra by a factor of 25 at small k_1, the median ratio of power spectra products, med[P_12/P_23], is only 5. But the median ratio of F^(2) kernels, med[F^(2)_12/F^(2)_23], is of order 10^−8 because k_1 so nearly equals k_2, driving F^(2)_12 to vanish. The dominance structure is thus driven primarily by the F^(2) kernel.

Figure 8. Dominance in equation (1) is determined by both P_ij and F^(2) (as shown in Figure 6), and the two almost always act in the same direction. The maximum term never differs between P_ij and F^(2) (orange regions, none shown), but the minimum terms are swapped in two regions (black). In these regions, discussed in §4.4, the term with minimum P_ij has the second-largest F^(2)_ij.
Ordering of Subdominant Terms
The B i j of equation (1) with maximum median P i j is everywhere also the term with maximum median |F (2) i j |. The ordering of terms differs only in the regions shown in Figure 8, where the two subdominant terms are swapped. These regions arise because the F (2) kernel can be either positive or negative. In the range of wavenumbers of interest, the power spectrum is always positive but monotonically decreasing, so the products P i j change smoothly. P 23 and P 31 , smaller than P 12 for small θ (see the left panel of Figure 6 and the top panel of Figure 9), cross above P 12 around the equilateral configuration (θ/π = 0.67). P 31 is the first to cross above P 12 because k 2 ≥ k 1 , so P 31 ≥ P 23 . P 23 lags behind (see the top panel of Figure 9).
The behavior of the F^(2) kernel (equation 2) is not as simple, as shown in the lower panel of Figure 9. First, it can be either positive or negative (see Figure 7), explaining the θ/π ∼ 0.4 region in Figure 8 where the ordering of the subdominant terms differs between F^(2) and P_ij. The difference arises because the absolute values of the two subdominant terms, F^(2)_23 and F^(2)_31, both cross zero near this θ. F^(2)_31 must therefore cross above F^(2)_23, and it does so in the same region around θ/π ∼ 0.4 where both terms cross zero; but the terms are not equal to zero where they cross each other. As shown in Figure 7, the value of θ at which F^(2)_ij crosses zero depends on the ratio between the two sides k_i and k_j, so F^(2)_23 becomes positive at slightly lower θ than does F^(2)_31. As F^(2)_31 approaches zero, its absolute value falls below the small and positive F^(2)_23 at θ/π = 0.42 (with δ = 2.1, for example, as in Figure 9). After F^(2)_31 becomes positive, it crosses F^(2)_23 at θ/π = 0.44. Meanwhile in the product of power spectra, the median P_31 exceeds the median P_23 for all θ. Therefore, in this narrow region where the median F^(2)_31 falls below the median F^(2)_23, the smallest P_ij term is not the smallest F^(2)_ij term.

Figure 9. In the shaded region, the ordering of subdominant terms differs between P_ij and F^(2)_ij: P_12 is the smallest P_ij, while the minimum F^(2)_ij is F^(2)_23 (compare Figure 8). As discussed in §4.4, this region arises due to differences in the behavior of the median between the power spectrum and the F^(2) kernel. Around θ/π ∼ 0.6, both F^(2)_12 and F^(2)_31 are positive for all k_1, so their medians cross at the same θ as the medians of P_12 and P_31. At θ/π = 0.78, however, P_23 crosses above P_12 (solid vertical line in top panel); F^(2)_23 lags behind, crossing above F^(2)_12 at θ/π = 0.83.
The order of the subdominant terms also differs around θ/π ∼ 0.8 (Figure 8). Figure 9 shows that this region arises due to differences in the behavior of the median between the power spectrum and the F^(2) kernel. The power spectrum decreases monotonically, so the median P_ij occurs at the median k_1. Therefore P_23 and P_12 are equal when k_3 = k_1 is evaluated at the median k_1 (see equation 10). In Figure 9, P_23 crosses above P_12 at θ/π = 0.78 (with δ = 2.1). Unlike the power spectrum, the F^(2) kernel can be positive or negative, and in some configurations it is positive for some values of k_1 and negative for others. For these configurations, the absolute value of F^(2) is not a monotonic function of k_1, so its median does not necessarily occur at the median k_1. Therefore F^(2)_23 crosses above F^(2)_12 at θ such that the k_3 that corresponds to the median F^(2)_23 equals the k_1 that corresponds to the median F^(2)_12. In the example of Figure 9, the solution is θ/π = 0.83. P_12 becomes the minimum P_ij at θ/π = 0.78 while F^(2)_12 does not become the minimum F^(2)_ij until θ/π = 0.83. Therefore in the shaded region of Figure 9 between these two crossings (0.78 ≤ θ/π ≤ 0.83), the order of the subdominant terms differs.
In contrast, around θ/π ∼ 0.6, both F^(2)_12 and F^(2)_31 are positive for all k_1. Their medians are both found at the median value of k_1, so F^(2)_31 crosses above F^(2)_12 where k_2 = k_3 (equation 9, evaluated at the median k_1). As the median P_ij occurs at the median k_1, P_31 also crosses above P_12 where equation (9) is evaluated at the median k_1. F^(2)_31 therefore becomes the maximum F^(2)_ij at the same value of θ where P_31 becomes the maximum P_ij.
Though we set out to explain the regions where the minimum F^(2)_ij differs from the minimum P_ij, our analysis also explains why the largest P_ij is always also the largest F^(2)_ij (see Figure 8). The median behaves most simply for F^(2)_12 and P_12, which are both the maximum term at low θ. As θ increases, P_31 is always the first to cross P_12, and F^(2)_31 is always the first to cross F^(2)_12. F^(2)_23 and P_23 are never the maximum term because k_2 ≥ k_1. The complexity of the ordering of subdominant terms arises from the F^(2)_23 and P_23 terms, but because these terms are never the maximum terms, the maximum term never differs between P_ij and F^(2)_ij.
DECOMPOSITION INTO EIGEN-ROOT-MEAN-SQUARE PLOTS
We now show how the RMS amplitude A (equation 6) can frequently be approximated as a linear combination of three terms. We first require an expression for the ratio R (equation 5) of the full bispectrum (1) to its no-wiggle analog. We wish to leverage the fact that the BAO are a small fractional feature in the power spectrum, so we write
$$P_{ij} = P^{nw}_{ij}\left[1 + w_{ij}\right] \tag{25}$$
with P nw i j the product of two no-wiggle power spectra and w i j the BAO feature in the product P i j (13) of linear power spectra. In particular we split the matter transfer function T m into smooth and oscillatory pieces as
$$T_m(k) = T_{sm}(k) + \omega(k)\, j_0(k s), \tag{26}$$
where T_sm(k) and ω(k) are smooth functions of k, ω is small (because Ω_b/Ω_m ≪ 1), and j_0(x) = sin(x)/x is the order-zero spherical Bessel function.
The power spectrum is proportional to the primordial power spectrum P pri (k) and the matter transfer function as
$$P(k) = P_{pri}(k)\, T^2_m(k). \tag{27}$$
We suppress the redshift dependence of the power spectrum for simplicity, as it does not affect our analysis. The products of power spectra are then
$$P_{ij} = P_{pri}(k_i)\, P_{pri}(k_j)\, T^2_{sm}(k_i)\, T^2_{sm}(k_j) \left[1 + \frac{\omega(k_i)\, j_0(k_i s)}{T_{sm}(k_i)}\right]^2 \left[1 + \frac{\omega(k_j)\, j_0(k_j s)}{T_{sm}(k_j)}\right]^2. \tag{28}$$
Taylor-expanding the fractions ω(k i ) j 0 (k i s)/T sm (k i ) to leading order in ω we have
$$w_{ij} = \frac{P_{ij}}{P^{nw}_{ij}} - 1 \approx 2\left[\frac{\omega(k_i)\, j_0(k_i s)}{T_{sm}(k_i)} + \frac{\omega(k_j)\, j_0(k_j s)}{T_{sm}(k_j)}\right], \tag{29}$$
where we used the fact that the no-wiggle power spectrum is simply P nw (k) = P pri (k)T 2 sm (k). In the remainder of this section, we show that in regions where only one or two terms dominate the bispectrum, the variance A 2 of the full bispectrum is approximated to leading order by the variance of only the dominant term or terms. We first consider the case where one term is dominant and the other two negligible, and we then consider the case where one term is negligible and the other two must be retained.
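The leading-order expansion in equation (29) is straightforward to check with toy smooth and oscillatory pieces; the constant ω and T_sm below are placeholders for this sketch, not fits to a real transfer function.

```python
# Check that the first-order expansion in equation (29) reproduces the
# exact wiggle fraction w_ij = P_ij / P^nw_ij - 1 when omega is small.
import math

s = 150.0  # toy sound-horizon scale, similar order to the BAO scale

def j0(x):
    return math.sin(x) / x

def frac(k, omega=0.01, t_sm=1.0):
    """omega(k) j0(k s) / T_sm(k) for toy constant omega and T_sm."""
    return omega * j0(k * s) / t_sm

ki, kj = 0.08, 0.11
exact = (1.0 + frac(ki))**2 * (1.0 + frac(kj))**2 - 1.0   # from equation (28)
approx = 2.0 * (frac(ki) + frac(kj))                      # equation (29)

print(exact, approx)  # agree to second order in omega
```

The residual between the two expressions is quadratic in the small wiggle amplitude, as the Taylor expansion implies.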
Single Term Dominant
We first consider the case where one term in the bispectrum dominates over the other two; without loss of generality we take this to be the first term.
We calculate the RMS amplitude A (equation 6) from the variance of the ratio R, defined in equation (5):
$$R = \frac{B}{B^{nw}} = \frac{B_{12} + B_{23} + B_{31}}{B^{nw}_{12} + B^{nw}_{23} + B^{nw}_{31}}. \tag{30}$$
Our goal is to show that the variance A 2 12 of the approximate ratio
$$R_{12} = \frac{B_{12}}{B^{nw}_{12}} = 1 + w_{12} \tag{31}$$
is the same as that of the full ratio, A 2 , to leading order in one or the other of two small parameters we define,
$$\epsilon_{ij} \equiv \frac{B_{ij}}{B_{12}}, \qquad \epsilon^{nw}_{ij} \equiv \frac{B^{nw}_{ij}}{B^{nw}_{12}}. \tag{32}$$
We first notice that the variance of R_12 is
$$A^2_{12} = \langle w^2_{12} \rangle - \langle w_{12} \rangle^2; \tag{33}$$
the constant term in equation (31) of course contributes no variance. Since A^2_12 is second order in the small parameter w_12, we neglect all corrections at third order and higher. We will find that the difference between the full variance and the variance of R_12 vanishes at second order.
Factoring out the dominant term in the numerator and denominator of equation (30),
$$R = R_{12}\,\frac{1 + \epsilon_{23} + \epsilon_{31}}{1 + \epsilon^{nw}_{23} + \epsilon^{nw}_{31}} \approx R_{12}\left[1 + \epsilon_{23} + \epsilon_{31}\right]\left[1 - \left(\epsilon^{nw}_{23} + \epsilon^{nw}_{31}\right) + \left(\epsilon^{nw}_{23} + \epsilon^{nw}_{31}\right)^2 + O(\epsilon^3)\right]. \tag{34}$$
In the second, approximate equality, we have Taylor-expanded the denominator to second order in ε. We include the second-order term for the moment but see that it drops out of our end result. We now seek to exploit the fact that the BAO feature itself is small, i.e. w_ij ≪ 1 (equation 29 with small ω(k)). Using w_ij as defined in equation (16),
$$\epsilon_{23} - \epsilon^{nw}_{23} = \epsilon^{nw}_{23}\left[\frac{1 + w_{23}}{1 + w_{12}} - 1\right] \approx \epsilon^{nw}_{23}\left(w_{23} - w_{12} + w^2_{12} - w_{12} w_{23}\right) = \epsilon^{nw}_{23}\,\Delta w_{23,12}\,(1 - w_{12}), \tag{35}$$
where to obtain the first equality we substituted the definitions (32) and to obtain the second we Taylor-expanded to leading order in w. In the third equality, we defined
$$\Delta w_{23,12} = w_{23} - w_{12}, \tag{36}$$
which is O(w). The analog of equation (35) holds for the 31 term by switching 23 to 31 everywhere. We know the variance at leading order is O(w²) from equation (33), so we only retain terms that are second order in a combination of ε and w. Our approximate expression for the ratio R is now
$$R \approx R_{12}\Big[1 + \epsilon^{nw}_{23}\,\Delta w_{23,12} + \epsilon^{nw}_{31}\,\Delta w_{31,12} - \left(\epsilon_{23} + \epsilon_{31}\right)\left(\epsilon^{nw}_{23} + \epsilon^{nw}_{31}\right) + \left(\epsilon^{nw}_{23} + \epsilon^{nw}_{31}\right)^2\Big]. \tag{37}$$
The first term is O(1), the second and third O(εw), and the fourth and fifth O(ε²). These last two terms cancel each other to second order; ε_ij − ε^nw_ij (equation 35) is itself second order, so at leading order the second-to-last term is equal to the last. Equation (37) then simplifies:
$$R \approx R_{12}\left[1 + \epsilon^{nw}_{23}\,\Delta w_{23,12} + \epsilon^{nw}_{31}\,\Delta w_{31,12}\right]. \tag{38}$$
The variance of R is then
$$\langle R^2 \rangle - \langle R \rangle^2 \approx \langle R^2_{12} \rangle - \langle R_{12} \rangle^2 + 2\left[\langle R^2_{12}\,\epsilon^{nw}_{23}\Delta w_{23,12} \rangle - \langle R_{12} \rangle\langle R_{12}\,\epsilon^{nw}_{23}\Delta w_{23,12} \rangle\right] + 2\left[\langle R^2_{12}\,\epsilon^{nw}_{31}\Delta w_{31,12} \rangle - \langle R_{12} \rangle\langle R_{12}\,\epsilon^{nw}_{31}\Delta w_{31,12} \rangle\right].$$
Recalling that R_12 = 1 + w_12 (equation 31) and denoting the variance of R_12 as A^2_12, we find
$$A^2 - A^2_{12} \approx 2\left[\langle (1 + w_{12})^2\,\epsilon^{nw}_{23}\Delta w_{23,12} \rangle - \langle 1 + w_{12} \rangle\langle (1 + w_{12})\,\epsilon^{nw}_{23}\Delta w_{23,12} \rangle\right] + 2\left[\langle (1 + w_{12})^2\,\epsilon^{nw}_{31}\Delta w_{31,12} \rangle - \langle 1 + w_{12} \rangle\langle (1 + w_{12})\,\epsilon^{nw}_{31}\Delta w_{31,12} \rangle\right]. \tag{43}$$
To second order, the difference (43) cancels: the leading contribution has 1 + w_12 ≈ 1, in which case the two terms in each bracket are equal. Thus, the difference between A² and A²_12 is suppressed by one order relative to A²_12. Therefore A²_12 is the leading contribution to the variance A². When a single term dominates the bispectrum cyclic sum, the variance of that single term is a good approximation of the variance of the full bispectrum. In §6, we use this fact to better understand the behavior of the RMS map (Figure 2) in regions dominated by a single term.
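A toy numerical experiment (synthetic sinusoidal wiggles with invented amplitudes, not the paper's spectra) illustrates this conclusion: with subdominant terms at the percent level, the variance of the dominant-term ratio reproduces the full variance to a relative error of order ε.

```python
# Toy check of the Section 5.1 result: when one term dominates (epsilon
# small), the variance of the full ratio R is close to the variance of
# the dominant-term ratio R_12 alone.
import math

n = 2000
ks = [0.01 + 0.19 * i / (n - 1) for i in range(n)]

def w(k, phase=0.0, amp=0.05):
    # Toy BAO wiggle with a small amplitude.
    return amp * math.cos(2.0 * math.pi * k / 0.0285 + phase)

eps = 0.01  # subdominant no-wiggle terms are ~1% of the dominant one

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

r12 = [1.0 + w(k) for k in ks]
r_full = [(1.0 + w(k) + eps * (1.0 + w(k, 1.0)) + eps * (1.0 + w(k, 2.0)))
          / (1.0 + 2.0 * eps) for k in ks]

a2_12, a2 = variance(r12), variance(r_full)
print(a2_12, a2, abs(a2 - a2_12) / a2_12)  # relative error of order eps
```

The relative error in the variance is of order ε, one order smaller than the variance itself, matching the analytic counting above.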
Double Dominance
Our goal is to show that if the sum of two terms dominates the third in R, then the variance of R, A 2 , is well approximated by that of the two dominant terms, and that the error in making this approximation is one order higher than the result itself. The requirement that the sum of two terms is much larger than the remaining term is only one condition of our double dominance criterion (19), but this condition is sufficient to show that A is well approximated by the contribution of only two terms. Our double dominance criterion is more strict in order to distinguish regions where two terms are both large from those where one term nearly dominates the full bispectrum, and its sum with either of the other two (comparably small) terms is much larger than the remaining term.
Without loss of generality we take B_31 ≪ B_12 + B_23. We again begin from equation (30), factoring out the two dominant terms:
$$R = \frac{B_{12} + B_{23}}{B^{nw}_{12} + B^{nw}_{23}}\cdot\frac{1 + \epsilon_{31/(12+23)}}{1 + \epsilon^{nw}_{31/(12+23)}} \approx R_{12+23}\left[1 + \epsilon_{31/(12+23)}\right]\left[1 - \epsilon^{nw}_{31/(12+23)} + \left(\epsilon^{nw}_{31/(12+23)}\right)^2\right], \tag{44}$$
where to obtain the second, approximate equality we Taylor-expanded the denominator to second order in ε_31/(12+23) ≪ 1. We defined ε_31/(12+23) and its no-wiggle analog as
$$\epsilon_{31/(12+23)} \equiv \frac{B_{31}}{B_{12} + B_{23}}, \qquad \epsilon^{nw}_{31/(12+23)} \equiv \frac{B^{nw}_{31}}{B^{nw}_{12} + B^{nw}_{23}}. \tag{45}$$
We also defined R_12+23 as the first factor in the first line of equation (44), as in equation (17). Multiplying out equation (44) and dropping O(ε³) terms, we obtain
$$R \approx R_{12+23}\left[1 + \Delta\epsilon - \Delta_\times\right], \qquad \Delta\epsilon \equiv \epsilon_{31/(12+23)} - \epsilon^{nw}_{31/(12+23)}, \qquad \Delta_\times \equiv \epsilon^{nw}_{31/(12+23)}\left[\epsilon_{31/(12+23)} - \epsilon^{nw}_{31/(12+23)}\right]. \tag{46}$$
The two small parameters ε_31/(12+23) and ε^nw_31/(12+23) differ only in the BAO feature, which we again parameterize by w_ij (equation 16):
$$\epsilon_{31/(12+23)} = \epsilon^{nw}_{31/(12+23)}\,\frac{1 + w_{31}}{1 + \bar{w}_{12+23}} = \epsilon^{nw}_{31/(12+23)}\,\frac{1 + w_{31}}{1 + \bar{w}}, \tag{47}$$
with
$$\bar{w}_{12+23} \equiv \frac{B^{nw}_{12}\, w_{12} + B^{nw}_{23}\, w_{23}}{B^{nw}_{12} + B^{nw}_{23}}, \tag{48}$$
where in the second, identical equality of equation (47) we are simply noting that we will drop the subscripts on w̄. Physically, w̄ is the weighted average of the BAO features in the B_12 and B_23 terms in the bispectrum (1). Expanding equation (47) to second order in w and w̄, we find
$$\epsilon_{31/(12+23)} \approx \epsilon^{nw}_{31/(12+23)}\left[1 + (w_{31} - \bar{w}) + \bar{w}^2 - w_{31}\bar{w}\right]. \tag{49}$$
So we see that
$$\Delta\epsilon \approx \epsilon^{nw}_{31/(12+23)}\,(w_{31} - \bar{w})(1 - \bar{w}), \qquad \Delta_\times \approx \left[\epsilon^{nw}_{31/(12+23)}\right]^2 (w_{31} - \bar{w})(1 - \bar{w}). \tag{50}$$
Retaining only terms at second order and lower, we then have
$$\Delta\epsilon \approx \epsilon^{nw}_{31/(12+23)}\left[w_{31} - \bar{w}\right], \qquad \Delta_\times \sim O(\epsilon^2 w). \tag{51}$$
Now we have that, to second order, R = R_12+23(1 + Δε), and we find that
$$\langle R^2 \rangle \approx \langle R^2_{12+23} \rangle + 2\langle R^2_{12+23}\,\Delta\epsilon \rangle, \qquad \langle R \rangle^2 \approx \langle R_{12+23} \rangle^2 + 2\langle R_{12+23} \rangle\langle R_{12+23}\,\Delta\epsilon \rangle, \tag{52}$$
including all terms at second order and below. Thus to second order the variance A 2 12+23 in the dominant terms differs from the variance A 2 in the full bispectrum only by
$$A^2 - A^2_{12+23} \approx 2\left[\langle R^2_{12+23}\,\Delta\epsilon \rangle - \langle R_{12+23} \rangle\langle R_{12+23}\,\Delta\epsilon \rangle\right]. \tag{53}$$
As in §5.1 above, the leading contribution to the difference has R_12+23 ≈ 1, in which case the two terms on the right side of equation (53) cancel. The error is therefore O((εw)^{3/2}), where our notation (εw)^{3/2} indicates that the error is third order in small quantities but can have any combination of ε and w reaching that order. In contrast, the result A²_12+23 is second order in w. Thus we have shown that the error of approximating the variance of the full ratio R by that of the ratio of the first two terms, R_12+23, is one order smaller than the variance itself.
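The analogous double-dominance statement can be illustrated with the same kind of toy wiggles (again synthetic, not the paper's spectra): with B_31 much smaller than B_12 + B_23, the variance of R is well approximated by that of R_{12+23}.

```python
# Toy check of the Section 5.2 result: the variance of the two-term ratio
# approximates the variance of the full three-term ratio when the third
# term is small.
import math

n = 2000
ks = [0.01 + 0.19 * i / (n - 1) for i in range(n)]
w = lambda k, p=0.0: 0.05 * math.cos(2.0 * math.pi * k / 0.0285 + p)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

eps = 0.01  # B^nw_31 relative to (B^nw_12 + B^nw_23) / 2
r_two = [(1.0 + w(k) + 1.0 + w(k, 1.0)) / 2.0 for k in ks]
r_all = [(1.0 + w(k) + 1.0 + w(k, 1.0) + 2.0 * eps * (1.0 + w(k, 2.0)))
         / (2.0 + 2.0 * eps) for k in ks]

print(variance(r_two), variance(r_all))  # differ only at higher order in eps
```

As in the single-dominance case, the relative difference between the two variances is suppressed by one order in the small parameter.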
NUMERICAL EIGEN-ROOT-MEAN-SQUARE CALCULATIONS
As shown analytically in §5, in the regions of single and double dominance identified in §4, the RMS amplitude of BAO in the full bispectrum simplifies to the RMS amplitude of BAO in the dominant term or terms of the bispectrum cyclic sum (1). We calculate RMS maps for each single term B_ij and pair of terms B_ij + B_jk. In Figure 10, we combine the single- and double-term RMS maps in the corresponding single- and double-dominance regions; the result matches the full RMS map of Figure 2 reasonably well. The simplified maps of A therefore provide a good approximation to the full RMS map. In the remainder of this section, we fully detail the RMS maps for each single term (§6.1) and pair of terms (§6.2) over the full (δ, θ) plane.
Single Term Dominant
While the BAO amplitude in the full bispectrum (1) is a complicated function of triangle configuration, many configurations have only a single term dominant, as discussed in §4. In those regions, the behavior of the RMS amplitude can be understood through the interaction between pairs of oscillating power spectra. In the red (θ/π ≲ 0.5) and blue (θ/π ∼ 1) regions of the dominance map (Figure 4) respectively, B_12 and B_31 dominate. We expect that in these regions, the RMS map (Figure 2) is well approximated by the RMS amplitude of BAO in the dominant term only (left panel of Figure 10). Figure 11 zooms in on the RMS maps for each of the single terms, that is, the RMS amplitude (6) of the ratio of each term to its no-wiggle analog (15). No region with δ > 0 has B_23 dominant (see §4), but we discuss this term as well, both for completeness and because it nonetheless shares interesting physics with B_31.
Each labeled region of the single-term-dominant RMS maps (Figure 11) is driven by one of the following mechanisms: interference (region A, §6.1.1), incoherence (region B, §6.1.2), feathering (region C, §6.1.3), or single power spectrum (region D, §6.1.4). Only the first mechanism, interference, applies to B_12, while all four mechanisms occur in B_23 and B_31. The incoherence, feathering, and single power spectrum mechanisms arise from differences in the rate at which k_3 varies with k_1 across a configuration. At fixed δ and θ, the wavenumbers k_2 and k_3 vary with k_1 according to equation (7); their derivatives with respect to k_1 at fixed δ and θ are
$$\frac{dk_2}{dk_1} = 1, \qquad \frac{dk_3}{dk_1} = \frac{(k_1 + k_2)(1 + \cos\theta)}{k_3} = \frac{(2k_1 + \delta\lambda_f/2)(1 + \cos\theta)}{\sqrt{k_1(1 + \cos\theta)(2k_1 + \delta\lambda_f) + (\delta\lambda_f/2)^2}}. \tag{54}$$
The behavior of dk_3/dk_1 differs across the three regions marked in the right two panels of Figure 11. First, at the left edge of the RMS map (region B) where θ = 0, k_1 and k_2 are parallel, so k_3 = k_1 + k_2 and cos θ = 1. The derivative in equation (54) is then equal to 2: k_3 varies twice as quickly as k_1. Second, as θ → π, k_2 becomes antiparallel to k_1, so k_3 is the difference between the other two sides: k_3 = k_2 − k_1 = δλ_f/2. In other words, as θ → π in these configurations, dk_3/dk_1 → 0, and k_3 is independent of k_1 for any fixed δ and θ configuration. Third, the only configuration where dk_3/dk_1 is unity for all k_1 is the equilateral triangle in region A, where θ/π = 2/3 and δ = 0, implying k_1 = k_2 = k_3. In general, the rate of change of k_3 with k_1 increases as θ decreases or δ increases.
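Equation (54) can be checked by finite differences. The closed form below follows from the law of cosines for k_3, with λ_f ≈ 0.057 h/Mpc (twice the k_3 = λ_f/2 ≈ 0.0285 h/Mpc quoted in §4.3) and an arbitrary test configuration chosen for this sketch.

```python
# Numerically differentiate k3(k1) at fixed delta and theta and compare
# with the closed form of equation (54).
import math

lam_f = 0.057            # approximate BAO fundamental wavelength, h/Mpc
delta, theta = 1.5, 0.4 * math.pi

def k3_of(k1):
    k2 = k1 + delta * lam_f / 2.0
    # Law of cosines with the opening angle theta of Figure 1
    # (theta = 0 gives k3 = k1 + k2; theta = pi gives k3 = k2 - k1).
    return math.sqrt(k1**2 + k2**2 + 2.0 * k1 * k2 * math.cos(theta))

k1 = 0.1
k2 = k1 + delta * lam_f / 2.0
analytic = (k1 + k2) * (1.0 + math.cos(theta)) / k3_of(k1)   # equation (54)

h = 1e-6
numeric = (k3_of(k1 + h) - k3_of(k1 - h)) / (2.0 * h)

print(analytic, numeric)  # the two derivatives agree closely
```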
In regions approaching θ = 0 or θ/π = 1, therefore, k 3 may vary across a configuration twice as quickly as k 1 , or not at all. When shown as a function of k 1 , the oscillations in P BAO 3 are consequently stretched (as θ → π) or compressed (as θ → 0) relative to the oscillations in P BAO 1 . While the wavelength of oscillations in P BAO 1 and P BAO 2 is the BAO fundamental wavelength λ f (equation 4), the wavelength of oscillations in P BAO 3 can be infinitely large or as small as λ f /2. We find that the interference picture is a good description of the interaction between two oscillations when the ratio of their wavelengths is less than roughly 1.4; when the wavelength of P BAO 3 differs from that of P BAO 2 and P BAO 1 by more than this factor, the concept of a phase shift between P BAO 3 and the other power spectrum in the product becomes meaningless because the wavelengths are simply too different. In the products P BAO 2 P BAO 3 and P BAO 3 P BAO 1 , then, the BAO amplitude is no longer determined by any phase shift between the oscillations, but instead by the alignment of the first (and highest, as subsequent peaks will be suppressed by Silk damping) peaks in each.
Interference
Our basis was designed to highlight interference effects between pairs of power spectra, which determine A in the regions marked "A" of Figure 11. As outlined in §2, when two wavenumbers k i and k j differ by a multiple of the fundamental BAO wavelength λ f (equation 4), the two power spectra P BAO i and P BAO j interfere constructively and amplify BAO. In all three terms shown in Figure 12, interference produces bright ridges of amplified BAO corresponding to the configurations given by equations (8-10), where two wavenumbers differ by nλ f .
The left panel of Figure 12 is clearly similar to the low-θ region of the full RMS map (Figure 2) where B 12 dominates. RMS amplitude is maximized for δ = 0, where k 1 = k 2 and the two BAO features are perfectly in phase. Constructive interference repeats wherever the phase difference between the two power spectra is a multiple of the BAO fundamental wavelength λ f , that is, for even integer values of δ. The first two harmonics are marked in the left panel of Figure 12. As δ increases, the maximum A at even δ declines. This is a result of the declining amplitude of the BAO feature at small scales due to Silk damping. As δ rises, the k 1 dependence is unchanged, but k 2 becomes large enough that Silk damping reduces the amplitude of P BAO 2 ; as the BAO wiggle contribution is damped, it provides less enhancement. In practice, nonlinear structure formation would also degrade the BAO signal at large δ, similar to the effects at large k 1 discussed in §2.
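The interference mechanism can be mimicked with pure sinusoids (an idealized stand-in for P^BAO, with no Silk damping): the RMS of the product of two wiggles peaks when δ is an even integer (in phase) and nearly vanishes near odd δ (out of phase).

```python
# Toy demonstration of constructive and destructive interference between
# two oscillating "power spectra" offset by delta * lambda_f / 2.
import math

lam_f = 0.057   # approximate BAO fundamental wavelength, h/Mpc
n = 4000
ks = [0.01 + 0.19 * i / (n - 1) for i in range(n)]

def rms_amplitude(delta, amp=0.05):
    prod = [(1.0 + amp * math.sin(2.0 * math.pi * k / lam_f))
            * (1.0 + amp * math.sin(2.0 * math.pi * (k + delta * lam_f / 2.0) / lam_f))
            for k in ks]
    m = sum(prod) / n
    return math.sqrt(sum((p - m) ** 2 for p in prod) / n)

for d in (0.0, 1.0, 2.0):
    print(d, rms_amplitude(d))
# The amplitude at delta = 0 and delta = 2 far exceeds that at delta = 1.
```

With Silk damping included, the even-δ maxima would also decline with δ, as described above for the left panel of Figure 12.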
Power spectra also interfere constructively and destructively to produce distinct ridges and troughs in the R_23 and R_31 RMS maps (right two panels of Figure 12). As in R_12, we expect the RMS amplitude in R_23 to be maximized when P^BAO_2 and P^BAO_3 are in phase or differ by a multiple of the wavelength, and the RMS amplitude in R_31 to be maximized when P^BAO_3 and P^BAO_1 are in phase or differ by a multiple of λ_f. The solutions (equations 9 and 10) to k_2 = k_3 + nλ_f and k_3 = k_1 + nλ_f, however, depend not only on δ and θ, but also on k_1. As a result, for a single choice of (δ, θ), it is not possible for k_2 to equal k_3 (or k_3 to equal k_1) for all k_1 in a configuration. We therefore choose k_1 = 0.1 h/Mpc (that is, in the middle of our k_1 range) as a representative value of k_1. We evaluate equation (9) at k_1 = 0.1 h/Mpc to compute the curve of k_3 = k_2 shown in the middle panel of Figure 12. This curve does correspond to maximum A, but the k_3 = k_2 + λ_f curve does not; it falls in region B (labeled in Figure 11 and discussed in §6.1.2), where the wavelengths of P^BAO_3 and P^BAO_2 are widely different and the interference picture no longer applies. We also evaluate equation (10) at k_1 = 0.1 h/Mpc to compute the curves of k_1 = k_3 + nλ_f shown in the right panel of Figure 12.
Incoherence
In Region B (labeled in Figure 11) of the R_23 and R_31 RMS maps, the RMS amplitude is relatively uniform; A is neither maximized nor minimized for these configurations. Because the wavelength of P^BAO_3 is much shorter than that of the other two power spectra in this region, the power spectra entering the products P^BAO_2 P^BAO_3 and P^BAO_3 P^BAO_1 are incoherent: they cannot interfere constructively or destructively, and patterns in A arise from the amplitudes of the largest-scale peaks in the power spectra. BAO amplitude can only be enhanced when two peaks (a single pair) in the two power spectra align with and amplify each other, and the greatest amplitude occurs where these peaks are at large scales and therefore minimally Silk-damped. As θ → 0, each configuration spans a wider range of k_3 for a fixed range of k_1, meaning that any change in k_1 maps to a larger change in k_3 (equation 54). In the θ = 0 limit, for example, k_3 = k_1 + k_2 spans at least twice the range of k_1. The wavelength of P^BAO_3 as a function of k_1 is compressed relative to that of P^BAO_2 as a function of k_1 (k_2 everywhere changes with k_1 at the same rate, since these two are related by addition of δλ_f/2). In the small-θ region, therefore, all the products of power spectra in terms that include k_3, namely R_23 and R_31, are products of oscillations with very different wavelengths (see Figure 13).
Although most of the pattern is washed out in the low-θ region of the R_23 RMS map (middle panel of Figure 12), faint banding is still visible around integer values of δ. The maxima diminish with increasing δ as they do in R_12: the amplitude of the BAO oscillation in the power spectrum drops at higher wavenumbers. The banding is a result of the relative phase (controlled by δ) between P^BAO_2 and P^BAO_3 at low k_1. Because the wavelengths of P^BAO_2 and P^BAO_3 are very different, the peaks do not align more than once. The RMS amplitude is highest when the pair of aligned peaks are both large, but the amplitude of the oscillation in each P^BAO falls with increasing wavenumber. Therefore, BAO are maximized when the lowest-wavenumber peak (or trough) in P^BAO_2 aligns with the lowest-wavenumber peak (or trough) in P^BAO_3. When the lowest-wavenumber peak in P^BAO_2 aligns with a trough in P^BAO_3, the small-wavenumber contributions cancel, and BAO are minimized.
The θ → 0 region of the R 31 RMS map behaves similarly to the same region in the R 23 RMS map-the wavelength of P BAO 3 is again much shorter than the wavelength of P BAO 1 . The phase of P BAO 1 is fixed, so the faint banding pattern is diminished in R 31 . Silk damping still decreases the amplitude of oscillations in P BAO 3 as δ increases; at large δ, P BAO 3 becomes smooth and approaches unity. The RMS amplitude in R 31 therefore approaches that of P BAO 1 only. Near δ = 2, the small-wavenumber region is a minimum of P BAO 3 . As in R 12 , contribution of the low-wavenumber region to the amplitude is therefore minimized, resulting in a faint minimum in the RMS map.
Feathering
In Region C of the R 23 and R 31 RMS maps in Figure 11, small maxima and minima alternate as δ increases. We refer to this behavior as "feathering," a pattern of bright feathers alternating with regions of lower A. As in Region B (§6.1.2), the behavior of A in this region results from the difference in wavelength between P BAO 3 and the other two power spectra. (Interference requires that the wavelengths of the two power spectra differ by less than a factor of 1.4, as explained in §6.1, which in R 23 is the case only for n = 0. At higher and lower θ where the wavelengths differ more widely, the RMS amplitude is driven not by the phase difference between the two spectra, but instead by the alignment of individual peaks, as described in §6.1.2, §6.1.3, and §6.1.4.) In Region C, θ approaches π, so dk 3 /dk 1 (equation 54) is small and k 3 changes little with k 1 . P 31 is then P BAO 1 modulated by a stretched-out and slowly varying P BAO 3 : across the full range of k 1 , P BAO 3 traverses half a wavelength. If this half wavelength starts from an extremum of P BAO 3 where k 1 is small, and ends at the other extremum of P BAO 3 where k 1 is large (see Figure 14), A is maximized. In contrast, A is minimized between the bright feathers, where instead P BAO 3 ∼ 1 for small k 1 , and again P BAO 3 ∼ 1 for the highest k 1 in our range. In this case, the range of P BAO 3 is halved relative to the maximum case, and the amplitude contribution due to the k 3 modulation is minimized.
In region C of the R 31 RMS map (right panel of Figure 11), bright feathers alternate with brighter feathers (for example, the maximum at δ = 2.5, θ/π = 0.89). This alternating pattern arises as P BAO 3 at small k 1 moves from a trough, to unity, to a peak. Because Silk damping reduces BAO amplitude at small scales, P BAO 1 is maximized for small k 1 . If this maximum coincides with a maximum in P BAO 3 , its contribution to the RMS amplitude is larger than when it coincides with a minimum in P BAO 3 . That is, A is greater when P BAO 3 travels from a peak to a trough than vice versa, because Silk damping reduces the contribution of peaks at high k 1 that coincide with the final peak in the latter case. Therefore, while all bright feathers occur where P BAO 3 starts from an extremum, they are brighter where that extremum is a maximum and dimmer where it is a minimum. As δ increases, A declines for the feathers, for the same reason as the R 12 interference described in §6.1.1. At high δ, k 3 is larger, so Silk damping reduces the amplitude of oscillations.
Similar logic holds for R 23 (middle panel of Figure 11). Bright feathers occur where P BAO 3 is either a peak or a trough at low k 1 , and the opposite at high k 1 ; A is minimized where instead P BAO 3 is unity at both small and large wavenumber. However, unlike R 31 , there is no pattern of alternating brighter and dimmer maxima. In R 31 , P BAO 1 is held fixed while the starting point of the oscillation in P BAO 3 varies, so the initial peak in P BAO 1 can correspond to either a trough in P BAO 3 (as in Figure 14) or a peak. But in R 23 , the starting points of both P BAO 2 and P BAO 3 depend on δ-and differ by k 1 . In this region the magnitude of k 3 is determined by the difference k 2 − k 1 . At the initial value of k 1 in our range, k 1 = 0.01 h/Mpc, the magnitude of k 3 is roughly k 2 − 0.01 h/Mpc. k 2 and k 3 therefore differ by less than one-fourth of the BAO fundamental wavelength λ f . Whether P BAO 3 is maximized or minimized at k 1 = 0.01 h/Mpc, P BAO 2 at k 1 = 0.01 h/Mpc must fall in the same quarter wavelength, so its value must be close to that of P BAO 3 but closer to unity. Since the difference between k 2 and k 3 is fixed at small k 1 (unlike the difference between k 1 and k 3 at small k 1 ), an initial peak or trough in P BAO 3 can only correspond to a more limited range of values of P BAO 2 , removing the mechanism by which the brightest feathers appear in the RMS map for R 31 .
Single Power Spectrum
Configurations in Region D (labeled in Figure 11) are squeezed, and only one power spectrum term contributes BAO. With only one oscillation, there can be no interference to amplify BAO, so A is fairly uniform in Region D of the R 23 and R 31 RMS maps.
Figure 13. For a configuration with θ/π = 0.2 and δ = 1, the RMS amplitude in the R 12 (uppermost set of curves above) term is driven by phase differences (i.e., interference, described in §6.1.1), while the pattern in the R 23 (middle set) and R 31 (lower set) terms is a result of wavelength differences (i.e., incoherence, described in §6.1.2). Black curves show the ratio of the linear to the no-wiggle power spectrum, P BAO i = P(k i )/P nw (k i ), for each wavenumber as it varies with k 1 ; the product of each pair of ratios is shown in color (P BAO 1 P BAO 2 in orange, P BAO 2 P BAO 3 in teal, and P BAO 3 P BAO 1 in lavender). For example, the oscillations in P BAO 1 and P BAO 2 are out of phase, so the power spectra interfere destructively in P 12 (orange, discussed in §6.1.1). In contrast, P 23 and P 31 include P BAO 3 . At low θ, k 3 can vary over more than twice the range of k 1 , so the oscillations in P BAO 3 are compressed relative to the others (e.g., compare the short-dashed curve to the dot-dashed curve in the lower set of curves). As can be seen in the lavender P 31 term, the two interfering oscillations have very different wavelengths, so their product is neither "constructive" nor "destructive." The behavior in P 23 (teal) is similar; see §6.1.2 for further discussion.

When θ/π = 1, equation (54) gives dk 3 /dk 1 = 0; that is, k 3 is independent of k 1 for any choice of δ. R 23 is then simply the oscillation from P BAO 2 alone, with no interference. Similarly, R 31 reduces to P BAO 1 . In both terms, the bispectrum BAO come solely from the oscillation of one P BAO . This oscillation is multiplied by P BAO 3 , which does introduce a slight dependence on the parameter δ. While P BAO 3 is constant as a function of k 1 for any value of δ, the value of that constant does depend on δ: as in equation (21), k 3 = δλ f /2 for all k 1 . The argument of P BAO 3 changes with δ, so the level of P BAO 3
oscillates up and down as δ increases. In R 31 , A along the θ/π = 1 line depends only on the level of P BAO 3 . When P BAO 3 > 1, the entire oscillation in P BAO 1 is stretched vertically by a factor greater than unity, slightly increasing A. The opposite is true when P BAO 3 < 1, which compresses the amplitude of the P BAO 1 oscillation and decreases the RMS amplitude. This effect diminishes at higher δ, as P BAO 3 converges to unity.
In R 23 , again the oscillation of P BAO 3 with changing δ causes the θ/π = 1 RMS amplitude to depend on δ. Additionally, as δ increases, P BAO 2 is increasingly Silk-damped, smoothly decreasing the RMS amplitude in P BAO 2 . The faint banding in both R 31 and R 23 is visible along the rightmost edge of the middle and right panels of Figure 11.
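A toy damped sinusoid (an assumed functional form, with placeholder values for λ f and using the Silk damping scale quoted in the text) makes the δ dependence along θ/π = 1 concrete: k 3 = δλ f /2 is the same for every k 1 , so P BAO 3 is a single number that oscillates about unity as δ grows, rescaling the whole R 31 oscillation.

```python
import math

LAMBDA_F = 0.06  # assumed BAO fundamental wavelength, h/Mpc (placeholder)
K_SILK = 0.125   # Silk damping scale from the text, h/Mpc

def p_bao_toy(k, amp=0.05):
    """Toy wiggle ratio: unit mean, Silk-damped sinusoid (illustrative only)."""
    return 1.0 + amp * math.exp(-(k / K_SILK) ** 1.4) * math.sin(2.0 * math.pi * k / LAMBDA_F)

# Along theta/pi = 1, k3 = delta*LAMBDA_F/2 for every k1, so R31 is rescaled
# by the single number p_bao_toy(k3), which oscillates about unity with delta:
for delta in (0.5, 1.0, 1.5, 2.0):
    k3 = delta * LAMBDA_F / 2.0
    print(delta, round(p_bao_toy(k3), 4))
```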
Double Dominance
In the purple (k 2 > k 1 but k 3 ≈ k 2 ) and green (θ/π ∼ 1 and δ small) regions of the dominance map (Figure 4), two terms are of comparable magnitude. In these regions we calculate the RMS amplitude (6), shown in the second panel of Figure 10, of the ratio of the sum of the two dominant terms to its no-wiggle analog (17).
B 12 and B 31 Dominant
The purple region of the dominance map (Figure 4), where B 12 and B 31 are both dominant, is a region of transition between B 12 dominance at smaller θ and B 31 dominance at higher θ. The curve of k 2 = k 3 passes through the center, as shown in Figure 15. Along this curve, B 12 and B 31 are very similar (but not identical, since the fact that k 2 is equal to k 3 for our representative k 1 = 0.1 h/Mpc does not imply that k 2 = k 3 for all k 1 in a configuration). At θ lower than the cutoff defined by the curve of k 2 = k 3 in Figure 15, B 12 begins to dominate. While B 31 is still large, k 3 is close to k 2 , so the oscillations and interference behavior of the two terms are very similar. The RMS amplitude A is maximum on the lines where k 2 = k 1 + nλ f . The reverse holds at θ higher than the k 2 = k 3 curve, where B 31 grows to become dominant. Again, the oscillatory behavior of the two terms is similar, with B 31 becoming dominant as θ continues to grow.
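The k 2 = k 3 curve follows from triangle closure: with k 2 = k 1 + δλ f /2 and k 3 2 = k 1 2 + k 2 2 + 2k 1 k 2 cos θ, setting k 3 = k 2 gives cos θ = −k 1 /(2k 2 ). A quick numerical check (the λ f value is assumed for illustration):

```python
import math

LAMBDA_F = 0.06  # assumed BAO fundamental wavelength, h/Mpc (placeholder)

def theta_equal_sides(k1, delta):
    """Opening angle at which k3 = k2, from cos(theta) = -k1/(2*k2)."""
    k2 = k1 + delta * LAMBDA_F / 2.0
    return math.acos(-k1 / (2.0 * k2))

# Verify against the law of cosines for the representative k1 = 0.1 h/Mpc:
k1, delta = 0.1, 1.0
theta = theta_equal_sides(k1, delta)
k2 = k1 + delta * LAMBDA_F / 2.0
k3 = math.sqrt(k1**2 + k2**2 + 2.0 * k1 * k2 * math.cos(theta))
```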
B 23 and B 31 Dominant
In the green region of the dominance map (Figure 4), B 23 and B 31 are dominant and A is maximized, as shown in the RMS map (Figure 2). These are the squeezed configurations: k 2 points back along k 1 , and is slightly longer by δλ f /2, so the magnitude of k 3 is constant for any configuration-that is, when θ/π = 1 and δ is fixed, k 3 = δλ f /2 for all k 1 as in equation (21). The RMS amplitude arises from the sum of B 31 , which is large and positive, and B 23 , which is large and negative (as discussed in §4.3.4). Because k 3 is constant for all k 1 in a configuration, P BAO 3 is a constant, so R 31 = P BAO 3 P BAO 1 ∝ P BAO 1 and R 23 = P BAO 2 P BAO 3 ∝ P BAO 2 . In the ratio of the sum to its no-wiggle analog, R 23+31 (equation 17), oscillations arise from the difference between the B 23 and B 31 contributions: a sine added to a negative sine, only slightly out of phase. As δ is positive, the negative B 23 is always
slightly smaller in magnitude than B 31 , so R 23+31 remains positive.
As δ increases, B 23 shrinks and B 31 becomes dominant near the square symbol in Figures 4 and 15, as discussed in §4.3.4. The RMS map transitions into the B 31 -dominant region described in §6.1.4 above where only one power spectrum contributes BAO to the bispectrum.
No Term Negligible
In the final region around equilateral triangles (black region, Figure 4), all three sides are comparable and no term can be disregarded. We must simply reproduce the calculation of the full RMS map in this region, as shown in the middle right panel of Figure 10. The RMS amplitude A is maximized at the equilateral triangle (δ = 0, θ/π = 2/3), where all three sides of a triangle are equal-and therefore the F (2) kernels are all equal and the power spectra are all in phase.
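The claim that the three F (2) kernels coincide for the equilateral configuration can be checked directly with the standard tree-level kernel F (2) = 5/7 + (μ/2)(k i /k j + k j /k i ) + (2/7)μ 2 , where μ is the cosine of the angle between the two sides. For a closed equilateral triangle each pair of sides meets at 120°, so μ = −1/2 and each cyclic term equals the textbook value 2/7 (a sketch, not the paper's code):

```python
def f2_kernel(k_i, k_j, mu):
    """Tree-level Eulerian SPT kernel F2(k_i, k_j, mu), mu = cos(angle between sides)."""
    return 5.0 / 7.0 + 0.5 * mu * (k_i / k_j + k_j / k_i) + (2.0 / 7.0) * mu**2

# Closed equilateral triangle: every pair of sides meets at 120 degrees (mu = -1/2),
# so all three cyclic F2 terms are equal.
k = 0.1
kernels = [f2_kernel(k, k, -0.5) for _ in range(3)]
```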
DISCUSSION
Implications for the Reduced Bispectrum
For many triangle configurations, we find that the full bispectrum RMS map is described well by the behavior of only one or two terms in the cyclic sum. The large dynamic range of the F (2) kernel can separate terms by an order of magnitude, allowing us to disregard smaller terms when computing the RMS amplitude. If this held for the reduced bispectrum (defined in e.g. Bernardeau et al. 2002)
Q(k 1 , k 2 , k 3 ) = B(k 1 , k 2 , k 3 ) / [ P(k 1 )P(k 2 ) + P(k 2 )P(k 3 ) + P(k 3 )P(k 1 ) ] ,   (55)
we could simplify its behavior as well. Our work does show that there are regions where the numerator of Q can indeed be simplified. However, a more useful approximation would be if one could approximate the denominator by just one term rather than the full cyclic sum of power spectra, or even a pair of products. Unfortunately, though, it is primarily the F (2) kernel that drives the dominance of one term relative to the others (see §4.2), and it does not enter the denominator of Q. As the leftmost panel of Figure 6 shows, a large swath of the δ-θ plane is black (no term negligible) for the relevant products of power spectra P i j . While there are several regions where two terms dominate the others (purple, blue, green), these do not seem to offer a significant simplification as they still produce a complicated denominator in equation (55).
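A minimal tree-level sketch of equation (55), using the leading-order bispectrum B = 2[F (2) (k 1 , k 2 )P 1 P 2 + cyc.] with the standard F (2) kernel and a toy power spectrum (the power-law form is an illustrative assumption):

```python
def f2_kernel(ki, kj, mu):
    """Tree-level Eulerian SPT kernel F2."""
    return 5.0 / 7.0 + 0.5 * mu * (ki / kj + kj / ki) + (2.0 / 7.0) * mu**2

def mu_between(ki, kj, kk):
    """Cosine of the angle between sides ki and kj of a closed triangle (ki+kj=-kk)."""
    return (kk**2 - ki**2 - kj**2) / (2.0 * ki * kj)

def q_tree(k1, k2, k3, pk):
    """Reduced bispectrum Q of equation (55), with B at leading (tree) order."""
    p1, p2, p3 = pk(k1), pk(k2), pk(k3)
    b = 2.0 * (f2_kernel(k1, k2, mu_between(k1, k2, k3)) * p1 * p2
               + f2_kernel(k2, k3, mu_between(k2, k3, k1)) * p2 * p3
               + f2_kernel(k3, k1, mu_between(k3, k1, k2)) * p3 * p1)
    return b / (p1 * p2 + p2 * p3 + p3 * p1)

pk_toy = lambda k: k**-1.5  # illustrative power-law power spectrum
```

For the equilateral configuration this construction returns the familiar tree-level value Q = 4/7, independent of the power spectrum shape.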
It is only in the red (B 12 dominant) and blue (B 31 dominant) regions that the denominator greatly simplifies, to respectively P 12 or P 31 . Since the dominant term in F (2) is always the same as that in P i j (see §4.4 and Figure 8), in these regions F (2) 12 and F (2) 31 , respectively, will be much larger than the other two F (2) i j . The bispectrum is therefore well approximated by respectively F (2) 12 P 12 and F (2) 31 P 31 . Thus, in these limited regions, Q reduces to F (2) 12 and F (2) 31 . In short, working in the δ-θ basis does highlight convenient triangles where one can directly measure the growth kernel F (2) alone and easily divide out the linear theory density field statistics. The contribution of gravitational growth can thereby be isolated from that of the linear theory density field. This isolation might be especially useful in using the 3PCF as a probe of modified gravity (e.g., Vernizzi et al. 2018, in prep.).
Especially insofar as high-wavenumber details of the power spectrum sourced by baryon physics remain challenging to model, canceling out the power spectrum from measurements of F (2) may be desirable. Of course this must be weighed against the reduction in number of usable configurations, as this cancellation happens only on limited regions of the δ-θ plane. Further, at the wavenumbers where baryons become relevant, a tree-level, linearly biased model of the bispectrum is likely already beginning to falter; the numerator is measuring higher-order perturbation theory kernels and higher-order biasing even in these "simpler," single-dominance regions.
Connection to Real Space
We now briefly discuss the connection of the present paper to the 3PCF in configuration space (i.e., real space without redshift-space distortions). Hoffmann et al. (2018) further discuss differences and similarities between the bispectrum and 3PCF more generally, though with a focus on bias parameters, most relevant for smaller scales than the BAO scales investigated here. The wiggles in the bispectrum ultimately correspond to sharp features in configuration space, in particular the BAO creases where one triangle side is the BAO scale or twice the BAO scale. These are visible in Figure 7 of Slepian & Eisenstein (2017), particularly in the linear bias (ℓ = 1) panel but also more faintly in ℓ = 0 and ℓ = 2. The intuition is much the same as with the 2PCF and power spectrum, where a bump in configuration space leads to a harmonic series of oscillations in Fourier space. One important difference here is that the F (2) kernel, which weights the products of power spectrum by k ±1 , acts like a derivative in configuration space. The BAO feature in the 3PCF is thus essentially the derivative of a BAO bump: positive as the BAO bump rises, then zero at the BAO scale, and negative at larger separations as the BAO bump falls.
Simplification of Multipole Basis
We now point out an interesting additional implication of our work. The multipole basis (expanding the angular dependence of the 3PCF or bispectrum in Legendre polynomials), proposed in Szapudi (2004) (see also Pan & Szapudi 2005), has recently been exploited in a series of works (Slepian & Eisenstein 2015, 2016; Friesen et al. 2017) to accelerate measurement of the 3PCF. However, in practice that approach truncates the multipole expansion of the 3PCF as one measures it. The works cited above chose a maximum multipole of ℓ max = 10. In principle, however, even at tree level the 3PCF has support out to infinite ℓ, as the expansion is done with respect to k 1 · k 2 , and k 3 and 1/k 3 have an infinite multipole series in this variable.
In practice the 3PCF seems well-converged when summed into a function of opening angle using different numbers of multipoles (see Figure 8 of Slepian & Eisenstein 2015). Our work shows that for certain configurations the multipole support is in principle finite. In the regions dominated by B 12 , the bispectrum multipole expansion has compact support, requiring (at least at tree level) only ℓ = 0, 1, and 2. The same will hold for the 3PCF; Szapudi (2004) shows that a given ℓ in Fourier space maps to only the same ℓ in configuration space. (This immediately follows from the plane wave expansion into spherical harmonics and spherical Bessel functions, use of the spherical harmonic addition theorem, and orthogonality of the spherical harmonics.)
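The compact support in the B 12 -dominant region can be verified numerically: at fixed k 1 and k 2 , the standard F (2) kernel is quadratic in μ = k 1 · k 2 , so its Legendre coefficients vanish for all ℓ ≥ 3 (a sketch via Gauss-Legendre quadrature; the wavenumber values are illustrative):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def f2_kernel(ki, kj, mu):
    """Tree-level SPT kernel F2; quadratic in mu = cos(angle between k_i and k_j)."""
    return 5.0 / 7.0 + 0.5 * mu * (ki / kj + kj / ki) + (2.0 / 7.0) * mu**2

def legendre_p(ell, x):
    """Legendre polynomial P_ell(x) via the Bonnet recurrence."""
    p_prev, p_curr = np.ones_like(x), x
    if ell == 0:
        return p_prev
    for n in range(1, ell):
        p_prev, p_curr = p_curr, ((2 * n + 1) * x * p_curr - n * p_prev) / (n + 1)
    return p_curr

nodes, weights = leggauss(16)  # exact for polynomial integrands up to degree 31
k1, k2 = 0.05, 0.1
coeffs = [(2 * ell + 1) / 2.0
          * np.sum(weights * f2_kernel(k1, k2, nodes) * legendre_p(ell, nodes))
          for ell in range(6)]
# Only ell = 0, 1, 2 survive; all higher multipole coefficients vanish.
```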
There are two implications here: first, the adequacy of tree level perturbation theory can be easily tested using a very small set of multipoles in the red region of B 12 dominance. Second, within this restricted region, the computational work and covariance matrix dimension can be greatly reduced by measuring the 3PCF in the multipole basis only to ℓ max = 2. Of course, the price is the reduced number of configurations (and signal) available. While this level of compression may not be necessary for isotropic statistics, RSD introduce a much richer angular structure at a fixed ℓ max (see Slepian & Eisenstein 2018; Sugiyama et al. 2018), so a reduction in ℓ max may be of particular value.
CONCLUSION
Our bispectrum basis ( §2), designed to identify triangle configurations that amplify the BAO signal, also provides insight on the structure of BAO in the bispectrum. Our analysis in §4 shows that for certain triangle shapes, the bispectrum is dominated by only one or two terms of the cyclic sum (1). The dominance structure is driven primarily by the F (2) kernel of Eulerian standard perturbation theory ( §4.2), which is highly sensitive to triangle shape. In §5 we show analytically that, because BAO are a small feature relative to the broadband bispectrum, the RMS BAO amplitude in the full bispectrum reduces to the RMS BAO amplitude in the dominant term or terms. The error in this approximation is suppressed by one order relative to the BAO amplitude itself. In §6, we build up the complete RMS map of the dependence of BAO amplitude on triangle parameters from simpler maps. These maps show the RMS BAO amplitude in each of the three terms contributing to the cyclic sum. In regions where the corresponding terms dominate the cyclic sum, the full RMS BAO amplitude is well approximated by the single-term-dominant or double-termdominant maps. We reproduce the full bispectrum RMS map by stitching together these simpler maps in the regions where they provide the dominant contribution to bispectrum BAO, then fully discuss the mechanisms that drive BAO amplitude in each single-term-dominant ( §6.1) and double-termdominant ( §6.2) RMS map.
The BAO amplitude in each single term is determined by one of four mechanisms: interference ( §6.1.1), incoherence ( §6.1.2), feathering ( §6.1.3), or single power spectrum ( §6.1.4). The first mechanism, interference, results from phase differences between two power spectra, and dramatically amplifies the BAO signal. The other three mechanisms occur where the wavelengths of the two BAO features are widely different, so the interaction between the two power spectra cannot consistently amplify BAO.
Finally, in §7 we outline implications of our work for the reduced bispectrum, its connection to the 3PCF, and the potential to simplify the multipole expansion of the bispectrum and 3PCF for certain triangle shapes.
In a previous paper (Child et al. 2018), we used the interferometric basis detailed here to obtain substantial improvement in BAO constraints over the power spectrum alone, using a relatively small number of bispectrum measurements that carry the most BAO information. Ideally, bispectrum measurements on all possible triangles would be used to constrain the BAO scale. However, the number of mock catalogs needed to accurately estimate and invert the covariance matrix scales with the number of triangles (Percival et al. 2014). The number of triangles that can be measured is therefore limited by the number of mock catalogs available, and bispectrum BAO constraints like those of Pearson & Samushia (2018) are limited by the error in the covariance matrix. Since current resources limit the number of triangles that can be used to constrain BAO in the bispectrum, the best constraints will be obtained from the triangles that carry the most BAO information and are most independent from each other. One way to identify these triangles is by measuring the full covariance matrix, but such an approach faces the same initial problem of limited mock catalogs.
We therefore face a circular problem: because it is computationally prohibitive to use fully N-body mocks to constrain the covariance matrix of all bispectrum triangles, we wish to reduce the size of the covariance matrix by selecting a subset of optimal triangles for BAO constraints. But without the full covariance matrix, how can those triangles be identified? Our basis offers a compression to only those triangles that are most sensitive to BAO, enabling a 15% improvement over power spectrum BAO constraints using relatively few bispectrum measurements (Child et al. 2018).
Of course, the optimal set of triangles for BAO measurement depends not only on the amplitude of the BAO signal in each configuration, but also on its signal to noise ratio and its covariance with previously measured configurations. In future work, we will further develop an algorithm for selecting triangle configurations, assuming the number of mock catalogs available limit the number of bispectrum measurements that can be used.
In future work, we will also use BAO-sensitive triangles to better understand the covariance structure of BAO in the bispectrum and power spectrum. Reconstruction (Noh et al. 2009; Padmanabhan et al. 2009, 2012) is expected to affect the covariance between the power spectrum and bispectrum (see also Slepian et al. 2018, §8.2), but as reconstruction is a numerical procedure, its effect on covariance is difficult to model analytically. Like the bispectrum measurements discussed in the previous paragraph, a full numerical study of the covariance between the post-reconstruction power spectrum and the pre-reconstruction bispectrum is limited by the number of fully N-body mocks available. Fewer mocks are needed if analysis is restricted to the set of triangles most sensitive to BAO, reducing the dimension of the covariance matrix. We will study the effects of reconstruction on these triangle configurations. This effort will allow us to combine bispectrum measurements with the post-reconstruction power spectrum. Depending on the level of independence, the combination of bispectrum measurements and reconstruction may offer further improvement in BAO constraints over that offered by reconstruction alone.
Our approach offers many further applications to the study of BAO in the bispectrum, which we plan to address in future work. For example, the phase of BAO in the power spectrum is sensitive to N eff , the effective number of relativistic neutrino species (Bashinsky & Seljak 2004; Follin et al. 2015; Baumann et al. 2017, 2018). Our basis is very sensitive to phase effects, so it may be useful to constrain N eff using the bispectrum (Child et al. 2019, in prep.). Other sources of a phase shift in power spectrum BAO such as relative velocities between baryons and dark matter (Dalal et al. 2010; Tseliakhovich & Hirata 2010; Yoo et al. 2011; Blazek et al. 2016; Schmidt 2016), constrained in the power spectrum by Yoo & Seljak (2013) and Beutler et al. (2017) and in the 3PCF by Slepian et al. (2018), may also be constrained using our interferometric basis. Last, our approach may enable study of massive spinning particles, which, if present during inflation, introduce oscillatory cosine terms in the bispectrum (Moradinezhad Dizgah et al. 2018). These terms depend on the wavenumbers, so they can interfere with each other when cyclically summed.
Figure 2. Top: The root-mean-square amplitude A (equation 6) of the bispectrum BAO feature in triangle configurations parameterized by (δ, θ). Maxima and minima are set by the constructive and destructive interference of BAO oscillations in the bispectrum. Bottom: In many regions of the RMS map, a single term or pair of terms in the bispectrum cyclic sum (1) dominates the sum. The boundaries between these regions (white lines) correspond to changes in the behavior of the RMS map. The numerals indicate the number of terms that must be considered to accurately approximate the bispectrum.
Figure 4. The dominance map (§4) shows regions where the bispectrum cyclic sum simplifies to a single term or pair of terms.
Figure 5. The shapes and locations of regions of dominance are not highly sensitive to f, the factor by which a term must exceed all others to be called "dominant" (see §4.1, equations 18 and 19).
Figure 6. The regions of dominance are determined primarily by the F (2) kernel, as discussed in §4.2; the structure of the full dominance plot (right panel) is very similar to that of the F (2) i j dominance plot (middle panel), with some modification from the products of power spectra P i j (left panel). For the left and middle panels, P i j and F (2) i j , a term is dominant if it exceeds the other two by a factor of √5 ...
Figure 8. The dominant term in the cyclic sum composing the bispectrum ... θ/π = 0.83 (solid vertical line in bottom panel). The ordering also differs around θ/π ∼ 0.4, as further discussed in §4.4.

In detail, we take the absolute value of each F (2) i j , since the magnitudes of each term, not their signs, determine dominance.
ε 31/(12+23) = B 31 / (B 12 + B 23 ) ,   ε nw 31/(12+23) = B nw 31 / (B nw 12 + B nw 23 ) .
Figure 10. The detailed structure of the full RMS map can be understood by considering the RMS amplitude produced by only single terms or pairs of terms in the bispectrum cyclic sum. Left: The single-term-dominant contribution: BAO amplitude associated with R 12 and R 31 in regions where only B 12 or B 31 (indicated on the plot) dominates, detailed in §6.1. Middle Left: The double-term-dominant contribution: regions where one term is negligible, detailed in §6.2 (upper middle is B 12 + B 31 dominant; lower right is B 23 + B 31 dominant). Middle Right: The no-term-negligible contribution: regions where all terms are of comparable magnitude, detailed in §6.3. Right: By combining the other three panels, we reproduce the full RMS map of Figure 2.

Figure 11. In each labeled region of the single-term-dominant RMS maps, the RMS amplitude A of the BAO feature is driven by a different mechanism. A is shown for R 12 (left panel), R 23 (middle panel), and R 31 (right panel). The mechanisms are discussed in detail in §6.1: interference in region A (§6.1.1), incoherence in region B (§6.1.2), feathering in region C (§6.1.3), and single power spectrum in region D (§6.1.4). The labeled regions are identical for R 23 and R 31 , while interference is the only mechanism in R 12 .

... then simplifies to dk 3 /dk 1 = 2. Second, in regions C and D where θ approaches π, k 2 is antiparallel ...
Figure 12. When two power spectra are in phase-that is, when k i and k j differ by a multiple n of the BAO fundamental wavelength λ f -constructive interference increases A, as discussed further in §6.1. The curves show k 2 = k 1 + nλ f (left panel, where n = δ/2 as odd integer values of δ produce destructive interference), k 3 = k 2 + nλ f (middle panel), and k 3 = k 1 + nλ f (right panel). Solid curves have n = 0, dashed n = 1, and dot-dashed n = 2. For R 23 and R 31 (middle and right), the curves are calculated assuming k 1 = 0.1 Mpc/h, i.e., in the middle of the k 1 range used in this work. Curves are shown only where the wavelength of P BAO i as a function of k 1 differs from the wavelength of P BAO 1 ...
Figure 14. For configurations in the "feathering" region (Region C of Figure 11), BAO amplitude is driven by the long-wavelength oscillation in P BAO 3 . The BAO amplitude A is maximized when P BAO 3 varies fully, from trough to peak (black), and minimized when P BAO 3 covers only half of that range (green). See §6.1.3 for further discussion.
Figure 15. Regions where two terms dominate the bispectrum (same as second panel of Figure 10) are discussed in §6.2. The curve k 2 = k 3 for k 1 = 0.1 h/Mpc is shown in black. At θ to the left of this curve, the RMS map behaves like that for R 12 , while for higher θ, it is more similar to that for R 31 . In both R 12 and R 31 in this region, A is determined by the interference mechanism of §6.1.1. As δ increases above the black square symbol, only B 31 dominates the cyclic sum.
Silk damping degrades the BAO signal. The Silk damping scale k Silk (equation 7 of Eisenstein & Hu 1998) is approximately 0.125 h/Mpc for our cosmology; at wavenumbers above k Silk , the BAO signal in the transfer function is increasingly suppressed as exp[−(k/k Silk ) 1.4 ] (equation 21 of Eisenstein & Hu 1998).
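The quoted suppression can be evaluated directly (using the k Silk ≈ 0.125 h/Mpc value given above; this is an illustrative sketch of the damping factor only, not the full transfer function):

```python
import math

K_SILK = 0.125  # h/Mpc, value for our cosmology quoted in the text

def silk_suppression(k):
    """Multiplicative damping of the BAO feature: exp[-(k/k_Silk)^1.4]."""
    return math.exp(-(k / K_SILK) ** 1.4)

# At k = k_Silk the BAO amplitude is already down to 1/e of its undamped value.
```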
4.3.2 Middle region, no term dominant (black) or B 12 and B 31 dominant (purple)
The full ratio can be written as

R = (B 12 + B 23 + B 31 ) / (B nw 12 + B nw 23 + B nw 31 ) .
Now computing the expectation value of R and factoring out ⟨R 12 ⟩ to enable further Taylor expansions, we find

⟨R⟩ ≈ ⟨R 12 ⟩ [ 1 + ⟨R 12 ⟩ −1 ⟨ R 12 ε nw 23 ∆ w 23,12 ⟩ + ⟨R 12 ⟩ −1 ⟨ R 12 ε nw 31 ∆ w 31,12 ⟩ ] .   (39)
We now square the form above and multiply out to find

R 2 ≈ R 12 2 + 2 R 12 [ R 12 ε nw 23 ∆ w 23,12 + R 12 ε nw 31 ∆ w 31,12 ] .   (40)
Now using equation (38) to compute the expectation value of R 2 , we obtain

⟨R 2 ⟩ ≈ ⟨R 2 12 ⟩ + 2 [ ⟨ R 2 12 ε nw 23 ∆ w 23,12 ⟩ + ⟨ R 2 12 ε nw 31 ∆ w 31,12 ⟩ ] .
and calculate A 2 in terms of the variance of the two dominant terms, A 2 12+23 . We have

R = [ (B 12 + B 23 ) / (B nw 12 + B nw 23 ) ] × [ 1 + B 31 /(B 12 + B 23 ) ] / [ 1 + B nw 31 /(B nw 12 + B nw 23 ) ]   (44)

≈ R 12+23 [ 1 + ε 31/(12+23) ] [ 1 − ε nw 31/(12+23) + ( ε nw 31/(12+23) ) 2 ] .
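The geometric-series step in this derivation, replacing 1/(1 + ε nw ) by 1 − ε nw + (ε nw ) 2 , can be spot-checked numerically; the residual is third order in the small ratio (the specific values below are illustrative):

```python
# Illustrative small ratios standing in for B31/(B12+B23) and its no-wiggle analog.
eps, eps_nw = 0.05, 0.04
exact = (1.0 + eps) / (1.0 + eps_nw)
approx = (1.0 + eps) * (1.0 - eps_nw + eps_nw**2)
residual = abs(exact - approx)  # of order eps_nw**3
```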
http://desi.lbl.gov MNRAS 000, 1-21 (2018)
ACKNOWLEDGEMENTS

We thank Salman Habib, Uroš Seljak, and Martin White for useful discussions. ZS thanks Joshua Silver and Roberta Cohen for hospitality in Chicago. HC was supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1746045 and an international travel allowance through Graduate Research Opportunities Worldwide (GROW). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. The work of HC at ANL was supported under the U.S. DOE contract DE-AC02-06CH11357. Support for the work of ZS was provided by the National Aeronautics and Space Administration through Einstein Postdoctoral Fellowship Award Number PF7-180167 issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics Space Administration under contract NAS8-03060. ZS also received support from the Berkeley Center for Cosmological Physics (BCCP), and is grateful to both BCCP and Lawrence Berkeley National Laboratory for hospitality during this work.
arXiv:1804.00740 · doi:10.1007/s10711-020-00516-8
A Trichotomy for Rectangles Inscribed in Jordan Loops
13 Jun 2018
Richard Evan Schwartz
Let γ be an arbitrary Jordan loop and let G(γ) denote the space of rectangles R which are inscribed in γ in such a way that the cyclic order of the vertices of R is the same whether it is induced by R or by γ. We prove that G(γ) contains a connected set S satisfying one of three properties.

1. S consists of rectangles of uniformly large area, including a square, and every point of γ is the vertex of a rectangle in S.

2. S consists of rectangles having all possible aspect ratios, and all but at most 4 points of γ are vertices of rectangles in S.

3. S contains rectangles of every sufficiently small diameter, and all but at most 2 points of γ are vertices of rectangles in S.
Introduction
A Jordan loop is a set of the form h(S 1 ) where S 1 ⊂ R 2 is the unit circle and h : R 2 → R 2 is a homeomorphism. O. Toeplitz conjectured in 1911 that every Jordan loop contains 4 points which are the vertices of a square. This is sometimes called the Square Peg Problem. There is a long literature on the problem. For historical details and a long bibliography, we refer the reader to the excellent survey article [M] by B. Matschke, written in 2014.
An affirmative answer to the Square Peg Problem is known under various restrictions on the nature of the Jordan loop. For instance, the result is known for polygonal loops and smooth loops. A number of authors have widened the class of Jordan loops for which the result is true. See, for instance, the recent paper of T. Tao [Ta]. A recent paper of C. Hugelmeyer [H] shows that a smooth Jordan loop always has an inscribed rectangle of aspect ratio √3. The proof, though short, is far from elementary: it requires a result from Heegaard Floer homology! The recent paper [AA] proves that any cyclic quadrilateral can (up to similarity) be inscribed in any convex smooth curve.
We insist that our Jordan loops are always oriented counter-clockwise around the regions they bound. Say that a rectangle R graces a Jordan loop γ if the vertices of R lie in γ and if the counter-clockwise cyclic ordering on the vertices induced by R coincides with the cyclic ordering induced by γ.
Let G(γ) denote the space of counter-clockwise labeled gracing rectangles. The aspect ratio of such a rectangle is the length of the second side divided by the length of the first side. G(γ) is naturally a subset of R 5 and as such inherits a metric space structure.
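The conventions above are easy to test numerically. The following sketch (our own illustrative code, not from the paper) checks whether four labeled points are the counter-clockwise ordered vertices of a rectangle, and computes the aspect ratio in the sense just defined:

```python
import numpy as np

def is_ccw_rectangle(a0, a1, a2, a3, tol=1e-9):
    """True if a0, a1, a2, a3 are the counter-clockwise ordered vertices
    of a non-degenerate rectangle."""
    a0, a1, a2, a3 = map(np.asarray, (a0, a1, a2, a3))
    v01, v12 = a1 - a0, a2 - a1
    closes = np.allclose(a3 - a2, -v01, atol=tol)   # opposite sides match
    right_angle = abs(v01 @ v12) < tol              # adjacent sides perpendicular
    nondegenerate = np.linalg.norm(v01) > tol and np.linalg.norm(v12) > tol
    ccw = v01[0] * v12[1] - v01[1] * v12[0] > 0     # 2D cross product positive
    return closes and right_angle and nondegenerate and ccw

def aspect_ratio(a0, a1, a2):
    """Length of the second side divided by the length of the first side."""
    a0, a1, a2 = map(np.asarray, (a0, a1, a2))
    return np.linalg.norm(a2 - a1) / np.linalg.norm(a1 - a0)
```

For instance, the unit square labeled counter-clockwise passes the test, while the same square labeled clockwise fails, reflecting the orientation requirement in the definition of gracing.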
Theorem 1.1 (Trichotomy) Let γ be an arbitrary Jordan loop. Then G(γ) contains a connected set S satisfying one of three properties.
1. S consists of rectangles of uniformly large area, including a square, and every point of γ is the vertex of a rectangle in S.
2. S consists of rectangles having all possible aspect ratios, and all but at most 4 points of γ are vertices of rectangles in S.
3. S contains rectangles of every sufficiently small diameter, and all but at most 2 points of γ are vertices of rectangles in S.
Remarks:
(i) I would describe the three cases respectively as elliptic, hyperbolic, and parabolic, because the geometry of the situation seems to vaguely resemble the action of these kinds of linear transformations on R 2 . Note that more than one case could occur (e.g. for a circle).
(ii) Ruling out the parabolic case would resolve the Square Peg problem.
(iii) The elliptic case occurs for any curve with 4-fold rotational symmetry but I conjecture that the hyperbolic case also occurs at the same time.
(iv) The following result is an immediate consequence of our proof. For any non-atomic measure µ of total mass 1 on γ there is a rectangle R inscribed in γ such that the total µ-measure of each pair of opposite arcs of γ cut off by R is 1/2. See §5.7. This is tantalizingly close to the square-peg conjecture, but doesn't imply it.
Here is an immediate corollary.
Corollary 1.2 Let γ be any Jordan loop. Then all but at most 4 points of γ are vertices of rectangles inscribed in γ.
This result is sharp: Consider a non-circular ellipse.
Remark: A paper just appeared on the arXiv, [ACFSST], which shows by completely different methods that every Jordan loop contains a dense set of points which are vertices of inscribed rectangles. That paper also obtains the result mentioned in Remark (iv) above, at least when γ is rectifiable and µ is arc-length measure normalized to have total length 1.
Our proof of the Trichotomy Theorem derives from taking the limit of what happens for a generic polygon. Now let γ be a polygon. By an arc component of G(γ) we mean a connected component of G(γ) which is homeomorphic to an arc. By proper , we mean that as one moves towards an endpoint of an arc component in G(γ), the aspect ratio tends either to 0 or to ∞.
Theorem 1.3
For an open dense set of polygonal loops γ the space G(γ) is a piecewise smooth 1-manifold whose arc components are proper. The ends of G(γ) are naturally in bijection with the critical points of the distance function d : (γ × γ) − ∆ → R. Here ∆ is the diagonal in γ × γ.
We say that an arc component A of G(γ) is a sweepout if the aspect ratio of the rectangles in A tends to 0 as the rectangles move towards one endpoint of A and to ∞ at the other. Figure 1 shows an example. The operation of cyclically relabeling gives a Z/4 action on the space G(γ) which has no fixed points. We call a compact component of G(γ) elliptic if it is stabilized by the Z/4 action. I am not sure that an elliptic component can exist but I don't know how to rule it out. The Z/4 action freely permutes the arc components. Hence the labeled arc components all come in clusters of 4. When we forget the labelling, we divide by 4. Define the following quantities:
• Ω(γ) is the number of unlabelled gracing squares.
• Ω ′ (γ) is the number of unlabelled sweepouts.
• Ω * (γ) is the number of elliptic components.
We will show that

Ω(γ) + Ω′(γ) + Ω*(γ) ≡ 0 (mod 2)

for an open dense set of polygons. The sweepouts and elliptic components are global in nature. For instance every point of γ except at most 4 is the vertex of a rectangle in a sweepout, and every vertex of γ is the vertex of a rectangle in an elliptic component. The fact that G(γ) has one of these global components is what powers the Trichotomy Theorem. By showing that Ω(γ) is odd we establish the following theorem.
Theorem 1.4
For an open dense set of polygons γ the sum Ω ′ (γ) + Ω * (γ) is odd. Thus G(γ) either has a sweepout or an elliptic component (or both).
In §2 we will study the problem of taking four lines L 0 , L 1 , L 2 , L 3 , not necessarily all distinct, and finding points a 0 , a 1 , a 2 , a 3 with a i ∈ L i for all indices i = 0, 1, 2, 3, such that a 0 , a 1 , a 2 , a 3 are the counterclockwise ordered vertices of a rectangle. We will identify a subset X 1 in the space X 0 of all such 4-tuples on which this problem has a nice solution. Lemma 2.3 states the result. Theorem 2.5 says that the complement X 0 − X 1 has codimension 2, and this is the key to our homotopy proof of Theorem 1.4.
In §3 we prove most of Theorem 1.3. The strategy is to define what we mean by a generic polygon precisely enough so that the manifold structure in Theorem 1.3 becomes an easy application of Lemma 2.3.
In §4 we finish the proof of Theorem 1.3 by showing that Ω(γ) is odd. In §5 we prove the Trichotomy Theorem by taking a suitable limit of either a sweepout or an elliptic component.
One thing I would like to mention is that I discovered all the results in this paper by computer experimentation. I wrote a Java program which computes the space G(γ) in an efficient way for polygonal loops γ having up to about 20 sides.
I would like to thank Peter Doyle, Cole Hugelmeyer, and Sergei Tabachnikov for helpful and interesting conversations about the ideas in this proof. Some of this work was done while I was at the Isaac Newton Institute in Spring 2017, supported by the INI and also by a Simons Sabbatical Fellowship. I was also supported by the National Science Foundation while completing this work. I would like to thank all these institutions for their generous support.
Rectangles and Lines
The Generic Case
Given V = (x, y) ∈ R 2 , we define V ′ = (−y, x). We get V ′ by rotating V counterclockwise by π/2 radians.
We consider 4 lines L 0 , L 1 , L 2 , L 3 ∈ R 2 . These lines need not be distinct. We say that a rectangle R graces our lines if R has vertices a 0 , a 1 , a 2 , a 3 with a j ∈ L j for all j, and these points go in counter-clockwise order around R. Now we find formulas for the rectangles which grace our lines.
We rotate so that none of these lines is vertical. This lets us find constants m_j, b_j so that

a_j(x) = (x, m_j x + b_j) ∈ L_j    (2)
for all x ∈ R. Given s, t, λ ∈ R we define the rectangle R = R(s, t, λ) with vertices a 0 (s), a 1 (t), a 2 (s, t) and a 3 (s, t), where
a 2 (s, t) = a 1 (t) + λV ′ , a 3 (s, t) = a 0 (s) + λV ′ , V = a 1 (t) − a 0 (s). (3)
The side connecting a 0 (s) to a 1 (t) is the first side of R. Hence, according to our definition of aspect ratio given in the introduction, R has aspect ratio λ. The conditions that a j (s, t) ∈ L j for j = 2, 3 lead to two equations in two unknowns:
ρ_j + σ_j s + τ_j t = 0,    j = 2, 3.    (4)
Each coefficient is linear in λ. When these equations do not describe parallel lines in (s, t)-space, their unique solution is given by
s = (S_0 + S_1 λ + S_2 λ²) / (U_0 + U_1 λ + U_2 λ²),    t = (T_0 + T_1 λ + T_2 λ²) / (U_0 + U_1 λ + U_2 λ²),    (5)
where
S_0 = −(b_0 − b_3)(m_1 − m_2),   T_0 = −(b_1 − b_2)(m_0 − m_3),   U_0 = (m_1 − m_2)(m_0 − m_3),
S_1 = −b_0 + b_1 − b_2 + b_3 − b_0 m_1 m_2 + b_3 m_1 m_2 + b_0 m_1 m_3 − b_2 m_1 m_3 − b_0 m_2 m_3 + b_1 m_2 m_3,
T_1 = −b_0 + b_1 − b_2 + b_3 − b_1 m_0 m_2 + b_3 m_0 m_2 + b_1 m_0 m_3 − b_2 m_0 m_3 − b_0 m_2 m_3 + b_1 m_2 m_3,
U_1 = m_0 − m_1 + m_2 + m_0 m_1 m_2 − m_3 − m_0 m_1 m_3 + m_0 m_2 m_3 − m_1 m_2 m_3,
S_2 = −(b_0 − b_1)(m_2 − m_3),   T_2 = −(b_0 − b_1)(m_2 − m_3),   U_2 = (m_0 − m_1)(m_2 − m_3).
We found this equation using Mathematica [W]. Note that the denominators only depend on the slopes.
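The closed form above can be cross-checked numerically: for fixed λ, the incidence conditions of Equation 4 are affine in (s, t), so the 2 × 2 system can be assembled from three residual evaluations and solved directly. A minimal sketch (the function names and the sample slopes and intercepts below are our own arbitrary choices):

```python
import numpy as np

def residuals(s, t, m, b, lam):
    """Residuals of the conditions a2 in L2 and a3 in L3 (Equation 4)
    for the rectangle R(s, t, lam) of Equation 3."""
    a0 = np.array([s, m[0] * s + b[0]])
    a1 = np.array([t, m[1] * t + b[1]])
    V = a1 - a0
    Vp = np.array([-V[1], V[0]])               # V rotated by pi/2
    a2, a3 = a1 + lam * Vp, a0 + lam * Vp
    r = np.array([a2[1] - m[2] * a2[0] - b[2],
                  a3[1] - m[3] * a3[0] - b[3]])
    return r, (a0, a1, a2, a3)

def solve_gracing(m, b, lam):
    """Solve the affine system for (s, t); valid when it is nonsingular.
    Three evaluations of the affine residual determine the system exactly."""
    r00, _ = residuals(0.0, 0.0, m, b, lam)
    r10, _ = residuals(1.0, 0.0, m, b, lam)
    r01, _ = residuals(0.0, 1.0, m, b, lam)
    A = np.column_stack([r10 - r00, r01 - r00])   # Jacobian in (s, t)
    s, t = np.linalg.solve(A, -r00)
    return residuals(s, t, m, b, lam)[1]
```

By construction the four points returned always form a rectangle of aspect ratio λ; the linear solve enforces only the incidence of a2 and a3 on L2 and L3.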
Definedness
In this generic situation, Equation 5 describes all the gracing rectangles. In the next section we will discuss the special case where it does not.
Lemma 2.1 The functions s(λ) and t(λ) are well defined provided that the lines are not all parallel.
Proof: For well-definedness, all we need is that some U j does not vanish. If both U 0 and U 2 vanish then three of the slopes are the same. When we set three of the m j variables to m and the other one to M we find that
U_1 = ±(1 + m²)(M − m). The sign depends on which three we have chosen. So, U_1 is nonzero as long as M ≠ m. ♠
We note certain coincidences that might happen for L.
1. Two lines of L are parallel or equal.
2. Two lines of L are perpendicular.
3. Three lines of L have a common intersection.
4. All 4 lines are distinct, and the line through L a ∩ L b and L c ∩ L d is perpendicular to the line L a . These indices are meant to be distinct.
We let χ(L) denote the number of such coincidences which occur. For instance, if three lines of L are parallel then χ(L) ≥ 3.
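For experimentation, the count χ(L) can be approximated in floating point. The sketch below is our own code (not the paper's); it handles coincidences of Types 1-3, omits Type 4 for brevity, and tests exact coincidences up to an arbitrary tolerance:

```python
import itertools
import numpy as np

def chi_partial(lines, tol=1e-9):
    """Count coincidences of Types 1-3 for four non-vertical lines, each
    given as (slope, intercept).  Type 4 is omitted in this sketch."""
    count = 0
    for a, b in itertools.combinations(range(4), 2):
        if abs(lines[a][0] - lines[b][0]) < tol:      # Type 1: parallel or equal
            count += 1
        if abs(lines[a][0] * lines[b][0] + 1) < tol:  # Type 2: perpendicular
            count += 1
    for trio in itertools.combinations(range(4), 3):  # Type 3: concurrent triple
        # dual vectors (m, -1, b); three pairwise non-parallel lines are
        # concurrent iff the determinant of their dual vectors vanishes
        duals = np.array([[lines[j][0], -1.0, lines[j][1]] for j in trio])
        nonparallel = all(abs(lines[p][0] - lines[q][0]) > tol
                          for p, q in itertools.combinations(trio, 2))
        if nonparallel and abs(np.linalg.det(duals)) < tol:
            count += 1
    return count
```

For a generic configuration this returns 0, and introducing a single perpendicular pair or a concurrent triple raises the count accordingly.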
Lemma 2.2 s(λ) and t(λ) are nontrivial rational functions if χ(L) ≤ 1.
Proof: We first work with s(λ). The condition that s(λ) is constant is a geometric one. It means that there is a single point on L 0 that is the vertex of infinitely many gracing rectangles. We will suppose that this happens and show that χ(L) ≥ 2. Given that the condition is geometric, we can translate the picture so that (0, 0) is the offending point. This gives b 0 = 0 and S 0 = S 1 = S 2 = 0. The condition S 0 = 0 gives b 3 = 0 or m 1 = m 2 . The condition S 2 = 0 gives b 1 = 0 or m 3 = m 2 . If m 1 = m 2 = m 3 then χ(L) ≥ 2. Now we consider the other 3 cases.
• Suppose b 1 = 0 and b 3 = 0. Then S 1 = b 2 (m 1 m 3 + 1). Note that b 2 = 0 because not all the lines contain the origin. This gives m 1 m 3 = −1, making L 1 and L 3 perpendicular. In this case L 0 , L 1 , L 3 contain the origin and L 1 , L 3 are perpendicular. Hence χ(L) ≥ 2.
• Suppose that b 3 = 0 and m 2 = m 3 = m. This gives χ(L) ≥ 1. If χ(L) = 1 then b 1 ≠ 0 and no lines are perpendicular to L 2 or L 3 . In particular, L 0 ∩ L 3 and L 1 ∩ L 2 are unique and distinct points. Thus, we can rotate so that m = 0. The condition S 1 = 0 now implies that b 1 = b 2 . The intersection L 1 ∩ L 2 lies on the y-axis. Since b 0 = 0 and b 3 = 0, the intersection L 0 ∩ L 3 also lies on the y-axis. But then the y-axis, which contains both L 0 ∩ L 3 and L 1 ∩ L 2 , is perpendicular to L 2 . This gives a coincidence of Type 4. Hence χ(L) ≥ 2.
• Suppose that b 1 = 0 and m 2 = m 1 . We get the same situation as in the previous case with the indices 1 and 3 swapped.
This completes the proof for s(λ). The proof for t(λ) is the same but with the indices suitably permuted. Alternatively, we can deduce this case from symmetry. Let ρ denote reflection in the y-axis. Let L * denote the set of lines ρ(L 1 ), ρ(L 0 ), ρ(L 2 ), ρ(L 3 ). Then s(λ) is constant with respect to L * if and only if t(λ) is constant with respect to L. Thus, the result for t(λ) follows from the result for s(λ) and symmetry. ♠
The following result is the key to most of our analysis. Lemma 2.3 Let L 0 , L 1 , L 2 , L 3 be 4 lines with χ(L) ≤ 1. Then there is a finite subset Σ ⊂ R with the following property. A rectangle R λ of aspect ratio λ graces the lines if and only if λ ∉ Σ. In this case, R λ is unique.
Proof: Since s(λ) and t(λ) are nontrivial rational functions, each choice of λ gives a unique rectangle of aspect ratio λ provided that these functions do not blow up at λ. The blow-up can only happen at finitely many points. ♠
The Coherent Case
The analysis above is incomplete when there is a value of λ such that the two equations in Equation 4 describe the same line. In this case, there is an extra line of solutions not captured by Equation 5, and we call our lines λ-coherent. We call λ the coherent value.
In the λ-coherent case, we have an infinite set of rectangles, all of aspect ratio λ, which grace the 4 lines. In particular, we can find 2 rectangles R 1 and R 2 of aspect ratio λ such that the line L j contains the jth vertex of R 1 and the jth vertex of R 2 , and all these vertices are unequal. Conversely, if we start with 2 such rectangles, and define the lines through their corresponding vertices, we produce a λ-coherent example. In short, all λ-coherent examples arise this way. It is worth recording this as a neat little theorem in similarity geometry.
Theorem 2.4 Let R 1 and R 2 be two labeled rectangles of the same aspect ratio λ, having pairwise distinct vertices. Let L 1 , L 2 , L 3 , L 4 denote the lines such that L j contains the jth vertex of R 1 and R 2 . Then there is a continuous 1-parameter family of rectangles, each of which graces L 1 , L 2 , L 3 , L 4 and has aspect ratio λ.
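Theorem 2.4 can be checked directly: two counter-clockwise labeled rectangles of the same aspect ratio have second sides obtained from their first sides by the same quarter-turn and scaling, so every affine combination (1 − u)R_1 + uR_2 is again a rectangle of that aspect ratio, and its jth vertex stays on the line through the jth vertices of R_1 and R_2. A numerical sketch with one arbitrarily chosen pair (our own code):

```python
import numpy as np

def rect(a0, side, lam):
    """Counter-clockwise rectangle with first vertex a0, first side `side`,
    and aspect ratio lam, as in Equation 3."""
    a0, V = np.asarray(a0, float), np.asarray(side, float)
    Vp = np.array([-V[1], V[0]])
    return np.array([a0, a0 + V, a0 + V + lam * Vp, a0 + lam * Vp])

lam = 0.7
R1 = rect((0.0, 0.0), (2.0, 1.0), lam)

# R2: a rotated, scaled, translated copy of R1 (same orientation, same lam)
theta, scale, shift = 0.5, 1.3, np.array([3.0, -1.0])
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
R2 = R1 @ (scale * rot).T + shift

def check_rectangle(R, lam, tol=1e-9):
    v01, v12 = R[1] - R[0], R[2] - R[1]
    assert abs(v01 @ v12) < tol                                   # right angle
    assert abs(np.linalg.norm(v12) - lam * np.linalg.norm(v01)) < tol
    assert np.allclose(R[3] - R[2], -v01, atol=tol)               # closes up

# every affine combination is a rectangle of ratio lam whose jth vertex
# lies on the line through R1[j] and R2[j] -- a 1-parameter family
for u in (-0.5, 0.25, 0.6, 1.7):
    Ru = (1 - u) * R1 + u * R2
    check_rectangle(Ru, lam)
    for j in range(4):
        d, e = R2[j] - R1[j], Ru[j] - R1[j]
        assert abs(d[0] * e[1] - d[1] * e[0]) < 1e-9              # collinear
```

The affine-combination argument works because a quarter-turn commutes with rotations and scalings; it reproduces the 1-parameter family promised by the theorem for this configuration.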
The λ-coherent case is very pretty, but we want to ignore it. Generically, the lines L 1 , L 2 , L 3 , L 4 are not λ-coherent for any value of λ. Also, for each particular value of λ, the set of λ-coherent lines has codimension 2. We only need to know this fact when λ = 1, so in this case we will justify the codimension 2 claim.
We rotate and translate the picture so that L 0 is the x-axis. That is, m 0 = b 0 = 0. It suffices to show that, amongst the set of 4-tuples with m 0 = b 0 = 0, the set of 1-coherent 4-tuples has codimension 2. If our lines are 1-coherent, then we must have ρ 2 σ 3 = ρ 3 σ 2 and ρ 2 τ 3 = ρ 3 τ 2 . This leads to the equations

m_1 = (m_2 − m_3 + m_2 m_3) / (1 + m_2 + m_2 m_3),    b_1 = (b_2 − b_3 + b_2 m_3) / (1 + m_2 + m_2 m_3).
These are 2 independent algebraic equations. Hence, the solution set is contained in a finite union of manifolds of codimension 2.
Codimension Two Conditions
Here we prove a result that will be very useful in proving our parity result about the number of sweepouts. Let X 0 be the set of all 4-tuples of lines. This is naturally a connected smooth manifold. Let X 1 ⊂ X 0 denote the set of configurations L such that χ(L) ≤ 1 and L is not 1-coherent.
Theorem 2.5 X 0 −X 1 is contained in a finite union of smooth submanifolds of codimension 2. In particular, X 1 is open dense in X 0 and path connected.
We will prove this result through two smaller lemmas. Since the set of 1-coherent configurations is contained in a finite union of codimension 2 manifolds, it suffices to analyze the configurations in X 0 − X 1 which are not 1-coherent. In other words, we just have to show that the set of configurations L having χ(L) > 1 is contained in a finite union of codimension 2 submanifolds.
In the definition of χ, there are finitely many coincidences. We enumerate these coincidences in some order. For instance, Coincidence 0 could be the event that the lines L 0 and L 1 are equal or coincide. Let M i ⊂ X 0 denote the set of those configurations which have coincidence i. There are 4 types of coincidences, and they are listed in §2.2. We define the Type of M i to be the type of the coincidence associated to it.
Let ζ ∈ M be a point. Let L be the configuration corresponding to ζ. We define a rotating deformation as follows: We choose a line of L and rotate that line about some point in the plane. Such a deformation defines a nontrivial tangent vector in the tangent space T ζ (X 0 ). Lemma 2.6 M i is a smooth codimension 1 submanifold.
Proof: We set M = M i . Let ζ ∈ M. We rotate so that none of the lines in the configuration L corresponding to ζ is vertical. We first show that M is the zero set of a smooth function in a neighborhood of ζ. There are 4 cases, corresponding to the types of coincidences.
1. If L a and L b are equal or parallel, then we set F (ζ) = m a − m b .
2. If L a and L b are perpendicular, then we set F (ζ) = m a m b + 1.
3. If L a , L b , L c have a common point, we set F (ζ) = det(V a , V b , V c ), where V a = (m a , −1, b a )
is the vector dual to L a , etc.
4. We have non-parallel lines L a and L b , and nonparallel lines L c and L d such that the line Λ through L a ∩ L b and L c ∩ L d is perpendicular to L a . The slope µ of Λ is a rational function of the slopes m a , m b , m c , m d . Thus, we set F (ζ) = m a µ + 1.
In all cases, a suitable rotating deformation destroys the coincidence. Therefore the differential dF is nonsingular at ζ. Our result now follows from the Implicit Function Theorem. ♠
Lemma 2.7 If i ≠ j then M i ∩ M j is a smooth manifold of codimension 2.
Proof: Let L be the configuration corresponding to ζ. The basic idea in the proof is to produce two rotating deformations V i and V j such that each deformation preserves one of the coincidences and destroys the other one. If M i and M j are defined by smooth functions in a neighborhood of ζ then the existence of V i and V j shows that the differentials dF i and dF j are linearly independent at ζ. This proves the result. What follows is a case-by-case analysis. Let (a, b) denote the pair such that M i has Type a and M j has Type b. We take a ≤ b.
1. Suppose b ≤ 2 or (a, b) = (3, 3). In this case, there is a line in Coincidence i that is not involved in Coincidence j and vice versa. Any rotating deformations with respect to these lines do the job.
2. Suppose a ≤ 2 and b = 3. The only way the argument in Case 1 breaks down is that both lines L α and L β involved in Coincidence i belong to the triple of lines L α , L β , L γ involved in Coincidence j. In this case, we let V i be the deformation that rotates L α about a point of L α ∩ L β ∩ L γ and let V j be any rotation deformation of L γ .
3. Suppose a ≤ 3 and b = 4. In Coincidence j, the line L α is perpendicular to the line through p αβ = L α ∩ L β and p γδ = L γ ∩ L δ . We call L α the special line and p αβ and p γδ the special points. Let α ′ ≠ α be an index of some line involved in Coincidence i. Let β ′ be the index of some line not involved in Coincidence i. We let V i be the deformation that rotates L α ′ about the special point that contains it and we let V j denote the rotation of L β ′ about a generic point.
4. Suppose (a, b) = (4, 4). We only have to worry about the case when the special points w.r.t. Coincidence i coincide with the special points w.r.t. Coincidence j. In this case, the special lines are distinct. For k ∈ {i, j} we let V k be the rotation of the special line w.r.t. Coincidence k about the special point it contains.
This completes the proof. ♠
Compatible Perturbations
Remark: This section is rather technical. It will help us in §3.3 when we need to fix the local properties of squares which grace a polygon and share a vertex with that polygon. Later we will call these critical rectangles. We recommend that the reader skim the section on the first pass.
Let L = (L 0 , L 1 , L 2 , L 3 ) be as above. Let Σ be as in Lemma 2.3. Given λ 0 ∈ R − Σ, there is a unique rectangle R(λ 0 ) of aspect ratio λ 0 that graces L. As λ varies in a neighborhood of λ 0 , the vertex a 0 (λ) varies as well. The point a 0 (λ) moves monotonically in a neighborhood of λ 0 provided that s ′ (λ 0 ) ≠ 0. Here s ′ = ds/dλ. Since s ′ is a rational function of λ, there is a finite set Σ 0 = Σ 0 (L) such that s ′ (λ 0 ) = 0 if and only if λ 0 ∈ Σ 0 .
Let L * be another such configuration. We assume that χ(L) ≤ 1. This means that χ(L * ) ≤ 1 provided that L * and L are sufficiently close. We also specify a point a 0 ∈ L 0 and a * 0 ∈ L * 0 . We call the pairs (L, a 0 ) and (L * , a * 0 ) compatible if the following is true.
• a * 0 = a 0 .
• If a 0 ∈ L j and j ≠ 2, then L * j = L j .
• L * i = L * j if and only if L i = L j .
In other words, L * is a perturbation of L in which we do not split apart any equal lines and we do not touch L 0 or any line adjacent to L 0 that contains a 0 .
We assume that there is some rectangle R gracing L which has a 0 as its first vertex. This means that λ 0 ∈ R − Σ(L), where λ 0 is the aspect ratio of R.
Lemma 2.8 Suppose that λ 0 ∈ Σ 0 (L). Then there is a configuration L * , arbitrarily close to L, with the following properties.
• (L, a 0 ) and (L * , a * 0 ) are compatible.
• a * 0 is the first vertex of a rectangle R * that graces L * .
• λ * 0 ∉ Σ 0 (L * ), where λ * 0 is the aspect ratio of R * .
Proof: Suppose first that the only lines in the set {L 3 , L 1 } which contain a 0 actually coincide with L 0 . We choose λ ′ ∈ R − Σ − Σ 0 (L) that is very close to λ 0 and let R ′ be the corresponding rectangle that graces L. Letting τ be the translation along L 0 that moves the vertex a ′ 0 of R ′ back to a 0 , we define L * = τ (L) and R * = τ (R ′ ). These objects have the desired properties.
It cannot happen that L 3 , L 0 , L 1 all contain a 0 because then L 3 and L 1 extend two adjacent sides of R and hence are perpendicular. This yields χ(L) ≥ 2. So, a 0 can be in at most 2 of these lines. The only case left to consider is when a 0 lies in L 0 and exactly one consecutive line, and these lines are distinct. We consider the case when a 0 = L 0 ∩ L 1 . The case when a 0 = L 3 ∩ L 0 has a similar treatment and indeed follows from symmetry.
If L 2 = L 3 then this common line contains two points of R, and L 1 contains the other two points. But then L 1 , L 2 , L 3 are all either parallel or equal. This yields χ(L) ≥ 2. Hence L 2 ≠ L 3 . If L 2 = L 1 then the points a 0 , a 1 , a 2 all lie on L 1 , and this is impossible for three vertices of a rectangle. Hence L 2 ≠ L 1 . If L 2 = L 0 then χ(L) ≥ 2 because L 2 ≠ L 3 and L 0 , L 1 , L 2 all contain a 0 . Hence L 2 ≠ L 0 . Similar arguments show that L 3 ≠ L 0 and L 3 ≠ L 1 . In short, all 4 lines are distinct and an arbitrary perturbation which just moves L 2 and L 3 is compatible with L provided that the perturbation is small enough.
Given the geometric nature of the situation, we can freely rotate and translate for the purpose of making the calculation easier. We rotate and translate so that a 0 = (0, 0) and L 0 and L 1 have nonzero slopes. This gives b 0 = b 1 = 0, and both m 0 and m 1 are nonzero. Given the algebraic nature of the situation, we just have to find some variation of just L 2 and L 3 which has the property that s ′ (λ) is either nonzero or infinite for all λ ∈ R. To do this, we set (m 2 , b 2 , m 3 , b 3 ) = (0, 0, 0, 1). With these choices, we compute
s'(λ) = m_1^2 / (m_0 m_1 + m_0 λ − m_1 λ)^2.
This expression is either nonzero or infinite for any λ ∈ R. ♠
The Moduli Space of Rectangles
The Basic Spaces
We fix an integer N ≥ 3. Let S 0 = S 0 (N) ⊂ (R 2 ) N denote the open set of all embedded labeled N-gons. We often suppress the dependence on N in our notation. The space S 0 inherits a metric space structure from (R 2 ) N and also a smooth manifold structure. Our first main goal of this chapter is to define an open dense subset S 3 ⊂ S 0 which consists of sufficiently generic polygons to make the proof of Theorem 1.3 a fairly easy consequence of Lemma 2.3. We call the vertices of γ ∈ S 0 by the names v 1 , ..., v N . We define the jth edge of γ to be the one which has v j as its lagging vertex. Thus, going around counter-clockwise, we encounter v 1 , e 1 , v 2 , e 2 , ....
We consider cyclically ordered quadruples in {1, ..., N} 4 of the following form:
(a, b, c, d), (a, a, b, c), (a, b, b, c), (a, b, c, c), (a, b, c, a).
Different-lettered indices are meant to correspond to different edges of γ. We call such quadruples relevant. The cyclic ordering means, for instance, that (N, 1, 2, 3) would be relevant. Given a relevant quadruple I, we let L I be the quadruple of lines extending the corresponding edges of γ.
Say that a chord of γ is a line segment joining two distinct vertices of γ. Let S 1 denote the set of embedded N-gons γ with the following properties.
1. No two distinct chords of γ are parallel or perpendicular.
2. Any relevant list L of lines has χ(L) ≤ 1. Here χ is as in §2.2.
By construction, S 1 is open dense in S 0 .
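Condition 1 is straightforward to test in coordinates. The sketch below (a hypothetical helper, not from the paper) represents each chord by a direction vector, which avoids special-casing vertical chords; a polygon passes exactly when no cross product or dot product of distinct chord directions vanishes.

```python
from itertools import combinations

def chords_generic(vertices, tol=1e-9):
    """Test Condition 1: no two distinct chords of the polygon are
    parallel or perpendicular.  Vertices are coordinate pairs."""
    chords = [(bx - ax, by - ay)
              for (ax, ay), (bx, by) in combinations(vertices, 2)]
    for (u1, v1), (u2, v2) in combinations(chords, 2):
        cross = u1 * v2 - u2 * v1   # zero iff the two chords are parallel
        dot = u1 * u2 + v1 * v2     # zero iff the two chords are perpendicular
        if abs(cross) < tol or abs(dot) < tol:
            return False
    return True
```

For example, a square fails (opposite edges are parallel), while a generic triangle passes.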
Lemma 3.1 Any rectangle that graces γ also graces a relevant quadruple of lines of γ.
Proof: Condition 1 guarantees that any rectangle R that graces γ ∈ S 1 has vertices on at least 3 distinct sides of γ. We order the vertices of R in increasing order and we let i j be the smallest possible index of an edge that contains the jth vertex of R. By construction the quadruple (i 0 , i 1 , i 2 , i 3 ) is of the kind considered above, and the corresponding quadruple of lines is graced by R. ♠ Note that there might be other indices associated to our rectangle R than the one we defined in the proof of the preceding lemma. This can happen if some of the vertices of R are also vertices of γ ∈ S 1 . We will study this situation carefully. We call R critical if at least one of the vertices of R is a vertex of γ.
Lemma 3.2 Suppose {γ n } is a sequence in S 1 which converges to γ ∈ S 1 . Suppose that R n is a critical rectangle associated to γ n . Then there is a uniform lower bound to the side lengths of R n .
Proof: Since γ n → γ there is a uniform lower bound to the distance between distinct vertices of γ n . Hence there is a uniform positive lower bound to the diameter of R n . So, we just have to worry about a pair of opposite sides of R n shrinking to points. Let v n be a vertex of R n that is also a vertex of γ n . Passing to a subsequence, we can assume that v n → v, a vertex of γ.
Once n is sufficiently large, the short side of R n incident to v n must lie in a single edge e n of γ n . Passing to a subsequence, we can assume that e n → e, an edge of γ.
Let w n be the vertex of R n opposite to v n . Passing to a subsequence, we can assume that w n → w ∈ γ. The short side of R n incident to w n has its endpoints in two different edges of γ n . Hence w is a vertex of γ. By construction, the line wv is perpendicular to e. This violates Condition 1 above. ♠
Eliminating Doubly Critical Rectangles
Suppose that R is a rectangle that graces γ. We call R doubly critical if R is a square or if at least two of the vertices of R are vertices of γ. Let S 2 ⊂ S 1 denote the set of polygons which have no doubly critical rectangles.
Lemma 3.3 S 2 is open in S 1 .
Proof: Suppose that γ ∈ S 1 is the limit of a sequence of polygons {γ n } in S 1 − S 2 having doubly critical rectangles. By Lemma 3.2 we can assume that R n → R, a rectangle that graces γ. By continuity, R is doubly critical.
Hence γ ∈ S 1 − S 2 . Hence S 1 − S 2 is closed in S 1 . Hence S 2 is open in S 1 . ♠
Lemma 3.4 Any γ ∈ S 1 has only finitely many doubly critical rectangles.
Proof: We consider the case when a critical rectangle has more than one vertex in common with γ. For any two distinct vertices v, w ∈ γ we consider the two lines V and W through v and w respectively which are perpendicular to the line vw. Condition 1 above guarantees that γ ∩ V and γ ∩ W are finite sets. If a doubly critical rectangle involves v and w as adjacent vertices, then there is a point of γ ∩ V that is the same distance from v as some point of γ ∩ W is from w. There are only finitely many intersection points like this. A similar argument, involving the circle with diameter vw, works when v and w are opposite vertices. Now we consider the square case. Every rectangle that graces γ is associated to some multi-index. There are only finitely many such multi-indices and Lemma 2.3, applied to the case λ = 1, tells us that there are only finitely many squares gracing the set of lines corresponding to each one. Hence there are only finitely many gracing squares and in particular only finitely many critical squares. ♠ Lemma 3.5 S 2 is dense in S 1 .
Proof: Now let γ ∈ S 1 be arbitrary. We will make a small perturbation of γ so as to eliminate any given doubly critical rectangle without creating any new ones. Doing this finitely many times, we have a perturbation, as small as we like, which eliminates all of them.
Consider the rectangle case first. We use the notation from the previous lemma. We simply perturb the edges of γ so that no point of γ ∩ V has the same distance from the vertex v as a point of γ ∩ W has from the vertex w. Likewise, we perturb so that no edge of γ intersects the circle with diameter vw through v and w at two antipodal points. This eliminates a doubly critical rectangle that involves both v and w.
We first eliminate all the critical rectangles which involve more than one vertex of γ. Now we turn to any critical squares that might remain. Let v be the vertex of a critical square and let w be the opposite vertex. From the order in which we have done things, w lies in a unique edge e of γ. We perturb γ by moving e parallel to itself slightly. This destroys the critical square. ♠
Eliminating Bad Indices
Remark: This is the section where we use the technical material from §2.5.
Let γ ∈ S 2 . By construction, γ has the property that its critical rectangles are non-square and only involve a single vertex. Let R be a labeled rectangle that graces γ and let I be a multi-index as above. We call R and I associates if the jth vertex of R lies on the edge e I j of γ. In this case, and only in this case, we mention the pair (I, R). Let a 0 , a 1 , a 2 , a 3 be the vertices of R. Let λ 0 be the aspect ratio of R. We call (I, R) bad if λ 0 ∈ Σ 0 (L), the set from the last chapter. We call I bad if there is some critical rectangle R such that (I, R) is bad.
Let S 3 (I) denote the subset of polygons in S 2 with respect to which the index I is not bad.
Lemma 3.6 S 3 (I) is open in S 2 .
Proof: Suppose that γ n ∈ S 2 − S 3 (I) converges to γ ∞ ∈ S 2 . Define • R n , the bad critical rectangle associated to I.
• L n , the set of lines extending sides of γ n associated to I.
• λ n , the aspect ratio of R n .
• s ′ n , the derivative ds/dλ computed with respect to L n . By Lemma 3.2 we can pass to a subsequence so that R n → R ∞ , a critical rectangle gracing γ ∞ that is associated to I. We can pass to a further subsequence so that everything else in sight converges in the same way. By continuity,
s ′ ∞ (λ ∞ ) = 0. Hence (I, R ∞ ) is bad. Hence γ ∞ ∈ S 2 − S 3 (I). ♠ Lemma 3.7 S 3 (I) is dense in S 2 .
Proof: Take an arbitrary γ ∈ S 2 for which I is a bad index. By Lemma 2.3 there are only finitely many critical rectangles associated to I. These are obtained by setting s(λ) to be equal to the first coordinate of one of the two endpoints of the relevant edge of γ and solving. We will explain how to perturb γ by an arbitrarily small amount so as to eliminate one of the bad pairs (I, R). Doing the same thing finitely many times, we eliminate all the bad pairs of the form (I, R). Suppose (I, R) is a bad pair. Let a 0 be the vertex of R which is also a vertex of γ. Let L = L I be the corresponding set of lines. Let L * and R * be the perturbed line configuration and perturbed rectangle guaranteed by Lemma 2.8. Note that a 0 is still a vertex of R * and the remaining vertices of R * are interior to edges of γ provided that the perturbation is small enough. Also, the aspect ratio of R * does not belong to the set Σ 0 (L * ).
Taking a small enough perturbation, the new lines we get are not parallel to any of the lines of γ that we have left alone. A polygon without any parallel consecutive sides is defined by the lines extending its edges. We let γ * be the polygon whose associated lines agree with those of γ for any index not in I and agree with those of L * for those indices in I.
By construction, R * is critical for γ * and not bad. So, we have eliminated the bad pair (I, R) using an arbitrarily small perturbation and we have not created any new bad pairs. ♠ Let S 3 = ∩ S 3 (I), the intersection being taken over all relevant multi-indices. By the results above, S 3 is open dense in S 2 .
The Space of Gracing Rectangles
We will establish Theorem 1.3 for polygons in S 3 . Let G(γ) denote the set of (counter-clockwise) labeled rectangles which grace γ ∈ S 3 . We identify G(γ) as a subset of R 2 × R 2 × R as follows. Each rectangle R corresponds to the data (a 0 , a 1 , λ). Here a 0 and a 1 are the first two vertices of R and λ is the aspect ratio.
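The encoding of a labeled rectangle by the point (a 0 , a 1 , λ) ∈ R 2 × R 2 × R is invertible: the remaining two vertices are recovered by moving perpendicular to the first side. A minimal sketch, assuming the labeling is counter-clockwise and that λ measures the side a 1 a 2 against the side a 0 a 1 :

```python
import numpy as np

def rectangle_from_data(a0, a1, lam):
    """Recover the four counter-clockwise vertices of the labeled
    rectangle encoded by (a0, a1, lam), with lam = |a1 a2| / |a0 a1|."""
    a0 = np.asarray(a0, dtype=float)
    a1 = np.asarray(a1, dtype=float)
    e = a1 - a0                          # first side a0 -> a1
    n = lam * np.array([-e[1], e[0]])    # perpendicular side, CCW turn
    return a0, a1, a1 + n, a0 + n
```

For example, the data ((0,0), (2,0), 1) encodes the unit-aspect square on the segment from (0,0) to (2,0).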
Theorem 3.8 For any polygon γ ∈ S 3 , the space G(γ) is a non-empty piecewise smooth 1-manifold. Each arc component of G(γ) is proper.
We prove this result through a series of smaller lemmas.
Lemma 3.9 G(γ) is a smooth 1-manifold in the neighborhood of any noncritical rectangle of G(γ).
Proof: Let R be such a rectangle. There is a unique multi-index I associated to R. Let L = L I . Let λ 0 be the aspect ratio of R. Every rectangle in G(γ) sufficiently close to R is associated uniquely to the same multi-index. Lemma 2.3 applied to L now tells us that the set of rectangles of G(γ) sufficiently close to R is parametrized precisely by the curve
λ → (s(λ), t(λ), λ), λ ∈ (λ 0 − ǫ, λ 0 + ǫ).
This gives a smooth and non-singular parametrization of G(γ) in a neighborhood of R. The parametrization is non-singular because the derivative of the last coordinate is always 1. ♠ Lemma 3.10 G(γ) is a piecewise smooth manifold in the neighborhood of any critical rectangle of G(γ). The non-smooth point occurs precisely at this critical rectangle.
Proof: Let R be a critical rectangle. We label R so that a 0 is the vertex of R that is also a vertex of γ. Let i 0 and j 0 be the indices of the two edges incident to a 0 . There is a unique multi-index I associated to R which has i 0 as the first index and there is a unique multi-index J associated to R having j 0 as the first index. The point is that the remaining vertices of R are contained in the interiors of unique edges. Let L I,0 be the line of L I incident to a 0 . Likewise define L J,0 . We first consider the picture with respect to L I . Let λ 0 be the aspect ratio of R. As we vary λ linearly away from λ 0 , there is a family of rectangles R I (λ) such that the corresponding vertex a 0 (I, λ) varies monotonically away from a 0 in a non-singular way. Given the monotone motion, precisely one of two things is true for λ sufficiently close to λ 0 :
• Negative case: R I,λ ∈ G(γ) if and only if λ < λ 0 .
• Positive case: R I,λ ∈ G(γ) if and only if λ > λ 0 .
In either case the subset of G(γ) near R and associated to I is a smoothly parametrized half-open arc whose endpoint is R. The same goes for J. Finally, any member of G(γ) sufficiently close to R is associated to one of I or J. Putting all this together, we see that in a neighborhood of R the space G(γ) is the union of two smooth and half-open arcs which meet at their common closed endpoint, namely R. ♠ Remark: We want to be a bit more explicit about how our argument works with relabeling. The space G(γ) really has 4 critical rectangles with the same unlabeled image as R. Each cyclic relabeling gives a smooth diffeomorphism from a neighborhood of R in the ambient space R 5 to a relabeled version of R. Thus, our analysis above, for the specific choice of labelings, implies that G(γ) looks the same up to ambient isomorphism around the 3 relabeled copies of R.
The last two results show that G(γ) is a piecewise smooth 1-manifold provided that it is nonempty. We will deal with the non-emptiness at the very end. First we analyze the (hypothetical) components of G(γ) that are arcs. As in the introduction, we call these arc components.
Lemma 3.11 Every arc component of G(γ) is proper.
Proof: Let A be an arc component. Suppose that A is not proper. This would mean that there is a sequence of points {ζ n } ∈ A that exits every compact subset of A but does not have aspect ratio tending to 0 or ∞. The only possibility is that the diameter of ζ n tends to 0. Otherwise, we could pass to a subsequence and take a well-defined limiting rectangle which would belong to G(γ). Note however that there is a lower bound to the diameter of any rectangle that graces γ because otherwise we could have all points of some rectangle on a union of two consecutive edges of γ. This is impossible. So, our sequence cannot shrink to a point. Hence A is proper. ♠ Let d : γ × γ → R + denote the distance function. That is,
d(a, b) = ‖a − b‖. (6)
Say that a stick is a quadruple of points (a, a, b, b) or (a, b, b, a) where a ≠ b and (a, b) is a critical point for d. The point (a, b) could either be a local max, a local min, or a saddle.
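For instance, the global maximum of d is always attained at a pair of vertices, and such a pair is a stick of type (a, a, b, b). A brute-force sketch (hypothetical helper; vertices given as coordinate pairs):

```python
from itertools import combinations
import math

def diameter_stick(vertices):
    """Return the vertex pair realizing the diameter of the polygon.
    This pair is a global (hence local) max of d, so it gives a stick."""
    return max(combinations(vertices, 2),
               key=lambda p: math.dist(p[0], p[1]))
```

Sticks of saddle or local-min type would require inspecting d on the whole boundary, not just at vertices; this sketch only exhibits the easiest one.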
To make the next lemma precise we augment our space R 2 × R 2 × R so that the last coordinate can take values in the compact arc [0, ∞]. That is, we compactify the last coordinate to include ∞.
Lemma 3.12 G(γ) contains rectangles arbitrarily close to any stick. More precisely, every stick is the end of a proper arc of G(γ).
Proof: This is a case-by-case analysis. There are 6 cases, as shown in Figure 2. In three of these cases, when one of the points of (a, b) lies in the interior of an edge of γ, the rectangles are easy to construct: They all have sides parallel to the stick. The remaining cases are tricky. We will consider the first case shown in Figure 2. The remaining two cases have similar treatments. The crucial feature in all these examples is that the lines perpendicular to the stick, drawn at the endpoints of the stick, do not separate the edges of the polygon. Figure 3 shows this in more detail for the first example. We apply a similarity so that the endpoints of the stick are (0, 0) and (0, 1). Let L 0 , L 1 , L 2 , L 3 be the lines shown. In terms of the data defining these lines we have b 0 = b 1 = 0 and b 2 = b 3 = 1. We also have m 0 , m 2 < 0 and m 1 , m 3 > 0. Referring to the equations in the previous chapter, we have
s(λ) = (S 0 + S 1 λ) / (U 0 + U 1 λ + U 2 λ 2 ), U 2 = (m 0 − m 1 )(m 2 − m 3 ) ≠ 0.
The coefficient S 2 vanishes. Hence, as λ → ∞, the first coordinate of a 0 (λ) converges to 0. Hence a 0 (λ) → (0, 0) as λ → ∞. The same kind of calculation shows that a 1 (λ) → (0, 0) as λ → ∞. Cycling the indices by 2, we get the similar result that a 2 (λ) → (0, 1) and a 3 (λ) → (0, 1). This proves that the rectangle of aspect ratio λ converges to the stick as λ → ∞. ♠ Lemma 3.13 Every end of every proper arc of G(γ) is a stick.
Proof: Let A be such a proper arc. As we exit one end of A, the aspect ratio of the associated rectangles tends to either 0 or ∞. Hence, there must be some line segment which is a limit for these rectangles. Call this line segment σ. The perpendiculars to σ at the endpoints are tangent to γ in the following generalized sense. If the endpoint lies in the interior of an edge, then the perpendicular simply equals this edge. If the endpoint is a vertex, then the perpendicular does not separate the two edges incident to the vertex. This result follows essentially from the mean value theorem applied to the part of the polygon between the vertices bounding the short ends of the rectangles. These properties imply that σ is a stick. There cannot be a second limiting stick because once the aspect ratio is sufficiently large the rectangles are trapped in a vicinity of σ. They would have to open back up in order to get away from σ. ♠ Corollary 3.14 For any γ ∈ S 3 , the number of unlabeled proper arc components of G(γ) equals half the number of unlabeled sticks.
Remark: From the thousands of examples I computed, it seems that every arc component joins a stick which is a saddle to a stick which is a local maximum or a local minimum. I have no idea why.
The Number of Gracing Squares
A Parity Result
Here we establish Equation 1 for γ ∈ S 3 (N).
Theorem 4.1 Ω(γ) + Ω ′ (γ) + Ω * (γ) is even.
Proof: We write
Ω = Ω arc + Ω loop , (7)
where Ω arc is the number of unlabeled squares on the union of the arc components and Ω loop is the number of unlabeled squares on the union of loop components. We first prove that Ω arc and Ω ′ have the same parity.
Here Ω ′ is the number of unlabeled sweepouts. It suffices to show that each arc component has an even number of squares if and only if it is a sweepout.
We say that the aspect ratio of a point of G(γ) is the aspect ratio of the rectangle that this point represents. In particular, ζ has aspect ratio 1 iff ζ represents a square. Let ζ ∈ G(γ) be such a point. Our analysis above shows that there is a small arc α ⊂ G(γ) containing ζ such that α is parametrized monotonically by the aspect ratios of the corresponding points. When the aspect ratio of ζ ′ ∈ α is slightly less than that of ζ, the point ζ ′ lies on one side of ζ. When the aspect ratio of ζ ′ is slightly greater than that of ζ the point ζ ′ is on the other side of ζ.
Consider the aspect ratio as a function on an arc component A. As we trace along A the aspect ratio passes through 1 an odd number of times if and only if the limiting aspect ratios at either end of A are different, i.e. if and only if A is a sweepout. Hence A is a sweepout if and only if it contains an odd number of points representing squares. Now we prove that Ω loop has the same parity as Ω * , the number of cycling components. The same argument as above shows that each loop component α has an even number of labeled squares on it. There are three cases to consider for the loop components. Let φ denote the operation of cyclically relabeling by one click, so to speak.
Case 1: Suppose that the orbit of α, namely {φ k (α)}, consists of 4 loops. Then the total number of labeled squares is a multiple of 8. Hence α contributes an even number to Ω loop .
Case 2: Suppose that the orbit consists of 2 loops. Then φ 2 fixes α and preserves the aspect ratios. The same argument as in the arc case shows that the quotient loop α/φ 2 has an even number of equivalence classes of labeled squares. But then α has a multiple of 4 labeled squares. Hence α ∪ φ(α) contains a multiple of 8 labeled squares. Again, α contributes an even number to Ω loop .
Case 3: Suppose that φ preserves α. Then α is a cycling component. α must contain a square because one can connect a rectangle R of aspect ratio less than 1 to the rectangle φ(R) of aspect ratio greater than 1. So, let Q be a square and let Q ′ be a rectangle which lies just a bit before Q in the cyclic order on α. When we go around α from Q to φ(Q) we reach φ(Q ′ ) just before reaching φ(Q). If the aspect ratio of Q ′ is less than 1 then the aspect ratio of φ(Q ′ ) is greater than 1. This means that we must have encountered an even number of labeled squares on the path between Q and φ(Q ′ ). Hence α contributes an odd number to Ω loop .
These three cases establish the fact that Ω loop and Ω * have the same parity. Hence Ω and Ω ′ + Ω * have the same parity. ♠
A Clean Case
The rest of the chapter is devoted to proving that Ω(γ) is odd when γ ∈ S 3 (N). We first consider a clean special case of the result. We fix some N ≥ 8.
We call a multi-index I = (i 1 , i 2 , i 3 , i 4 ) well separated if consecutive indices are unequal and not cyclically adjacent. Let S ′ 3 ⊂ S 0 denote the set of those polygons γ which satisfy the following conditions. 1. χ(L I ) ≤ 1 whenever I is well-separated.
2. A critical square of G(γ) has just one vertex in common with γ.
3. G(γ) has at most one critical square. 4. No bad pair of γ involves a critical square. See §3.3.
5. Any multi-index associated to a square in G(γ) is well-separated.
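Well-separatedness is purely combinatorial and easy to test directly. A sketch (hypothetical helper, not from the paper; edge indices taken mod N):

```python
def well_separated(I, N):
    """A quadruple of edge indices I = (i1, i2, i3, i4) of an N-gon is
    well separated if cyclically consecutive entries are neither equal
    nor indices of adjacent edges."""
    for k in range(4):
        a, b = I[k] % N, I[(k + 1) % 4] % N
        if (a - b) % N in (0, 1, N - 1):   # equal or cyclically adjacent
            return False
    return True
```

This also shows why N ≥ 8 is fixed here: for smaller N no quadruple of edges can be pairwise non-adjacent in this cyclic sense.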
We call a polygon ordinary if it has no critical squares.
Lemma 4.2 Suppose that γ −1 , γ 1 ∈ S ′ 3 are ordinary polygons that are connected by a continuous path in S ′ 3 . Then Ω(γ −1 ) and Ω(γ 1 ) have the same parity.
Proof: Let u → γ u for u ∈ [−1, 1] be our path. We first prove the result when γ u is ordinary for all u ∈ [−1, 1].
Let Q u be a square that graces γ u . There is a unique multi-index I that is associated to Q u , and it is well separated. Let L u be the configuration of lines associated to I. We rotate so that no lines of L u are vertical. By hypothesis, χ(L u ) ≤ 1. Let s u (λ) and t u (λ) be the two functions from Lemma 2.3 that are associated to L u . Given the existence of Q u , we know that s u (1) and t u (1) specify the first coordinates of the two vertices Q u,0 and Q u,1 and these vertices lie on the interiors of the edges of γ u .
Note that the functions s u and t u are continuous as functions of u and also the edges of γ u are continuous as functions of u. That means that for v sufficiently near u, there is a gracing square Q v near Q u associated to I as well. Indeed, the map v → Q v is continuous as a function of v. Hence there is a bijection between the squares gracing γ u and the squares gracing γ v provided that |v − u| is small enough. Hence the squares of G(γ u ) vary continuously with u and their number does not change. For later reference we call this the continuity argument.
Since S ′ 3 (N) is an open subset of (R 2 ) N , we can replace our path by one that is analytic. In this case, there are only finitely many non-ordinary points along the path. In light of the special case above, it suffices to prove the result for the restriction of our path to a tiny neighborhood of a non-ordinary point.
Composing with a continuous family of isometries, and cyclically relabeling, we can arrange that v 1,0 is the critical vertex and, for all u ∈ [−1, 1], we have
• v 1,u = (0, 0).
• e 0,u lies to the left of the y-axis.
• e 1,u lies to the right of the y-axis.
There are exactly 2 associated multi-indices I 0 and I 1 such that v 1,u lies on the first line of the associated configuration. Both of these are well-separated and the first line of L I j ,u extends e j,u . We set L j,u = L I j ,u . Let s j,u denote the version of the function s with respect to the configuration L j,u .
Shrinking the range of the path if necessary, using properties of S ′ 3 together with compactness and continuity, and finally scaling, we can find some ǫ > 0 so that
|s ′ j,u (λ)| > 1, ∀λ ∈ (1 − ǫ, 1 + ǫ), s j,u (1) ∈ (−ǫ, +ǫ) ∀ (j, u).(8)
There are 4 cases, depending on the signs of s ′ j,u for j = 0, 1. Suppose u ≠ 0. By construction, there is some λ j ∈ (1 − ǫ, 1 + ǫ) − {1} such that s j,u (λ j ) = 0. The key point is that λ 0 = λ 1 = λ, because these numbers both describe the aspect ratio of the same critical rectangle.
Suppose first that λ < 1. The functions s j,u are monotone throughout (1 − ǫ, 1 + ǫ). Hence s j,u (1) > 0 if and only if s ′ j,u > 0. If s 0,u (1) < 0 then this point is the vertex of the unique square that graces the configuration L 0,u . If we pick ǫ small enough, then this square also graces γ u . If s 0,u (1) > 0 then there is no such square. Similar statements hold for the index j = 1 with the signs reversed. We conclude from this that the total number of squares gracing γ u that are associated to I 0 or I 1 has the opposite parity as the number of indices j for which s ′ j,u is greater than 0. The same final count works when λ > 1.
In short, the number of squares gracing γ u that are associated to I 0 or I 1 is independent of u at least if we count mod 2. Also, the analysis for the remaining non-critical squares is as in the first special case we considered. So, the total count is the same mod 2 for all parameters. ♠
Connecting Polygons
We take N ≥ 8. We will work our way up to finding paths in S ′ 3 . Let S ′ 1 denote the subset of S 0 consisting of those polygons γ which satisfy Condition 1 from the previous section.
Lemma 4.3 Let γ ⊂ S 0 (N) be a continuous path connecting two points in S ′ 1 (N). Then there is an arbitrarily small smooth perturbation γ ′ of γ which has the same endpoints and lies entirely in S ′ 1 (N).
Proof: We can perturb γ so that it is smooth. Let X 0 and X 1 be the line configurations from Theorem 2.5. We say that a well-separated index I is good for γ if L I ∈ X 1 whenever L I is a configuration of lines associated to the index I with respect to a configuration along γ. Otherwise we say that I is bad for γ.
If I is good for γ then I is also good for any sufficiently small perturbation of γ. This follows from compactness and from the fact that X 1 is open in X 0 . So, we just have to show that there are arbitrarily small perturbations of γ with respect to which I is good. If we know this, then we can take an arbitrarily small perturbation of γ which decreases the number of bad indices until there are none.
The path γ determines a smooth path ζ ⊂ X 0 . We simply take the configurations L I with respect to polygons along γ. Since X 0 − X 1 has codimension 2 in X 0 , we can find a smooth path ζ ′ arbitrarily close to ζ which remains in X 1 .
Say that a pseudo-polygon is a cyclically ordered collection of lines. A pseudo-polygon determines a polygon unless there is a pair of successive lines which are either parallel or agree. In this case, we call the pseudo-polygon degenerate. On the other hand, a polygon always determines a pseudo-polygon which may or may not be degenerate.
We define a path γ ′ of pseudo-polygons as follows: For an index i ∉ I we vary the lines according to what γ does. When i ∈ I we vary the lines according to what ζ ′ does. There are two cases to consider. If all the polygons along γ give rise to non-degenerate pseudo-polygons, then by continuity and compactness we can arrange the same thing for γ ′ . In this case, γ ′ determines a polygon path with all the desired properties.
Suppose that the pseudo-polygon path defined by γ has some degenerate members. We can perturb γ so that there are only finitely many degenerate members and so that each degenerate member only involves a single pair of consecutive lines. By considering finitely many pieces of γ separately, we reduce to the case where γ just has a single degenerate member. We cyclically relabel so that e 0 and e 1 are the offending edges and v 1 is the vertex between them.
If I contains neither 0 nor 1 then we simply use the knowledge of the location of v 1 to reconstruct the polygon from the pseudo-polygon. In other words, we have no reason to forget the location of v 1 because the edges around it are doing the same thing with respect to γ ′ as they are with respect to γ.
Since I is well-separated, it cannot contain both 0 and 1. We will consider the case when 1 ∈ I. In this case we modify the path ζ ′ , just by translating the configurations slightly, so that the line corresponding to the index 1 always contains the vertex v 1 of the corresponding polygon along γ. Call the new path ζ ′′ . We can arrange that ζ ′′ ⊂ X 1 and that ζ ′′ is as close as we like to ζ. We now use γ ′′ , the pseudo-polygon path built from ζ ′′ , in place of γ ′ . We reconstruct a smooth family of polygons from the pseudo-polygons along ζ ′′ , and the vertex v 1 , just as in the previous case. The new path of polygons has all the desired properties. ♠

Let S ′ 2 denote the subset of S ′ 1 consisting of those polygons which satisfy Conditions 1-4 from the previous section.
Lemma 4.4 Let γ ⊂ S ′ 1 (N) be a continuous path connecting two points in S ′ 2 (N). Then there is an arbitrarily small smooth perturbation γ ′ of γ which has the same endpoints and lies entirely in S ′ 2 (N).
Proof: Our argument is similar to what we do in the proof of Lemma 3.5. The set of polygons which satisfy Conditions 1-k is an open subset of (R 2 ) N for any choice of k. So, it suffices to show that we can arrange Conditions 2, 3, 4 one at a time by perturbing a path that lies in S ′ 1 (N) and has endpoints in S ′ 2 (N). Consider Condition 2. The set of polygons in the ambient space S 0 (N) which have a gracing square with more than one vertex has codimension at least 2, because we can find an entire plane's worth of deformations which destroy the condition: we can move the polygon off the one vertex or the other independently. Since the set of polygons not satisfying Condition 2 has codimension at least 2, we can perturb our path so that it avoids this set. We make this perturbation.
Consider Condition 3. This is also a codimension 2 condition. If G(γ) has two critical gracing squares, then the two critical vertices are distinct and we can independently perturb so as to eliminate the one critical square or the other. Now we perturb our path to avoid this codimension 2 set. We make this perturbation as well.
Consider Condition 4. A polygon γ which does not satisfy Condition 4 has a critical square Q. Let I be the multi-index corresponding to Q as in Lemma 3.7. Let L 0 be the first line of the line configuration associated to v. There are two independent variations which destroy the bad critical square.
• We can vary v along the line L 0 . By Lemma 2.3 this produces new polygons which do not have nearby critical squares. The nearby critical rectangles have aspect ratios unequal to 1.
• We can take the variation from Lemma 3.7. This variation does not move v at all, and results in polygons which do not have bad critical rectangles at v.
This shows that the set of polygons which fail to satisfy Condition 4 has codimension 2 in the ambient space S 0 (N). So, we can perturb our path to avoid this set. Once we make this last perturbation, we are done. ♠
A Subdivision Trick
In this section we start with a polygon γ ∈ S 3 (N) and we produce a new ordinary polygon γ * ∈ S ′ 3 (N + K 2 ) with 3 properties: • No side of γ * has length greater than ǫ.
• γ * and γ are within ǫ of each other with respect to the Hausdorff metric.
• G(γ) and G(γ * ) have the same number of gracing squares.
The process is canonical once we fix the data (K 1 , K 2 , ρ). Here K 1 and K 2 are positive integers and ρ ∈ (0, 1).
We say that the ρ-position on an edge v 1 v 2 is the point ρv 1 + (1 − ρ)v 2 . Here is our procedure. At the kth step we place a new vertex in ρ-position on an edge. Next, we move this vertex outward, normal to edge k, a distance 2 −K 1 −k . Finally, we move to the next edge of whatever polygon we have created. We do this for K 2 steps. We call this the bending process.
We also mention the placebo process, where we place the new vertices but do not make any motions. When we do the placebo process, we get the same underlying polygon as γ, except that many extra vertices are inserted. If we hold K 2 and ρ fixed and let K 1 → ∞, then the bent polygon converges to the placebo polygon.
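The ρ-position and one step of the bending process can be sketched numerically. This is an illustration, not code from the paper; in particular, which of the two edge normals is "outward" depends on the polygon's orientation, and here we simply fix the left-hand normal.

```python
import math

def rho_position(v1, v2, rho):
    """The rho-position on the edge v1 v2: the point rho*v1 + (1 - rho)*v2."""
    return (rho * v1[0] + (1 - rho) * v2[0],
            rho * v1[1] + (1 - rho) * v2[1])

def bend_step(v1, v2, rho, K1, k):
    """One step of the bending process: place a new vertex in rho-position
    on the edge v1 v2, then push it along the (left-hand) edge normal by
    2**(-K1 - k)."""
    px, py = rho_position(v1, v2, rho)
    ex, ey = v2[0] - v1[0], v2[1] - v1[1]
    L = math.hypot(ex, ey)
    nx, ny = -ey / L, ex / L          # unit left normal of the edge
    t = 2.0 ** (-K1 - k)
    return (px + nx * t, py + ny * t)

# As K1 grows, the bent vertex converges to the placebo vertex,
# matching the limit described above.
v1, v2 = (0.0, 0.0), (1.0, 0.0)
placebo = rho_position(v1, v2, 0.3)
for K1 in (5, 10, 20):
    bx, by = bend_step(v1, v2, 0.3, K1, 1)
    assert math.hypot(bx - placebo[0], by - placebo[1]) == 2.0 ** (-K1 - 1)
```

Holding K 2 and ρ fixed and sending K 1 → ∞ makes every displacement 2^(−K 1 −k) vanish, which is exactly the convergence to the placebo polygon.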
We pick some large value of K 2 so that each edge gets subdivided many times. Next, we pick ρ so that none of the subdivision points in the placebo polygon is the vertex of a gracing square. The same principle as our continuity argument in Lemma 4.2, combined with induction on K 2 , shows that once K 1 is large enough the polygons γ * and γ have the same number of gracing squares. Moreover, there is a canonical bijection between these gracing squares, and each square in G(γ * ) converges to the corresponding square in G(γ) as K 1 → ∞.
Suppose now that Γ is a continuous path of the form u → γ u . Here γ u ∈ S 0 (N) for all u ∈ [0, 1]. Once we fix the seed (K 1 , K 2 , ρ), we can make our construction simultaneously for all γ u . If our original path Γ belongs to S 0 (N) then our new path Γ * belongs to S 0 (N + K 2 ). The new path Γ * is given by u → γ * u . We don't have much control over how the inserted points interact with the critical squares during the path, but we can control what happens at the endpoints, as above, if those endpoints belong to S 3 (N). This lack of control in the middle won't bother us because we will make a further perturbation.
For each path Γ we let ⟨Γ⟩ denote the infimal side length of a square in G(γ u ), over all u ∈ [0, 1]. By compactness, ⟨Γ⟩ > 0.
Lemma 4.5 Let Γ be some path. Then there is some ǫ 0 > 0 such that ⟨Γ ′ ⟩ > ǫ 0 for any sufficiently small perturbation Γ ′ of Γ.
Proof: Let γ ′ be a polygon in a small perturbation Γ ′ of Γ. By compactness, there is a uniform positive lower bound to the distance between any two non-adjacent edges of γ ′ . Therefore, any sufficiently small square gracing γ ′ would have to have all its vertices on two consecutive sides. This is impossible. ♠
The End of the Proof
In this section we prove that Ω(γ) is odd for any γ ∈ S 3 (N). Suppose that γ 0 and γ 1 are two elements of S 3 (N). We first connect γ 0 to γ 1 by a path Γ ⊂ S 0 (N). By Lemma 4.5, there is some ǫ 0 > 0 so that any sufficiently small perturbation Γ ′ of Γ satisfies ⟨Γ ′ ⟩ > ǫ 0 . We will take all our perturbations to have this property without saying so explicitly.
We first choose data (K 1 , K 2 , ρ) and consider the subdivided path Γ * ⊂ S 0 (N + K 2 ).
If we pick this data appropriately, then we can arrange that every polygon edge in sight has length less than ǫ 0 /100. We can arrange, moreover, that the endpoints of Γ * are ordinary points of S ′ 3 (N + K 2 ). We explained this at the beginning of the last section.
By the results in the preceding section we can perturb Γ * to a new path Γ * * with the following virtues.
• Γ * * ⊂ S ′ 2 .
• No edge of any polygon along Γ * * has length greater than ǫ 0 /10.
• ⟨Γ * * ⟩ > ǫ 0 .
Under these conditions, any multi-index associated to any gracing square of any G(γ * * u ) is well separated. This shows that Γ * * ⊂ S ′ 3 . Lemma 4.2 now shows that Ω(γ 0 ) and Ω(γ 1 ) have the same parity.
For any N, we can give examples of γ ∈ S 3 (N) such that Ω(γ) is odd. We simply take an equilateral triangle and suitably subdivide it as above. This completes the proof that Ω(γ) is always odd.
The fact that Ω(γ) is odd combines with Equation 1 to establish Theorem 1.4.
The Trichotomy Theorem
The Circular Invariant
Let S 1 be the unit circle. Let Σ ⊂ (S 1 ) 4 denote the subset of distinct quadruples, which go counterclockwise around S 1 . Any member σ ∈ Σ corresponds to points which cut S 1 into 4 arcs A 0 , A 1 , A 2 , A 3 . Let |A j | be the arc length of A j . We define the circular invariant of the points to be
Λ(σ) = (|A 0 | + |A 2 |) / (|A 1 | + |A 3 |). (9)
When σ consists of the vertices of a rectangle Λ(σ) is the aspect ratio of this rectangle. Otherwise Λ(σ) is only vaguely like an aspect ratio. For each integer K ≥ 1 we let Σ(K) denote those members σ such that Λ(σ) ∈ [K −1 , K]. Note that Σ(K) is not compact. For instance, we can let two successive arcs shrink to points while keeping the other two large. This situation cannot happen, however, when these quadruples are tied to gracing rectangles by a homeomorphism.
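For concreteness, the circular invariant of Equation 9 can be computed directly from four counterclockwise angles on the unit circle. This is an illustrative sketch, not part of the paper; it also checks the relabeling symmetry used later: shifting the labels by one swaps the two arc pairs and replaces Λ by its reciprocal.

```python
import math

def arc_lengths(sigma):
    """Arc lengths A_0..A_3 cut out of the unit circle by four
    counterclockwise angles sigma = (t0, t1, t2, t3)."""
    return [(sigma[(k + 1) % 4] - sigma[k]) % (2 * math.pi) for k in range(4)]

def circular_invariant(sigma):
    """Lambda(sigma) = (|A0| + |A2|) / (|A1| + |A3|), as in Equation (9)."""
    A = arc_lengths(sigma)
    return (A[0] + A[2]) / (A[1] + A[3])

# The vertices of an inscribed square cut the circle into equal arcs,
# so Lambda = 1, matching the aspect ratio of the square.
square = (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)
assert abs(circular_invariant(square) - 1.0) < 1e-12

# Cyclically shifting the labels by one inverts Lambda.
sigma = (0.0, 0.3, 2.0, 2.5)
shifted = sigma[1:] + sigma[:1]
assert abs(circular_invariant(shifted) - 1.0 / circular_invariant(sigma)) < 1e-12
```

The relabeling identity is what lets an elliptic component containing invariants near 0 also contain invariants near ∞ in Lemma 5.5.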
Suppose that φ : S 1 → γ is some homeomorphism from S 1 to a Jordan loop. In [Tv], Tverberg gives a way to approximate γ by a sequence {γ n } of parametrized embedded polygons so that the parametrizations φ n : S 1 → γ n converge uniformly to φ.
Lemma 5.1 Fix some positive integer K. Let {σ n } be a sequence of elements of Σ such that φ n (σ n ) is the vertex set of a rectangle gracing γ n and σ n ∈ Σ(K) for all n. Then the set {σ n } is precompact.
Proof: It suffices to show that there is a uniformly positive distance between consecutive points of σ n . We let σ n,k denote the kth point of σ n . Let R n be the rectangle whose vertices are φ n (σ n ). Suppose that |σ n,0 − σ n,1 | → 0.
We claim that the distance between σ n,2 and σ n,3 converges to 0 as well. Suppose not. Since φ n → φ uniformly, the first side of R n is converging to a point. However, the opposite side is not converging to a point. This is impossible for a rectangle. Hence |σ n,2 − σ n,3 | → 0 as well. Given the bound on Λ(σ n ), this can only happen if all the associated arcs shrink to points. This is impossible because the sum of the arc lengths is 2π. ♠
Hausdorff Limits
Suppose that C is a compact metric space. Let X C denote the set of closed subsets of C. We define the Hausdorff distance between closed subsets A, B ⊂ C to be the infimal ǫ such that each of the two sets is contained in the ǫ-tubular neighborhood of the other one. This definition makes X C into a compact metric space.
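For finite subsets of a metric space this definition reduces to a max-min formula, which the following sketch (an illustration, not from the paper) computes directly.

```python
def hausdorff_distance(A, B, dist):
    """Hausdorff distance between two finite nonempty subsets A, B of a
    metric space with metric dist: the infimal eps such that each set
    lies in the eps-neighborhood of the other."""
    d_AB = max(min(dist(a, b) for b in B) for a in A)
    d_BA = max(min(dist(a, b) for a in A) for b in B)
    return max(d_AB, d_BA)

d = lambda x, y: abs(x - y)
assert hausdorff_distance([0.0, 1.0], [0.0, 1.0], d) == 0.0
# {0, 1} vs {0, 2}: the point 2 is at distance 1 from {0, 1}.
assert hausdorff_distance([0.0, 1.0], [0.0, 2.0], d) == 1.0
```

Note the asymmetric quantities d_AB and d_BA can differ; the Hausdorff distance takes the larger of the two, which is what makes it a metric.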
Lemma 5.2 Suppose that a sequence {A n } of connected sets in C converges to a set A. Then A is connected.
Proof: Suppose A is disconnected. Then there are disjoint open sets U, V ⊂ C such that A ⊂ U ∪ V and A ∩ U and A ∩ V are both nonempty. Since A is compact, there is some positive ǫ > 0 such that all points of A ∩ U are at least ǫ from all points of C − U. Likewise for A ∩ V . For all sufficiently large n, the connected set A n intersects both U and V and therefore contains a point x n ∈ C − U − V . But then x n is at least ǫ from A. This contradicts the fact that A n → A in the Hausdorff metric. ♠

Note that the limit A need not be path connected. Consider a sequence of path approximations to the topologist's sine curve.
Limits of Sweepouts
Let φ : S 1 → γ be an arbitrary homeomorphism. Let φ n : S 1 → γ n be as in the previous section. Slightly perturbing these polygons, we can assume that Theorems 1.3 and 1.4 hold for all n. In this section we will consider the case when γ n has a sweepout for all n. In the next section we will consider the case when G(γ n ) has an elliptic component for all n.
We call a subset Y ⊂ Σ extensive if Y is connected and if Λ(Y ) = (0, ∞). That is, every possible circular invariant is achieved for points in Y .
Lemma 5.3
There is an extensive subset Y ⊂ Σ with the property that for each σ ∈ Y the rectangle with vertices φ(σ) belongs to G(γ).
Proof: By hypothesis, γ n has a sweepout A n . This sweepout defines a continuous path Y n ⊂ Σ such that Λ(Y n ) = (0, ∞). Here Y n is such that φ n maps each member of Y n to the vertices of a rectangle in the sweepout.
Even though Σ(K) is not compact, Lemma 5.1 tells us that there is some compact subset C(K) ⊂ Σ(K) such that Y n ∩ Σ(K) ⊂ C(K) for all n. We can take C(K) to be the set of all accumulation points of sequences drawn from the sets Y n ∩ Σ(K).
For each n and for each K ≤ n we can construct nested continuous paths Y (n, 1) ⊂ Y (n, 2) ⊂ ... ⊂ Y (n, n)
such that the endpoints of Y (n, K) have circular invariants 1/K and K, and Y (n, K) ⊂ C(K). Using the Cantor diagonal trick we can pass to a subsequence so that the Hausdorff limits
Y (K) = lim n→∞ Y (n, K) ⊂ C(K) (11)
exist simultaneously. By the lemma in the previous section, Y (K) is connected for all K. By construction Y (K) ⊂ Y (K + 1) for all K. A nested union of connected sets is connected. Therefore Y = ∪ K Y (K) is connected. By construction Y is extensive. Finally, φ(σ) is the set of vertices of a rectangle that graces γ, for every σ ∈ Y . ♠
Limits of Elliptic Components
Now we will suppose that the polygon γ n is such that G(γ n ) has an elliptic component ζ n for all n.
Lemma 5.4 Suppose there is a uniform positive lower bound to the side lengths of the rectangles in ζ n . Then there is a connected subset S ⊂ G(γ) consisting of rectangles having uniformly large side lengths, such that every point of γ is the vertex of some member of S, and some member of S is a square.
Proof: Let Y n ⊂ Σ be the subset corresponding to ζ n . By hypothesis there is a single compact subset K ⊂ Σ such that Y n ⊂ K for all n. Passing to a subsequence, take the Hausdorff limit Y = lim Y n . The set Y is connected. We let S = φ(Y ). By construction, S ⊂ G(γ). Every vertex of γ n is the vertex of a rectangle corresponding to ζ n . Let v be a point of γ. Let v n ∈ γ n be a point that converges to v. Let R n be one of the rectangles in ζ n which has v n as a vertex. Given the diameter bound, the limit R = lim R n exists and belongs to S.
Since ζ n has a square for all n, we can take the limit of these squares to get a point in S corresponding to a square that graces γ. ♠
The preceding result peels off the elliptic case of the Trichotomy Theorem. Let ǫ n denote the infimal side length of a rectangle corresponding to ζ n . From now on we suppose that ǫ n → 0.
Lemma 5.5 There is an extensive subset Y ⊂ Σ with the property that for each σ ∈ Y the rectangle with vertices φ(σ) belongs to G(γ).
Proof: Let Y n ⊂ Σ be the set which corresponds to ζ n . Since there is no lower bound on the side length of rectangles in ζ n , there must be points in Y n whose circular invariant tends to 0. But then, given the invariance under relabeling, we see that Y n also contains points whose circular invariant tends to ∞. But then the sets {Y n } behave exactly as they do in the sweepout case. The rest of the proof is the same as in Lemma 5.3. ♠
The Hyperbolic Case
We work with the set Y constructed in Lemma 5.3 or Lemma 5.5. Suppose there is a positive lower bound to the diameter of members of Y .
Lemma 5.6 G(γ) has a connected subset which contains points representing rectangles of every aspect ratio.
Proof: The set S = φ(Y ) is a connected subset of G(γ). The set of aspect ratios achieved by members of S is connected because S is connected. We just have to show that S contains members having aspect ratio arbitrarily close to 0 and aspect ratio arbitrarily close to ∞. If σ ∈ Y is a set with large diameter and very small circular invariant, then two consecutive points of σ are very close together and the adjacent two consecutive points are not. But then the corresponding rectangle R with vertices φ(σ) must have a very short side and also a long side. Keeping track of the labeling, we see that the aspect ratio of R is close to 0. Hence there are rectangles corresponding to points in S having aspect ratio arbitrarily close to 0. Likewise for ∞. ♠

Let Z ⊂ S 1 denote the set of points which do not occur as entries of members of Y . Each point of φ(S 1 − Z) is the vertex of a rectangle that graces γ.
Lemma 5.7 Z contains at most 4 points.
Proof: Suppose Z has cardinality at least 5. We choose any 5 points of Z and let J 0 , J 1 , J 2 , J 3 , J 4 be the complementary intervals. We label so that these intervals go around counter-clockwise. Because Y is connected, the kth entry of any σ ∈ Y is trapped in some fixed interval J i k , for all σ ∈ Y . We label so that i 0 = 0. Given our counter-clockwise ordering, we have i 0 ≤ i 1 ≤ i 2 ≤ i 3 .
Given that Y contains members in which the points a 0 and a 1 are arbitrarily close together, we must have i 1 ≤ i 0 + 1. (Here a k denotes the kth entry of σ ∈ Y .) The same goes for the points a 1 and a 2 : this forces i 2 ≤ i 1 + 1. Likewise i 3 ≤ i 2 + 1. But then no entry of any member of Y lies in the arc J 4 , which prevents a 3 and a 0 from bounding the arbitrarily small circular arc required when the circular invariant is large. This is a contradiction. ♠
The Parabolic Case
Finally, we suppose that there is no uniform lower bound to the diameter of subsets of Y .
Lemma 5.8 G(γ) has a connected set S which contains gracing rectangles of arbitrarily small diameter. All but at most 2 points of γ are vertices of rectangles corresponding to points in S.

Proof: Let S = φ(Y ), as usual. The first statement follows immediately from the fact that Y is connected and contains members of arbitrarily small diameter.
For the second statement we use the same notation as in Lemma 5.7, now supposing for a contradiction that Z contains at least 3 points. This time we have arcs J 0 , J 1 , J 2 . Let σ ∈ Y be a member with very small diameter. Then one of the four arcs determined by σ must be nearly all of S 1 and all the points of σ are bunched together. Call the index of this large arc the special index. Let's say that the special index is 3. Then we must have i 0 , i 1 , i 2 , i 3 ∈ {0, 1}. The arc J 2 is empty.
On the other hand, given that the circular invariant ranges all the way from 0 to ∞, there are other points of Y in which the arc between a 3 and a 0 is arbitrarily small. But J 2 separates these points. This is a contradiction. We get similar contradictions if we suppose that a different index is special. ♠
We have considered all the cases. This completes the proof of the Trichotomy Theorem.
Non-Atomic Measures
Here we explain remark (iv) made just after the statement of our Main Theorem. Suppose that µ is a non-atomic probability measure on γ. Then we can parametrize γ continuously so that µ is the pushforward of linear measure on the circle. In this case, the quantities in Equation 9 are just the µ-measures of the respective arcs.
In the case of the elliptic component, for every rectangle σ ′ ∈ S with Λ(σ ′ ) = λ 0 there is another (relabeled) rectangle σ ′′ ∈ S with Λ(σ ′′ ) = 1/λ 0 . Hence, by connectedness, there is some σ ∈ S with Λ(σ) = 1. In the remaining cases, we have Λ(S) = (0, ∞) and so we have Λ(σ) = 1 for some σ ∈ S.
Figure 1: A sweepout on an equilateral triangle.

Lemma 3.6 S 3 (I) is open in S 2 .

Figure 2: Rectangles near sticks.

Figure 3: Rectangles near sticks.
. Suppose (a, b) = (4, 4). Let α ′ be such that L α ′ is not the special line with respect to either coincidence. Let V i be the deformation obtained by rotating L α ′ about the special point w.r.t. Coincidence i that it contains. Likewise define V j . These deformations have the desired properties unless the two relevant special points coincide. So, we just
[AA] A. Akopyan and S. Avvakumov, Any cyclic quadrilateral can be inscribed in any closed convex smooth curve, arXiv:1712.10205v1 (2017).

[ACFST] J. Aslam, S. Chen, F. Frick, S. Saloff-Coste, L. Setiabrate, H. Thomas, Splitting loops and necklaces: Variants of the Square Peg Problem, arXiv:1806.02484 (2018).

[H] C. Hugelmeyer, Every smooth Jordan curve has an inscribed rectangle with aspect ratio equal to √3, arXiv:1803.07417 (2018).

[M] B. Matschke, A survey on the Square Peg Problem, Notices of the A.M.S. 61.4 (April 2014), pp. 346-351.

[T] T. Tao, An integration approach to the Toeplitz square peg conjecture, Forum of Mathematics, Sigma 5 (2017).

[Tv] H. Tverberg, A proof of the Jordan Curve Theorem, Bulletin of the London Math Society (1980), pp. 34-38.

[W] S. Wolfram, The Mathematica Book, 4th ed., Wolfram Media/Cambridge University Press, Champaign/Cambridge (1999).
ON THE FEASIBILITY AND GENERALITY OF PATCH-BASED ADVERSARIAL ATTACKS ON SEMANTIC SEGMENTATION PROBLEMS

A PREPRINT

Soma Kontár and András Horváth
Faculty of Information Technology and Bionics, Peter Pazmany Catholic University, Práter u. 50, 1083 Budapest

May 24, 2022

arXiv:2205.10539, doi:10.48550/arxiv.2205.10539

Abstract: Deep neural networks were applied with success in a myriad of applications, but in safety-critical use cases adversarial attacks still pose a significant threat. These attacks were demonstrated on various classification and detection tasks and are usually considered general in the sense that arbitrary network outputs can be generated by them. In this paper we will demonstrate through simple case studies, both in simulation and in real life, that patch-based attacks can be utilised to alter the output of segmentation networks. Through a few examples and the investigation of network complexity, we will also demonstrate that the number of possible output maps which can be generated via patch-based attacks of a given size is typically smaller than the area they affect or the areas which should be attacked in case of practical applications. We will prove that, based on these results, most patch-based attacks cannot be general in practice: they cannot generate arbitrary output maps, or if they could, they are spatially limited, and this limit is significantly smaller than the receptive field of the patches.
Introduction
With the application of deep neural networks becoming mainstream in our everyday lives, questions and concerns about the robustness and reliability of these networks are also becoming ever more important. Adversarial attacks targeting the vulnerabilities of neural networks were investigated heavily in the past years. These attacks are considered general in the sense that with the proper optimization techniques arbitrary outputs can be generated by them, regardless of the input image, which means that these attacks pose a significant threat in practical applications.
Adversarial attacks were first introduced in [1] and they have revealed an important aspect of deep neural networks: although they generalise well and work properly not just on the typical input set, but also on similar inputs, they can be exploited by malevolent attackers, since inputs are high dimensional and one can easily generate non-real life samples, which fall extremely far from both human judgement and the expected outcome.
In the following years, the possibilities of exploiting adversarial attacks were investigated extensively, building on the original authors' findings [2], [3], [4], [5]: devising new attack strategies, improving the robustness of the generated attacks [6], [7], and enabling black-box attacks, in which case the gradients of the network are not necessary [8], [9], [10]. Later, adversarial attacks were also presented on more complex tasks than classification, like detection and localisation problems [11], and on various network architectures (e.g. Faster R-CNN [12]).
The first attacks in classification and detection problems applied minor, low-intensity perturbations over the whole image, as in [2] for classification. It was later demonstrated that low-intensity attacks are not at all robust in the wild [13]. In practice this specially crafted additive noise is usually overwhelmed by perspective distortion and additive noise in the environment (e.g. illumination changes). It is also not life-like, since the attacker would need access to the image processing system to modify all elements of the input, instead of modifying a real-world object.
Although real-life and robust attacks were demonstrated for classification and detection problems, in the case of segmentation problems only low-intensity attacks were investigated [14], [15]. Segmentation problems are also more complex and their output depends on fine details of the input samples. In case of classification one expects that the output class should not depend at all on the pose of the investigated object, and in case of detection problems only small changes should appear. If the object rotates slightly or changes its pose (e.g. a person moves his arm), output classes should remain exactly the same and bounding boxes should change only slightly, whereas segmentation masks might change rather significantly. Based on this, one could expect that segmentation networks are more robust towards real-life adversarial attacks.
It was demonstrated in [14] and [15] that networks trained for semantic segmentation problems can also be attacked with low-intensity noise and the authors could generate arbitrary output maps with the proper additive noise. Although it was never proven that these methods can generate arbitrary output maps, the authors have demonstrated that highly uncorrelated and randomly selected output maps could be achieved, which gave rise to the general belief in the scientific community that these low-intensity approaches can result in arbitrary output maps.
In [16] it is shown that state-of-the-art semantic segmentation networks are vulnerable to some indirect local adversarial attacks -in the attack scenario a patch is placed in the environment, creating "dead zones" for a particular class of objects. While this does show that some networks are vulnerable to patch-based adversarial attacks, the authors found that models with a bigger field of view are more sensitive to these kinds of attacks. In contrast, our method is closer to a real-life scenario, since we are only modifying the object the attack is targeting, thus not needing access to the environment we are performing our attack in.
The authors of this paper are not aware of any successful direct patch based attacks on semantic segmentation problems, which emphasises the difficulty of generating such samples.
In this paper we will demonstrate that patch based attacks are feasible on semantic segmentation problems. After demonstrating their feasibility we will analyze their generality. The number of possible output maps using a patch of limited size is typically fairly small, which means that these attacks -contrary to general belief -can not be used to generate arbitrary outputs, driving the conclusion that they cannot be general.
We also have to admit that the non-generality of patch based attacks on segmentation problems does not mean at all that they can not be applied in practice. This only means that arbitrary outputs can not be generated by them, but the question of which outputs can and can not be generated by an adversarial patch remains open which the authors plan to investigate in their future work.
Adversarial Attacks
The term adversarial example was coined by [1], where attacks on neural networks trained for image classification were generated via a very low intensity, specially formed additive noise, completely imperceptible to the human observer. The method used to generate these so-called adversarial examples was to maximize the networks response to a certain class by altering the input image.
The first attacks [2] were implemented by calculating the sign of the elements of the gradient of the cost function J with respect to the input x and expected output y, multiplied by a constant to scale the intensity of the noise (formally, the perturbation is ǫ · sign(∇ x J(θ, x, y)), where θ is the parameter vector of the model). This allows for much faster generation of attacks. This method is called the Fast Gradient Sign Method (FGSM).
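A minimal illustration of FGSM, assuming a one-layer logistic-regression "network" so that ∇ x J has a closed form; all names and parameter values here are hypothetical stand-ins for the image classifiers attacked in [2].

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """FGSM for a logistic-regression model p = sigmoid(w.x + b) with
    cross-entropy loss J. For this model grad_x J = (p - y) * w, so the
    adversarial input is x + eps * sign((p - y) * w)."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

w = np.linspace(-1.0, 1.0, 20)   # fixed weights of the toy model
b = 0.0
x = w / np.linalg.norm(w)        # a point the model labels as class 1
assert w @ x + b > 0

x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.5)
# Every coordinate moved by at most eps, yet the predicted class flips.
assert w @ x_adv + b < 0
```

The same one-line update, applied repeatedly with a small eps, gives the iterative variant that the momentum method of [4] builds on.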
An extension to FGSM by [3] was to use not only the sign of the raw gradient of the loss, but rather a scaled version of the gradient's raw value. This method is usually referred to as the Fast Gradient Value (FGV) method.
Another extension to the iterative version of FGSM by [4] was to incorporate momentum into the equation, theorizing that similarly to 'regular' optimization during training, it would help avoiding poor local minima and other non-convex patterns in the objective function's landscape.
[5] builds on the assumption that the robustness of a binary classifier f at point x 0 , is equal to the distance of x 0 from the separating hyperplane ∆(x 0 ; f ). Therefore the necessary smallest perturbation to change the sign of the output of f corresponds to the orthogonal projection of x 0 onto the separating hyperplane. They solve this in a closed-form formula, and apply these small perturbations to the image in an iterative manner until the decision of the classifier is changed. Later they extended it to multiclass classification problems as well.
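For the affine binary case f(x) = w · x + b, the orthogonal projection described above has the simple closed form r = −f(x) w / ‖w‖². The sketch below illustrates this building block (not the full iterative multiclass DeepFool algorithm of [5]) and verifies its two defining properties.

```python
import numpy as np

def minimal_flip_perturbation(x, w, b):
    """Orthogonal projection of x onto the hyperplane w.x + b = 0 for an
    affine binary classifier f(x) = w.x + b. The perturbation
    r = -f(x)/||w||^2 * w is the smallest (in Euclidean norm) that
    reaches the decision boundary; the iterative method applies slightly
    overshot steps of this form until the sign of f flips."""
    f = w @ x + b
    return -f / (w @ w) * w

w = np.array([3.0, 4.0])
b = -5.0
x = np.array([3.0, 4.0])          # f(x) = 9 + 16 - 5 = 20 > 0
r = minimal_flip_perturbation(x, w, b)

# After the projection the point lies exactly on the boundary...
assert abs(w @ (x + r) + b) < 1e-12
# ...and the distance moved equals |f(x)| / ||w|| = 20 / 5 = 4.
assert abs(np.linalg.norm(r) - 4.0) < 1e-12
```

The norm identity ‖r‖ = |f(x)| / ‖w‖ is exactly the robustness measure ∆(x 0 ; f) mentioned above.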
Though these approaches were extremely important from a theoretical point of view and the generation methods are general, they pose no significant threat to practical applications of neural network, since they limit the amount of applied noise. The smallest perturbations beside the engineered additive noise, e.g.: perspective or illumination changes, lens distortion could completely upend the desired results. Hence, the application of these attacks in real life is unfeasible [13].
In [6], [7] robust and real-world attacks were presented against various classification networks. These methods create an adversarial patch: instead of the global but low-intensity approaches, distortions appear in a region with limited area, but intensity values are not bounded. Successful attacks with adversarial patches were also demonstrated using black and white patches only [8], where not the intensities of the patch, but the locations and sizes of the stickers are optimized. These attacks, where the gradients of the networks are not necessarily used during optimization, open space towards black-box attacks [9], [10], where the attacker needs access only to the final responses and confidence values to generate attacks using evolutionary algorithms. Later these approaches were presented on detection and localization problems as well [11] using various network architectures (e.g. Faster R-CNN [12]).
A general overview of adversarial attacks, with a more detailed description of most of the previously mentioned methods, can be found in the survey paper [17]. The resilience of segmentation networks against adversarial attacks has been investigated heavily in the past years [14], [15], [18], [19], but only global, low-intensity attacks were examined. The authors are not aware of any publication demonstrating patch-based attacks on semantic segmentation.
Patch-based Segmentation Attacks on a Simple Dataset
Since we were not able to find a simple segmentation dataset (an analogue of MNIST [20] for classification) containing objects with various shapes, we created a simple dataset based on CLEVR [21]. The original dataset was used for visual question answering and did not contain segmentation masks, so we modified the generator script and generated masks for semantic, amodal and instance segmentation.
The dataset contains 25200 colored images (of size 320 × 240) of simple objects along with their instance masks, amodal masks, pairwise occlusions and three-dimensional coordinates for each object. This results in a simple dataset for various tasks, ranging from three-dimensional reconstruction and instance segmentation to amodal segmentation. The objects have simple shapes, but the scenes also contain shadows, reflections and different illuminations, which makes the dataset relevant for the evaluation of segmentation algorithms.
The dataset, along with the generator script and all the training code belonging to the later chapters, is available at the following repository to help reproducibility and the detailed investigation of the applied parameters.
We selected the U-Net architecture [22] to be trained on our simple CLEVR-inspired dataset. Our aim was to demonstrate adversarial attacks on architectures where classification and segmentation are handled in the same layers and not by different heads, as in the case of Mask-RCNN [23], where the classifier head might be fooled by the attack while the network still provides the same or a similar instance mask. We used a U-Net-like structure containing convolution blocks with 8, 16, 32 and 64 channels; downscaling was accomplished by strided convolutions, while upscaling was implemented by transposed convolutions. 23400 images were selected for training and 1800 images were left for validation; all validation scenes were generated independently from the training scenes. Our selected task was semantic segmentation, where the network had to generate a four-channel output image representing the probabilities that a pixel belongs to a cube, sphere, cylinder or the background.
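As a concrete illustration, the structure described above can be sketched as follows. This is our minimal reading of the setup: the channel counts, strided-conv downscaling, transposed-conv upscaling and four-channel output are as stated in the text, while everything else (kernel sizes, activation choices) is an illustrative assumption, not the exact training code.

```python
# Minimal U-Net-like sketch: conv blocks with 8/16/32/64 channels,
# strided-conv downscaling, transposed-conv upscaling, 4-channel output
# (cube / sphere / cylinder / background). Details beyond the text
# (kernel sizes, ReLU) are illustrative assumptions.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class MiniUNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.enc1, self.enc2 = block(3, 8), block(8, 16)
        self.enc3, self.enc4 = block(16, 32), block(32, 64)
        # downscaling by strided convolutions
        self.down1 = nn.Conv2d(8, 8, 3, stride=2, padding=1)
        self.down2 = nn.Conv2d(16, 16, 3, stride=2, padding=1)
        self.down3 = nn.Conv2d(32, 32, 3, stride=2, padding=1)
        # upscaling by transposed convolutions
        self.up3 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.up2 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.up1 = nn.ConvTranspose2d(16, 8, 2, stride=2)
        self.dec3, self.dec2 = block(64, 32), block(32, 16)
        self.dec1 = block(16, 8)
        self.head = nn.Conv2d(8, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.down1(e1))
        e3 = self.enc3(self.down2(e2))
        e4 = self.enc4(self.down3(e3))
        d3 = self.dec3(torch.cat([self.up3(e4), e3], 1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], 1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], 1))
        return self.head(d1)  # logits; softmax over dim 1 gives probabilities
```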
We randomly selected 100 samples from our test set and trained adversarial patches using the method published in [4]. We first tried scripted attacks in which we changed the expected output class of a selected object (without changing the shape of its mask); even in this case the network gains no advantage from the previously learned shapes or from the fact that objects in our database have consistent shapes, since the aim here was to segment spheres with the shape of a cube or cylinder. To prove that the network can be forced to create arbitrary shapes and segments, we also created expected masks by hand and tested our approach on them. We opted for the scripted method in the larger experiments because class switching can easily be automated. Sample attacks with the expected masks and the network outputs before and after the attacks can be found in Fig. 1. As these examples show, patch-based attacks were possible on this simple segmentation task. The outcome of the network was close to the expected mask in almost all examined cases, and even though the masks were not perfectly reconstructed every time, altogether 95% of the output pixels were modified as expected.
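The attack loop itself can be summarized as follows. This is a hedged sketch of the patch optimization described above; the function name, the default patch size and position, and the optimizer choice are illustrative assumptions rather than the exact code we ran.

```python
# Sketch of a patch-based attack on a segmentation network: only the
# patch pixels are trainable; we minimize per-pixel cross-entropy between
# the network output and the desired target mask. `model`, patch size and
# position are illustrative assumptions.
import torch
import torch.nn.functional as F

def attack_with_patch(model, image, target_mask, patch_size=32,
                      pos=(0, 0), steps=5000, lr=0.01):
    model.eval()
    r, c = pos
    patch = image[:, :, r:r+patch_size, c:c+patch_size].clone().requires_grad_(True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        x = image.clone()
        x[:, :, r:r+patch_size, c:c+patch_size] = patch  # paste the patch
        loss = F.cross_entropy(model(x), target_mask)    # pull output toward target
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)  # keep valid image intensities
    return patch.detach()
```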
Real-life Images of Simple Shapes
Once we had created successful attacks in simulation, we gathered 100 real-life samples (10 different setups from 10 different views) and tested our approach on them. We applied our method without fine-tuning or further optimisation on real samples and without the application of any domain adaptation methods [24], [25]. Our network worked at an acceptable level on real-life samples: although the segmentation was noisy and many small spurious objects appeared in the background, the segmentation of the real objects was correct regarding shapes and classes, which are the most important aspects for investigating possible attacks.

Figure 1: Segmentation attacks on the CLEVR dataset. The first column contains the attacked image, after the patch was optimized for this sample for 5000 iterations. The second column displays the expected outputs after the attacks, while the third and fourth columns show the network output before and after the attack. The first two rows contain samples where object classes were switched, and the last row contains a sample where the mask was drawn by hand. Please note that part of the output mask was deleted, to demonstrate that objects can not only be turned into other classes, but can also partially or completely disappear. We note that there is no theoretical difference between modifying an output pixel to belong to a desired class or to none of them.
We selected 10 images and changed their outputs by hand: we repainted one of the objects on the segmented output map and used the same method as in the case of the simulated data to change the output of the network on these specific images.
To demonstrate the robustness of the patches, in this experiment we followed the patch generation procedure introduced in [6]. For training we created new simulated data with added variance in view angles, scales (camera distances from the objects) and lighting conditions, but all images contained the objects in the same constellation. We then used these images and tried to generate one patch which works well on the selected object regardless of the previously listed variances. We also added small random noise to the intensity values of the patch and slightly changed the position of the patch on the image to avoid generating a low-intensity attack. In addition to the random displacement of the patch, we applied average pooling on it with a 3 × 3 kernel and stride one; this way we optimized the average of the neighbouring patch pixels in each kernel instead of directly optimizing each pixel, and we obtained more consistent and better results.
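The pooling step can be written in one line; below is a minimal sketch (the function name is ours) of the 3 × 3, stride-one smoothing applied to the trainable patch values before pasting them into the image.

```python
# The patch pasted into the image is the 3x3 average (stride 1, padded to
# keep the size) of the underlying trainable tensor, so neighbouring
# pixels are optimized jointly rather than independently.
import torch
import torch.nn.functional as F

def smoothed_patch(raw_patch):
    # raw_patch: (N, C, H, W) trainable tensor; output has the same shape
    return F.avg_pool2d(raw_patch, kernel_size=3, stride=1, padding=1)
```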
In all the previous cases the patches were created and added to the image in simulation; in this case we optimized a patch to turn a cylinder into a cube on our simulated dataset, but printed it out and tested it on real-life samples. A randomly selected example can be seen in Fig. 2. The blue cylinder was segmented correctly without the patch, but when the patch was attached to it, its segmentation turned red, signifying that its pixels belong to a cube. We note that a more thorough study on more samples and with more complex examples is needed to understand how patch-based attacks can be efficiently generated in real life, but our experiments demonstrate that robust adversarial patches, applicable under multiple views and conditions, are feasible.
Patch-based Segmentation Attacks on Cityscapes
For a more complex case study we chose the Cityscapes dataset [26], focusing on its semantic segmentation task. Our models of choice were the Deeplab V3 [27] and MobileNet V3 [28] architectures, a significant feature of both being that the mask prediction and the classification are calculated jointly, unlike in the case of Mask-R-CNN [23].
In our experiments, we used a ResNet-18 backbone for the Deeplab V3 architecture and a MobileNet V3 Large network with 128 filters in the segmentation head. For both models we utilized openly available pre-trained weights; the Deeplab V3 model is available on Github, while the MobileNet V3 model is available via PyPI.
For the generation of the adversarial patch we followed the method described in [6]: instead of modifying every pixel of the image with additive noise, the adversarial patch completely replaces a small part of the image and is trained in a white-box setting by minimizing a loss between a (usually hand-crafted) target and the output of the network, altering only the values inside the patch arbitrarily 2 . In our experiments we first selected a class and, for the sake of reproducibility, used an algorithm to find the largest inscribable rectangle for a given binary object mask of this class, which can easily be extracted from the Cityscapes annotations. We then placed the patch in the middle of this target area to effectively quantify the area of effect of the adversarial patch. By this we aim to simulate the effect of real-life patches, which are found in the middle of objects instead of stretching over multiple objects. Some sample attacks can be seen in Fig. 3.
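The rectangle-search step can be implemented with the classic maximal-rectangle dynamic programming over column histograms. The sketch below is our illustrative implementation (not necessarily the exact one we used); it returns the largest axis-aligned rectangle of ones in a binary mask, in whose center the patch can then be placed.

```python
# Largest axis-aligned rectangle inscribed in a binary mask, via the
# classic "largest rectangle in a histogram" trick with a monotonic stack.
def largest_inscribed_rectangle(mask):
    """mask: list of rows of 0/1 values.
    Returns (area, top, left, height, width) of the best rectangle."""
    if not mask:
        return (0, 0, 0, 0, 0)
    ncols = len(mask[0])
    heights = [0] * ncols
    best = (0, 0, 0, 0, 0)
    for r, row in enumerate(mask):
        # histogram of consecutive ones ending at row r
        heights = [h + 1 if v else 0 for h, v in zip(heights, row)]
        stack = []  # (start_col, height), heights strictly increasing
        for c, h in enumerate(heights + [0]):  # trailing 0 flushes the stack
            start = c
            while stack and stack[-1][1] >= h:
                s, sh = stack.pop()
                area = sh * (c - s)
                if area > best[0]:
                    best = (area, r - sh + 1, s, sh, c - s)
                start = s
            stack.append((start, h))
    return best
```

The patch center is then simply (top + height // 2, left + width // 2).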
As these results demonstrate, patch-based attacks on segmentation problems are feasible in practice. Based on these findings, one can ask what limitations patch-based attacks have: can they generate arbitrary, general output maps?
A complexity analysis of patch based attacks
In this section we prove that it is impossible to generate arbitrary output images by attacking the inputs with patches. Obviously, patch-based attacks cannot modify output pixels that fall outside the receptive field of the corresponding neurons, which limits their effectiveness depending on the selected architecture. Here we demonstrate that if patch-based attacks could generate arbitrary outputs, then the affected region has to be limited to a much smaller area than the patch's receptive field.
Our proof proceeds as follows. We investigate the number of totally different segmentation maps that can be generated in a region and provide an upper bound for this number based on the network architecture. We consider two maps different if the winning class (the largest value after softmax normalisation) differs in at least one pixel. Such a change can only happen if a non-linear change occurs in the forward path of the network. If one considers an output region of the image containing W × H pixels, where W represents the width and H the height of the region, and the network used for semantic segmentation can differentiate between D classes, one can generate D^{W·H} different output maps in which at least one pixel is classified differently.

(Figure 3 caption, continued:) As can be seen from this figure, the effect of the 2x2 patch is fairly large: it changed the output class of 7461 pixels. The second row contains a similar example with the MobileNet V3 architecture, where an imaginary wall should be placed in the middle of the road. The last row depicts another attack scenario, where a pedestrian is completely removed from the network's output. The row's layout is the following, left to right: original input, target mask, Deeplab V3 patched input, Deeplab V3 attack result, MobileNet V3 patched input and finally MobileNet V3 attack result. This not only demonstrates the vulnerability of neural networks used in mission-critical applications (i.e. self-driving), but also signifies that adversarial attacks pose a significant threat in security applications as well.
To allow an attack that can generate all these patterns, the network has to contain at least this many distinct linear regions (separated by non-linear changes in the network output), since changing the largest value at a selected output pixel is a non-monotonic change and can only be implemented by non-linear functions. To put it simply, the network has to move the output to a different linear region to generate a different output map. It is not necessary that every linear region generates a different output map, and many of them could coincide; however, we will demonstrate that even if each of them were different, they still could not cover all the possible output maps in practice.
For this purpose we calculate the maximal number of linear regions in which the patch is involved and demonstrate that it is significantly smaller than D^{W·H}, which shows that in practice it is not possible to generate arbitrary output maps in the case of semantic segmentation using patch-based attacks.
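The counting side of the argument is elementary and can be stated in code (a toy illustration, only to fix the quantity we bound below):

```python
# Number of segmentation maps of a W x H region over D classes that
# differ in the winning class of at least one pixel: D ** (W * H).
def n_output_maps(D, W, H):
    return D ** (W * H)
```

For example, already a 2 × 2 region with 19 classes admits 19^4 = 130321 distinct maps.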
Upper bound of linear regions
An upper bound for the number of linear regions in a fully connected layer using ReLUs as non-linearities was first introduced in the paper of Montufar et al. in 2014 [29], which states that the number of linear regions R_{FC} is upper bounded by the following expression:

R_{FC} \le \sum_{i=0}^{n_0} \binom{n_1}{i}    (1)

where the layer contains n_0 input and n_1 output neurons.
In 2020, this theorem was extended to convolutional networks in [30], where the authors proved that applying L consecutive convolutional layers cannot generate more separated linear regions than:

R_{Conv} \le \prod_{l=1}^{L} \sum_{i=0}^{w_0 h_0 c_0} \binom{w_l h_l c_l}{i}    (2)

where in an L-layered network w_l and h_l represent the spatial dimensions of the data in the l-th layer, while c_l represents the number of channels in that layer (e.g. w_0, h_0, c_0 denote the dimensions of the input data). This formula means that a convolutional layer working on input data of 25 × 25 pixels containing 64 channels can multiply the number of linear regions by 3.29 × 10^{220}, while in the case of 128 channels this number is 5.3 × 10^{269}.
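The per-layer multipliers are sums of binomial coefficients, so they can be evaluated exactly with big-integer arithmetic; the small sketch below (function names are ours) reports log10 of the bounds (1)-(2), which works even when the bound itself has hundreds of digits.

```python
# log10 of the per-layer multiplier sum_{i=0}^{n_in} C(n_out, i) from
# bound (1), and of the product over layers from bound (2).
from math import comb, log10

def log10_layer_multiplier(n_in, n_out):
    total = sum(comb(n_out, i) for i in range(min(n_in, n_out) + 1))
    return log10(total)

def log10_region_bound(layer_sizes):
    # layer_sizes: [w0*h0*c0, w1*h1*c1, ...]; the input size caps each sum
    n0 = layer_sizes[0]
    return sum(log10_layer_multiplier(n0, n) for n in layer_sizes[1:])
```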
In the case of patch-based attacks the spatial dimensions are determined by the receptive field of the neurons in which the original patch is present, but these are typically small numbers, since the patch has to be small to remain hardly noticeable by human perception. This means that a typical convolutional layer (containing 128 or 256 channels), where the size of a patch is around 25 × 25 pixels, multiplies the number of linear regions by less than 10^{300}. This way, five convolutional layers could generate at most 10^{1500} linear regions. In the case of ten possible output classes this means that if such a network could generate all possible output maps in a region, the number of those maps, 10^{W·H}, cannot exceed 10^{1500}, so W·H has to be smaller than 1500; otherwise the number of possible output maps would be larger than the number of linear regions and could not be generated by a network of such complexity. In this example it means that a 25×25 patch could only generate arbitrary output maps in an area of at most 1500 pixels (roughly 38×38 pixels), if patch generation were universal. Table 1 contains the maximal number of linear regions for networks typically applied to semantic segmentation problems, calculated for different patch sizes, along with the maximal region size which can be covered with this complexity if we assume that patch generation is universal, i.e. that arbitrary output maps can be created with an appropriate attack. The numbers were computed considering the Cityscapes dataset as a case study, where each output pixel can belong to nineteen different classes.
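The S_R rows of Table 1 follow from the R_N rows by exactly this argument: a network with at most R = 10^k linear regions can realize at most R distinct output maps, so with D classes the universally attackable area satisfies D^{W·H} ≤ R. A short sketch (function name is ours) reproduces the table entries for the 19-class Cityscapes setting:

```python
# Largest square region in which arbitrary output maps could be produced
# by a network with at most 10**log10_R linear regions and D classes:
# D ** (W*H) <= 10**log10_R  =>  W*H <= log10_R / log10(D).
from math import log10, floor, sqrt

def max_square_side(log10_R, n_classes):
    max_pixels = log10_R / log10(n_classes)  # upper bound on W * H
    return floor(sqrt(max_pixels))           # side of the largest square

# e.g. UNET with a 2x2 patch: R_N = 10^219, 19 classes -> 13 (13x13)
#      DL_V3 with a 2x2 patch: R_N = 10^584, 19 classes -> 21 (21x21)
```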
We also have to mention that all the aforementioned calculations give an upper bound on the number of linear regions. This bound can only be achieved if every new non-linearity intersects each earlier linear region. It was demonstrated in [32] and [33] that the number of linear regions can also be measured or algorithmically approximated by sampling in a trained architecture, and in all the investigated cases the number of linear regions was significantly smaller in practice than the upper bound provided by the theorem (e.g. for the investigated seven- and eight-layered convolutional networks the upper bounds were 356180 and 819115, while sampling methods identified only 3398 and 4822 linear regions in the trained networks). This means that in practice the number of output maps which can be generated by a network can be assumed to be orders of magnitude smaller than the previously introduced upper bounds, and consequently the maximal region sizes in which arbitrary output maps can be generated are significantly smaller as well. The exact measurement of linear regions is important in practice for trained networks, but the presented upper bounds are more general: they depend only on the network architecture and not on the training data or the weights of the network.
We have seen in the previous sections that adversarial patches affect larger regions than the numbers presented in Table 1. Based on these empirical results we can state that patch-based attacks cannot be general in practice: they cannot produce arbitrary output maps. We emphasize that this paper only shows that arbitrary output maps cannot be generated. It does not mean that in practice a whole object could not be altered, that non-existing objects could not be hallucinated, or that one could not make a person disappear in semantic segmentation using patches as camouflage, as can also be seen in some of our samples.
Conclusion
In this paper we investigated the generality of patch-based attacks on semantic segmentation problems. For our case study we examined a simple simulated dataset with the U-Net architecture, and the commonly investigated Cityscapes dataset with two commonly used network architectures, DeepLab V3 and MobileNet V3. Our findings show that patch-based attacks on semantic segmentation are feasible in practice, and even small patches such as 2x2 are able to change the output classes of the segmentation maps over a large area (in certain cases more than 5000 modified pixels). On the other hand, the complexity analysis of these architectures revealed that the networks can generate only a limited number of possible output maps. From this we deduce that semantic segmentation can only be attacked successfully under certain conditions with patch-based attacks, which largely depend on the network architecture, and that patch-based attacks cannot produce arbitrary output maps: for a given input, there are output maps of these segmentation networks which cannot be generated by any patch of limited size.
Figure 2: This figure presents the effect of a printed patch on real-life segmentation. As can be seen, we managed to train an adversarial patch in simulation which was able to turn a cylinder into a cube in a real-life segmentation problem.
Figure 3: Example attacks on the Cityscapes dataset. The top row depicts details of an attack on the DeeplabV3 architecture, where the pixels of the car should be turned into arbitrary other classes. The first column contains the input image with a 2x2 patch on the left lamp of the parking car, and the second column displays the original output of the trained network without the adversarial patch. The next image displays the output of the network for the patched image; the pixels whose output class (after argmax) changed are marked in white on the last image.
Table 1: Maximal number of linear regions (R_N) for different network architectures (UNET [22], FCN8 [31], MobileNetV3-Large (MN_V3) [28] and DeeplabV3 with a ResNet18 backbone (DL_V3) [27]) and patch sizes, along with the maximal region size (S_R) whose pixels could all be changed by such a patch if generic output maps could be created.

                 UNET        FCN8        MN_V3       DL_V3
R_N (2x2)        10^219      10^168      10^229      10^584
S_R (2x2)        13x13       11x11       13x13       21x21
R_N (5x5)        10^1448     10^1203     10^1239     10^3421
S_R (5x5)        33x33       30x30       31x31       51x51
R_N (10x10)      10^5034     10^4646     10^3446     10^12725
S_R (10x10)      62x62       60x60       51x51       99x99
R_N (20x20)      10^16842    10^17864    10^9343     10^48151
S_R (20x20)      114x114     118x118     85x85       194x194
1 Apart from the global bounds of image values.
2 The values are of course bounded within the regular image values before preprocessing, e.g. [0, 255].
Acknowledgment

This research has been partially supported by the Hungarian Government under the following grant: 2018-1.2.1-NKP00008: Exploring the Mathematical Foundations of Artificial Intelligence. The support of the Alfréd Rényi Institute of Mathematics is also gratefully acknowledged.
References

[1] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," arXiv preprint arXiv:1312.6199, 2013.
[2] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," arXiv preprint arXiv:1412.6572, 2014.
[3] A. Rozsa, E. M. Rudd, and T. E. Boult, "Adversarial diversity and hard positive generation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 25-32, 2016.
[4] Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li, "Boosting adversarial attacks with momentum," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9185-9193, 2018.
[5] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, "Deepfool: a simple and accurate method to fool deep neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574-2582, 2016.
[6] T. B. Brown, D. Mané, A. Roy, M. Abadi, and J. Gilmer, "Adversarial patch," arXiv preprint arXiv:1712.09665, 2017.
[7] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok, "Synthesizing robust adversarial examples," arXiv preprint arXiv:1707.07397, 2017.
[8] K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, and D. Song, "Robust physical-world attacks on deep learning models," arXiv preprint arXiv:1707.08945, 2017.
[9] M. Alzantot, Y. Sharma, S. Chakraborty, and M. Srivastava, "Genattack: Practical black-box attacks with gradient-free optimization," arXiv preprint arXiv:1805.11090, 2018.
[10] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, "Practical black-box attacks against machine learning," in Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506-519, ACM, 2017.
[11] S. Thys, W. Van Ranst, and T. Goedemé, "Fooling automated surveillance cameras: adversarial patches to attack person detection," arXiv preprint arXiv:1904.08653, 2019.
[12] S.-T. Chen, C. Cornelius, J. Martin, and D. H. P. Chau, "Shapeshifter: Robust physical adversarial attack on faster r-cnn object detector," in Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 52-68, Springer, 2018.
[13] J. Lu, H. Sibai, E. Fabry, and D. Forsyth, "No need to worry about adversarial examples in object detection in autonomous vehicles," arXiv preprint arXiv:1707.03501, 2017.
[14] C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie, and A. Yuille, "Adversarial examples for semantic segmentation and object detection," in Proceedings of the IEEE International Conference on Computer Vision, pp. 1369-1378, 2017.
[15] J. H. Metzen, M. C. Kumar, T. Brox, and V. Fischer, "Universal adversarial perturbations against semantic image segmentation," in 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2774-2783, IEEE, 2017.
[16] K. K. Nakka and M. Salzmann, "Indirect local attacks for context-aware semantic segmentation networks," in European Conference on Computer Vision, pp. 611-628, Springer, 2020.
[17] N. Akhtar and A. Mian, "Threat of adversarial attacks on deep learning in computer vision: A survey," IEEE Access, vol. 6, pp. 14410-14430, 2018.
[18] A. Arnab, O. Miksik, and P. H. Torr, "On the robustness of semantic segmentation models to adversarial attacks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 888-897, 2018.
[19] J. Al-afandi and H. András, "Class retrieval of detected adversarial attacks," Applied Sciences, vol. 11, no. 14, p. 6438, 2021.
[20] Y. LeCun, "The mnist database of handwritten digits," http://yann.lecun.com/exdb/mnist/, 1998.
[21] J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C. L. Zitnick, and R. Girshick, "Clevr: A diagnostic dataset for compositional language and elementary visual reasoning," in Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pp. 1988-1997, IEEE, 2017.
[22] O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241, Springer, 2015.
[23] K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask r-cnn," in Proceedings of the IEEE International Conference on Computer Vision, pp. 2961-2969, 2017.
[24] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, "Domain-adversarial training of neural networks," The Journal of Machine Learning Research, vol. 17, no. 1, pp. 2096-2030, 2016.
[25] E. Tzeng, K. Burns, K. Saenko, and T. Darrell, "Splat: Semantic pixel-level adaptation transforms for detection," arXiv preprint arXiv:1812.00929, 2018.
[26] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, "The cityscapes dataset for semantic urban scene understanding," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[27] L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam, "Rethinking atrous convolution for semantic image segmentation," arXiv preprint arXiv:1706.05587, 2017.
[28] J. Hu, L. Shen, and G. Sun, "Squeeze-and-excitation networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7132-7141, 2018.
[29] G. Montúfar, R. Pascanu, K. Cho, and Y. Bengio, "On the number of linear regions of deep neural networks," arXiv preprint arXiv:1402.1869, 2014.
[30] H. Xiong, L. Huang, M. Yu, L. Liu, F. Zhu, and L. Shao, "On the number of linear regions of convolutional neural networks," in International Conference on Machine Learning, pp. 10514-10523, PMLR, 2020.
[31] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431-3440, 2015.
[32] T. Serra, C. Tjandraatmadja, and S. Ramalingam, "Bounding and counting linear regions of deep neural networks," in International Conference on Machine Learning, pp. 4558-4566, PMLR, 2018.
[33] M. Trimmel, H. Petzka, and C. Sminchisescu, "Tropex: An algorithm for extracting linear terms in deep neural networks," in International Conference on Learning Representations, 2021.
|
[] |
[
"Radio pulsar polarization as a coherent sum of orthogonal proper mode waves",
"Radio pulsar polarization as a coherent sum of orthogonal proper mode waves"
] |
[
"J Dyks \nNicolaus Copernicus Astronomical Center\nRabiańska 8, 87-100ToruńPoland\n"
] |
[
"Nicolaus Copernicus Astronomical Center\nRabiańska 8, 87-100ToruńPoland"
] |
[
"Mon. Not. R. Astron. Soc"
] |
Radio pulsar polarization exhibits a number of complex phenomena that are classified into the realm of 'beyond the rotating vector model' (RVM). It is shown that these effects can be understood in geometrical terms, as a result of coherent and quasi-coherent addition of elliptically polarized natural mode waves. The coherent summation implies that the observed tracks of polarization angle (PA) do not always correspond to the natural propagation mode (NPM) waves. Instead, they are statistical average of coherent sum of the NPM waves, and can be observed at any (and frequency-dependent) distance from the natural modes. Therefore, the observed tracks of PA can wander arbitrarily far from the RVM, and may be non-orthogonal. For equal amplitudes of the NPM waves two pairs of orthogonal polarization modes (OPMs), displaced by 45 • , can be observed, depending on the width of lag distribution. Observed pulsar polarization mainly results from two independent effects: the change of mode amplitude ratio and the change of phase lag. In the core region both effects are superposed on each other, which can produce so complex behaviour as observed in the cores of PSR B1933+16, B1237+25 and J0437−4715. Change of the phase lag with frequency ν is mostly responsible for the observed strong evolution of these features with ν. The coherent addition of orthogonal natural waves is a useful interpretive tool for the observed radio pulsar polarization.
|
10.1093/mnras/stz1690
|
[
"https://arxiv.org/pdf/1902.11141v1.pdf"
] | 119,195,742 |
1902.11141
|
fe977b3cfbaf802d00cbdccfea0aa2d55177b857
|
Radio pulsar polarization as a coherent sum of orthogonal proper mode waves
1 March 2019
J Dyks
Nicolaus Copernicus Astronomical Center
Rabiańska 8, 87-100 Toruń, Poland
Radio pulsar polarization as a coherent sum of orthogonal proper mode waves
Mon. Not. R. Astron. Soc
00000001 March 2019 (MN LaTeX style file v2.2) Accepted .... Received ...; in original form 2018 Dec 27.
pulsars: general - pulsars: individual: PSR J0437−4715 - pulsars: individual: PSR B1237+25 - pulsars: individual: PSR B1919+21 - pulsars: individual: PSR B1933+16 - radiation mechanisms: non-thermal
Radio pulsar polarization exhibits a number of complex phenomena that are classified into the realm of 'beyond the rotating vector model' (RVM). It is shown that these effects can be understood in geometrical terms, as a result of coherent and quasi-coherent addition of elliptically polarized natural mode waves. The coherent summation implies that the observed tracks of polarization angle (PA) do not always correspond to the natural propagation mode (NPM) waves. Instead, they are statistical average of coherent sum of the NPM waves, and can be observed at any (and frequency-dependent) distance from the natural modes. Therefore, the observed tracks of PA can wander arbitrarily far from the RVM, and may be non-orthogonal. For equal amplitudes of the NPM waves two pairs of orthogonal polarization modes (OPMs), displaced by 45 • , can be observed, depending on the width of lag distribution. Observed pulsar polarization mainly results from two independent effects: the change of mode amplitude ratio and the change of phase lag. In the core region both effects are superposed on each other, which can produce so complex behaviour as observed in the cores of PSR B1933+16, B1237+25 and J0437−4715. Change of the phase lag with frequency ν is mostly responsible for the observed strong evolution of these features with ν. The coherent addition of orthogonal natural waves is a useful interpretive tool for the observed radio pulsar polarization.
INTRODUCTION
Radio pulsars exhibit a wealth of polarization phenomena that have been studied for half a century. However, both the regular polarization properties as well as peculiar effects escape thorough understanding. The regular behaviour includes the appearance of two orthogonal polarization modes (OPMs) and transitions (jumps) between these OPMs at several longitudes in a pulse profile. Peculiar effects are numerous and involve strong deformations of the polarization angle (PA) curve, especially at the central (core) profile components (Smith et al. 2013, hereafter SRM13; Mitra et al. 2015, hereafter MAR15), as well as 'half orthogonal' PA jumps (Everett & Weisberg 2001; MAR15). The research on the subject includes the analysis of the natural propagation wave modes in magnetised plasma (Melrose 1979; Lyubarskii & Petrova 1999; Rafat et al. 2018), curvature radiation properties (Gangadhara 2010), numerical polarized ray tracing (Wang et al. 2010), coherent (Edwards & Stappers 2004) and noncoherent deconvolution into separate modes (Melrose et al. 2006; McKinnon 2003), instrumental noise effects (McKinnon & Stinebring 2000) as well as interstellar propagation effects (Karastergiou 2009; McKinnon & Stinebring 1998). This is accompanied by a steady increase in the available polarization data of ever increasing quality (eg. recently Rankin et al. 2017; Brinkman et al. 2018).
In this paper I develop the polarization model based on coherent addition of waves in two orthogonal propagation modes (Dyks 2017, hereafter D17). The extended model offers a more general nature of the observed PA tracks and solves several interpretive obstacles that have appeared in D17.
In Sect. 2 I describe observations and modelling hints that inspired this study. These suggest the importance of equal modal amplitude in pulsar signal, so in Sect. 3 I describe a special-case model based on coherent addition of linearly polarized waves of equal amplitude. The model is used to interpret observations in Sect. 4, which is a good opportunity to present the model properties. Since the equal modal amplitudes may be driven by a circularly polarized signal, in Sect. 5 the model based on the circular feeding is extended into a 'birefringent filter pair' model which is applied to the issue of why the OPMs are so often observed nearly equal. Section 6 describes the double, ie. convolved or mixed nature of the polarization observed in the core profile region, such as demonstrated by the case of PSR J0437−4715. The equal amplitudes and linear polarization of the natural mode waves cause some interpretive problems (described in Sect. 6.2), therefore, the ellipticity and different amplitudes of the modal waves are taken into account in a more general model described in Sect. 7.1. A glimpse of the properties of the model's parameter space is given in Sect. 7.2. Interpretive capabilities of the model are presented in Sect. 7.6, where the PA loop of PSR B1933+16 is modelled at two frequencies.

Figure 2. [...] GHz, after Everett & Weisberg (2001). The same convention as in Fig. 1 is used, with the PA (dots) plotted twice at a distance of 45 • . The 45 • jump at Φ ≈ 2 • coincides with a minimum in L/I, but the circular polarization fraction (black solid) is not affected.
INSPIRING OBSERVATIONS AND MODELLING HINTS
Observations
frequently observed at the regular OPM jumps. However, after some wiggling on the trailing side of the profile, at the pulse longitude Φ = 37.5 • the PA makes another 45 • downward transition, quickly followed by a more standard 90 • upward OPM jump at Φ = 38 • . When moved up by 45 • , the displaced central PA segment (between Φ = 30 • and 37 • ) provides roughly rectilinear interpolation between the PA observed outside of the segment. This suggests that the PA stays at the 45 • distance through most of the pulse window, and there must be some geometric reason for this. The 45 • shift seems to exist despite a clearly nonzero level of both L/I and V /I. Fig. 18 in MAR15 shows that a chaotic multitude of different PA values are observed within the displaced-PA interval of pulse longitude.
As can be seen in Fig. 2, based on Fig. 7 in Everett & Weisberg (2001), PSR B0823+26 also shows a 45 • jump which is coincident with a minimum in L/I. In this case, however, the profile of V /I does not seem to be affected by the phenomenon.
In D17 the half-orthogonal PA jump has been interpreted as a sudden narrowing of a phase delay distribution, with the delay measured between two linearly polarized waves, supposedly representing the waves of natural propagation modes. The small delays imply coherent addition of waves, which ensures the 45 • PA jump as soon as the waves have equal amplitudes. However, this raises interesting questions. First, what makes the amplitudes equal, and second, having two pairs of orthogonal PA values off at 45 • , which pair coincides with the PA of the supposedly quasi linear 1 natural polarization modes? Below I will confirm the idea of the lag-distribution narrowing; however, the identification of the modes will be shown to depend on whether equal modal amplitudes can be sustained in the pulsar signal.

Figure 3. After Young & Rankin (2012). The black and grey colors refer to different frequencies ν 1 and ν 2 . The size of points represents the strength of a modal track. With the change of ν the strongest mode appears at the orthogonal position (90 • off) and the observed V changes sign to opposite (bottom panel).
Another type of interesting polarization phenomenon is the exchange of the observed modal power with increasing frequency ν. This is well illustrated in Figs. 4 and 5 of Young & Rankin (2012), where single-pulse PA distributions are shown at two frequencies for PSR B0301+19 and B1133+16. 2 In Fig. 3 I show a cartoon representation of this effect. The PA distribution in each pulsar reveals two enhanced PA tracks that follow a pair of well-defined rotating vector model (RVM) curves, with each PA track apparently representing a different OPM. However, the primary (ie. brighter) mode track at 327 MHz becomes the secondary (fainter) track at 1.4 GHz. According to the authors, the data were corrected for the interstellar Faraday rotation, various instrumental effects and dispersion. Moreover, the apparent replacement of the modal power is confirmed by a probably concurrent change in the sign of V . The power of the observed OPMs is then partially separated not only in pulse longitude and drift phase (Edwards & Stappers 2003; Rankin & Ramachandran 2003; Edwards 2004), but also in the spectral domain (Noutsos et al. 2015). This may seem to be natural, because the modes are generally expected to have different refraction indices, each with a different dependence on ν, which implies a ν-dependent phase lag between the modal waves. In the model of D17, however, any changes of the phase lag could not affect the ratio of modal power. This lack of flexibility makes the ν-related considerations difficult and calls for the model extension.

1 [...] signals, waves or modes, should always be understood as 'linearly polarized' and 'circularly polarized', respectively.
2 The authors do not comment on this exchange at all.

Figure 4. After Navarro et al. (1997). Graphical convention is the same as in Fig. 1, except that the PA (dots) is plotted once. The profile of L/I has two nearby minima in the profile center (Φ = 175 • ). The right minimum coincides with the handedness change of V . The left one occurs at high V /I. Both minima correspond to OPM jumps visible in the PA (the right one has magnitude close to 90 • ).
Another type of insightful polarization effect comprises the distortions and bifurcations of polarization angle tracks, especially those observed within the central (core) components of pulsars such as PSR B1237+25 (SRM13), B1933+16 (Mitra et al. 2016, hereafter MRA16), B1857−26 (Mitra & Rankin 2008), and B1839+09 (Hankins & Rankin 2010). All these phenomena reveal clear signatures of their coherent origin: they have maxima of |V | coincident with minima of L/I. The loop-like PA distortion of B1933+16 was modelled in section 4.4.1 in D17, whereas the PA track bifurcation of B1237+25 was interpreted in section 4.7 therein. Here those interpretations will be modified and will be made consistent with each other.
An interesting example of the core PA distortion is provided by the millisecond pulsar PSR J0437−4715 (Navarro et al. 1997, hereafter NMSKB97; Oslowski et al. 2014). As shown in Fig. 4 (after NMSKB97), at 660 MHz the PA curve steeply dives to the vicinity of the orthogonal mode, then immediately retreats in another nearly full OPM jump. The retreat is associated with the sign change of V and a minimum in L/I (which is not quite vanishing). The first quasi-OPM transition, however, is associated with a high level of |V |/L. Section 4.4 in D17 describes an effect of symmetric twin minima in L/I which are associated with a symmetric profile of V /I. Both these minima have identical look and identical origin. In PSR J0437−4715, however, the observed minima are dissimilar and have clearly different origin. Moreover, when viewed at different frequencies (NMSKB97) the minima seem to move in longitude at a different rate. They seem to pass across each other, which is apparently related to ν-dependent amplitudes of the negative and positive V , and is accompanied by strong changes of PA distortions. Overall, the behaviour of polarization in PSR J0437−4715 looks like a clear manifestation of two independent processes that overlap in pulse longitude.

Figure 5. The origin of observed coproper OPMs (represented by the ellipses M 1 and M 2 ) as the coherent sum of phase-lagged proper mode waves m 1 and m 2 . The linearly polarized proper waves m 1 and m 2 are fed by the linearly polarized wave E, which enters a linearly birefringent medium at the mixing angle ψ in . The proper polarization directions of the medium are presented by x 1 and x 2 , ∆φ is the phase lag acquired from the refraction index difference, and IM is the intermodal separatrix at ψ in = 45 • (the crossing of which corresponds to the pseudomodal OPM transition). The phase-lagged position of the proper wave m 1 is shown with the dashed line. For the selected ψ in , the observed OPMs (M 1 and M 2 ) have the same handedness.
Another strange polarization effect can be seen on the trailing side of the core component in J0437−4715 (Fig. 4, Φ ∈ (200 • , 220 • )). The PA there seems to be freely wandering with no obedience to any RVM-like curve. Off-RVM PA values must also be involved in a phenomenon of nonorthogonal PA tracks, that is often observed in many pulsars (eg. B1944+17 and B2016+28, both at 1.2 GHz in Fig. 15 of MAR15). Apparently, any successful pulsar polarization model must be capable of easily detaching from RVM.
Modelling hints
The PA loop of B1933+16 has been interpreted in D17 as a sudden rise (and a following drop) of a phase lag between two linearly polarized orthogonal waves, supposedly representing the natural propagation modes. The model is quite successful because it can reproduce all relevant polarization characteristics, such as the nearly bifurcated distortion of PA, the twin minima in L/I, and the single-sign V with a maximum at the modal transition. Moreover, with a change of a single parameter (amplitude ratio of modes) the model consistently reproduces the change of these features with frequency ν. All this occurs because within the loop, the underlying PA track (which gets split into the loop as soon as the lag is increased) is assumed to be displaced by about 45 • from the linear natural modes.
However, the data (see Fig. 1 in MRA16) clearly show that the loop opens on a PA track that can be considered as one of the normal OPMs (as evidenced by a regular OPM jump observed just left of the loop). The model thus requires the modal power to be ∼ 45 • away from where the power is actually observed to be, if the identification of the observed OPMs as coincident with the normal modes is correct. As described in D17, the observed OPMs are sharp spikes of radiative power with the PA coincident with that of the natural propagation modes. As shown in Fig. 5, the observed orthogonal modes (M1 and M2) are produced when the phase lag distribution extends to 90 • , since at this value a linearly polarized input signal of any orientation is always decomposed into polarization ellipses aligned with the linearly polarized natural (proper) modes m1 and m2. As emphasized in D17, the observed OPMs (M1 and M2) are not the same as the proper modes m1 and m2, because M1 and M2 may have the same handedness despite being orthogonal to each other (such a case is shown in Fig. 5). However, M1 and M2 have the same PA as the proper (normal) waves. Therefore, the linearly-fed observed OPMs M1 and M2 will be called below the 'coproper' modes.

Figure 6. Polarization characteristics of a signal that is a coherent sum of two orthogonal linearly polarized waves. Bottom panel: PA as a function of phase lag. Different curves correspond to a fixed wave amplitude ratio E 2 /E 1 = tan ψ in , with the mixing angle ψ in separated by 5 • , as shown on the vertical axis (ψ in = ψ(∆φ = 0)). The horizontal pieces of straight lines at ±45 • correspond to polarization ellipses shown in the top panel. Grey rectangles present regions with positive V . The thick arrows, diverging from ψ in ≈ 45 • and converging at ψ in ≈ −45 • , present the phenomenon of the lag-driven PA-track bifurcation. Statistical distributions of lag and mixing angle are shown on the margins. The N ψ,in distribution has two components (at +45 • and −45 • ) which produce opposite circular polarization. Top panel: polarization ellipses and polarized fractions L/I and V /I for the equal-amplitude case of ψ in = +45 • (thick solid) and −45 • (thin). L/I is identical in both cases.
Because the above-described 45 • difference was hard to justify, the analysis that followed in D17 attempted to interpret the core polarization through changes of mode amplitude ratio with pulse longitude (instead of the phase lag). The amplitude ratio was parametrized by the mixing angle ψin, i.e. the angle at which the emitted signal was separated into two linearly polarized natural mode waves (Fig. 5a). Because of the partial geometrical symmetry of the problem, slow changes of ψin with pulse longitude were essentially able to justify the core polarization behaviour of B1237+25, at least for the upper branch of its bifurcated PA track.
However, PSR B1237+25 exhibits two different states of subpulse modulation: the normal state (N) and the core-bright abnormal state (Ab). In the N state the core PA mostly follows the upper branch of the split PA track, whereas the lower branch is brightest in the Ab state (cf. Figs. 1 and 6 of SRM13). Despite the change of the branch, however, the sign of V remains the same in both cases. In the ψin-based model of D17 (section 4.7 therein) this was impossible to achieve, because the diverging branches of the bifurcated track were interpreted purely through the departure of ψin from a natural mode in two opposite directions, and the predicted sign of V is different on both sides of the proper mode. This can be seen in the lag-PA diagram of Fig. 6, which presents selected polarization properties of a wave that is a coherent combination of two orthogonal and linearly polarized waves oscillating at a phase lag ∆φ. The sign of V , as represented by the grey and bright rectangles, is opposite on each side of ψin = 0 • (which corresponds to one natural mode). Moreover, both the loop of B1933+16 and the PA bifurcation of B1237+25 look like phenomena of the same nature, so it is not Ockham-economic to interpret them in different ways (change of phase lag versus change of mixing angle). In the case of the PA loop of B1933+16, the model based on the ψin only could reproduce the twin L/I minima, the single-sign V , and the PA distortion, but was incapable of producing the loop-shaped bifurcation itself (see Fig. 11 in D17). The bifurcation, instead, required the change of the lag (Figs. 12 and 13 in D17).
Below I further elaborate the models of D17 in order to explain the mysterious 45 • misalignment which allows us to interpret both phenomena within a unified scheme.
INTRODUCTORY MODEL
Coherent addition of linearly polarized waves
Let us start with the model described in D17: before reaching the observer, a radio signal of amplitude E is decomposed into two linearly polarized waves with orthogonal polarizations:

Ex = E1 cos (ωt),   Ey = E2 cos (ωt − ∆φ).   (1)
The waves may be thought to represent the natural propagation modes of a linearly polarizing, birefringent intervening medium. The main (proper) polarization directions of the medium are x and y. After a phase delay ∆φ is built up between the waves, they combine (are added) coherently, which produces the detectable radio signal. The amplitudes of the combining waves are equal to
E1 = E cos ψin,   E2 = E sin ψin,   (2)
where ψin is the mixing angle that parametrizes the amplitudes' ratio:
tan ψin = E2/E1 = R.   (3)
The Stokes parameters for the resulting wave (i.e. calculated after the phase-lagged components have been added coherently in the vector way) are given by:
I = E1^2 + E2^2   (4)
Q = E1^2 − E2^2   (5)
U = 2 E1 E2 cos(∆φ)   (6)
V = −2 E1 E2 sin(∆φ)   (7)
whereas the linear polarization fraction and the resulting PA are:
L/I = (Q^2 + U^2)^(1/2) / I   (8)
ψ = 0.5 arctan(U/Q).   (9)
To calculate the observed PA, the coherent-origin angle of eq. (9) needs to be added to the external reference value determined by the rotating vector model:
ψ obs = ψ + ψ RVM .   (10)
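Equations (2)-(9) are easy to evaluate numerically. The short Python sketch below is my own illustration (the helper name coherent_sum_stokes is not from D17, and atan2 replaces the plain arctan of eq. 9 to resolve the quadrant):

```python
import math

def coherent_sum_stokes(E, psi_in_deg, dphi_deg):
    # Coherent sum of two orthogonal, linearly polarized waves:
    # amplitudes from the mixing angle (eq. 2), Stokes parameters
    # from eqs (4)-(7), L and PA from eqs (8)-(9).
    psi = math.radians(psi_in_deg)
    dphi = math.radians(dphi_deg)
    E1, E2 = E * math.cos(psi), E * math.sin(psi)
    I = E1**2 + E2**2
    Q = E1**2 - E2**2
    U = 2.0 * E1 * E2 * math.cos(dphi)
    V = -2.0 * E1 * E2 * math.sin(dphi)
    L = math.hypot(Q, U)
    PA = 0.5 * math.degrees(math.atan2(U, Q))  # quadrant-safe form of eq. (9)
    return I, Q, U, V, L, PA

# Zero lag: fully linear wave with PA equal to the mixing angle.
I, Q, U, V, L, PA = coherent_sum_stokes(1.0, 30.0, 0.0)
print(round(L / I, 3), round(PA, 1))      # 1.0 30.0

# Equal amplitudes and a 90-degree lag: fully circular wave.
I, Q, U, V, L, PA = coherent_sum_stokes(1.0, 45.0, 90.0)
print(round(L / I, 3), round(V / I, 3))   # 0.0 -1.0
```

The second call illustrates the special role of the (ψin, ∆φ) = (45 • , 90 • ) point that recurs throughout this section.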
Since we focus on coherent effects, only the value of ψ will be discussed below, but it must be remembered that ψ = 0 corresponds to ψ obs = ψ RVM , ie. the RVM PA corresponds to the orientation of the intervening basis vectors ( x1 or x2) on the sky. Diverse pairs of (∆φ, ψin) in such a model give the polarization characteristics presented in Fig. 6. Different curves in the lag-PA diagram (bottom panel of Fig. 6) present ψ calculated for different values of ψin. The value of ψin is fixed along each line, except for the horizontal lines at ψin = ±45 • . In this equal-amplitude case the PA jumps discontinuously by 90 • , which corresponds to the transition of the polarization ellipse through the circular stage (see the rows of ellipses in the top panel). The grey rectangles (actually squares) represent the regions with positive V . In spite of the impression made by the checkerboard pattern, the sign of V can change only at ∆φ = n180 • , where n is an integer. A change at ∆φ = 90 • + n180 • is impossible, because no lines cross these values of ∆φ, except at the dark nodes at the corners of the grey regions. The nodes appear because for ∆φ = 90 • any orientation of the incident wave polarization (hence any amplitude ratio R) produces a polarization ellipse aligned with either the x or y direction of the intervening polarization basis (see D17 for more details).
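The node property can be checked directly from eqs (4)-(6) and (9): at ∆φ = 90 • the PA of the coherent sum snaps to a proper direction for any amplitude ratio. A minimal sketch (my own, with an assumed helper name pa_deg):

```python
import math

def pa_deg(psi_in_deg, dphi_deg):
    # PA of the coherent sum (eqs 4-6 and 9), for E = 1.
    p, d = math.radians(psi_in_deg), math.radians(dphi_deg)
    Q = math.cos(p)**2 - math.sin(p)**2
    U = 2.0 * math.cos(p) * math.sin(p) * math.cos(d)
    return 0.5 * math.degrees(math.atan2(U, Q))

# At a 90-degree lag every mixing angle collapses onto a proper
# polarization direction: PA snaps to 0 for psi_in < 45 deg and
# to +-90 for psi_in > 45 deg.
print([round(pa_deg(p, 90.0), 6) for p in (10.0, 30.0, 60.0, 80.0)])
```

Away from the node (e.g. ∆φ = 0) the PA simply equals the mixing angle, which is the other limiting behaviour visible in the lag-PA diagram.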
Because of the noisy nature of pulsar radio emission, in the following numerical calculations the values of ψin and ∆φ are drawn from statistical distributions N ψ,in and N ∆φ with peak positions ψ pk and ∆φ pk and widths σ ψ,in , σ ∆φ . The intensity is taken as I = N ψ,in (ψin)N ∆φ (∆φ). The results presented in sections (3)-(6) are produced with the same numerical code which is described in detail in sections 3.2.1 and 3.2.2 of D17.
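A minimal sketch of this kind of sampling (my own simplified stand-in for the D17 code; each sample is given unit intensity, and the peak/width values are those quoted for Fig. 8b) is:

```python
import math
import random

random.seed(2018)

def pa_of_sum(psi_in_deg, dphi_deg):
    # PA of the coherently added modal waves (eqs 4-6 and 9), for E = 1.
    p, d = math.radians(psi_in_deg), math.radians(dphi_deg)
    Q = math.cos(2.0 * p)
    U = math.sin(2.0 * p) * math.cos(d)
    return 0.5 * math.degrees(math.atan2(U, Q))

# Draw (psi_in, dphi) from the Gaussians N_psi,in and N_dphi with
# psi_pk = 45, sigma_psi,in = 3, dphi_pk = 90, sigma_dphi = 30 degrees
# (the symmetric case of Fig. 8b).
pas = [pa_of_sum(random.gauss(45.0, 3.0), random.gauss(90.0, 30.0))
       for _ in range(20000)]
frac_upper = sum(1 for pa in pas if pa > 0.0) / len(pas)
print(round(frac_upper, 1))   # power split roughly evenly between the +-45 deg tracks
```

With the symmetric lag distribution about half of the samples fall on each of the two orthogonal PA tracks, which is the behaviour described for the bottom panels of Fig. 8.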
Another pair of orthogonal polarization modes - equal wave amplitudes
Unlike in D17, however, it is assumed in this section that the incident signal can be represented by two circularly polarized waves of opposite handedness. 3 For simplicity of interpretation, in this introductory model the detection of these circular waves (C+ and C- in Fig. 7a and b) is assumed to be non-simultaneous (ie. the signal produced by C+ is not added coherently to the signal produced by C-). Consider the wave electric vector E+ which traces a spiral that projects on the dotted circle C+ in Fig. 7a. In the aforementioned linearly-polarizing birefringent medium, the wave induces the two linearly polarized waves, marked m1 and m2, described by eq. 1.

Figure 7. The mechanism of generation of the observed orthogonal polarization modes (grey ellipses C 1 and C 2 ) in the case of equal amplitudes of the natural mode waves m 1 and m 2 . While the proper mode waves m 1 and m 2 propagate through the linearly birefringent medium, they acquire a phase lag ∆φ shown on the left. Then they combine coherently into the grey ellipses of the observed OPMs, which are always tilted at ±45 • with respect to x 1 . Larger lags can also produce the pseudomodal ellipses C A and C B of any handedness. The proper modal waves are fed by the circularly polarized signals C+ and C-, which ensures equal amplitudes of m 1 and m 2 . The circular feeding is not essential for the model if the equal amplitudes are assumed ad hoc, but the origin of the feeding circular waves may be elaborated to justify similar amounts of the observed OPMs C 1 and C 2 (Sect. 5).

The original phase delay ∆φ between the waves is equal to 90 • , which results directly from their circular feeding. This phase lag ∆φ is assumed to be increased (or decreased) by different refraction indices of the natural propagation modes (therefore, the wave m1 is shifted to the dashed sinusoid position). Then the modal waves m1 and m2 are coherently added, which produces the elliptically polarized observed signal which is presented by the grey ellipse marked C1. This is one of the observed OPMs (or one observed PA track, if the name OPMs is to be reserved for the linearly fed coproper OPMs of D17). The eccentricity and handedness of the C1 ellipse depend on the value of the lag; however, as long as ∆φ is between 90 and 270 • all the resulting ellipses will have the same PA, precisely at the angle of −45 • with respect to the PA of the natural propagation modes. 4 Larger lags produce another orthogonal ellipse, which is marked CA in the figure. This second ellipse is 90 • away from C1 and, therefore, may possibly be called the other OPM. However, the CA mode may have the same handedness as C1, so perhaps it should be called a pseudomode. [...] The circular feeds correspond to the mixing angles ψin = ±45 • and an initial phase lag of ∆φ = 90 • . These are the positions at which the waves E+ and E− have to be injected into the lag-PA diagram of Fig. 6. Accordingly, Fig. 8 presents the lag-PA pattern that appears for a single feeding wave (C+) injected at (∆φ, ψin) = (90 • , 45 • ). Each set of panels in the figure may be considered as presentation of signals detected at a fixed pulse longitude in many different pulse periods. The value of ψin was sampled from a narrow Gaussian N ψ,in distribution of width σ ψ,in = 3 • whereas N ∆φ had the width σ ∆φ = 30 • (both distributions are shown near the plot axes). The right panels present the distribution of PA angles at a fixed pulse longitude, i.e.
they present a vertical cut through those grey-scale PA histograms that are usually shown for single-pulse data (the black thick solid line is the intensity cumulated at a given PA). The distribution of V /I is shown with thick grey line and L/I is thin solid. Fig. 8a shows the case of a one-sided lag distribution, whereas the bottom panels show the symmetric N ∆φ . Comparison of panels a and b implies that a single circular feed (eg. C+ at ψin = 45 • ) can produce two orthogonal PA tracks depending on the shape and position of N ∆φ . The difference of refraction indices favours the one-sided N ∆φ , and it is also the case which avoids some depolarization typical of the two-sided N ψ,in . For the moderately wide lag distribution used in Fig. 8, the power stays close to ∆φ = 90 • and therefore V /I is high (∼0.7). L/I is about 0.5 in the top case, and the same in both PA tracks of the bottom-right panel. However, after Stokes-averaging over the PA distribution, the average L/I (at some longitude Φ) would be very low, unlike in the top case. The symmetry of N ∆φ distribution is thus important for some conclusions of this paper.
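The depolarizing effect of a symmetric N ∆φ under Stokes-averaging can be demonstrated with a short Monte Carlo sketch (my own illustration; the widths are example values, and for simplicity the equal-amplitude case E1 = E2 is used, for which Q = 0 exactly):

```python
import math
import random

random.seed(7)

def mean_linear_fraction(dphis_deg):
    # Stokes-average the equal-amplitude coherent sum over a lag sample:
    # with E1 = E2 and I = 1, Q = 0 and U = cos(dphi), so <L>/<I> = |<U>|.
    n = len(dphis_deg)
    U = sum(math.cos(math.radians(d)) for d in dphis_deg) / n
    return abs(U)

symmetric = [random.gauss(90.0, 30.0) for _ in range(50000)]           # symmetric about 90 deg
one_sided = [90.0 - abs(random.gauss(0.0, 30.0)) for _ in range(50000)]  # one-sided, <= 90 deg

print(round(mean_linear_fraction(symmetric), 2))   # near 0: strong depolarization
print(round(mean_linear_fraction(one_sided), 2))   # clearly nonzero
```

The symmetric lag sample averages U to zero (the two PA tracks cancel in the Stokes sense), whereas the one-sided sample retains a sizable net linear fraction, as stated above.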
As can be seen in Fig. 8, the lag distribution is extending the grey PA pattern horizontally at ψin = ±45 • and it is these horizontal extensions (which can look as dark horizontal bars -see the next figure) that correspond to the observed OPM ellipses C1 and C2 in Fig. 7. The more these 'dark modal bars' are centered at ∆φ = n180 • , the higher is the local L/I (in a single PA track) and the smaller is |V |/I (this can be deduced from top panel of Fig. 6).
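In the equal-amplitude case these statements can be checked directly, since eqs (4)-(9) reduce to Q = 0, U = cos ∆φ and V = −sin ∆φ for I = 1 (the sketch below is my own illustration, not code from D17):

```python
import math

def equal_amp_polarization(dphi_deg):
    # Equal-amplitude coherent sum (E1 = E2, I = 1): Q = 0,
    # U = cos(dphi), V = -sin(dphi); PA from eq. (9).
    d = math.radians(dphi_deg)
    U, V = math.cos(d), -math.sin(d)
    PA = 0.5 * math.degrees(math.atan2(U, 0.0))
    return PA, abs(U), abs(V)   # (PA, L/I, |V|/I)

# The PA is locked at -45 deg throughout the whole 90 < dphi < 270 range ...
print(sorted({round(equal_amp_polarization(d)[0]) for d in range(100, 261, 20)}))

# ... while L/I peaks and |V|/I vanishes where the lag sits at n*180 deg.
for d in (180.0, 135.0, 90.0):
    _, L, V = equal_amp_polarization(d)
    print(d, round(L, 2), round(V, 2))
```

This reproduces the behaviour of the 'dark modal bars': lags centered at n180 • maximize the local L/I of a single PA track, while lags near 90 • + n180 • give nearly pure circular polarization.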
The important general implication of this section is that after statistical averaging over N ∆φ , the observed OPMs (or the observed PA tracks) have the PA that is different from the PA of the natural mode waves m1 and m2 (this PA is equal to 0 or 90 • , as measured from the x1 axis of Fig. 7). In the specific case considered (equal amplitudes of the natural modes, ψin = 45 • ), the observed OPMs are located mid way between the natural modes. Thus, the observed PA tracks are not equivalent to the natural mode waves. As shown further below (Sect. 7.5), the PA tracks may in general be displaced by an arbitrary, mode-amplitude-ratio-dependent angle (and a ν-dependent angle) from the natural modes. In the special case of equal amplitudes (of the natural mode waves m1 and m2) the two observed PA tracks (C1 and C2, or C1 and CA) are separated by 90 • from each other, and can easily be misidentified as the natural orthogonal modes, although they are misaligned by 45 • from the natural modes m1 and m2.

Figure 8. Polarization characteristics for a mixture of non-simultaneous signals, each of which is composed of the coherently added natural modal waves m 1 and m 2 . The phase lag ∆φ and mixing angle ψ in for the coherent addition were sampled from the statistical distributions shown near the top and left axes of the plot (in a, N ∆φ is asymmetric). The result can be considered to present distribution of many radio signal samples observed at the same pulse longitude in different pulsar rotations. The main panels present the pattern of observed radiative power on the lag-PA diagram. The right panels present the observed intensity (thick solid), L/I (thin solid), and V /I (light grey) as a function of PA. Thus, the right panels show a vertical cut through the customary plots of PA distributions that are often presented for single pulse data. The result is for ψ in = 45 • , σ ψ,in = 3 • , ∆φ pk = 90 • and σ ∆φ = 30 • .
BASIC PROPERTIES OF THE EQUAL-AMPLITUDE OPMS AND THEIR APPLICATION TO PULSAR PROBLEMS
For the circular origin of the coherently combined waves m1 and m2, the value of ψ pk is fixed and N ψ,in must be narrow. Therefore, we are left with only three different processes that can happen to the radiative power on the lag-PA diagram: 1) the lag distribution may move to larger (or smaller) values; 2) the lag distribution may become wider; and 3) the other orthogonal circular-fed OPM can be added as an additional ψin distribution at −45 • , i.e. the ratio of amplitudes of E+ and E− may change.

In Fig. 9a N ∆φ is centered at 220 • , which is larger than π, hence V of the bottom PA track (at −45 • ) becomes negative (compare the grey curves in Figs. 8b and 9a). The sign of V can thus change within the same PA track. In Fig. 9b the PA track makes an OPM transition to the upper value of +45 • ; however, the circular polarisation stays negative, as in Fig. 9a. Thus, the increase of the lag can cause some OPM transitions, but they do not coincide with the sign change of V . They are actually a quarter of the lag-change cycle away, so that the OPM jump occurs at a maximum V , while the sign change of V occurs well within a stable modal PA track, i.e. within flatter parts of a 'non-transiting' observed PA track. This is similar to the lag-driven effects in the coproper modes described in D17. The orthogonal modal tracks created by the change of lag (or by widening of N ∆φ ) can thus be called pseudomodes: they do not obey the normal rule of zero V at the minimum of L/I. A phenomenon of this type (ie. lag-change-based) is observed in several pulsars, eg. in the core PA bifurcations. The lag-driven transfer of power between different OPM tracks also explains the same sign of V in different OPM tracks, as observed in single pulses (MAR15). Superficially similar pseudomodal behaviour is also observed in the form of slow OPM transitions at high |V | that occur within the whole pulse window (eg. in PSR B1913+16, see Fig. 1 in D17, after Everett & Weisberg 2001; also PSR J1900−2600, Johnston & Kerr 2018). However, these are probably caused by the PA wandering, which is discussed further below.

The appearance of the equal-amplitude observed modes at the 45 • distance from the natural modes is interesting: it seems to automatically solve the problem of what the primary observed OPM is doing half way between the natural modes at the entry to the PA loop of B1933+16. It is sufficient to claim that the observed OPMs are 45 • away from the natural modes, because their amplitude ratio is close to 1 at this particular frequency. In such a case, the PA loop can be explained by a rise and drop of |∆φ|, such as marked with the backward-bent arrow in Fig. 10b (right). The resulting loop is shown in Fig. 13 of D17. Such a model reproduces several observed properties, such as the bifurcation of the PA track, the twin minima in L/I, and the single-sign (negative) V . Moreover, a change of lag within a larger interval, such as shown in Fig. 10b (left), would explain the PA track bifurcation of B1237+25, along with the sign-changing V at the core component.

Figure 10. Top: Lag-PA pattern calculated for ψ pk = 41 • , σ ψ,in = 5 • , σ ∆φ = 15 • . Horizontal axis presents ∆φ pk . Circles show the PA averaged over the N ψ,in and N ∆φ distributions. The corresponding L/I and V /I are shown in b. A change of lag corresponds to horizontal motion in this plot, as shown with the backward-bent arrows. Two interpretations of the core polarization, shown for B1237+25 and B1933+16, work at a single frequency, but fail to explain the frequency behaviour of the phenomenon.
The lag-induced PA bifurcation is also illustrated with the curved arrows in Fig. 6. It can be seen that for the PA track to split, the radiative power must be close to ψin = 45 • (in such case the lines of fixed ψin diverge up and down from 45 • ). Both the upward- and downward-heading arrows remain all the time within the grey rectangles of positive V . Thus for the lag-induced PA bifurcation the sign of V stays the same whether the upper or bottom branch of the bifurcation is followed. This would explain why the sign of V is the same in both modulation states in B1237+25: in the bright-core Ab modulation state the lower branch of the bifurcation is followed, but the sign of V does not change (in comparison to the N state).
It is thus found that the lag change is the key factor that affects the PA bifurcations observed both in B1933+16 and B1237+25. Both these phenomena have the same nature, and can be explained by the same model with slightly different parameters. However, the PA loop of B1933+16 may also be interpreted in a different (and better) way, which retains the usual coproper OPMs (with the same PA as the natural waves m1 and m2 at 0 and 90 • ), but assumes a quick change of ψin towards ∼45 • within the loop. This new interpretation is favoured as discussed later, but such new model also requires the rise and drop of the phase lag within the loop.
The PA bifurcation model that is based purely on the lag-change faces serious problems when the loop of B1933+16 is interpreted at two frequencies. In Sect. 4.4.1 of D17 (cf. Figs. 13 and 14 therein) I have shown that a change of a single parameter -ψin -from 41 • (at 1.5 GHz) to 31 • at 4.5 GHz well reproduces the new look of the loop at the higher frequency. This can be inferred from Fig. 6: the curved arrow that follows ψin = 40 • produces the PA amplitude of almost 90 • . A similar arrow (not shown) that would follow ψin = 30 • for the same range of lag, would produce a smaller amplitude of PA, consistent with the data at 4.5 GHz (see Fig. 1 in MRA16). The problem is that the change of ν is most naturally associated with the change of phase lag. Even if the mode amplitude ratio (hence ψin) changes with ν, it is hard to argue that the lag ∆φ does not change. For a smaller ∆φ, the horizontal backward-bending arrow in Fig. 10b (right) would turn back earlier, which would have made the PA amplitude smaller (as observed). However, such earlier backward turn would also cause the twin minima in L/I to approach each other, or even merge into a single minimum at the middle of the loop. This is not observed at 4.5 GHz: the minima in L/I become very shallow but stay at the same Φ ( Fig. 1 in MRA16).
Apparently the lag-change alone cannot explain the loop at both frequencies. It will be shown below that simultaneous change of ψin and ∆φ with pulse longitude is needed to understand the phenomenon at both frequencies.
Changes of width of the lag distribution
Considerable widening of the lag distribution wipes out the circular polarization and tends to produce two highly linearly polarized PA tracks of similar or equal amplitude (which gives zero net L/I at a given Φ). This is because the radiative power is filling in several 'dark horizontal bars' at both 45 • and −45 • in the lag-PA diagram. On the other hand, for moderately strong widening of N ∆φ the results may be similar to those of the N ∆φ shift, because the 'center of weight' of the widening N ∆φ moves rightward.

Figure 11. The effect of the 45 • PA jump, with the associated randomization of PAs visible in the right panels. When N ∆φ becomes narrow in comparison to N ψ,in the maxima of the PA distribution move to 0 and 90 • (compare the PA distributions in Fig. 8). The small differences of lag around ∆φ pk = 90 • result in the randomisation of the observed PA, which is similar to the behaviour of B1919+21 at 352 MHz (MRA15).
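The wash-out of V by a widening N ∆φ can be illustrated with a quasi-coherent average, i.e. Stokes parameters averaged over a Gaussian lag distribution. This is a minimal sketch under my own normalization (for a Gaussian, the averages of cos ∆φ and sin ∆φ are damped by the factor exp(−σ²/2), so a wide N ∆φ suppresses both the net U and the net V):

```python
import numpy as np

def averaged_stokes(psi_in_deg, dphi_pk_deg, sigma_deg, n=4001):
    """Quasi-coherent average: Stokes parameters of the coherent mode sum,
    averaged over a Gaussian lag distribution N_dphi with peak dphi_pk and
    width sigma (a deterministic weighted-grid average, no random sampling)."""
    psi_in = np.radians(psi_in_deg)
    e1, e2 = np.cos(psi_in), np.sin(psi_in)
    mu, sig = np.radians(dphi_pk_deg), np.radians(sigma_deg)
    dphi = np.linspace(mu - 6.0*sig, mu + 6.0*sig, n)
    w = np.exp(-0.5*((dphi - mu)/sig)**2)
    w /= w.sum()                                  # normalized Gaussian weights
    q = e1**2 - e2**2
    u = 2.0*e1*e2*np.sum(w*np.cos(dphi))          # damped by exp(-sig^2/2)
    v = 2.0*e1*e2*np.sum(w*np.sin(dphi))          # damped by exp(-sig^2/2)
    return q, u, v

# Narrow lag spread: the net V/I survives; wide spread: it is wiped out.
_, _, v_narrow = averaged_stokes(45.0, 90.0, 5.0)
_, _, v_wide = averaged_stokes(45.0, 90.0, 120.0)
print(v_narrow, v_wide)
```

The individual (single-sample) signals still sit on the ±45 • tracks, so the per-sample linear polarization stays high even when the averaged V is suppressed.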
An interesting effect appears when the lag distribution becomes narrow and has a width comparable to that of the ψin distribution (N ψ,in ). Fig. 11 has been calculated for σ ψ,in = 8 • and σ ∆φ = 15 • . As can be seen in the right-hand panels, this makes the PA distribution quasi uniform, and the peaks relocate to coincide with the natural propagation modes m1 and m2 (located at PA of 0 and ±90 • ). The degree to which the peaks stand out depends on the ratio of σ ψ,in and σ ∆φ , and increases for narrower N ∆φ . This phenomenon has therefore the key characteristics of the 45 • PA jump, namely, the randomization of PA and the appearance of a new pair of preferred PA values which are displaced by 45 • from the equal-amplitude OPM tracks (observed in the wide-N ∆φ case).
The modelled quasi uniform distribution of PA corresponds to the erratic PA spread observed in the central profile region of PSR B1919+21, where the average PA curve is displaced by 45 • (see Fig. 18 in MAR15). The chaotic (quasiuniform in the model) distribution of PA becomes visible because the narrow N ∆φ is negligible, so the observed signal directly presents the state with no additional phase lag between the linear components m1 and m2. In this way the circulating motion of the electric field E+, as presented by the dotted circle C+ in Fig. 7a becomes directly visible (the circulation is recovered as the sum of the m1 and m2 waves with the little-changed original phase delay of 90 • ). Surprisingly, then, according to the circular-fed equal-amplitude model, the observed erratic PA spread also has geometric origin: it results from the circulating motion of the incident circularly polarized signal. The observed 45 • PA jump thus represents the transition from the lag-spread-stabilized PA (which represents the state of quasi-noncoherent average) to the lag-sensitive chaos of coherent states. In such model, the narrow well defined PA tracks present the observed OPMs (ie. the grey ellipses C1 and C2 that are misaligned by 45 • from the natural modes) which are associated with an average of wide N ∆φ -distribution. The longitudes with the erratic PA, on the other hand, present the non-averaged emission in which case the natural propagation modes m1 and m2 get through essentially undelayed. This interpretation, therefore, also associates the observed OPM tracks with the intermodes, just as the aforedescribed PA bifurcation model does.
The PA randomization of Fig. 11 has been obtained for a single OPM signal (say, C1 fed by C+, contributing the N ψ,in distribution at +45 • in Fig. 7). In this case the circular polarization can stay larger than zero throughout the 45 • jump, as observed in B0823+26 (Everett & Weisberg 2001). The addition of the second orthogonal mode (C2 or C- in Fig. 7) makes it possible to suppress V arbitrarily strongly.
The phenomenon of the 45 • jump was interpreted in D17 as the narrowing of the lag distribution, which is maintained here. However, the orthogonal modes that correspond to the wide N ∆φ , and are observed at the profile outskirts in B1919+21, were interpreted differently, and the peak of the N ψ,in distribution in the narrow lag state was arbitrarily positioned near ψin = 45 • . In the model discussed in this section the nature of the observed OPMs is different (circular fed C1 and C2) and they automatically tend to stay at the 45 • distance from the orthogonal proper waves (m1 and m2).
The widening and displacements of N ∆φ produce the pseudomodal behaviour - they are incapable of reproducing the classical mode jumps with coincident minima of L/I and |V |/I. To obtain such regular behaviour it is necessary to introduce the second circularly polarized component that produces the C2 OPM. This raises the question of why the amplitudes of these circular waves tend to be close to each other, and what causes the amplitude ratio to invert at the regular OPM jumps.
PULSAR AS A PAIR OF BIREFRINGENT FILTERS
Similar amount of pulsar OPMs in the circular-fed model
It has been shown above that some pulsar polarization effects can be described as the linearly birefringent filtration of two circularly polarized waves of similar amplitude but opposite handedness. If added coherently, such circular waves combine into a linearly polarized wave, or an elliptically polarized wave with large eccentricity of its polarization ellipse (see Figs. 12 and 13). This suggests that both these circularly polarized waves (C+ and C-) are generated by a single emitted signal with a narrow polarization ellipse. Such an original signal may be split into two circularly-orthogonal waves in a medium with circularly polarized natural propagation modes. If the original (i.e. emitted) signal is completely linearly polarized, as in the middle case in Fig. 13, then it produces identical amplitudes of both these circular waves (C+ and C-). In D17 the circular wave stage of the model was absent. Along with the change of pulse longitude Φ, the electric vector E of the emitted linearly polarized signal was slowly rotating with respect to the intervening polarization basis ( x1, x2). The reason was the change of angle between the low- and high-altitude direction of charge trajectories (section 4.5 therein). Whenever the vector E was passing through the 45 • angle, ie. mid way between x1 and x2, the mode amplitude ratio was inverted. This was causing the 90 • OPM jumps, albeit of the pseudomodal nature (with |V | peaking at the minimum L/I).
In the present circular-fed model such effect is not possible, because the rotation of the initial linearly polarized (or slightly elliptical) signal does not affect the amplitudes of the circular waves C+ and C-. The rotation just changes the phases of the waves, as shown in Fig. 12 (it is the Faraday rotation effect). Whatever the absolute oscillation phase of the circular waves, they always feed the same, orthogonal and linearly polarized waves m1 and m2. However, the ellipticity and handedness of the initial signal do affect the amplitudes of C+ and C-. As illustrated in Fig. 13, the relative amount of the circular waves is inverted whenever the handedness is changed, and the amplitude ratio is determined by the eccentricity of the initial signal.
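This behaviour is elementary Jones calculus and can be checked directly. The decomposition below is a sketch with my own basis convention, E = a+ (x̂ + iŷ)/√2 + a− (x̂ − iŷ)/√2: rotating the incident signal changes only the phases of C+ and C−, while its ellipticity and handedness set their amplitude ratio.

```python
import numpy as np

def circular_components(jones):
    """Decompose a Jones vector (Ex, Ey) into circular components a+ and a-,
    using E = a+ (x + iy)/sqrt(2) + a- (x - iy)/sqrt(2)."""
    ex, ey = jones
    a_plus = (ex - 1j*ey) / np.sqrt(2)
    a_minus = (ex + 1j*ey) / np.sqrt(2)
    return a_plus, a_minus

def linear_jones(theta_deg):
    """Fully linearly polarized signal with PA theta (a rotated linear wave)."""
    t = np.radians(theta_deg)
    return np.array([np.cos(t), np.sin(t)])

def elliptical_jones(beta_deg):
    """Elliptical signal, major axis along x, minor/major axis = tan(beta)."""
    b = np.radians(beta_deg)
    return np.array([np.cos(b), 1j*np.sin(b)])

# 1) Rotation changes only the phases of C+ and C-, never their amplitudes;
#    the phase difference grows as 2*theta (the Faraday-rotation behaviour).
for theta in (0.0, 30.0, 77.0):
    ap, am = circular_components(linear_jones(theta))
    print(theta, abs(ap), abs(am), np.degrees(np.angle(am) - np.angle(ap)))

# 2) Ellipticity sets the amplitude ratio |a+|/|a-|; inverting the
#    handedness (beta -> -beta) swaps the roles of C+ and C-.
ap, am = circular_components(elliptical_jones(20.0))
print(abs(ap) / abs(am))
```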
Therefore, in the model with the filtration of the initial signal by the circularly-birefringent medium the regular OPM jumps (with the coincident minima of |V |/I and L/I) are caused by the change of handedness of the emitted signal. It is the handedness of the emitted radiation which determines which circle in Fig. 7 is larger, and which N ψ,in distribution - whether the one at 45 • or the one at −45 • - is stronger, i.e. higher.
The regular OPM jump in the center of radio pulsar profiles
There is a way to test the hypothesis that the regular modal jumps are caused by the handedness change. It is well known that the regular inversion of the mode amplitude ratio is often observed in the central parts of pulsar profiles. The millisecond pulsar PSR J0437−4715 provides an example of this effect, as evidenced by the sign change of V and the OPM jump at the normally behaving L/I minimum (see Fig. 4). Such a sign-changing, sinusoid-like profile of V has long been associated with a sightline traverse through a fan beam of curvature radiation, the latter being emitted by a bent stream of charges (e.g. Michel 1991, pp. 355-359). The ensuing pulse of curvature radiation, at least in vacuum theory, has precisely the sinusoid-like, handedness-changing profile of V . As a consequence of the geometry shown in Figs. 13 and 7, there should be a regular 90 • OPM jump produced by the change of handedness, and it is indeed often observed at zero V in such core components of supposedly curvature-radiation-related origin. 7 Thus, a pulsar magnetosphere consisting of two filters that are made of circularly and linearly birefringent materials provides a quite successful polarization model: it is capable of explaining both the above-described non-RVM peculiarities and the standard polarization properties such as comparable modal power and the regular OPM jumps. However, such a model is complex and difficult to justify physically. A possible physical scenario would include a low altitude emission of the nearly linear signal, followed by the circular decomposition in weak magnetic field at large altitudes. The final stage of the linear filtering could possibly be considered as equivalent to the effects that occur at the polarization limiting radius. Because of this complexity, in what follows the relative amplitude of the opposite-V modes is considered as a free parameter.
TWOFOLD NATURE OF PULSAR POLARIZATION
It was shown above that aside from the RVM effect, the polarization of the pulsar radio signal can change because of two independent reasons: 1) as a result of a change of the lag distribution N ∆φ and 2) as a result of a change of the modal amplitude ratio (expressed by the ratio | E+|/| E−| in the circular-fed model). The first factor likely depends on the local properties of the intervening matter: a temporary increase of the refraction index may appear when the line of sight is traversing through some extra amount of matter, e.g. a plasma stream. The second factor is likely governed by the radio emission process (and is determined by the ellipticity and handedness of the emitted radiation in the specific case of the filter pair model). The two mechanisms - the lag-driven and amplitude-driven changes of polarization - have markedly different properties. The lag-driven effect produces the anticorrelated variations of |V |/I and L/I with pulse longitude (and OPM jumps at maximum |V |/I).
The amplitude-driven effects generate the regular modal behaviour with the usual OPM jumps. These generic properties are illustrated in Fig. 14 which presents a regular OPM transition on the left (Φ ≈ 120 • ) and the lag-driven bifurcation of the PA track on the right (Φ ≈ 290 • ) as a function of pulse longitude. The regular OPM coincides with the mode amplitude ratio of 1. The relative power of both modes, hereafter denoted Z, can be expressed as the integrated power (or just height, in case of identical width) of the N ψ,in distribution at +45 • and −45 • . The increasing value of Z × 100 is shown in the top panel (dotted), along with a temporary increase of ∆φ pk (solid Gaussian). 8 Several polarization effects observed in radio pulsars result from either process, or from a mixture of both. As described in section 2.1, both these non-RVM effects appear to shape the observed polarization especially in the central parts of pulsar profile.

7 The orthogonal elliptically polarized modes have by definition the opposite handedness, so it may seem to be a trivial vicious circle argument that a change of V sign confirms a modal jump. However, it is not, because without the final linearly birefringent filtering, the circular waves of Fig. 13 would combine back to the original ellipse or would be observed as separate circularly polarized signals. So it is the pair of filters which produces the regular OPM jumps.

Figure 14. Two generic polarization effects in the coherent wave addition model: the regular OPM jump caused by inversion of mode amplitude ratio (left, at Φ ≈ 120 • ) and the lag-driven PA bifurcation/loop effect (right, at Φ ≈ 290 • ), both shown as functions of pulse longitude. Top: the relative power Z of both modes (dotted diagonal shows Z × 100) and the function ∆φ pk (Φ) which follows the Gaussian centered at Φ = 290 • . Beyond the Gaussian ∆φ pk is equal to 90 • . The profiles of L/I (solid) and V /I (dotted) are shown at the bottom. The lag-profile Gaussian has the 1σ width of 30 • , whereas σ ψ,in = 13 • , σ ∆φ = 45 • .
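A toy version of the amplitude-driven effect can be written down if the two observed OPMs (at ±45 • , opposite handedness) are mixed incoherently with power ratio Z. The incoherent weighting and normalization below are my own simplification, not the paper's averaging procedure; the sketch just shows that as Z crosses 1, L/I and |V |/I pass through zero together and the PA flips by 90 • , i.e. the regular modal behaviour:

```python
import numpy as np

def opm_mixture_stokes(z, dphi_deg):
    """Incoherent mixture of two observed OPMs (at PA = +45 and -45 deg,
    opposite handedness) with power ratio z; each OPM is taken as a coherent
    equal-amplitude mode sum with phase lag dphi.  Returns (L/I, V/I, PA)."""
    dphi = np.radians(dphi_deg)
    # OPM1 (PA = +45): U = cos(dphi), V = sin(dphi); OPM2 has both reversed.
    u = (1.0 - z) / (1.0 + z) * np.cos(dphi)
    v = (1.0 - z) / (1.0 + z) * np.sin(dphi)
    l = abs(u)                                # Q = 0 for equal natural modes
    pa = 0.5 * np.degrees(np.arctan2(u, 0.0))
    return l, v, pa

# Regular OPM jump: at z = 1 both L/I and V/I vanish, and the PA
# switches between +45 and -45 deg on either side of the crossing.
for z in (0.5, 1.0, 2.0):
    print(z, opm_mixture_stokes(z, 30.0))
```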
Both these effects may depend on frequency. The influence of the lag may depend on ν because the lag depends on the refraction index, which is likely ν-dependent. As for the amplitude-related effects, they need to be ν-dependent to explain the modal power exchange observed in the D-type pulsars by Young and Rankin (2012). This exchange of power seems to coincide with the V -sign change, although the radio spectral coverage is far from continuous. 9 The ν-dependent amplitude ratio is also responsible for another type of PA distortions (slow PA wandering) that is discussed further below.

Figure 15. The result of simultaneous operation of both effects - the lag change and amplitude change - within the same longitude interval. Both the Z = 1 crossing point and the Gaussian lag-pulse are moved toward the center of the Φ axis. Note the appearance of dissimilar twin minima denoted 'FT' that coincide with OPM jumps. The left minimum has high |V |/I, the right one coincides with V = 0. This is the phenomenon observed in the profile center of J0437−4715. Since Z and the amplitude of the lag pulse depend on ν, the look of this effect strongly depends on frequency. The value of Z is changing as shown with the dotted line in a, σ ψ,in = 5 • , σ ∆φ = 105 • .
The origin of dissimilar L/I minima in PSR J0437−4715
While interpreting polarization in the central (or other) parts of any profile it is important to allow for the possibility that both the effects of lag and amplitude ratio may be overlapping there to produce a net profile of L/I, V /I and a net PA. An obvious example of such an overlap is the center of the profile of J0437−4715. Fig. 15 presents the model result for the case when the temporary rise of lag (solid line in top panel) roughly coincides in Φ with the amplitude ratio reversal (dotted curve in top panel). The parameters have been changed a bit in comparison to Fig. 14, eg. the rate of Z change was increased; however, the main difference is that the longitude of equal mode power (Z = 1) and the peak of the ∆φ pk (Φ) profile were displaced to roughly the same Φ. This combination of lag and amplitude effects reproduces the major features of the central profile portion in J0437−4715 at 660 MHz (Fig. 4, after NMSKB97). A double minimum of L/I appears at Φ = 170 • (denoted FT in Fig. 15). The right minimum in this pair coincides with the change of V sign, whereas the left one coincides with high |V |/I. The value of L/I in the regular right minimum does not quite reach zero, as in the observation. Within the longitude interval flanked by the minima, the PA is visiting the orthogonal PA track, but quickly returns back to the −45 • value. 10 The deep minimum at Φ ≈ 215 • does not follow the observations, but this is only because no efforts have been made to adjust parameters in this longitude interval. Another difference is that the modelled OPM follows the full 90 • traverse. This is caused by the perfect alignment of the N ψ,in with ±45 • (this constraint will be relieved below).
The complex polarization of core emission can thus be understood as a combination of the lag-driven and amplitude-ratio-driven polarization effects. The core emission of normal pulsars (eg. B1237+25, B1933+16) also exhibits polarization profiles that are neither symmetric nor antisymmetric. Apparently, the overlap of lag and amplitude effects also occurs in these objects and is partially destroying the anti/symmetry of L/I and V /I which appears when the lag and amplitude phenomena are viewed separately.
Towards a general model
Let us summarize the results obtained so far. A model based on coherent and quasi-coherent addition of linearly polarized waves of roughly equal amplitude is capable of qualitatively reproducing polarized profiles (ie. all three components: L/I, V /I and PA) of the following phenomena: 1) the bifurcations of PA track in pulsars with complicated core emission (ie. B1933+16 and B1237+25, including two modulation states of the latter) and 2) the mixed core behaviour of J0437−4715. When extended to encompass the origin of the feeding circular waves, the model can possibly justify the regular OPM jumps and the similar amount of modes.
On the other hand, the purely linear birefringent filtering may seem unphysical, and the model faces two problems that contain indications about how to change it. First, the pseudomodal OPM transitions tend to traverse regions of very low L/I. As can be seen in Fig. 6, for ∆φ increasing from zero at ψin ∼ 45 • the radiative power approaches the fully circularly polarized point at (∆φ, ψin) = (90 • , 45 • ) then jumps down to ψ = −45 • while staying all the time fully circularly polarized. This is consistent with the low L/I observed at the core PA bifurcations in PSR B1933+16 and B1237+25; however, a capability to flexibly adjust the modelled L/I is needed: in D17's ψin-based model of B1913+16 it was difficult to avoid the strong decrease of L/I at the OPM transitions (cf. Figs. 1b and 7b in D17). Second, with the circular feeding of the linear proper waves (m1 and m2), the N ψ,in distribution is absolutely tied to ±45 • . Actually, even the spread of N ψ,in around these values (parametrized by σ ψ,in ) is hard to explain. 11 The circularly polarized waves (C+ and C-) that feed m1 and m2 are then too restrictive for the model and, at least when the 'filter pair' concept is dismissed, they indeed do little more than set the equal amplitude ratio of m1 and m2. Therefore, in the following I will use the lone pair of standard, elliptically polarized, orthogonal natural mode waves (EPONM waves). Obviously, the coherent addition of such waves must produce all the successful results of previous sections, because the linearly polarized equal amplitude waves are just a special case of EPONM waves. However, the arbitrary amplitude ratio and the nonzero ellipticity provide important enlargement of the model capabilities.
A general model of pulsar polarization thus includes the eccentricity of the polarization ellipse for the modal waves (m1 and m2). The eccentricity parameter may need to be sampled from a statistical distribution of some width. Even with the same eccentricity for both modal waves, this means two new parameters. Along with the other four (the mixing angle for the amplitude ratio and the phase lag, plus the widths of their distributions), this adds up to six parameters. Such a parameter space deserves a separate study; therefore, in what follows I describe my calculation method and only present a glimpse of the parameter space - just to address the above-described problems.
GENERAL MODEL
Coherent addition of elliptically-polarized orthogonal waves
The model is conceptually simple: observed pulsar polarization results from coherent and quasi-coherent addition of phase-lagged waves in two elliptically polarized natural propagation modes. They are numbered 1 and 2 and are presented in Fig. 16 by the ellipses m1 and m2. These ellipses are traced by the corresponding electric field waves E1 and E2. The ellipses m1 and m2 should not be mistaken for the observed PA tracks, because the latter result from coherent addition of m1 and m2 and may be easily displaced from the natural modes by an arbitrary angle. For example, if equal amplitudes of m1 and m2 are preferred, then the observed polarization ellipses (similar to the grey ellipses of
where ∆φ is the phase delay and tan β represents the ratio of the minor to major axis of the polarization ellipse. As usual ψin represents the ratio of the modal waves' amplitudes, ie. tan ψin = E_2/E_1, where E_1^2 = (E_1^x)^2 + (E_1^y)^2 and E_2^2 = (E_2^x)^2 + (E_2^y)^2. These waves coherently combine into the observed signal that in general is elliptically polarized:
E^x = E^x_1 + E^x_2 (15)
E^y = E^y_1 + E^y_2 (16)
The polarization ellipse for the observed signal E = (E^x, E^y) is calculated by numerically increasing ωt in the range between 0 and 360 • . The minor half axis A_min and the major half axis A_max of the observed ellipse are then identified numerically, along with the sense of the electric vector circulation (handedness). The PA is determined by the normalized components of the major axis:
cos ψ = A^y_max/A_max, sin ψ = A^x_max/A_max, (17)
whereas the ellipse axes length ratio gives the observed eccentricity angle:

tan β_t = A_min/A_max (18)
which is different from the initial β of the proper modal waves. The normalized Stokes parameters are calculated from:
Q/I = cos(2β_t) cos(2ψ) (19)

U/I = cos(2β_t) sin(2ψ) (20)

V/I = sin(2β_t), (21)

and the linear polarization fraction is calculated as L/I = [(Q/I)^2 + (U/I)^2]^0.5 .
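The procedure of eqs. (15)-(21) can be sketched as follows. The explicit parametrization of the modal waves m1 and m2 (orthogonal ellipses with minor-to-major axis ratio tan β and opposite handedness) is my assumption, since their defining equations are not reproduced above; the numerical tracing of ωt and the identification of A_min and A_max follow the text.

```python
import numpy as np

def observed_polarization(psi_in_deg, dphi_deg, beta_deg, n=7200):
    """Coherently add two elliptically polarized orthogonal natural-mode
    waves m1 and m2, then measure the resulting ellipse numerically.
    Returns (PA in deg, L/I, |V|/I); the sign of V (handedness) is omitted."""
    psi_in = np.radians(psi_in_deg)
    dphi = np.radians(dphi_deg)
    beta = np.radians(beta_deg)
    e1, e2 = np.cos(psi_in), np.sin(psi_in)   # tan(psi_in) = E2/E1
    wt = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)

    # Assumed modal waves: m1 has its major axis along x1, m2 along x2,
    # with opposite circulation sense (orthogonal polarization ellipses).
    ex = e1*np.cos(beta)*np.cos(wt) + e2*np.sin(beta)*np.sin(wt + dphi)
    ey = e1*np.sin(beta)*np.sin(wt) + e2*np.cos(beta)*np.cos(wt + dphi)

    r = np.hypot(ex, ey)
    k = np.argmax(r)                          # crest of the traced ellipse
    a_max, a_min = r[k], r.min()
    pa = np.degrees(np.arctan2(ey[k], ex[k])) % 180.0   # PA of the major axis
    beta_t = np.arctan2(a_min, a_max)                   # eq. (18)
    return pa, np.cos(2.0*beta_t), np.sin(2.0*beta_t)   # eqs. (19)-(21)

# Linear modes (beta = 0), equal amplitudes, dphi = 30 deg: the observed PA
# sits midway between the modes, with L/I = cos(dphi) and |V|/I = sin(dphi).
print(observed_polarization(45.0, 30.0, 0.0))
```

With circular modal waves (β = 45 • ) the same routine returns a fully linear signal whose PA follows the lag, i.e. the diagonals of the lag-PA diagram.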
Lag-PA diagrams for elliptical modes
The lag-PA diagram of Fig. 17 (left panel) presents the pattern of PA calculated for fixed values of eccentricity β increasing uniformly from 0 to 180 • in steps of 5 • . The amplitudes of the combined modes are everywhere the same (ψin = 45 • ). The corresponding L/I is shown in Fig. 18, with L/I increasing in darker regions. The sign of V /I is changed several times at the same points of this diagram and, therefore, V /I is not shown. However, |V |/I is as before anticorrelated with L/I, so dark regions in Fig. 18 present low |V |/I. The pattern presents new nodes, ie. regions where there is a high probability of observing the radiative flux. The nodes are at (∆φ, ψ) = (0, 45 • ) and (180 • , −45 • ). They appear for two reasons. The first is that for any eccentricity, at ∆φ = n180 • the electric vectors of the equal-amplitude modal waves always combine at the PA that is 45 • away from the PA of the proper modes (m1 and m2 have the PA of 0 and 90 • ). The second reason is that for the purely linear polarization (infinite eccentricity) of equal-amplitude waves the resulting PA is equal to ±45 • regardless of ∆φ. This produces the discontinuous PA jumps between the fixed PA values at ∆φ = 90 • and 270 • . For high eccentricity (nearly linear modal waves m1 and m2), and always for equal amplitude, the PA tends to linger close to 45 • , which increases the probability of the nearly intermodal PA. This is illustrated in Fig. 28 of the appendix.
The model described earlier in this paper (with the equal-amplitude linearly polarized orthogonal waves, LPOW) was confined only to the horizontal PA segments centered at the new nodes (and some nearby regions because N ψ,in was allowed to have finite width). The lag-PA space of the LPOW model is just a subpart of the new lag-PA diagram and this is because diverse ellipticities are added in Fig. 17. The patterns of L/I and V /I, within the overlapping part of the parameter space, are identical to those of the LPOW model. For example, the value of L/I in Fig. 18 increases towards ∆φ = n180 • and decreases at the discontinuous lag-change-driven OPM jumps. The new nodes coincide with the 'dark modal bars' of the LPOW model. This implies that all data interpretations provided before are also possible in the new elliptical model. In other words, the added diversity of eccentricities does not corrupt the previous results.
When the PA values of the left panel are collected in bins on the vertical axis, the histogram shown on the right is produced. The enhancements of the observed OPMs remain at the intermodal positions (half way between the PA of the natural modes). Naturally, for ψin = 45 • (R = 1) the intermodal nature of OPMs persists in the presence of diverse ellipticity.
A new feature of the lag-PA diagram is the set of diagonal straight lines which connect the nodes. These correspond to the sum of two circular waves (β = 45 • ) at increasing lag. The result is a uniformly rotating linearly polarized signal, hence the linear change of PA (the diagonals thus represent the Faraday rotation effect). A similar case is shown in Fig. 12 and Fig. 29 in the appendix. The linear polarization is full along the diagonals (L/I = 1, V /I = 0).

Figure 17. Left: Lag-PA diagram for equal mode amplitudes (ψ in = 45 • ) and for a set of eccentricity parameter β ∈ (0, 180 • ) sampled at an interval of 5 • . The horizontal sections (β = 0 and 90 • ) are produced by the linearly polarized proper modes as described earlier in this paper. The diagonals correspond to circularly polarized m 1 and m 2 modes (β = 45 • and 135 • ) and are caused by the uniform PA change of the resulting fully linearly polarized signal (close cases are shown in Figs. 13 and 29). There are new nodes at ∆φ = n180 • that coincide with the 'dark modal bars' of the previous analysis. Projection of all the PA angles on the vertical axis gives the histogram shown on the right side. The observed OPMs for such an equal-amplitude mixture of eccentricities are misaligned by 45 • from the natural modes.
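The diagonals can be checked with a minimal sketch (an illustration of mine, using my own handedness convention): the coherent sum of two opposite-handed circular waves of equal amplitude is fully linearly polarized at any lag, with the PA rotating uniformly as the lag grows.

```python
import numpy as np

def summed_circulars_pa(dphi_deg, n=3600):
    """Coherent sum of two opposite-handed circular waves of equal amplitude,
    lagged by dphi; returns (PA in deg, A_min/A_max of the summed ellipse).
    An axis ratio near zero means full linear polarization."""
    dphi = np.radians(dphi_deg)
    wt = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
    ex = np.cos(wt) + np.sin(wt + dphi)    # counterclockwise plus lagged
    ey = np.sin(wt) + np.cos(wt + dphi)    # clockwise circular wave
    r = np.hypot(ex, ey)
    k = np.argmax(r)
    pa = np.degrees(np.arctan2(ey[k], ex[k])) % 180.0
    return pa, r.min() / r.max()

# The summed signal is fully linear, and the PA drifts uniformly with the
# lag (PA = (45 - dphi/2) mod 180 in this convention) - the diagonals.
for dphi in (0.0, 60.0, 120.0):
    print(dphi, summed_circulars_pa(dphi))
```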
If the radiation at a given pulse longitude contains a mixture of eccentricities, then the lag-driven OPM transition occurs both along the S-shaped (or discontinuous) paths in the lag-PA diagram and along the straight diagonals. As shown in Fig. 18, in the middle of the OPM jump the combining signals of high eccentricity (ie. the almost linearly polarized modal waves which follow the S-shaped path) contribute circularly polarized power (note the bright stripe of the high |V |/I at the position indicated by the arrow) whereas the low-eccentricity signals (circularly polarized modal waves that follow the diagonals) contribute linearly polarized power (the diagonals are black everywhere, ie. they have L/I = 1, see Fig. 29, compare Fig. 28). The lag-driven OPM transition for a signal of mixed ellipticity can thus be perceived as the passage from, say, the top horizontal row in Figs. 29 and 28, to the fourth row in these figures (along with all unshown cases of intermediate ellipticity). As shown in Fig. 18 with the arrow, the inclusion of wider ellipses increases L/I at the lag-driven OPM jump. The inclusion of eccentricity can thus increase the very low L/I at some lag-driven OPM transitions.
Entanglement of the lag-driven and amplitude-ratio-driven effects
The amplitude ratio of observed OPMs seems to change with pulse longitude Φ and with the frequency ν. The lag change should be considered as the primary effect which governs the different look of PA tracks at different ν. The change of mode ratio (or ψin) is naturally responsible for changes of polarization with Φ (as proved by the regular OPM jumps). However, several observations at different ν suggest that ψin may also be ν-dependent. Moreover, if some observed OPMs have the intermodal nature, as illustrated in Fig. 7, then it is the change of lag itself which causes the ratio of observed OPMs to change. This has been presented in Fig. 9, where the change of lag causes the radiative power to leak from one orthogonal PA track to another. It should be possible to recognize if the observed change of OPM amount has a lag-driven origin, because the lag-driven effects exhibit the pseudomodal behaviour (anticorrelation of L/I and |V |/I).
This complexity needs to be kept in mind when the nonequal mode amplitudes are considered.
Beyond the equal amplitudes
To move N ψ,in away from 45 • , the amplitude ratio of the natural mode waves (m1 and m2) must be changed to a less trivial value than 1. 12 The change of ψin causes the entire lag-PA pattern to evolve. Fig. 19 shows the case of ψin = 30 • (amplitude ratio of 0.58) whereas Fig. 20 is for ψin = 10 • (ratio 0.17). As can be seen in Fig. 19, with the change of mode amount ratio the PA paths move away from 45 • . Moreover, with the increase of lag many paths cover a smaller range of PA than in the equal amplitude case. It should be noted that the addition of EPONM waves itself does not imply any preference for the same or similar amount of modes. The intermodal observed OPMs (located at ±45 • , see the histogram in Fig. 17) are just the consequence of the equal amplitude assumption. When the entire parameter space is sampled uniformly in ψin ∈ (0, 180 • ), β ∈ (0, 180 • ) and ∆φ ∈ (0, 360 • ), the coproper modes of D17 (with the same PA as the natural modes, see Fig. 5) become statistically most probable and stand out in the histogram (see Fig. 21).

12 To detach from 45 • in the circular-fed equal-amplitude model, it is necessary to consider simultaneous detection (and coherent combination) of the modes C 1 and C 2 .

Figure 21. The lag-PA pattern for densely sampled parameter space of (∆φ, ψ in , β). The sampling was uniform in all these three angles. The observed PA tracks (peaks in the right histogram) coincide with the PA of the proper mode waves m 1 and m 2 . The black spots at the diagonals' crossings are at the same locations as the linear-fed nodes described in D17 (also see Fig. 6 in this paper). There is a spread of available PA values near the nodes. Note the appearance of the non-orthogonal PA tracks in the histogram. Not all peaks must be observed, depending on the actual spread of β in the observed signal.
In the case of the linear-fed coproper modes (Fig. 5), the equal amplitudes of observed OPMs are produced when the incident linear signal is traversing through the intermodal separatrix IM (at a wide N ∆φ that extends to ∆φ = 90 • ). This seems to be quite a simple and natural way to change the PA by 90 • , but the sign of V does not change along with the PA jump. In the birefringent filter pair model, the equal/similar amplitudes of both opposite-V modes are produced by decomposition of the initial quasi-linearly polarized signal in the circularly polarizing medium of the first filter. Apparently, some mechanism akin to such a process is required to explain the nearly equal amount of the modes m1 and m2 which are shown in Fig. 16. In any case, if there is any reason for which the natural mode amplitudes tend to be similar, then the histogram of Fig. 17 tells us that the observed OPMs are actually 45 • away from the natural propagation modes (as illustrated in Figs. 7 and 28). The dotted circles in Fig. 16 show how the similar amplitudes of modes can be produced if they are fed by circularly-polarized waves of common origin (ie. produced by decomposition of an almost linearly polarized signal into the C+ and C− waves of nearly equal amplitude).
Finally, however, it must be admitted that the amounts of the opposite-V modes are equally often observed to be strongly nonequal. This option opens the possibility of understanding the off-RVM wandering of PA tracks. The primary mode exchange phenomenon, illustrated in figures of Young & Rankin (2012), and other observations clearly show that the mode amount ratio changes with frequency. The ratio Z (or R) also changes with pulse longitude Φ, as proved by the regular OPM jumps and the ubiquitous changes of L/I with Φ. The coherent addition of proper modal waves (m1 and m2) implies that the observed PA should be at ψ = 45 • (with respect to the proper wave PA) whenever the lag distribution is wide and the modes' amplitudes are equal. 13 Note that such a statistical average of signals with different ∆φ reproduces the PA value that would be expected for a coherent sum of linearly polarized modal waves at zero lag, ie.:
ψ = ψin = arctan(E2/E1).    (22)
This equation is valid for other amplitude ratio values; however, for the full uniform distribution of β the PA maxima appear as two pairs, visible in the right panels of Figs. 19 and 20, ie. they appear at ψ = ±ψin and ψ = 90 • ± ψin. For a narrower distribution of eccentricities only some of these four peaks remain in the signal. Fig. 22 is calculated for uniform β in the range (0, 45 • ) and ψin = 30 • (hence this case is a subpart of Fig. 19). This gives two PA tracks at ±ψin, ie. separated by 60 • . The phenomenon of non-orthogonal PA tracks is often observed in radio pulsar data. For example, in PSR B1944+17 and PSR B2016+28 (Fig. 15 in MAR15) the PA tracks are neither orthogonal nor parallel, which requires the nonequal modal amplitude ratio to change with Φ.

13 For a narrow N ∆φ the observed PA becomes determined by an accidental value of ∆φ or ∆φ pk . A wider N ∆φ allows for the statistically average value to stand out in a stable way.
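As a quick numerical check of eq. (22), the sketch below (Python with numpy; the function name and the uniform-lag sampling are my assumptions, not from the paper) histograms the PA of a coherent sum of two orthogonal linearly polarized waves over a wide (uniform) lag distribution. The histogram indeed peaks at ψ = ±arctan(E2/E1) = ±ψin, because the PA is stationary with respect to ∆φ at ∆φ = 0 and 180 • :

```python
import numpy as np

def pa_of_coherent_sum(psi_in_deg, dphi_rad):
    """PA (deg) of a coherent sum of two orthogonal linearly polarized
    waves with amplitudes E1 = cos(psi_in), E2 = sin(psi_in) and lag dphi.
    Stokes parameters of the sum: Q = E1^2 - E2^2, U = 2 E1 E2 cos(dphi)."""
    psi_in = np.radians(psi_in_deg)
    e1, e2 = np.cos(psi_in), np.sin(psi_in)
    q = e1**2 - e2**2
    u = 2.0 * e1 * e2 * np.cos(dphi_rad)
    return np.degrees(0.5 * np.arctan2(u, q))

# Wide (uniform) lag distribution, psi_in = 30 deg (amplitude ratio 0.58)
dphi = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, 200000)
pa = pa_of_coherent_sum(30.0, dphi)
hist, edges = np.histogram(pa, bins=90, range=(-45.0, 45.0))
peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print(abs(peak))  # close to psi_in = 30 deg, as in eq. (22)
```

All PA values stay within ±ψin, and the density diverges at the two extremes, which is why the statistical average of lagged signals reproduces the zero-lag PA.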
Off-RVM PA wandering
For an increasingly non-equal amount of modes, ψ will diverge from 45 • and, in the limit of one mode absent, the observed PA must become equal to 0 or 90 • , ie. it must start to coincide with one of the natural modes. Therefore, the model predicts that whenever the mode ratio is changing (along with changing Φ or ν) between 1 and 0, the observed PA should exhibit transitions between the observed intermodal PA track (at ψ = 45 • ) and the proper mode at ψ = 0. In a general case, the PA can wander between some initial ψ^init_in and a final ψ^fnl_in, with the values determined by their corresponding mode amplitude ratios, as given by eq. (22). This type of phenomenon likely occurs on the right side of the 660 MHz profile in J0437−4715 (Fig. 4). The ν-dependent modal ratio may also affect the strongly ν-dependent look of polarization within the core component of this pulsar. However, figures 17-22 clearly show that changes of lag with ν can affect the observed polarization at least equally strongly, and in fact they do, as is shown in the following section.
Frequency dependent lag and the loop of B1933+16
The OPMs observed to the left of the PA loop in PSR B1933+16 are stable at different frequencies: they appear as the same pair of orthogonal patches at both 1.5 and 4.5 GHz (see Fig. 1 in MRA16). This holds even though the ratio of observed OPMs quickly changes with pulse longitude Φ at both frequencies. The steady orthogonal location of the modes can be ensured both in the linear-fed model of Fig. 5 and in the circular-fed model of Fig. 7. In the case of the coproper modes of D17, the lag distribution must be wide so that the nodes at PA of 0 and 90 • are enhanced. For a narrow ∆φ distribution the proper modal waves m1 and m2 would coherently combine to an arbitrary PA, as given by eq. (22). Alternatively, the proper modal waves would have to be detected non-simultaneously (to avoid coherent addition) to hold the steady PA at 0 and 90 • . The observed minima in L/I of B1933+16 (Fig. 1 in MRA16) reveal that the mode amount ratio is being inverted every 4 • or so in the profile, so it is natural to assume that R temporarily becomes close to unity (ψin = 45 • or 135 • ) within the loop (where 'temporarily' again means 'within a narrow interval of Φ'). Therefore, it is assumed in the following that ψin slightly crosses the value of 45 • within the PA loop. For simplicity, the eccentricity is set to infinity (linear waves, β = 0), so that the model considered is that of Fig. 5 (which is a special case of the general model shown in Fig. 16). Moreover, since changes of ∆φ appear indispensable to obtain the bifurcation effect, it is assumed that both ψin and ∆φ change within the loop. Fig. 23 presents polarization characteristics calculated for the Gaussian change of ψ pk and ∆φ pk , as shown in panel a with the dotted and dashed line, respectively. Both Gaussians have the pulse longitude width σ = 30 • . The values of σ ψ,in = 15 • and σ ∆φ = 45 • are fixed across the pulse window.
Outside the loop the modelled PA track follows the proper mode at ψ = 90 • , because the assumed profile for the peak of the mixing angle distribution is: ψ pk = 90 • − 60 • exp(−0.5(π − Φ) 2 /σ 2 ). When ψ pk diverges from 90 • so much that the intermode is crossed (dotted horizontal at 45 • ), the PA loop is opened on the coproper OPM track. 14 The loop is not identical to the observed one, but several observed features are reproduced: there are upward-pointing 'horns' of PA at the top of the loop, little power inside the loop, and the bottom of the loop extends into the downward-pointing tongue of radiative power that reaches all the way to the top of the loop (after the PA axis is wrapped up with the 180 • period). The twin minima in L/I and the single-sign V are also well reproduced.

14 It is therefore not necessary for the OPMs left of the loop to be intermodal.
The simultaneous changes of ∆φ seem to be indispensable in this model. The change of ψin alone does not produce the bifurcation, and the result in such a case resembles that of Fig. 11 in D17.
The profile of the peak value in the lag distribution was ∆φ pk = 360 • − 100 • exp(−0.5(π − Φ) 2 /σ 2 ), ie. the amplitude of the lag change is equal to 100 • in Fig. 23. When the lag-change amplitude is decreased to 30 • , the result of Fig. 24 is obtained. One can see that the loop disappears and is transformed into the U-shaped PA distortion, much like the one observed in the data at 4.5 GHz (Fig. 1 in MRA16). Moreover, the twin minima in L/I become shallower, but do not merge, again as observed for B1933+16. The value of L/I increases, while |V |/I decreases, in agreement with data at both 1.5 GHz and 4.5 GHz.
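For concreteness, the two Gaussian longitude profiles used in this model can be written out as a short sketch (Python; the centre longitude of 180 • , standing in for the 'π' of the formulas, and the function names are my assumptions for illustration):

```python
import numpy as np

SIGMA = 30.0        # pulse-longitude width of both Gaussians (deg)
PHI_CENTRE = 180.0  # assumed centre of the feature (the 'pi' of the text, in deg)

def psi_pk(phi_deg):
    """Peak of the mixing-angle distribution vs pulse longitude (deg)."""
    return 90.0 - 60.0 * np.exp(-0.5 * (PHI_CENTRE - phi_deg) ** 2 / SIGMA**2)

def dphi_pk(phi_deg, amp=100.0):
    """Peak of the lag distribution vs pulse longitude (deg).
    amp = 100 gives the loop of Fig. 23; amp = 30 the U-shape of Fig. 24."""
    return 360.0 - amp * np.exp(-0.5 * (PHI_CENTRE - phi_deg) ** 2 / SIGMA**2)

print(psi_pk(PHI_CENTRE))   # 30.0: the profile dips below the 45 deg intermode
print(dphi_pk(PHI_CENTRE))  # 260.0: maximal lag deviation at the loop centre
```

At the feature centre ψ pk reaches 90 • − 60 • = 30 • , so it crosses the 45 • intermode on the way down, which is the condition for opening the loop; far from the centre both profiles return to their baseline values of 90 • and 360 • .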
The decrease of the lag amplitude is thus all that is needed to understand the evolution of the PA loop with frequency. It is possible to obtain the right behaviour with the decrease of lag only, which is naturally expected at increased ν.
The phase lag may be a strong function of ν and it is possible that ∆φ becomes negligible at 4.5 GHz. If so, then the observed PA track of the U-shaped distortion directly reveals the profile of ψ pk , which is indeed observed to be about 45 • away from the OPMs that are observed outside the loop (the dotted curve in Fig. 24a is reproduced in the PA curve of Fig. 24b, to be compared with the 4.5 GHz data in Fig. 1 of MRA16).
The result of this section again shows that the core polarization of pulsars is a combination of amplitude-driven and lag-driven effects, and that the look of PA curves and other polarization characteristics changes with frequency because of the frequency-dependent phase lag. When the Gaussian profiles of ψ pk (Φ) and ∆φ pk (Φ) are misaligned, the resulting profiles of L/I and V /I become asymmetric, which is observed at both frequencies. It should be possible to construct a similar multifrequency model for the complex behaviour of core polarization in J0437−4715 at different frequencies.
It must be noted that several effects of the lag change can also be produced through the narrowing of the N ∆φ distribution. For example, a result similar to that of Fig. 23 can be obtained for a fixed (Φ-independent) ∆φ pk when σ ∆φ is changing within the loop. Reasonable-looking loops were in particular obtained for a one-sided N ∆φ with σ ∆φ following the profile of σ ∆φ = 145 • − 135 • exp(−0.5(π − Φ) 2 /σ 2 ). In such a case the exact shape of the resulting loop depends on the ∆φ pk value.
Lag-driven inversions of PA distortions
Polarization characteristics that result from coherent mode addition are sometimes very sensitive to the parameters used. The results illustrated in the previous section were calculated for a symmetric (two-sided) lag distribution N ∆φ . Fig. 25 presents a different result for a one-sided N ∆φ . The amplitudes of the ψ pk and ∆φ pk profiles are 60 • and 105 • , respectively, and ψ pk now increases within the PA distortion (see the dotted line in panel a).

Figure 25. Polarization characteristics calculated for the ψ pk and ∆φ pk profiles shown in panel a, and for σ ψ,in = 15 • , σ ∆φ = 45 • . The PA is distorted downward, and V is positive.
A change of only the lag amplitude to 35 • leads to the result of Fig. 26. The PA distortion is now protruding upwards, whereas the other polarization characteristics (such as L/I and V /I) do not change much. This phenomenon resembles the PA bifurcation of B1237+25 in the N and Ab modulation states (SRM13). With the change of modulation state, the observed PA follows a different branch of the PA bifurcation while the sign of V does not change. It appears possible, then, that the exchange of the followed PA branch is caused only by the change of the lag value in different modulation states.
Obviously, the phenomenon of the modulation-state-dependent polarization, and other complex polarization phenomena in pulsars, require more detailed study. The parameter space for the coherent addition of non-equal elliptical modes offers a large number of possible polarization profiles. Fig. 27 presents the PA as a function of lag, calculated for a sparse grid of parameters: ∆ψin = 10 • , ∆β = 5 • , and ∆(∆φ) = 1 • . Different lines of ψ(∆φ) correspond to different pairs of (ψin, β). A numerical code for pulsar polarization needs to probe an even larger parameter space, with the added widths of statistical distributions of ψin, ∆φ, and β. To make things more complex, it may be necessary to introduce a few additional parameters that describe how these six basic parameters depend on pulse longitude Φ. More complete analysis of the phenomena visible in Fig. 27, along with detailed numerical fitting of pulsar data, is deferred to further study.

Figure 26. Same as in the previous figure, but for a smaller amplitude of the ∆φ pk profile (top solid line). The PA is now distorted upward, but the sign of V does not change. This behaviour resembles that observed in the core of B1237+25 in different modulation states (Fig. 1 and 6 in SRM13).
CONCLUSIONS
It has been shown that complex non-RVM polarization effects in radio pulsars can be understood in geometrical terms, as the result of coherent and quasi-coherent addition of elliptically polarized orthogonal proper-mode waves. The phenomenon of coherent mode addition is described by three (or six) parameters: the phase lag, the amplitude ratio (mixing angle), and the eccentricity of the polarization ellipse (plus the widths of their distributions). The model implies that the observed radio polarization is driven by at least two independent effects: the changes of mode amplitude ratio, which, in particular, are responsible for the regular OPM behaviour (with zero V at OPM transitions), and the changes of the phase lag, which have opposite characteristics. Both these factors influence the observed polarization within the same pulse intervals, which is evident in the core region of profiles. Such a model explains several complicated and dissimilar phenomena, such as: distortions, bifurcations and loops of the PA observed in the central part of profiles, twin minima in L/I associated with these distortions, maxima of |V |/I at OPM jumps, 45 • -off PA tracks, chaotic spread of PA values within the 45 • -displaced emission, and dissimilar L/I minima of mixed origin, such as those observed in J0437−4715. Moreover, the model is capable of interpreting the changing look of these phenomena with frequency and possibly with modulation state.

Figure 27. The lag-PA diagram for sparsely and uniformly sampled parameter space of (∆φ, ψ in , β). The result has been obtained for intervals: ∆(∆φ) = 1 • , ∆ψ in = 10 • and ∆β = 5 • . The coproper nodes at ψ = n90 • , the linearly polarized diagonals, and the intermodes at ψ = 45 • + n90 • seem to stand out most in this picture. For increasing ∆φ, a radio signal with fixed ψ in and β would have the PA changing along one of the visible lines.
The observed OPM tracks have often been directly associated with the natural propagation mode waves. It has been shown here that the observed OPMs do not necessarily correspond to the natural waves. Instead, the observed OPMs are a statistical average of the coherent sum of the natural waves (with diverse phase lags). Therefore, the PA of observed polarization tracks can be completely different from the PA of the natural waves. The observed PA tracks may be non-orthogonal and they may wander away from the RVM PA. The coherent addition model implies that the PA is distorted by the ν-dependent location and width of the lag distribution, and by the ν-dependent ratio of modal amplitude, as expressed by eq. (22). In the noncoherent model the observed PA can only jump by 90 • when one mode becomes stronger than another. In the coherent mode addition model, the noncoherent condition is obtained by coherent summation of numerous natural mode waves at diverse phase lags. This typically causes the coproper modes M1 and M2 to stand out in the data. Preference of equal modal amplitudes, however, makes the intermodes C1 and C2 most pronounced.
Identical amplitudes of the natural propagation mode waves (m1 and m2) are automatically produced when the waves are fed by a circularly polarized signal. The coherent addition model then implies that two pairs of observed OPM tracks may in general appear in pulsar profiles, and the pairs are separated by 45 • . Just like the linear-fed coproper modes, the intermodal OPMs are pronounced when the phase lag distribution is wide, which introduces many polarization ellipses that all share the same PA of 45 • (or −45 • , see Fig. 7).
In the case of the linear-fed coproper modes, the pseudomodal OPM jumps are produced when ψin is passing through the intermodal value of 45 • . |V |/I is maximal at such OPM transitions. In the case of the circular-fed equal-amplitude OPMs, the regular OPM jumps take place when the handedness of the feeding wave is changed. In the case of the general model with the elliptical proper modes, the regular OPM jumps occur in the usual way (when one mode becomes stronger than another, for whatever reason).
When the mode amplitude ratio slowly deviates from 1, the observed PA makes a non-orthogonal passage between the intermodal PA value and the natural mode PA, eg. between 45 • and 90 • . Such a change of PA does not have to be precisely equal to 45 • , given the possible simultaneous change of PA caused by the RVM effect. Examples of such slow wandering of PA between the OPM values can often be found in pulsar data, eg. on the trailing side of the profile in J0437−4715 (Fig. 4).
The presented model solves several problems that appeared in the analysis of D17. The complex polarization in the core components of both normal and millisecond pulsars can be understood as the result of simultaneous changes of phase lag (with pulse longitude and frequency) and of the mode amount ratio (which changes at least with pulse longitude). The change of lag with ν is responsible for the different look of the PA loop in B1933+16 at 1.5 and 4.5 GHz. If the profiles of lag and mode ratio are misaligned in pulse longitude, it is possible to produce the dissimilar twin minima in L/I as observed in J0437−4715. The original two-parameter lag-PA diagram of D17 seemed to clearly indicate where the observed OPMs are located, but it is found here that the observed 'modes' (PA tracks) in general do not coincide with the natural modes. They can be at any distance from the RVM PA, they can be non-orthogonal, and they can be intermodal wherever the amplitude ratio is close to unity.
The result of Fig. 23 shows that a fairly simple underlying model (see the Gaussian profiles in the top panel) can produce the very complex effect of the PA loop (panel b). The coherent mode addition thus presents a capable interpretive tool. However, the model contains many parameters: at least the lag, the mixing angle, the widths of their distributions, plus six parameters for their pulse-longitude dependence (amplitude, peak longitude, and the width, in the case of a Gaussian). Even with the ellipticity ignored, this makes for ten free parameters. Moreover, some pairs of the parameters (such as the lag and mixing angle, or the peak lag value and the width of the lag distribution) are degenerate at least to some degree. Therefore, it is not easy to find the best-fit parameters through a hand-made sampling of the parameter space. Nor is it easy to break the degeneracy. A possible way out is to consider the ν dependence of modelled phenomena, which has helped us to break the ψin-∆φ degeneracy in the case of the loop in B1933+16. Modelling of the single pulse data (distributions of PA, L/I and V /I at a fixed Φ) may also prove useful. A need for a carefully designed fitting code is apparent.

Figure 28. Coherent summation of two orthogonal elliptically polarized natural propagation mode waves (narrow orthogonal ellipses). Numbers in the corners give the value of phase lag used in the summation. The result has the form of a wide ellipse, a narrow ellipse at nearly diagonal orientation, or a diagonal line (linear polarization at ∆φ = 0 and 180 • ). For most lag values the result has the PA close to ±45 • . Only in the second and fifth horizontal row does the PA deviate considerably from ±45 • . The dots on the ellipses refer to the same moment of time (t 1 for small dots, t 2 > t 1 for the large dots, ie. the direction of electric field circulation is from a small dot to a large one). The eccentricity angle of the ellipses that are added is β = 10 • .
The result roughly corresponds to the S-shaped lines in Fig. 17 and 18, ie. it is not far from the discontinuous case with the PA jumping from 45 • to −45 • at ∆φ = 90 • (and back at 270 • ).
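The behaviour of the equal-amplitude elliptical sum can be reproduced with a few lines (Python; the Jones-vector convention and the opposite-handedness pairing of the ellipses are my assumptions): at zero lag the sum is a linear diagonal at 45 • , while at ∆φ = 90 • it is nearly circular, with |V |/I = cos 2β.

```python
import numpy as np

def stokes_of_sum(beta_deg, dphi_deg):
    """Stokes (I, Q, U, V) of the coherent, equal-amplitude sum of two
    orthogonal elliptical waves with eccentricity angle beta and lag dphi.
    Assumed Jones vectors: m1 = (cos b, i sin b), m2 = e^{i d} (i sin b, cos b)
    (orthogonal ellipses of opposite handedness)."""
    b, d = np.radians(beta_deg), np.radians(dphi_deg)
    ex = np.cos(b) + np.exp(1j * d) * 1j * np.sin(b)
    ey = 1j * np.sin(b) + np.exp(1j * d) * np.cos(b)
    i = abs(ex) ** 2 + abs(ey) ** 2
    q = abs(ex) ** 2 - abs(ey) ** 2
    u = 2.0 * (np.conj(ex) * ey).real
    v = 2.0 * (np.conj(ex) * ey).imag
    return i, q, u, v

i0, q0, u0, v0 = stokes_of_sum(10.0, 0.0)   # beta = 10 deg, equal amplitudes
print(np.degrees(0.5 * np.arctan2(u0, q0)))  # 45.0: linear diagonal at zero lag
print(round(np.hypot(q0, u0) / i0, 3))       # 1.0: fully linear at zero lag
i9, _, _, v9 = stokes_of_sum(10.0, 90.0)
print(round(abs(v9) / i9, 3))                # 0.94: nearly circular at 90 deg lag
```

This matches the pattern just described: for most intermediate lags the PA of the sum stays close to ±45 • , and only around ∆φ = 90 • and 270 • does it swing away while the signal becomes predominantly circular.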
Figure 1. Polarisation characteristics of PSR B1919+21 at 352 MHz, after MAR15. The dots present the PA in units of 180 • , plotted at an arbitrary absolute value. The PA is plotted twice at a distance of 45 • (equivalent to 0.25). The light grey intensity profile is shown in the background for reference. The PA jump near Φ = 30 • is nearly perfectly equal to 45 • . The circular polarization fraction (bottom black solid line) passes through zero at the jump and L/I (grey) is minimal.

Fig. 1 presents the polarized profile of PSR B1919+21 as observed by MAR15. The profile exhibits a sharp 45 • PA jump near the maximum flux in the profile. The PA is plotted twice at a separation of 45 • which shows that the change of PA at the jump is nearly perfectly equal to 45 • . This may seem not strange given that the jump coincides with a deep minimum in the linear polarization fraction L/I and with a sign change of the circular polarization fraction V /I. These are trademark features of the equal modal power, and are
Figure 2. Polarization characteristics of PSR B0823+26 at 1.4

Figure 3. Cartoon presentation of the primary mode exchange effect, after Figs. 4 and 7 of

Figure 4. Polarization characteristics of J0437−4715 at 660 MHz, after
Figure 9. The effect of increasing phase lag ∆φ. The N ∆φ distribution is moving rightward. In a, ∆φ = 220 • , which causes the sign of V in the bottom PA track to change (compare V in the bottom PA maximum of Fig. 8b), but the PA maximum stays at ψ in = −45 • . In b, ∆φ increases to 330 • , which moves the observed PA to +45 • , but the sign of V does not change. This is called a pseudomodal behaviour. To obtain the normal OPM behaviour, another N ψ,in distribution would have to be added at −45 • , and the relative power in both N ψ,in would have to change with pulse longitude. The observed modes have the form of dark horizontal bars clearly visible in both lag-PA diagrams.

4.1 Movement of the phase lag distribution

Fig. 9 shows what happens when the phase lag distribution of Fig. 8b moves towards larger values of ∆φ. In
Figure 12. The almost linearly polarized signal, represented by the narrow polarization ellipse, can be mathematically decomposed into circularly polarized waves of similar amplitude. The same decomposition can occur in a circularly birefringent medium. Rotation of the quasi-linear signal only changes the oscillation phases of the circular waves (tips of arrows refer to the same moment of time, and numbers give the phase difference). The waves' amplitudes are unaffected (cf. the Faraday rotation effect).

Figure 13. Decomposition of the quasi-linearly polarized signal into circularly polarized waves. The amplitude ratio of the waves (equal to 0.8, 1 and 1.2 in the cases shown) depends on the eccentricity of the signal's polarization ellipse. With the change of handedness, the relative amplitude of the circular waves is inverted.

Figure 16. General model of pulsar polarization is based on coherent addition of elliptically polarized natural mode waves E 1 and E 2 which trace the solid-line ellipses m 1 and m 2 (shown in a and b). Here the proper modal waves are fed by the circular waves E + and E − (see Sect. 5), but this is not necessary for the model to work.
Fig. 6) appear at the PA of 45 • when m1 and m2 are coherently added. The elliptical natural mode waves m1 and m2 can be written as: ψin cos β cos(ωt + ∆φ)
Figure 18. Linear polarization fraction L/I for the lag-PA diagram shown in the previous figure. In the black regions L/I = 1, whereas in the bright regions L/I = 0. At the point indicated by the arrow several cases of eccentricity overlap, contributing different L/I. The linear (or quasi-linear) modes (β = 0) contribute circular polarization (L/I ∼ 0) whereas the diagonals contribute linearly polarized signal (L/I = 1). The pattern may also be considered to present |V |/I, with the value of 1 in the bright regions, but V /I changes sign several times in a given position of the plot and would require a different type of visualization.

Figure 19. The lag-PA diagram for mode amplitude ratio of 0.58 (ψ in = 30 • ). In comparison to the equal amplitude case, the curves of ψ(φ) have a different shape and are displaced vertically.

Figure 20. The lag-PA diagram for mode amplitude ratio of 0.18 (ψ in = 10 • ).

Figure 22. The lag-PA diagram for mode amplitude ratio of 0.58 (ψ in = 30 • ), but with β limited to the interval (0, 45 • ). The resulting PAs are a subset of those in Fig. 19. Two non-orthogonal PA tracks (visible as the peaks in the right histogram) appear at ±30 • . Changes of ψ in would change the separation between the tracks.

7.5 Mode-intermode PA transitions and the origin of the off-RVM wandering of PA

7.5.1 Non-orthogonal PA tracks
Figure 23. Modelled polarization characteristics for the PA loop of B1933+16 as observed at 1.5 GHz (cf. Fig. 1 in MRA16). The loop-shaped bifurcation of PA results from the Gaussian variations of ψ pk and ∆φ pk (dotted and solid line in a). Distribution widths were kept constant across the feature: σ ψ,in = 15 • , σ ∆φ = 45 • .

Figure 24. Same as in the previous figure, but for a smaller amplitude of the lag deviation (see the solid line in a). The PA loop has been transformed into the U-shaped PA distortion similar to that observed in B1933+16 at 4.5 GHz. L/I and V /I also change in the way consistent with the data.

Figure 29. Same as in the previous figure, but for nearly circularly polarized natural waves (β = 40 • ). This case is close to the diagonals shown in Figs. 17 and 18, ie. the PA of the resulting wave increases almost uniformly as in the Faraday rotation effect.
Hereafter, the terms 'linear' or 'circular', when referring to sig-

c 0000 RAS, MNRAS 000, 000-000
As discussed below, such circular waves can be produced by a decomposition of an elliptically polarized wave in medium with circularly polarized natural propagation modes.
This makes such circular-fed 45 • -off modes statistically frequent, which is a feature analogical to the linear-fed modes of D17.

domode. 5 As explained below, to account for the observed phenomenology, it is necessary to introduce a separate circularly polarized signal of opposite handedness, denoted with C− and E− in Fig. 7b. This additional wave, in the same way as just described, produces the second observed OPM, marked with the grey ellipse C2. Again, as explained below, the mode may be accompanied by a pseudomode CB, which may have the same or opposite handedness as C2. Remarkably, the new observed modes C1 and C2 form a pair which is 45 • away from the natural modes m1 and m2. They are also midway between the linear-fed modes described in D17, which have the same PA as m1 and m2. The new circular-fed OPMs 6 can be readily handled with the mathematical model described above, because each observed mode results from coherent addition of phase-lagged, linearly polarized orthogonal waves (m1 and m2). Specifically, equal amplitudes of the waves imply the mixing angle of ±45 • (eq. 3) and the circulating feeding of the waves implies the

5 Though C A may have the opposite handedness too.

6 The OPMs may also be called same-amplitude OPMs, especially that the circular feeding may be considered irrelevant to the problem as soon as the equal amplitudes are considered as an ad hoc assumption. See, however, Sect. 5.
Here 'temporary' means 'constrained to a narrow interval of pulse longitude'.
In the case of B0301+19, for example, between the 327 MHz and 1.4 GHz Arecibo profiles of Young & Rankin I have only found the 1.22 GHz profile in MRA15. The modal exchange takes place near 1.22 GHz since V has both signs in different parts of the average profile at this ν.
Since the model N ψ,in is perfectly aligned with 45 • , one is free to choose whether the PA jump direction is up or down.
The displacement from ψ in = 45 • could be obtained for elliptically polarized feeding waves.
ACKNOWLEDGEMENTS

I thank Richard Manchester for the average pulse data on J0437−4715 (Parkes Observatory). Plotting of polarized fractions for B1919+21 was possible thanks to the public Arecibo Observatory data base provided by Dipanjan Mitra, Mihir Arjunwadkar, and Joanna Rankin (MAR15). I appreciate comments on the manuscript from Bronek Rudak, discussions with Adam Frankowski and I thank Wolfgang Sieber for words of encouragement. This work was supported by the grant 2017/25/B/ST9/00385 of the National Science Centre, Poland.

APPENDIX
REFERENCES

Brinkman C., Freire P. C. C., Rankin J., Stovall K., 2018, MNRAS, 474, 2012
Dyks J., 2017, MNRAS, 472, 4617 (D17)
Edwards R. T., 2004, A&A, 426, 677
Edwards R. T., Stappers B. W., 2003, A&A, 410, 961
Edwards R. T., Stappers B. W., 2004, A&A, 421, 681
Everett J. E., Weisberg J. M., 2001, ApJ, 553, 341
Gangadhara R. T., 2010, ApJ, 710, 29
Hankins T. H., Rankin J. M., 2010, AJ, 139, 168
Johnston S., Kerr M., 2018, MNRAS, 474, 4629
Karastergiou A., 2009, MNRAS, 392, L60
Lyubarskii Y. E., Petrova S. A., 1998, Astrophysics and Space Science, 262, 379
McKinnon M. M., 2003, ApJ, 590, 1026
McKinnon M. M., Stinebring D. R., 1998, ApJ, 502, 883
McKinnon M. M., Stinebring D. R., 2000, ApJ, 529, 435
Melrose D., Miller A., Karastergiou A., Luo Q., 2006, MNRAS, 365, 638
Melrose D. B., 1979, Australian Journal of Physics, 32, 61
Michel F. C., 1991, Theory of neutron star magnetospheres. University of Chicago Press
Mitra D., Arjunwadkar M., Rankin J. M., 2015, ApJ, 806, 236 (MAR15)
Mitra D., Rankin J., Arjunwadkar M., 2016, MNRAS, 460, 3063 (MRA16)
Mitra D., Rankin J. M., 2008, MNRAS, 385, 606
Navarro J., Manchester R. N., Sandhu J. S., Kulkarni S. R., Bailes M., 1997, ApJ, 486, 1019
Noutsos A., Sobey C., Kondratiev V. I., Weltevrede P., Verbiest J. P. W., Karastergiou A., Kramer M., et al., 2015, A&A, 576, A62
Osłowski S., van Straten W., Bailes M., Jameson A., Hobbs G., 2014, MNRAS, 441, 3148
Rafat M. Z., Melrose D. B., Mastrano A., 2018, arXiv e-prints
Rankin J. M., Archibald A., Hessels J., van Leeuwen J., Mitra D., Ransom S., Stairs I., van Straten W., Weisberg J. M., 2017, ApJ, 845, 23
Rankin J. M., Ramachandran R., 2003, ApJ, 590, 411
Smith E., Rankin J., Mitra D., 2013, MNRAS, 435, 2002 (SRM13)
Wang C., Lai D., Han J., 2010, MNRAS, 403, 569
Young S. A. E., Rankin J. M., 2012, MNRAS, 424, 2477
|
[] |
[
"Shot noise dominant regime for ellipsoidal nanoparticles in a linearly polarized beam",
"Shot noise dominant regime for ellipsoidal nanoparticles in a linearly polarized beam"
] |
[
"Changchun Zhong \nDepartment of Physics and Astronomy\nPurdue University\n47907West LafayetteINUSA\n",
"F Robicheaux \nDepartment of Physics and Astronomy\nPurdue University\n47907West LafayetteINUSA\n\nPurdue Quantum Center\nPurdue University\n47907West LafayetteINUSA\n"
] |
[
"Department of Physics and Astronomy\nPurdue University\n47907West LafayetteINUSA",
"Department of Physics and Astronomy\nPurdue University\n47907West LafayetteINUSA",
"Purdue Quantum Center\nPurdue University\n47907West LafayetteINUSA"
] |
[] |
Results on the heating and the parametric feedback cooling of an optically trapped anisotropic nanoparticle in the laser shot noise dominant regime are presented. The related dynamical parameters, such as the oscillating frequency and shot noise heating rate, depend on the shape of the trapped particle. For an ellipsoidal particle, the ratio of the axis lengths and the overall size controls the shot noise heating rate relative to the frequency. For a particle with smaller ellipticity or bigger size, the relative heating rate for rotation tends to be smaller than that for translation indicating a better rotational cooling. For one feedback scheme, we also present results on the lowest occupation number that can be achieved as a function of the heating rate and the amount of classical uncertainty in the position measurement.
|
10.1103/physreva.95.053421
|
[
"https://arxiv.org/pdf/1701.04477v2.pdf"
] | 55,228,701 |
1701.04477
|
598eb996f699e5961bc92772cb879051f7e05b5f
|
Shot noise dominant regime for ellipsoidal nanoparticles in a linearly polarized beam
Changchun Zhong
Department of Physics and Astronomy
Purdue University
47907West LafayetteINUSA
F Robicheaux
Department of Physics and Astronomy
Purdue University
47907West LafayetteINUSA
Purdue Quantum Center
Purdue University
47907West LafayetteINUSA
Shot noise dominant regime for ellipsoidal nanoparticles in a linearly polarized beam
(Dated: October 7, 2018)
PACS numbers: 42.50.Wk, 07.10.Pz, 62.25.Fg
Results on the heating and the parametric feedback cooling of an optically trapped anisotropic nanoparticle in the laser shot noise dominant regime are presented. The related dynamical parameters, such as the oscillating frequency and shot noise heating rate, depend on the shape of the trapped particle. For an ellipsoidal particle, the ratio of the axis lengths and the overall size controls the shot noise heating rate relative to the frequency. For a particle with smaller ellipticity or bigger size, the relative heating rate for rotation tends to be smaller than that for translation indicating a better rotational cooling. For one feedback scheme, we also present results on the lowest occupation number that can be achieved as a function of the heating rate and the amount of classical uncertainty in the position measurement.
I. INTRODUCTION
The transition between a quantum and a classical description of a system as its size is increased has been discussed extensively since the birth of quantum mechanics [1-4]. Understanding the behavior of increasingly large systems in terms of quantum mechanics is one of the motivations for investigating mesoscopic quantum phenomena [5, 6]. In order to observe mesoscopic quantum coherence, a mesoscopic system needs to be cooled to the quantum regime, and it should be well isolated from its environment so that the quantum coherence is not destroyed before any observation. Recently, laser-levitated nanoparticles have become a promising candidate for studying mesoscopic quantum phenomena due to this system's favorable properties regarding decoherence and thermalization [5, 7-10].
Despite the great advantage of laser levitation, the nanoparticle still suffers from shot noise due to photon scattering from the trapping laser. In ultrahigh vacuum, this shot noise is the dominant source of decoherence [11], which leads to an increase in energy of the solid-body degrees of freedom: the center of mass motion and the solid-body rotations. Thus, in a laser-levitation cooling experiment, photon scattering, as an unavoidable factor, sets a fundamental cooling limit for the system, since the heating from shot noise counteracts whatever method is used to cool the nanoparticle.
Cooling and controlling the center of mass vibration of levitated nanoparticles have been discussed intensively in the past several years [12-15]. The interest in the rotational motion of a non-spherical nanoparticle is also increasing [10, 16-18]. The anisotropy of a dielectric nanoparticle has an orientation dependent interaction with a linearly polarized optical field, which leads to a restricted, librational motion in some of the orientation angles when the laser intensity is large enough [19, 20]. The oscillating frequency of the rotational degrees of freedom can be much larger than that of the spatial degrees of freedom, indicating that the rotational ground state can be reached at a higher temperature [10]. However, this feature does not guarantee that the ground state of the librational motion is easier to reach than that of the center of mass vibration. In our previous study [16], the decoherence rate due to shot noise in the rotational degrees of freedom was several orders of magnitude faster than that in the translational degrees of freedom for a nanoparticle interacting with blackbody radiation. The results from that theoretical study suggested that cooling the center of mass vibrations has a practical advantage over cooling the librational motion.

In this paper, we investigate the shot noise heating and parametric feedback cooling [12] of a nano-ellipsoid trapped in a linearly polarized laser beam. The nanoparticle is trapped in the center of the beam with its long axis closely aligned with the laser polarization direction. Because the nanoparticle is nearly oriented with the laser polarization, the decoherence and shot noise heating rate of the librational motion is qualitatively changed from that for a nanoparticle interacting with blackbody radiation. The heating rate differs in the rotational and the translational degrees of freedom depending on the particle size and geometry. Importantly, we find that the relative rotational heating rate is slower than the translational one for a wide range of nanoparticle sizes and shapes, suggesting a better rotational than translational cooling. However, the preference for smaller relative heating rates becomes much less certain when classical feedback uncertainty is included in the calculation. By one measure, a lower optimal cooling limit can be reached for motions with a higher relative heating rate. Thus, the details of the limitations imposed by the classical measurement uncertainty will determine whether lower quantum numbers can be achieved for vibrations or librations. The results of the feedback cooling calculations are suggestive, instead of definitive, because they are based on classical mechanics. Quantum calculations with more realistic measurement assumptions would allow for estimates of the feedback cooling limits [21-25]. Although more computationally demanding, a quantum version of feedback cooling of levitated nanoparticles should be within reach.

FIG. 1. A symmetric ellipsoidal nanoparticle is trapped in a laser beam (shown by the red line), which is polarized in the z direction and propagating in the positive y direction (shown by the red arrow). Besides the vibrational motion in the center of mass degrees of freedom, the ellipsoid also rotationally vibrates with its long axis closely aligned with the laser polarization direction. The angles α, β, γ denote an orientation of the nanoparticle.

* [email protected]
[email protected]
This paper is organized as follows. Section II introduces the translational and rotational shot noise heating of a nano-ellipsoid trapped in a laser beam based on the theory of collisional decoherence [4,16,17]. Section III analyzes the particle's vibrational and librational motions and discusses its relative cooling in the laser beam. Section IV presents the numerical results of the heating and the parametric feedback cooling. The simulation is classical and assumes an ideal measurement of the particle's position and velocity. Although limited to the classical regime, the calculations give insight into the relative difficulty of cooling the vibrational and librational degrees of freedom. In Sec. V, the results of feedback cooling with classical feedback uncertainty are presented. Finally, Sec. VI summarizes our results.
II. SHOT NOISE HEATING IN A LASER BEAM
In this paper, we consider a nano-ellipsoid with a size of about 50 nm and mass m trapped in a linearly polarized laser beam, as shown in Fig. (1). The laser field is polarized in z and propagates in the positive y direction; it can be written as E⃗_inc = ξ⃗E exp(i k⃗₀ · r⃗), where ξ⃗, E, and k⃗₀ = k₀ŷ are the polarization vector, the field magnitude, and the wave vector, respectively. The system is assumed to be well isolated from its environment, and recoil from the elastically scattered photons is the major source of decoherence.
A. Shot noise in translational degrees of freedom
In order to compare the shot noise in rotation and translation, we first present the well-known photon recoil heating of a trapped nanoparticle in its center of mass motion [5, 11]. Classically, the levitated nanoparticle experiences a momentum kick from each scattered photon [11], each of which gives a recoil energy ∆E = ħ²k²/2m when the nanoparticle is much smaller than the wavelength of the light. The shot noise heating rate can be derived by multiplying the recoil energy by the momentum transfer cross section and the photon flux. Quantum mechanically, the interaction between the system and the incoming photons causes decoherence of the system state [1], which generates diffusion in momentum space. The classical and the quantum mechanical treatments lead to the same shot noise heating rate. In the position basis, the master equation can be written as [4]

∂ρ(x, x′)/∂t = −Λ(x, x′) ρ(x, x′).  (1)

The unitary part of the time evolution is not shown in the above expression; Λ(x, x′) is the decoherence rate. In the long-wavelength approximation (a good approximation in the cases we consider), the decoherence rate is Λ = D(x − x′)², where D is the momentum diffusion constant, which takes the form

D = J_p ∫ d³k μ(k⃗) ∫ d²k̂′ |f(k⃗, k⃗′)|² (k²/2) |k̂ − k̂′|²,  (2)
where J_p is the photon flux, μ(k⃗) is the distribution of incoming wave vectors, and dσ/dΩ = |f(k⃗, k⃗′)|² is the differential cross section; k⃗ and k⃗′ are the incoming and outgoing wave vectors, respectively. The shot noise heating rate can be evaluated by the formula

Ė_T = d⟨H_T⟩/dt = tr(K_T ∂ρ/∂t),  (3)

where H_T = K_T + V_T denotes the system Hamiltonian and K_T = P²/2m is its free part. The potential energy V_T does not contribute to the right-hand side of Eq. (3), since the trace operation sets x = x′, where Λ = D(x − x′)² vanishes. Combining the above equations, a straightforward calculation yields

Ė_T = J_p ∫ d³k μ(k⃗) ∫ d²k̂′ (dσ/dΩ) (ħ²k²/2m) · 2(1 − cos θ),  (4)
where θ is the angle between the incoming and outgoing wave vectors. Equation (4) gives the translational shot noise heating rate, which is exactly what one would expect from a classical derivation [12]. In order to compare the above calculation with experimental results [11], Eq. (4) needs to be further evaluated. We are interested in the shot noise of a system coherently illuminated by a laser beam, so the incoming wave vector distribution can be approximated by

μ(k⃗) = δ(k⃗ − k⃗₀),  (5)

in which k⃗₀ = k₀ŷ is the incoming wave vector. If we denote by ξ⃗′ the polarization vector of the outgoing wave, the scattering amplitude can be written as [26]

f(k⃗′, k⃗) = (k²/(4πε₀E)) ξ⃗′ · P⃗,  (6)
where P⃗ = ᾱ · E⃗_inc is the induced dipole moment. For now, we choose a spherical nanoparticle (a non-spherical particle is discussed below), such that the polarizability is a scalar,

α = 4πε₀ ((ε − 1)/(ε + 2)) r³,  (7)

where r is the radius, and ε and ε₀ are the relative and the vacuum dielectric constants, respectively. Substituting the above equations into Eq. (4) and using the formula [27]

Σ_{λ=(1,2)} ξ′ᵢ^λ ξ′ⱼ^λ = δᵢⱼ − k̂′ᵢ k̂′ⱼ  (8)

to average over the polarizations of the outgoing wave, the shot noise heating rate is obtained:

Ė_T = ħ²D/m = (8πJ_p/3) (k₀²/(4πε₀))² α² (ħ²k₀²/(2m)).  (9)
Using the parameters in Ref. [11]: the laser wavelength is λ = 1064 nm, the mass of a fused silica particle of radius r = 50 nm is approximately 1.2 × 10⁻¹⁸ kg, the relative dielectric constant is about 2.1, and the photon flux J_p is equal to the laser intensity divided by the energy of a photon. The laser intensity at the focus is given by I = P k₀² NA²/(2π). The laser power is P = 70 mW and NA = 0.9 is the numerical aperture for focusing [11] (these values are used throughout the paper unless specified otherwise). Combining all of these factors, the translational shot noise heating rate is

Ė_T ≈ 200 mK/s,  (10)
which matches well the experimental result in Ref. [11].
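The estimate in Eq. (10) can be checked by combining Eqs. (7) and (9) with the parameter values just quoted. A minimal numerical sketch (all input values are the ones stated in the text; the script itself is illustrative):

```python
import math

# Parameter values quoted in the text (fused silica sphere of Ref. [11])
hbar = 1.0546e-34   # J s
c = 2.998e8         # m/s
kB = 1.381e-23      # J/K
eps0 = 8.854e-12    # F/m
lam = 1064e-9       # laser wavelength, m
r = 50e-9           # particle radius, m
m = 1.2e-18         # particle mass, kg
eps_r = 2.1         # relative dielectric constant of silica
P = 70e-3           # laser power, W
NA = 0.9            # numerical aperture

k0 = 2 * math.pi / lam
alpha = 4 * math.pi * eps0 * (eps_r - 1) / (eps_r + 2) * r**3   # Eq. (7)
I = P * k0**2 * NA**2 / (2 * math.pi)    # laser intensity at the focus
Jp = I / (hbar * c * k0)                 # photon flux = intensity / photon energy
recoil = (hbar * k0)**2 / (2 * m)        # photon recoil energy
# Eq. (9): translational shot noise heating rate
Edot = 8 * math.pi * Jp / 3 * (k0**2 * alpha / (4 * math.pi * eps0))**2 * recoil
print(Edot / kB, "K/s")
```

Running this gives roughly 0.2 K/s, consistent with Eq. (10).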
B. Shot noise in rotational degrees of freedom
Inspired by the experiments on laser trapping and cooling of non-spherical nanoparticles [10, 18], the master equation of rotational decoherence was studied for either massive particles or thermal photons scattered from an anisotropic system, and a squared-sine dependence on the orientation difference was found in the angular localization rate [16, 17]. Similar to the momentum diffusion induced by translational decoherence, rotational decoherence generates angular momentum diffusion, which was discussed for a spherically symmetric environment in Ref. [17]. Based on the rotational master equation, the expectation value of the angular momentum ⟨J⟩ was shown to be constant in time, while the second moment of the angular momentum indeed follows the diffusion equation

⟨J²⟩_t = ⟨J²⟩₀ + 4Dt,  (11)

where D is the diffusion coefficient determined by the type of scattering. The diffusion coefficients of Rayleigh-type and Van der Waals-type scattering were given in Ref. [17].
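The diffusion law of Eq. (11) is easy to illustrate with a toy Monte Carlo: each scattering event delivers an independent random angular momentum kick, and ⟨J²⟩ then grows linearly at the rate 4D. The Gaussian kick statistics used below (two transverse components, variance 2D dt each per step) are an illustrative assumption rather than the actual photon-scattering kernel.

```python
import math
import random

random.seed(7)
D = 0.5       # angular momentum diffusion coefficient (arbitrary units)
dt = 0.01     # time step
steps = 500   # total time T = steps * dt
ntraj = 1000  # number of independent trajectories

sigma = math.sqrt(2 * D * dt)   # kick size per component per step
msd = 0.0
for _ in range(ntraj):
    jx = jy = 0.0               # start from <J^2>_0 = 0
    for _ in range(steps):
        jx += random.gauss(0.0, sigma)
        jy += random.gauss(0.0, sigma)
    msd += jx * jx + jy * jy
msd /= ntraj

T = steps * dt
print(msd, "vs predicted", 4 * D * T)   # simulated <J^2>_T against 4*D*T
```

With these parameters the simulated ⟨J²⟩ comes out close to 4DT = 10, reproducing the linear-in-time growth of Eq. (11).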
In this section, we discuss the rotational shot noise from photon scattering in a laser beam. The starting point is the master equation of rotational decoherence. As shown in Fig. (1), the configuration of the ellipsoid can be described by its Euler angles |Ω⟩ = |α, β, γ⟩ [16, 17]. If we denote by ρ(Ω, Ω′) the density matrix of the system in the orientational basis, the time evolution follows the equation [16]

∂ρ(Ω, Ω′)/∂t = −Λ(Ω, Ω′) ρ(Ω, Ω′),  (12)

where

Λ = (J_p/2) ∫ d³k μ(k⃗) ∫ d²k̂′ |f_Ω(k⃗′, k⃗) − f_Ω′(k⃗′, k⃗)|²  (13)
is the rotational decoherence rate. A detailed discussion of this equation can be found in Ref. [16]. Similar to Eq. (3), the rotational shot noise heating can be obtained by evaluating

Ė_R = d⟨H_R⟩/dt = tr(K_R ∂ρ/∂t),  (14)

where H_R = K_R + V_R is the rotational Hamiltonian, V_R is the potential energy, which contributes nothing to the above equation, and K_R is the free rotational part. For a symmetric top, K_R takes the form [28]

K_R = −(ħ²/2I₁) [∂²/∂β² + cot β ∂/∂β + (I₁/I₃ + cot²β) ∂²/∂γ² + (1/sin²β) ∂²/∂α² − (2 cos β/sin²β) ∂²/∂α∂γ],  (15)
where I₁ and I₃ are the moments of inertia of the ellipsoid about the short and the long axis, respectively. To calculate the shot noise heating, the next step is to determine the decoherence rate Λ. As in the derivation of the translational shot noise, the distribution of the laser wave vector takes the delta function μ(k⃗) = δ(k⃗ − k⃗₀), where k⃗₀ is in the propagating y direction. The scattering amplitude is given by

f_Ω(k⃗′, k⃗) = (k²/(4πε₀E)) ξ⃗′ · ᾱ_Ω · E⃗_inc,  (16)

where ᾱ_Ω is the polarizability matrix for a specific configuration |Ω⟩ = |α, β, γ⟩. If we place the ellipsoid symmetrically along the coordinate axes, the polarizability matrix is diagonal,

ᾱ₀ = diag(α_x, α_y, α_z),  (17)

where α_x = α_y for a symmetric top. The polarizability for any other rotational configuration can be derived through the rotation

ᾱ_Ω = R†(Ω) ᾱ₀ R(Ω).  (18)

Combining the above equations and averaging over the polarizations of the outgoing wave using Eq. (8), the integral of Eq. (13) becomes

Λ = (J_p/2) (k₀⁴/(4πε₀)²) (2π/3) (α_z − α_x)² (1 − cos 2β cos 2β′ − cos(α − α′) sin 2β sin 2β′).  (19)
The polarizabilities α_{x,z} should not be confused with the Euler angles α and α′. As expected, Λ differs from the decoherence rate for blackbody radiation given in Ref. [16]. The localization rate Λ depends on the orientations |Ω⟩ and |Ω′⟩ individually, since the polarization of the incoming photons is not isotropic. There is no dependence on γ because we are assuming a symmetric top. The localization rate depends only on the difference of the angle α because the photons are linearly polarized in the z direction, which does not single out a preferential angle in the xy plane. For the cases considered below, we take the small oscillation approximation β ≪ 1, which will be justified in the next section. (Unless specified otherwise, the symbol ≈ in this paper means this approximation is used.) Combining Eqs. (12), (15) and (19), a direct evaluation of Eq. (14) yields the rotational shot noise heating rate

Ė_R ≈ (8πJ_p/3) (k₀²/(4πε₀))² (α_z − α_x)² (ħ²/(2I₁)),  (20)

where terms of order β² have been dropped.
III. RELATIVE COOLING OF THE ELLIPSOID IN THE LASER BEAM
There are several quantities that are useful when comparing the cooling of translation and rotation. The first is the ratio of the rotational to the translational shot noise heating rate, which is written as

Ė_R/Ė_T ≈ 5 (λ/(2π√(a² + b²)))² (α_z − α_x)²/α_z²,  (21)

where the moment of inertia I₁ = (1/5) m(a² + b²), with a and b being the short and the long axis of the ellipsoid, and k₀ = 2π/λ have been used. The polarizability can be determined from the formula [29]

α_i = ε₀V (ε − 1)/(1 + L_i(ε − 1)),  (22)

where V is the particle volume and ε is the relative dielectric constant. L_{i=(x,y,z)} is determined by

L_x = L_y = (1 − L_z)/2,  L_z = ((1 − e²)/e²) ((1/(2e)) ln((1 + e)/(1 − e)) − 1),  (23)

where e = √(1 − a²/b²) is the ellipticity of the nanoparticle. Using the wavelength λ = 1064 nm and ε = 5.7 for diamond and ε = 2.1 for silica, the rotational and translational shot noise rates and their ratios Ė_R/Ė_T for several nano-diamonds and fused silica particles are given in Tab. I, II, III and IV (for convenience, other related quantities are included in the tables). The geometries of the ellipsoids in the tables are chosen such that either their sizes √(a² + b²) or their ellipticities are approximately fixed. From the tables, we see that the ratio Ė_R/Ė_T differs depending on the ellipticity or size of the nanoparticle. A more elongated or smaller ellipsoid tends to have higher shot noise heating in the rotational degrees of freedom, which suggests that particles with a more spherical shape or a bigger size may be better for rotational cooling.
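The quantities entering these tables can be reproduced with a few lines of code. The sketch below evaluates the depolarization factors and polarizabilities of Eqs. (22) and (23) and the heating-rate ratio of Eq. (21); the particular geometry (a diamond ellipsoid with a = 25 nm, b = 50 nm) is an illustrative choice, not necessarily one of the tabulated ones.

```python
import math

def depol_factors(e):
    """Depolarization factors of a prolate spheroid, Eq. (23); e is the ellipticity."""
    Lz = (1 - e**2) / e**2 * (math.log((1 + e) / (1 - e)) / (2 * e) - 1)
    Lx = (1 - Lz) / 2      # L_x = L_y
    return Lx, Lz

def heating_ratio(a, b, eps_r, lam):
    """Rotational-to-translational shot noise heating ratio, Eq. (21)."""
    e = math.sqrt(1 - a**2 / b**2)
    Lx, Lz = depol_factors(e)
    # Eq. (22); the common factor eps0*V cancels in the ratio
    alpha_x = (eps_r - 1) / (1 + Lx * (eps_r - 1))
    alpha_z = (eps_r - 1) / (1 + Lz * (eps_r - 1))
    size = math.sqrt(a**2 + b**2)
    return 5 * (lam / (2 * math.pi * size))**2 * ((alpha_z - alpha_x) / alpha_z)**2

print(depol_factors(0.01))                          # near-sphere limit: L -> 1/3
print(heating_ratio(25e-9, 50e-9, 5.7, 1064e-9))    # illustrative diamond ellipsoid
```

In the spherical limit (e → 0) all depolarization factors approach 1/3 and α_z − α_x vanishes, so the rotational shot noise switches off, as expected from Eq. (20).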
The second useful quantity is the ratio of the rates of change of the occupation numbers, ⟨ṅ_R⟩/⟨ṅ_T⟩, where n ≡ E/(ħω) is the mean occupation number, and E and ω are the energy and the oscillation frequency of the corresponding degree of freedom. For the exploration of quantum phenomena, the occupation should be as small as possible. In order to obtain this ratio, it is necessary to analyze the mechanical motion of the nanoparticle in the laser trap. We consider an incident Gaussian beam which is z polarized and propagates in the y direction, as shown in Fig. (1). A detailed discussion of the Gaussian beam can be found in Refs. [12, 30]. The ellipsoid in the laser trap experiences a force and a torque

F_i = (1/2) Re(P⃗* · ∂_i E⃗_inc),  M_i = (1/2) Re(P⃗* × E⃗_inc)_i,  (24)

where no absorption is assumed, such that the dipole moment P⃗ is real. For the center of mass motion, using the small oscillation approximation, the particle oscillates harmonically in the trap, and each degree of freedom has an oscillating frequency

ω_x = ω_z ≈ √(α_z/m) E₀/w₀,  ω_y ≈ √(α_z/(2m)) E₀/y₀,  (25)

where all corrections quadratic in the amplitude of the oscillations have been dropped; y₀ = πw₀²/λ, w₀ = λ/(πNA) is the beam waist, and E₀ is the field strength in the center of the laser focus. Similarly, for the rotational motion, due to the torque exerted on the particle, the long axis of the ellipsoid is aligned with the direction of the laser polarization, as shown in Fig. (1). In the small oscillation approximation, the torsional oscillation frequencies can be written as

ω_β1 = ω_β2 ≈ √((α_z − α_x)/(2I₁)) E₀,  (26)
where all corrections quadratic in the amplitude of oscillations have been dropped. The subindex β 1 and β 2 are used to denote the torsional vibration along the x and y axis, respectively. From the above equations, one finds that the ratio of torsional oscillating frequency to the translational oscillating frequency is aproximately given by
ω β1 ω x √ 5w 0 2(a 2 + b 2 ) α z − α x α z .(27)
In an experiment, the beam waist is much bigger than the size of the particle, and the polarizability α z and α z − α x are roughly the same order, so the rotational oscillating frequency is generally higher than the translational oscillating frequency [10]. Thus, the ratio of the corresponding rate of change of occupation number is obtained
\frac{\dot{n}_R}{\dot{n}_T} \equiv \frac{\dot{E}_R/\hbar\omega_{\beta_1}}{\dot{E}_T/\hbar\omega_x} \simeq \frac{\lambda^2}{4\pi^2 w_0}\sqrt{\frac{10(\alpha_z-\alpha_x)^3}{(a^2+b^2)\,\alpha_z^3}}, \qquad (28)
where the ratio is determined by the laser parameters, the particle size, and the quantity (α_z − α_x)/α_z (determined by the particle ellipticity and dielectric constant). The ratios ṅ_R/ṅ_T with respect to the particle ellipticity and size are given in Fig. (2). The blue and yellow curves are for diamonds and silica, respectively. In Fig. 2(a), the particle size is kept fixed while we increase the ellipticity. As the particle shape becomes more spherical (the ellipticity decreases), the ratio ṅ_R/ṅ_T becomes smaller. In Fig. 2(b), we change the particle size while the particle ellipticity stays fixed. As the particle size increases, ṅ_R/ṅ_T gets smaller. In addition, comparing the results for diamond and silica with the same geometries, we see that the ratio ṅ_R/ṅ_T is generally smaller for silica. The reason is that (α_z − α_x)/α_z in Eq. (28) is smaller for particles with smaller dielectric constants, and silica has a smaller dielectric constant than diamond. Intuitively, the ratio ṅ_R/ṅ_T should be made as small as possible so as to obtain a better rotational cooling to the ground state. However, we will show later that the unavoidable measurement noise quantitatively modifies this trend.

The third useful quantity is the ratio Δn_R/Δn_T, where Δn ≡ 2πĖ/(ℏω²) is the change in occupation number over one vibrational period in the corresponding degree of freedom. The ratio can be written as
\frac{\Delta n_R}{\Delta n_T} = \frac{\dot{E}_R/\omega_{\beta_1}^2}{\dot{E}_T/\omega_x^2} \simeq \frac{\lambda^2}{2\pi^2 w_0^2}\,\frac{\alpha_z-\alpha_x}{\alpha_z}, \qquad (29)
which only depends on the laser parameters, the particle ellipticity, and the particle dielectric constant. The ratios Δn_R/Δn_T for diamond and silica with respect to the particle ellipticity are given in the tables and are plotted in Fig. (3). The curves show that the ratio increases with the particle ellipticity and also with the particle dielectric constant. This quantity is important, and we will show in Sec. V that it actually controls the classical dynamics during the feedback cooling. The above equations are based on a small oscillation angle approximation. In a cooling experiment, the maximal oscillation angle can be estimated by
\beta_{\max} \simeq \sqrt{\frac{2k_B T}{I_1\,\omega_\beta^2}}, \qquad (30)
where k_B is the Boltzmann constant and T denotes the temperature. Using the data (a = 48 nm, b = 53 nm) from Tabs. I and III, we find that the maximal angle spread is still small (β_max ∼ 10⁻³ rad for diamond, β_max ∼ 10⁻² rad for silica) at T = 0.1 K. For higher oscillation frequencies and lower temperatures, the maximal angle spread β_max will be even smaller.
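Equations (27)-(29) can be checked numerically against the tabulated (a = 48 nm, b = 53 nm) diamond. The short sketch below assumes a numerical aperture NA = 0.9 (so that w_0 = λ/(πNA) ≈ 376 nm; the NA value is our assumption, not stated in this section) and takes the polarizability ratio (α_z − α_x)/α_z = 0.07 from Tab. I:

```python
import math

# Evaluate Eqs. (27)-(29) for the (a = 48 nm, b = 53 nm) nano-diamond.
# NA = 0.9 is an assumed numerical aperture; the polarizability ratio
# (alpha_z - alpha_x)/alpha_z = 0.07 is taken from Tab. I.
lam = 1064e-9                       # trapping wavelength (m)
NA = 0.9                            # assumed numerical aperture
w0 = lam / (math.pi * NA)           # beam waist w0 = lambda/(pi*NA)
a, b = 48e-9, 53e-9                 # half axes of the ellipsoid (m)
pol = 0.07                          # (alpha_z - alpha_x)/alpha_z, Tab. I

# Eq. (27): torsional-to-translational frequency ratio
freq_ratio = math.sqrt(5) * w0 / math.sqrt(2 * (a**2 + b**2)) * math.sqrt(pol)

# Eq. (28): ratio of the rates of change of the occupation numbers
ndot_ratio = lam**2 / (4 * math.pi**2 * w0) * math.sqrt(10 * pol**3 / (a**2 + b**2))

# Eq. (29): ratio of the occupation change per oscillation period
dn_ratio = lam**2 / (2 * math.pi**2 * w0**2) * pol

print(freq_ratio, ndot_ratio, dn_ratio)   # ~2.2, ~0.06, ~0.03, cf. Tab. I
```

The three numbers reproduce the ω_β1/ω_x, ṅ_R/ṅ_T and Δn_R/Δn_T columns of Tab. I for this geometry, which is also what fixes the assumed NA.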
IV. NUMERICAL SIMULATION OF SHOT NOISE HEATING AND FEEDBACK COOLING
Parametric feedback cooling is discussed in Ref. [12], where a single laser beam is used for both trapping and cooling. The spatial motion of a nanoparticle is cooled from room temperature to subkelvin temperatures, and quantum ground state cooling is also suggested with the same cooling mechanism. In this parametric feedback scheme, a signal at twice the oscillation frequency is obtained by multiplying the particle's position by its first time derivative, x(t)ẋ(t). This information is then fed back to the system, which leads to a loop that on average acts as a drag on the particle. The parametric cooling works by simply modulating the intensity of the trapping laser, and this scheme is particularly suitable for rotational cooling since it avoids the relatively complex operations needed to feed back a torque. In this section, the feedback cooling calculations are based on ideal assumptions about measuring the nanoparticle's position and orientation. The discussion of feedback cooling with measurement uncertainty is given in the next section.
Combining the translational and rotational motion, the classical dynamics of the ellipsoid is governed by
m\,\frac{d^2 x_i}{dt^2} \simeq -m\omega_i^2(1+\Delta)\,x_i, \qquad I_1\,\frac{d^2\beta_j}{dt^2} \simeq -I_1\omega_{\beta_j}^2(1+\Delta)\,\beta_j. \qquad (31)
The small oscillation approximation is used in the above equations, where all corrections quadratic in the amplitude of oscillations have been dropped. Here x_i = (x, y, z) and β_j = (β_1, β_2). The shot noises in translation and rotation are added at each time step according to
p_i(t+\delta t) = p_i(t) + \delta W_i\,\delta p_i, \qquad L_j(t+\delta t) = L_j(t) + \delta R_j\,\delta L_j, \qquad (32)
where δW_i and δR_j are standard normally distributed random numbers, and δp_i = √(2Ė_{Ti} δt · m) and δL_j = √(2Ė_{Rj} δt · I_1) are the fluctuations of the momentum and angular momentum for each degree of freedom induced by the shot noise. The heating rate in the z direction (optical polarization direction) is half that of the other two translational degrees of freedom because the photons scatter less in the direction of the laser polarization [11]. Δ is a scalar which takes the form
\Delta = \sum_{i=1,2,3}\eta_i\,x_i\dot{x}_i + \sum_{i=1,2}\zeta_i\,r^2\beta_i\dot{\beta}_i, \qquad (33)
where r is the size of the nanoparticle. The feedback parameters η_i and ζ_i have units of Time/Length², and they control the cooling limit and speed. Details about the parameters and the parametric feedback cooling limit are given in the appendix. Simulations are performed for three different nano-diamonds with decreasing ellipticity, whose half axes go from (a = 15 nm, b = 70 nm) and (a = 38 nm, b = 60 nm) to (a = 48 nm, b = 53 nm). The corresponding parameters are given in Tab. I. The classical equations of motion are numerically solved using a fourth-order Runge-Kutta algorithm with adaptive time steps [31]. All simulations are repeated many times, and data is collected by averaging over the different runs to reduce the random noise.
We start by presenting the simulation with zero feedback (η_{1,2,3} = 0, ζ_{1,2} = 0), which corresponds to the pure shot noise heating process. The system is initially prepared at temperature T_i = 1 µK. The result is shown in Fig. (4), where each curve depicts the time evolution of the energy in the corresponding degree of freedom. Figures 4(a) and 4(b) show the case (a = 15 nm, b = 70 nm) in the first 100 ms; the rotational shot noise heating is about an order of magnitude larger than that in the translational motion. The case (a = 38 nm, b = 60 nm) is given in Figs. 4(c) and 4(d), in which the rotational and translational shot noise heating rates are of similar size. As the ellipticity gets smaller, the shot noise heating in the rotational degrees of freedom becomes less than that in the translational motion, as shown in Figs. 4(e) and 4(f) for the case (a = 48 nm, b = 53 nm). From Tab. I, the case (a = 48 nm, b = 53 nm) has a higher rotational than translational oscillation frequency, which suggests that it might be a good candidate for rotational cooling.
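The shot-noise kick scheme of Eq. (32) is easy to reproduce in a minimal dimensionless model (a sketch of ours, not the production Runge-Kutta code): one harmonic degree of freedom with m = ω = 1 is rotated exactly through each time step and then given a momentum kick of r.m.s. size √(2Ė δt). Averaged over trajectories, the energy grows linearly, ⟨E(t)⟩ ≈ E_0 + Ėt, as in the dashed heating curves of Fig. (4):

```python
import math, random

# Pure shot-noise heating of one harmonic degree of freedom (m = omega = 1).
# Between kicks the phase-space point is rotated exactly; each kick adds a
# Gaussian momentum increment with r.m.s. sqrt(2*Edot*dt), as in Eq. (32).
def heat(Edot, t_end, dt, n_traj, seed=1):
    rng = random.Random(seed)
    c, s = math.cos(dt), math.sin(dt)
    dp = math.sqrt(2 * Edot * dt)
    mean_E = 0.0
    for _ in range(n_traj):
        x, p = 1.0, 0.0                            # initial energy E0 = 0.5
        for _ in range(int(t_end / dt)):
            x, p = c * x + s * p, -s * x + c * p   # exact harmonic evolution
            p += dp * rng.gauss(0.0, 1.0)          # shot-noise kick
        mean_E += 0.5 * (x * x + p * p)
    return mean_E / n_traj

E_final = heat(Edot=0.01, t_end=100.0, dt=0.05, n_traj=400)
# expect roughly E0 + Edot*t_end = 0.5 + 1.0 = 1.5, up to statistical noise
```

Each kick raises the mean energy by δp²/2 = Ė δt, which is why the ensemble average heats at exactly Ė regardless of the oscillation phase.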
The non-zero feedback cooling is performed with the system temperature initially prepared at T i = 0.1 K. The feedback parameters (η i , ζ i ) are chosen in a way such that Eq. (33) is much smaller than one and the position and velocity are assumed to be measured perfectly.
First, we turn on the feedback in all degrees of freedom. The results are shown in Fig. (5). Because the calculations are classical, the results for occupations less than 10 are qualitative/suggestive. However, we do expect the classical results to be approximately correct for n ∼ 10, so this feedback could get close to the ground state. By tuning the feedback parameters from Δ_1 = {η_{1,2,3} = 1.1 × 10¹¹ s/m², ζ_{1,2} = 10¹¹ s/m²} to Δ_2 = 10Δ_1, the system is observed to be quickly cooled. Both the translational and rotational occupation numbers can get down to less than one in this classical calculation, which suggests the possibility of ground state cooling in all degrees of freedom. Figures 5(a) and 5(b) depict the cooling of a nano-diamond with half axes (a = 15 nm, b = 70 nm) in the translational and rotational degrees of freedom, respectively. We see that rotation and translation are cooled with almost equal speed, though the rotational oscillation frequency is more than six times higher than that for the translational motion. As the ellipticity goes lower, the cooling in rotation becomes more effective than in translation. As shown in Figs. 5(c) and 5(d) for the case (a = 38 nm, b = 60 nm), when the parameter Δ_1 is taken, the rotational occupation numbers go down close to 10 while the translational occupation numbers are still around 20. The cooling in rotation gets even better when the particle with half axes (a = 48 nm, b = 53 nm) is used, where the rotation is close to the ground state (n < 1) while the translational occupation numbers are still more than 10, as shown in Figs. 5(e) and 5(f). The reason is that when the ellipticity of the nanoparticle gets smaller, the rotational shot noise heating is less than the translational heating while the rotational oscillation frequency is still larger than that for translation.
Thus, a better rotational cooling for a particle with low ellipticity is expected, as suggested in the previous section. From the appendix, the steady state value of n is proportional to the square root of Ė/ω². This suggests that smaller values of Δn = 2πĖ/(ℏω²), as defined in the previous section, are better for cooling to the ground state. However, we will see in the next section that measurement noise qualitatively modifies this trend.
Second, we keep the feedback cooling only in the rotational degrees of freedom with Δ_1 = {η_{1,2,3} = 0, ζ_{1,2} = 10¹¹ s/m²} and Δ_2 = 10Δ_1. The results are shown in Figs. 6(a) and 6(b) for the nano-diamond with half axes (a = 48 nm, b = 53 nm). As shown in Fig. 6(a), when the feedback is increased from Δ_1 to Δ_2 = 10Δ_1, the rotational occupation number goes down all the way to the quantum regime. However, as shown in Fig. 6(b), the translational motion is heated up in the meantime. In order to see the cooling in only one rotational degree of freedom, we also calculate the case with Δ_1 = {η_{1,2,3} = 0, ζ_2 = 0, ζ_1 = 10¹¹ s/m²}. As shown in Figs. 6(c), 6(d) and 6(e), the motion in the β_1 degree of freedom quickly gets cooled to the ground state regime when Δ_2 is taken, while all other degrees of freedom (β_2, x, y and z) are heated up. For β_2, extra heating is observed due to resonance heating: the changes in the laser intensity are predominantly at a frequency that resonantly couples to either of the rotational degrees of freedom. In Figs. 6(b), 6(d) and 6(e), the dashed lines show the heating from pure shot noise. We see that the pure shot noise heating rates are slightly lower than (almost the same as) the heating rates with feedback cooling. The reason is that the cooling in one degree of freedom can add to the heating in the other degrees of freedom. Fortunately, this extra heating is not excessive and should not be a problem in experiments.
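The same toy model illustrates the parametric feedback of Eqs. (31)-(33) in one dimension (again our dimensionless sketch with m = ω = 1, with g playing the role of η): the modulated force −(1 + g x ẋ)x drains energy at a rate proportional to gE², so the motion relaxes from its initial energy toward the shot-noise-limited steady state:

```python
import math, random

# One-dimensional parametric feedback cooling (m = omega = 1): the restoring
# force is modulated by the feedback signal g*x*p, and shot-noise kicks of
# r.m.s. sqrt(2*Edot*dt) are added each step (semi-implicit Euler integrator).
def cool(g, Edot, E0, t_end, dt, n_traj, seed=2):
    rng = random.Random(seed)
    dp = math.sqrt(2 * Edot * dt)
    mean_E = 0.0
    for _ in range(n_traj):
        x, p = math.sqrt(2 * E0), 0.0
        for _ in range(int(t_end / dt)):
            p -= (1.0 + g * x * p) * x * dt        # modulated restoring force
            x += p * dt
            p += dp * rng.gauss(0.0, 1.0)          # shot-noise kick
        mean_E += 0.5 * (x * x + p * p)
    return mean_E / n_traj

E_cooled = cool(g=0.05, Edot=0.001, E0=2.0, t_end=1000.0, dt=0.02, n_traj=50)
# relaxes from E0 = 2 toward the heating/cooling balance, far below the
# feedback-free value E0 + Edot*t_end = 3
```

With the appendix's cycle-averaged balance, the steady state here is of order √(2Ė/g) ≈ 0.2, which the ensemble average approaches once the feedback has acted for many oscillation periods.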
V. THE PARAMETRIC FEEDBACK COOLING LIMIT WITH CLASSICAL UNCERTAINTY
The above discussion of feedback cooling is based on an ideal measurement of the particle's position and velocity. In reality, a measurement cannot be infinitely accurate and is fundamentally limited by the quantum uncertainty δx δp ≥ ℏ/2, which introduces an extra feedback noise during the cooling process. The uncertainty in the position measurement can be reduced by increasing the photon scattering rate; however, stronger photon scattering induces faster shot noise heating. Thus, tuning an appropriate photon recoil rate and a proper feedback parameter is important in optimizing the feedback cooling.
This section numerically studies the optimal cooling limit when the main error in the position measurement is due to classical measurement uncertainty. As we will show below, the equations of motion can be scaled. Therefore, the simulation is performed in only the x degree of freedom for the case (a = 48 nm, b = 53 nm) in Tab. I. The calculation is still classical, but the feedback signal is modified to satisfy δx δp = Nℏ/2, where N is a measure of the classical uncertainty. The dynamical equation is given by
m\,\frac{d^2x}{dt^2} = -m\omega_x^2\left(1+\eta\,x_m\dot{x}_m\right)x, \qquad x_m = x + \delta R\cdot\delta x, \qquad (34)
and the shot noise is added according to
p(t + δt) = p(t) + δW · δp,(35)
where δR and δW are Gaussian random numbers with unit variance, δp = √(2Ė_{Tx} δt · m) is the momentum fluctuation determined by the shot noise heating rate Ė_{Tx}, and x_m is the measured position, with δx = Nℏ/(2√(2Ė_{Tx} δt · m)) chosen to satisfy the relation δx δp = Nℏ/2. Several values of N are used in the purely classical calculation. In reality, the results are not physically possible for N < 1, and for small N the result is only suggestive because it would require a true quantum treatment. Figure (7) shows the results, where each curve gives the steady state occupation in terms of the feedback parameter η. The pink line corresponds to the classical feedback with no noise in the position measurement (N = 0), where the occupation keeps decreasing as we increase the feedback parameter. As we add uncertainty to the feedback signal, the purple (N = 1), black (N = 1.5), green (N = 2), red (N = 2.5) and blue (N = 3) lines go up after passing their minimal occupations, which are the corresponding optimal cooling limits. The reason is that, as η increases, the feedback cooling is strengthened, but the noise in the measured value of x leads to the feedback procedure itself adding noise to the motion. Beyond a certain value of η, the feedback noise heating becomes faster than the feedback cooling, so the steady state occupation reaches a minimum and then increases. Moreover, one can see that a larger uncertainty in the position measurement leads to a larger occupation at the optimal cooling limit. The reason is that the feedback noise heating is generally faster with a large N in the position measurement than with a smaller N.
The steady state occupation is also related to the shot noise heating and the oscillation frequency, as suggested by the result in the appendix, n_limit ∝ √(Ė_T/ω²) for ideal measurements. In fact, the one-dimensional dynamical equation for the nanoparticle can be scaled,
\frac{d^2\tilde{x}}{d\tilde{t}^2} = -\tilde{x}\left(1+\frac{\eta\hbar}{2m}\,\tilde{x}_m\dot{\tilde{x}}_m\right), \qquad \tilde{x}_m = \tilde{x} + \delta R\cdot\delta\tilde{x}, \qquad (36)
and the shot noise is added according to

\tilde{p}(\tilde{t}+\delta\tilde{t}) = \tilde{p}(\tilde{t}) + \delta\tilde{p}\,\delta W, \qquad (37)
where the scaled position is x̃ = x/a_0 with a_0 = √(ℏ/(2mω_x)), the scaled time is t̃ = ω_x t, and the scaled heating rate is Ė̃_T = 2Ė_T/(ℏω_x²), with δp̃ = √(2Ė̃_T δt̃) and δx̃ = N√(1/(2Ė̃_T δt̃)). The scaled equation shows that Δn = 2πĖ_T/(ℏω_x²) (as defined previously), N, and η determine the particle's dynamics. To confirm that, we simulate the cooling of the x degree of freedom for the particle (a = 48 nm, b = 53 nm) with fixed measurement uncertainty (N = 2). First, we choose (Ė_T, ω_x) to be the different values (470 mK/s, 343 kHz), (824 mK/s, 454 kHz) and (1295 mK/s, 569 kHz), which are obtained by tuning the laser power to P = (40 mW, 70 mW, 110 mW), respectively. Figure 8(a) gives the simulation results, where the three curves give the steady state occupation in terms of the feedback parameter. The curves match each other, which confirms that Δn indeed determines the dynamics, since varying the laser power does not change the quantity Δn ≈ 0.083. All three curves get to an optimal cooling limit around n = 8.5 when η = 3.3 × 10¹² s/m². In Fig. 8(b), we take Δn ≈ 0.026 by changing the laser beam waist. Using the same laser powers P = (40 mW, 70 mW, 110 mW), the shot noise heating rates and the x translational oscillation frequencies are (1488 mK/s, 1085 kHz), (2603 mK/s, 1434 kHz) and (4092 mK/s, 1798 kHz), respectively. The three curves still match, but the minimal point is shifted to (n = 12, η = 5 × 10¹¹ s/m²), which suggests that the optimal cooling limit depends on the choice of Δn for a given value of N, the scale factor between the uncertainty in the position measurement, δx, and the momentum shot noise scale, δp.
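The collapse of the three curves can be checked directly: Δn = 2πĖ_T/(ℏω_x²) is unchanged by the laser power. In the short evaluation below (our sketch), the quoted heating rates in mK/s are converted to energy per second with the Boltzmann constant:

```python
import math

# Delta n = 2*pi*Edot/(hbar*omega^2) for the three laser powers of Fig. 8(a).
# Heating rates quoted in mK/s are converted to energy per second via k_B.
kB, hbar = 1.380649e-23, 1.054571817e-34
cases = [(0.470, 343e3), (0.824, 454e3), (1.295, 569e3)]  # (K/s, Hz)
dns = [2 * math.pi * kB * Tdot / (hbar * (2 * math.pi * f) ** 2)
       for Tdot, f in cases]
print(dns)  # all ~0.083: the scaled heating rate is power independent
```

All three pairs give the same Δn ≈ 0.083, which is why the three curves in Fig. 8(a) lie on top of each other.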
Comparing the two results in Fig. (8), we see that a lower optimal cooling limit is reached for the motion with a bigger Δn when N is held fixed. This motivates us to calculate the optimal cooling limit for varied Δn (by changing the beam waist), and the result is shown in Fig. (9), where the two curves correspond to N = (1, 2). Both curves reveal that a bigger Δn leads to a lower optimal cooling limit, which suggests that a more accurate feedback cooling can beat the cost from the higher shot noise heating for fixed N. The fact that a higher shot noise leads to a lower optimal occupation might be because (1) a higher shot noise indicates a more accurate and effective feedback cooling, and (2) an accurate feedback induces a lower feedback noise. Figure (9) also shows that a smaller N generally gives a lower optimal cooling limit, which matches the result in Fig. (7). The data in Fig. (9) stops at Δn = 0.41, since a bigger η is needed in order to get to the optimal cooling limit and our calculation becomes unstable when η gets larger. In reality, a bigger feedback parameter η means much more effort in the feedback cooling. The maximal realizable η in the experiment physically bounds the lowest cooling limit for fixed N. The actual shot noise heating rate and measurement uncertainty determine the minimum occupation number. By scaling these parameters, one can understand how the system will respond in terms of the dimensionless Eqs. (36) and (37).

FIG. 9. (Color online) The optimal cooling limit for the x degree of freedom of the particle (a = 48 nm, b = 53 nm) from Tab. I with respect to Δn. The blue and yellow curves correspond to the classical uncertainty measures N = 1 and N = 2, respectively. Our data stops at Δn = 0.41 since the feedback calculation with a larger η becomes unstable when we try to reach the optimal cooling limit.
VI. CONCLUSION
The translational and rotational shot noise heating and feedback cooling of an optically trapped nano-ellipsoid were analytically and numerically investigated. The detailed analysis suggests that a lower relative rotational heating rate is expected for a wide range of nanoparticle geometries. This conclusion is in contrast to the case of scattering of blackbody radiation [16], where the rotational degrees of freedom were found to decohere much faster than the translational ones. The qualitatively different conclusion is due to the difference between photon scattering from a polarized beam aligned along the nanoparticle axis and scattering of unpolarized photons.
The analysis and numerical calculation of the shot noise heating suggest that a lower relative rotational heating rate results from (1) a nanoparticle with a shape closer to spherical at fixed size, (2) a bigger nanoparticle at fixed ellipticity, (3) a trapping laser with a shorter wavelength and a bigger beam waist, and (4) a nanoparticle with a lower dielectric constant. In addition, the calculation of the feedback cooling in only the rotational degrees of freedom reveals that separate rotational cooling should be experimentally possible, since the heating in the other degrees of freedom was only slightly faster than the shot noise.
The feedback cooling with classical measurement uncertainty was analyzed. The measurement uncertainty introduces an extra noise during the feedback, which competes with the cooling when the feedback parameter increases. When the scaled classical uncertainty N is held fixed, a system with a bigger value of Δn = 2πĖ/(ℏω²) could in principle get to a lower optimal cooling limit. While this is an interesting result, it is hard to imagine an experiment where N can be held fixed while the shot noise heating rate is changed, as it would require the uncertainty in x to decrease proportionally to 1/√Ė as the heating rate increases. A more effective way to achieve a small occupation number is to decrease N, which is proportional to the uncertainty in x times √Ė.
In conclusion, the shot noise heating, the measurement uncertainty, and the feedback parameter are important factors to consider when cooling a levitated nanoparticle in the shot noise dominated regime. The results presented here provide a framework for thinking about how these parameters affect the heating and the feedback cooling of levitated nanoparticles. However, since our calculations are classical, there is clearly a need for investigations of quantum effects on feedback cooling at small occupation numbers. The results in Fig. (9) suggest there may be non-intuitive trends in the quantum limit. This work was supported by the National Science Foundation under Grant No. 1404419-PHY.
PACS numbers: 42.50.Wk, 07.10.Pz, 62.25.Fg
FIG. 2. (Color online) The ratio of the occupation number change ṅ_R/ṅ_T in terms of ellipticity (a) and size (b). (a) The size of the particles is fixed at √(a² + b²) = 71 nm while the ellipticity increases. (b) The ellipticity is fixed at e = 0.77 while the particle size increases. The blue curves are for diamonds while the yellow curves are for silica.
FIG. 4. (Color online) The classical simulation results of shot noise heating for nano-diamonds in both the translational and rotational degrees of freedom. Each curve is averaged over 400 individual reheating trajectories. (a) and (b) are for the nanoparticle with half axes (a = 15 nm, b = 70 nm), (c) and (d) for half axes (a = 38 nm, b = 60 nm), and (e) and (f) for half axes (a = 48 nm, b = 53 nm). The dashed lines are the heating curves T = T_0 + Ėt, with T_0 the initial temperature and Ė the corresponding heating rate from Tab. I.

FIG. 5. (Color online) The parametric feedback cooling for nano-diamonds in all degrees of freedom, where each curve shows the time evolution of the average occupation number in the corresponding degree of freedom. Data are collected by averaging 30 cooling trajectories. Calculations are for classical parametric feedback cooling, thus results for occupation numbers less than 10 are suggestive. (a) and (b) depict the translational and rotational cooling, respectively, for a nanoparticle with half axes (a = 15 nm, b = 70 nm). The cooling parameter Δ_1 = {η_i = 1.1 × 10¹¹ s/m², ζ_i = 10¹¹ s/m²} for t < 100 ms and Δ_2 = 10Δ_1 for t > 100 ms. Similarly, (c) and (d) show the cooling for half axes (a = 38 nm, b = 60 nm), while (e) and (f) are for half axes (a = 48 nm, b = 53 nm).

FIG. 6. (Color online) The parametric feedback cooling in only the rotational degrees of freedom for a nano-diamond (a = 48 nm, b = 53 nm). All curves are averaged over 400 trajectories. (a) and (b) show the cooling in both β_1 and β_2 with the cooling parameters Δ_1 = {η_{1,2,3} = 0, ζ_i = 10¹¹ s/m²}. The black and purple lines show that the rotational motion gets cooled as we increase the feedback parameters from Δ_1 to Δ_2 = 10Δ_1. The red, green and blue lines depict the heating trajectories in the translational degrees of freedom. (c), (d) and (e) show the result of cooling in only β_1 with parameters Δ_1 = {η_{1,2,3} = 0, ζ_2 = 0, ζ_1 = 10¹¹ s/m²} and Δ_2 = 10Δ_1, and the heating in β_2 and in x, y, z, respectively. In (d), resonance heating causes massive heating in the uncooled β_2 degree of freedom. The dashed lines in (b), (d) and (e) are the heating curves T = T_0 + Ėt, with T_0 the initial temperature and Ė the corresponding heating rate from Tab. I.
FIG. 7. (Color online) The steady state occupation in terms of the feedback parameter η (in units of 10¹² s/m²) for the x degree of freedom of the particle (a = 48 nm, b = 53 nm) from Tab. I. The different curves correspond to several different values of the classical uncertainty N.
FIG. 8. (Color online) The steady state occupation for the x degree of freedom of the particle (a = 48 nm, b = 53 nm) from Tab. I in terms of the feedback parameter η (in units of 10¹¹ s/m²) with N = 2. Three different laser powers P = (40 mW, 70 mW, 110 mW) are used. (a) The quantity Δn ≈ 0.083; (b) the quantity Δn ≈ 0.026.
TABLE I. The parameters for three different nano-diamonds in a laser trap. The data is ordered for diamonds with decreasing ellipticity, while their sizes √(a² + b²) are kept approximately the same. The trapping laser has wavelength λ = 1064 nm and power P = 70 mW.

(a, b)/nm | (α_z−α_x)/α_z | ω_β1/2π | ω_x/2π  | ω_y/2π  | Ė_R (mK/s) | Ė_T (mK/s) | Ė_R/Ė_T | ω_β1/ω_x | ṅ_R/ṅ_T | Δn_R/Δn_T
(15, 70)  | 0.60          | 4.02 MHz | 625 kHz | 398 kHz | 3.83 × 10³ | 382        | 10.0    | 6.43     | 1.56    | 0.24
(38, 60)  | 0.28          | 2.20 MHz | 497 kHz | 316 kHz | 1.84 × 10³ | 838        | 2.20    | 4.42     | 0.50    | 0.11
(48, 53)  | 0.07          | 998 kHz  | 454 kHz | 289 kHz | 113        | 824        | 0.14    | 2.20     | 0.06    | 0.03
TABLE II. The parameters for three different nano-diamonds in a laser trap. The data is for diamonds with increasing size while fixing the ellipticity such that the ratio (α_z − α_x)/α_z stays approximately the same. The trapping laser has wavelength λ = 1064 nm and power P = 70 mW.

(a, b)/nm | (α_z−α_x)/α_z | ω_β1/2π | ω_x/2π  | ω_y/2π  | Ė_R (mK/s) | Ė_T (mK/s) | Ė_R/Ė_T | ω_β1/ω_x | ṅ_R/ṅ_T | Δn_R/Δn_T
(27, 42)  | 0.28          | 3.14 MHz | 497 kHz | 316 kHz | 1.23 × 10³ | 292        | 4.22    | 6.31     | 0.68    | 0.11
(38, 60)  | 0.28          | 2.20 MHz | 497 kHz | 316 kHz | 1.84 × 10³ | 838        | 2.20    | 4.42     | 0.50    | 0.11
(49, 78)  | 0.28          | 1.68 MHz | 497 kHz | 316 kHz | 2.46 × 10³ | 1830       | 1.34    | 3.40     | 0.39    | 0.11
TABLE III. The parameters for three different fused silica particles in a laser trap. The data is for silica with different ellipticities, while their sizes √(a² + b²) are kept approximately the same. The trapping laser has wavelength λ = 1064 nm and power P = 70 mW.

(a, b)/nm | (α_z−α_x)/α_z | ω_β1/2π | ω_x/2π  | ω_y/2π  | Ė_R (mK/s) | Ė_T (mK/s) | Ė_R/Ė_T | ω_β1/ω_x | ṅ_R/ṅ_T | Δn_R/Δn_T
(15, 70)  | 0.30          | 1.90 MHz | 419 kHz | 267 kHz | 119        | 48.6       | 2.45    | 4.52     | 0.54    | 0.12
(38, 60)  | 0.13          | 1.17 MHz | 388 kHz | 247 kHz | 93.2       | 197        | 0.47    | 3.01     | 0.16    | 0.05
(48, 53)  | 0.03          | 549 kHz  | 374 kHz | 238 kHz | 6.50       | 240        | 0.03    | 1.47     | 0.02    | 0.01

TABLE IV. The parameters for three different fused silica particles in a laser trap. The data is for silica with increasing sizes while the ellipticity is fixed such that the ratio (α_z − α_x)/α_z stays approximately the same. The trapping laser has wavelength λ = 1064 nm and power P = 70 mW.

(a, b)/nm | (α_z−α_x)/α_z | ω_β1/2π | ω_x/2π  | ω_y/2π  | Ė_R (mK/s) | Ė_T (mK/s) | Ė_R/Ė_T | ω_β1/ω_x | ṅ_R/ṅ_T | Δn_R/Δn_T
(27, 42)  | 0.13          | 1.67 MHz | 388 kHz | 247 kHz | 62.6       | 69.1       | 0.91    | 4.30     | 0.21    | 0.05
(38, 60)  | 0.13          | 1.17 MHz | 388 kHz | 247 kHz | 93.2       | 197        | 0.47    | 3.01     | 0.16    | 0.05
(49, 78)  | 0.13          | 899 kHz  | 388 kHz | 247 kHz | 124        | 427        | 0.29    | 2.31     | 0.12    | 0.05
FIG. 3. (Color online) The ratio Δn_R/Δn_T in terms of the particle ellipticity. The blue and yellow curves correspond to diamond and silica, respectively.
Appendix A: The parametric feedback cooling

This appendix describes the parametric feedback cooling scheme and analyzes the cooling limit in the shot noise dominated regime. Perfect measurement is assumed in the following derivation. As an example, the average cooling power for one translational degree of freedom from the feedback is given by

P_{\mathrm{cool}} = \left\langle -k\eta\,x^2\dot{x}^2\right\rangle \simeq -\frac{\eta}{2m}\,E^2,

where E is the system energy in this degree of freedom and k = mω² is the spring constant. The approximation is made above by ignoring the noise when taking the cycle average. The negative sign of the power guarantees an effective cooling during the feedback process. Combining this with the translational shot noise heating rate, the system energy follows the differential equation

\frac{dE}{dt} = \dot{E}_T - \frac{\eta}{2m}\,E^2.

A steady state can be reached when the heating and cooling are balanced, which yields the cooling limit

E_{\mathrm{limit}} = \sqrt{\frac{2m\dot{E}_T}{\eta}}, \qquad \langle n\rangle_{\mathrm{limit}} = \frac{1}{\hbar\omega}\sqrt{\frac{2m\dot{E}_T}{\eta}},

where ω is the oscillation frequency. One finds that a bigger η gives a lower steady state energy, and the particle mass together with the quantity Ė_T/ω² determines the final occupation. The differential equation can be analytically solved,

E(t) = E_{\mathrm{limit}}\,\frac{E_i + E_{\mathrm{limit}}\tanh\!\big(\sqrt{\eta\dot{E}_T/2m}\;t\big)}{E_{\mathrm{limit}} + E_i\tanh\!\big(\sqrt{\eta\dot{E}_T/2m}\;t\big)},

where E_i is the initial energy of the system. The system gets cooled as time increases, and the parameter √(ηĖ_T/2m) is a measure of how fast the system is cooled. The feedback parameter η has units of Time/Length², which can be tuned to control the speed of cooling and the final steady state energy.
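A quick numerical integration confirms this balance. The sketch below assumes the cycle-averaged energy equation dE/dt = Ė_T − ηE²/(2m) described in this appendix; all parameter values are illustrative, not experimental:

```python
import math

# Integrate the cycle-averaged feedback-cooling energy equation
# dE/dt = Edot_T - eta*E^2/(2m) and compare with the steady state
# E_limit = sqrt(2*m*Edot_T/eta). All numbers are illustrative.
m, Edot_T, eta = 1.0, 0.01, 0.05
E, dt = 5.0, 0.01
for _ in range(200_000):                       # integrate out to t = 2000
    E += (Edot_T - eta * E * E / (2 * m)) * dt
E_limit = math.sqrt(2 * m * Edot_T / eta)
print(E, E_limit)   # both ~0.632
```

The integrated energy relaxes monotonically onto E_limit, and the relaxation time is set by √(ηĖ_T/2m), as stated above.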
[1] E. Joos, H. D. Zeh, C. Kiefer, D. J. Giulini, J. Kupsch, and I.-O. Stamatescu, Decoherence and the Appearance of a Classical World in Quantum Theory. Springer Science & Business Media, 2013.
[2] M. Schlosshauer, "Decoherence, the measurement problem, and interpretations of quantum mechanics," Reviews of Modern Physics, vol. 76, no. 4, p. 1267, 2005.
[3] W. H. Zurek, "Decoherence and the transition from quantum to classical-revisited," arXiv preprint quant-ph/0306072, 2003.
[4] K. Hornberger and J. E. Sipe, "Collisional decoherence reexamined," Physical Review A, vol. 68, no. 1, p. 012105, 2003.
[5] D. E. Chang, C. Regal, S. Papp, D. Wilson, J. Ye, O. Painter, H. J. Kimble, and P. Zoller, "Cavity optomechanics using an optically levitated nanosphere," Proceedings of the National Academy of Sciences, vol. 107, no. 3, pp. 1005-1010, 2010.
[6] O. Romero-Isart, "Quantum superposition of massive objects and collapse models," Physical Review A, vol. 84, no. 5, p. 052121, 2011.
[7] J. Chan, T. M. Alegre, A. H. Safavi-Naeini, J. T. Hill, A. Krause, S. Gröblacher, M. Aspelmeyer, and O. Painter, "Laser cooling of a nanomechanical oscillator into its quantum ground state," Nature, vol. 478, no. 7367, pp. 89-92, 2011.
[8] A. D. O'Connell, M. Hofheinz, M. Ansmann, R. C. Bialczak, M. Lenander, E. Lucero, M. Neeley, D. Sank, H. Wang, M. Weides, et al., "Quantum ground state and single-phonon control of a mechanical resonator," Nature, vol. 464, no. 7289, pp. 697-703, 2010.
[9] A. A. Geraci, S. B. Papp, and J. Kitching, "Short-range force detection using optically cooled levitated microspheres," Physical Review Letters, vol. 105, no. 10, p. 101101, 2010.
[10] T. M. Hoang, Y. Ma, J. Ahn, J. Bang, F. Robicheaux, Z.-Q. Yin, and T. Li, "Torsional optomechanics of a levitated nonspherical nanoparticle," Phys. Rev. Lett., vol. 117, p. 123604, Sep 2016.
[11] V. Jain, J. Gieseler, C. Moritz, C. Dellago, R. Quidant, and L. Novotny, "Direct measurement of photon recoil from a levitated nanoparticle," Phys. Rev. Lett., vol. 116, p. 243601, Jun 2016.
[12] J. Gieseler, B. Deutsch, R. Quidant, and L. Novotny, "Subkelvin parametric feedback cooling of a laser-trapped nanoparticle," Physical Review Letters, vol. 109, no. 10, p. 103603, 2012.
[13] J. Gieseler, M. Spasenović, L. Novotny, and R. Quidant, "Nonlinear mode coupling and synchronization of a vacuum-trapped nanoparticle," Physical Review Letters, vol. 112, no. 10, p. 103603, 2014.
[14] L. P. Neukirch and A. N. Vamivakas, "Nano-optomechanics with optically levitated nanoparticles," Contemporary Physics, vol. 56, no. 1, pp. 48-62, 2015.
[15] J. Gieseler, L. Novotny, C. Moritz, and C. Dellago, "Non-equilibrium steady state of a driven levitated particle with feedback cooling," New Journal of Physics, vol. 17, no. 4, p. 045011, 2015.
[16] C. Zhong and F. Robicheaux, "Decoherence of rotational degrees of freedom," Phys. Rev. A, vol. 94, p. 052109, Nov 2016.
[17] B. A. Stickler, B. Papendell, and K. Hornberger, "Spatio-orientational decoherence of nanoparticles," Physical Review A, vol. 94, no. 3, p. 033828, 2016.
[18] S. Kuhn, A. Kosloff, B. A. Stickler, F. Patolsky, K. Hornberger, M. Arndt, and J. Millen, "Full rotational control of levitated silicon nanorods," arXiv preprint arXiv:1608.07315, 2016.
[19] S. Kuhn, P. Asenbaum, A. Kosloff, M. Sclafani, B. A. Stickler, S. Nimmrichter, K. Hornberger, O. Cheshnovsky, F. Patolsky, and M. Arndt, "Cavity-assisted manipulation of freely rotating silicon nanorods in high vacuum," Nano Letters, vol. 15, no. 8, pp. 5604-5608, 2015.
[20] B. A. Stickler, S. Nimmrichter, L. Martinetz, S. Kuhn, M. Arndt, and K. Hornberger, "Ro-translational cavity cooling of dielectric rods and disks," arXiv preprint arXiv:1605.05674, 2016.
Quantum model of cooling and force sensing with an optically trapped nanoparticle. B Rodenburg, L Neukirch, A Vamivakas, M Bhattacharya, Optica. 33B. Rodenburg, L. Neukirch, A. Vamivakas, and M. Bhat- tacharya, "Quantum model of cooling and force sensing with an optically trapped nanoparticle," Optica, vol. 3, no. 3, pp. 318-323, 2016.
Quantum measurement and control. H M Wiseman, G J Milburn, Cambridge University PressH. M. Wiseman and G. J. Milburn, Quantum measure- ment and control. Cambridge University Press, 2009.
A straightforward introduction to continuous quantum measurement. K Jacobs, D A Steck, Contemporary Physics. 47K. Jacobs and D. A. Steck, "A straightforward introduc- tion to continuous quantum measurement," Contempo- rary Physics, vol. 47, no. 5, pp. 279-303, 2006.
Feedback control of quantum systems using continuous state estimation. A C Doherty, K Jacobs, Physical Review A. 6042700A. C. Doherty and K. Jacobs, "Feedback control of quan- tum systems using continuous state estimation," Physical Review A, vol. 60, no. 4, p. 2700, 1999.
Coupling rotational and translational motion via a continuous measurement in an optomechanical sphere. J F Ralph, K Jacobs, J Coleman, Phys. Rev. A. 9432108J. F. Ralph, K. Jacobs, and J. Coleman, "Coupling ro- tational and translational motion via a continuous mea- surement in an optomechanical sphere," Phys. Rev. A, vol. 94, p. 032108, Sep 2016.
Classical electrodynamics. J D Jackson, Wiley457J. D. Jackson, Classical electrodynamics. Wiley, page 457, 1999.
Quantum optics. M O Scully, M S Zubairy, AAPTM. O. Scully and M. S. Zubairy, Quantum optics. AAPT, 1999.
Angular momentum in quantum mechanics. A R Edmonds, Princeton University PressA. R. Edmonds, Angular momentum in quantum me- chanics. Princeton University Press, 1996.
Absorption and scattering of light by small particles. C F Bohren, D R Huffman, John Wiley & SonsC. F. Bohren and D. R. Huffman, Absorption and scat- tering of light by small particles. John Wiley & Sons, 2008.
Optical alignment and confinement of an ellipsoidal nanorod in optical tweezers: a theoretical study. J Trojek, L Chvátal, P Zemánek, JOSA A. 297J. Trojek, L. Chvátal, and P. Zemánek, "Optical align- ment and confinement of an ellipsoidal nanorod in optical tweezers: a theoretical study," JOSA A, vol. 29, no. 7, pp. 1224-1236, 2012.
Numerical recipies in c. W H Press, S A Teukolsky, W T Vetterling, B P Flannery, W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, "Numerical recipies in c," 1992.
|
[] |
[
"The Phase Information Associated to Synchronized Electronic Fireflies",
"The Phase Information Associated to Synchronized Electronic Fireflies"
] |
[
"J L Guisset \nUniversité Libre de Bruxelles\nBelgium\n",
"J L Deneubourg \nUniversité Libre de Bruxelles\nBelgium\n",
"G M Ramírezávila \nUniversité Libre de Bruxelles\nBelgium\n\nUniversidad Mayor de San Andrés\nLa PazBolivia\n"
] |
[
"Université Libre de Bruxelles\nBelgium",
"Université Libre de Bruxelles\nBelgium",
"Université Libre de Bruxelles\nBelgium",
"Universidad Mayor de San Andrés\nLa PazBolivia"
] |
[] |
An electronic implementation, referring to ensembles of fireflies flashing in synchrony in a self-organization mode, shows the details of the phase-locking mechanism and how the phases between the electronic oscillators are related to their common period. Quantitative measurements of the timing signals link the limits of a steadily established synchronization to the physics of the electronic circuit. Preliminary observations suggest the existence of bifurcation-like phenomena.
| null |
[
"https://arxiv.org/pdf/nlin/0206036v1.pdf"
] | 13,261,774 |
nlin/0206036
|
85433007763b7ff0fa9d9c04e628d33bb4f7d314
|
The Phase Information Associated to Synchronized Electronic Fireflies
24 Jun 2002
J L Guisset
Université Libre de Bruxelles
Belgium
J L Deneubourg
Université Libre de Bruxelles
Belgium
G M Ramírez Ávila
Université Libre de Bruxelles
Belgium
Universidad Mayor de San Andrés
La Paz, Bolivia
The Phase Information Associated to Synchronized Electronic Fireflies
arXiv:nlin/0206036v1 [nlin.AO]. PACS numbers: 05.45.Xt, 85.60.Bt, 89.75.-k. Keywords: synchronization, coupled oscillators, optoelectronic devices
An electronic implementation, referring to ensembles of fireflies flashing in synchrony in a self-organization mode, shows the details of the phase-locking mechanism and how the phases between the electronic oscillators are related to their common period. Quantitative measurements of the timing signals link the limits of a steadily established synchronization to the physics of the electronic circuit. Preliminary observations suggest the existence of bifurcation-like phenomena.
I. INTRODUCTION
Self-organization is a widespread feature appearing in a variety of living and inanimate systems. Synchronization, which can be understood as an adjustment of rhythms of oscillating objects due to their weak interaction [1], represents one of the forms of self-organized matter [2].
There are numerous examples of systems of coupled oscillators able to induce structured behaviors between the interacting oscillators [3,4,5,6,7].
The synchronized flashes of huge ensembles of fireflies on swarm trees in South Asian countries are one of these surprising self-organization effects. The phenomenon was already mentioned three centuries ago by the Dutch physician Kaempfer in 1727 [8], but only recently have experimental (see e.g. [9,10,11,12,13,14]) and theoretical [3,15] studies suggested an adequate operational model [16].
At the individual firefly level, the rhythm of the recurring flashes is supposed to be under the control of a neural center which itself may be optically influenced by the flashes of neighboring fireflies. From an experimental point of view, it is clear that the fireflies interact and modify each other's rhythms, which automatically leads to the acquisition of synchrony.
Although the anatomical details of the neural activity are largely unknown, a model has been proposed which accounts for the essential operational parameters. The model is based on a relaxation oscillator in which it is possible to reset the duty cycle by optical means. Moreover, the reset action is phase dependent: the duty cycle is lengthened or shortened depending of the time interval between the flashes of the interacting fireflies.
By constructing an electronic implementation of it, Garver and Moss showed that the model worked as it was supposed to do [17]. They report ensemble behaviors which indeed are analogous to what is observed with fireflies, although they experimented on a much smaller scale.

* Partially supported by Belgium Technical Cooperation.
We constructed an "open" version of this electronic firefly, whose free-run duty cycle can be modified and adjusted manually on the spot, and on which quantitative measurements of periods and phase differences may be performed with the required precision. We call it "LCO" the acronym of Light Controlled Oscillator.
Compared to a firefly, the workings of an LCO are without any mystery, which allows for a detailed quantitative description of the synchronization mechanism at least for small ensembles.
Our aim is to experiment with LCO's and to investigate the local level features of the self-organization they exhibit. At first we are looking for the parameters involved in a steady synchronization state achieved by two LCO's, bringing out the factors leading to synchrony. It appears that the period of synchronized LCO's is tightly related to the phase difference between them, showing how the electronics of the synchronization actually works, and why synchrony acquisition ceases outside limits set by the physics of the system. Similar phases relationships have been found for systems of three LCO's and more, revealing interesting features.
II. PRESENTATION OF AN LCO
Basically, our LCO ( Fig. 1) is composed of a LM555 circuit wired as an astable [18], the alternations of which are determined by a dual RC circuit in parallel with four photo-sensors [17].
We made nine LCO's (Fig. 2): nine autonomous oscillators coupled by their IR beams. They had much success when, disposed on a table, they synchronized like the exotic fireflies they are aimed to mimic. Each LCO is a module made of the same electronic components and having the same structure. A square base (11 cm × 11 cm) gives the overall horizontal dimensions of each LCO in the global pattern. With nine LCO's it is possible to achieve several different patterns.
Each base may sustain several printed circuits, giving the possibility of vertical extension while keeping the same overall horizontal dimensions. For the time being, our LCO's have two levels (Fig. 1). The lower part consists of a 9 volt battery and its clamping system. The oscillator's printed circuit with the variable resistors allowing the adjustment of the period's two time intervals makes up the upper part of an LCO module. The circuit is square-like too, but its size is smaller than the base. Even smaller printed circuits, each bearing an infrared LED and a photo-sensor, are fixed vertically on the sides of the upper part. Provision is made to mask the sensors, allowing the LCO's to oscillate "in the dark". For public presentations, the upper part bears a fifth LED flashing visible light in synchrony with the IR ones, just to produce a "firefly effect".
The RC timing components of the LM555 consist of two resistors and a single capacitor (Fig. 3). Let R λ , R γ and C be the values of those components responsible for the LCO's timing with masked photo-sensors (timing "in the dark"). The period is made up of two states: a longer one that may be changed by manually adjusting R λ , a shorter one that may be changed by acting on R γ . The LED's are wired at the output of the LM555, switching on during the shorter part of the period.
The photo-sensors act as current sources when they are receiving light, shortening the charging time of the capacitor and making longer the time required to discharge it.
In our model the resistors R λ and R γ are partially variable:

R λ = 68 kΩ + [0, 50] kΩ
R γ = 1.2 kΩ + [0.0, 1.0] kΩ.
We use the same LCO's for two different missions: firstly, for demonstrations where it is required to carry out synchronization at a pace of about one flash per second (very impressive), and secondly for observing the synchronization by an oscilloscope, which requires a period of about 30 ms. This change of period range is made by modifying nothing else but the capacitor value, which has the advantage of leaving the lighting percentage per period unchanged (less than 2%) because the R λ /R γ ratio is not modified:

C = 10 µF → period T = 0.7 s
C = 0.47 µF → period T = 30 ms
As the illumination of the photo-sensors modifies the period of an LCO, it is useful to distinguish the functioning of it in the dark from its functioning when receiving light pulses from its neighbors or diffuse light from the surroundings.
When all the photo-sensors of an LCO are masked, its period depends only from its electronics. We took this particular period as a reference for each LCO. In the framework of this article, we use the following parameters related to the synchronization of the LCO (Fig. 4):
• λ = (R λ + R γ )C ln 2, the longer alternation time; it corresponds to the capacitor's charging time between 1/3 and 2/3 of the total charge.

• γ = R γ C ln 2, the shorter alternation time; it corresponds to the capacitor's discharging time within the same limits.
• λ − , λ + , the beginning and the end of a long alternation.
• τ o , the instant coinciding with the transition from λ to γ; hereafter we will use this parameter as the "reference moment" in the period of an LCO.
• T s , the common period of a set of synchronized LCO's.
• T A , τ o A , λ A , γ A , T B , τ o B , λ B , γ B , . . . , the periods, the "reference moments", and the durations of the alternations of LCO A , LCO B , . . . in illumination situations.
• T dark A , T dark B , T dark C , . . . , γ dark A , γ dark B , γ dark C , . . . , the periods and the durations of the short and long alternations "in the dark" of LCO A , LCO B , LCO C , . . . , i.e. when all their photo-sensors are masked.
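These timing formulas are easy to check numerically. The sketch below is a rough check added here, not part of the paper; the function name `lco_timing` and the chosen resistor settings (the minimal values of the partially variable resistors) are our assumptions.

```python
import math

# Rough numeric check of the timing formulas above (not from the paper;
# the function name and chosen resistor settings are ours).
def lco_timing(r_lam, r_gam, c):
    lam = (r_lam + r_gam) * c * math.log(2)  # long alternation (charging)
    gam = r_gam * c * math.log(2)            # short alternation (discharging)
    return lam, gam, lam + gam               # period T = lam + gam

# Minimal settings of the partially variable resistors quoted above.
lam, gam, T = lco_timing(68e3, 1.2e3, 0.47e-6)
print(f"T = {T * 1e3:.1f} ms, flash fraction = {gam / T:.1%}")
```

Since both λ and γ scale linearly with C, changing only the capacitor shifts the period range while leaving the λ/γ ratio, and hence the lighting percentage, untouched.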
III. AN LCO COUPLED TO A SHORT PULSE BLIND LCO
In order to investigate the mechanisms inducing the synchronization, we have placed an LCO, namely LCO B , in interaction with the equivalent of an LCO "kept in the dark".
Let LCO A be this blind LCO; its IR pulse (lasting during γ dark A ) has been reduced to a quarter of γ B . In practice it is obtained from a low frequency generator controlling a monostable producing a γ A pulse of constant duration and sufficiently short.
The signals are picked up at the low impedance output of the LM555; the measurements of phases and periods are carried out with an oscilloscope Tektronix TDS 3012 according to well-known procedures; observations are done without difficulty with a precision of about 0.1%. To be coherent with the measurements presented further on in this article, the triggering is provided by LCO A , considered as the reference LCO. From the very first observation it is obvious that the synchronization implies a phase relation between the two oscillators.

Fig. 5(a) represents ∆τ = τ o B − τ o A (i.e. the position of τ o B relative to τ o A , the latter being taken as reference) as a function of the period T dark A (= T s ) of the blind LCO A . When ∆τ > 0, i.e. when τ o A appears before τ o B (Fig. 6, cases 1 and 2), the phase-control is stable and T s can be measured easily. However, for ∆τ < 0 (Fig. 6, cases 4 and 5), the stability is much more precarious, even if also in that situation there is a synchronization of LCO B to the blind LCO A . The shape of ∆τ as a function of T dark A (= T s ) can be fairly well schematized by two straight lines intersecting at ∆τ = 0 for the abscissa T break .
The plot suggests two phase-control modes, situated at either side of ∆τ = 0 (Fig. 6, case 3), distinguishing themselves by two parameters which are easy to measure:

- The domain of the phase-locking, that is to say the interval limited by the periods for which there is synchronization.
- The slope of the straight lines, which somehow represents the gain of a servo-system's feedback control.
These two modes correspond to different control mechanisms:
• For ∆τ < 0, there can be only widening of the period because the illumination and thus the photo-current are totally included in the interval γ B (Fig. 6, cases 4 and 5); under these circumstances the photo-current adds to the discharging current of the capacitor C through R γ . As the extension of γ corresponds to an increase of the period, the phase-control is stable. Nevertheless the influence of the photo-current on the extension of the period is of little importance, first because the discharge current is two orders of magnitude larger than the charge current, and secondly because γ represents only 2% of the period. The upper limit of the phase-locking is reached as soon as the photo-current shortens the alternation λ B of the following period, that is to say as soon as the positive feedback resulting from this situation makes the phase-control unstable.
• For ∆τ > 0, only a shortening of the free-run period T B is possible. The shortening takes place during λ + , the end of the interval λ, by increasing the charging speed of capacitor C due to the photo-current brought in, in parallel with R λ . Due to the photo-current, the voltage of C reaches more rapidly the value V C = 2V M /3 which triggers the switching from the λ alternation towards the γ alternation. The lower limit of the phase-locking domain is reached when the duration (∆τ ) min = γ A (Fig. 6, case 1), during which the photo-current speeds up the charging of the capacitor, is not sufficient anymore to reach the switching point of λ towards γ. As a consequence, for the shortest periods, the synchronization depends strongly on the intensity of the light received by the phase-locked LCO B .
• For 0 < ∆τ < γ A , this constitutes the intermediate situations (Fig. 6, case 2): the excess of photo-current acts in accordance with the mode ∆τ < 0 already described, achieving a lengthening of the synchronized period that is less important than its shortening.
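The qualitative mechanism above — a flash arriving late in the charging ramp triggers the λ→γ switch early, pulling the driven oscillator onto the driver's period — can be illustrated with a deliberately simplified phase model. This is an illustrative sketch, not the actual LCO electronics: the reset threshold `window`, the 1:1 reset rule, and all names are assumptions made here.

```python
def entrain(T_A, T_B, window=0.7, n_flashes=200):
    """Flash times of oscillator B (free-run period T_B) driven by a blind
    flasher A of period T_A.  A flash of A arriving later than
    window * T_B into B's cycle makes B fire immediately (early switch)."""
    flashes_B = [0.0]
    for n in range(1, n_flashes + 1):
        s = n * T_A                           # n-th flash of the blind LCO A
        while flashes_B[-1] + T_B <= s:       # B free-runs in the meantime
            flashes_B.append(flashes_B[-1] + T_B)
        if s - flashes_B[-1] > window * T_B:  # late flash: B fires early
            flashes_B.append(s)
    return flashes_B

# B locks to a slightly faster driver ...
b = entrain(T_A=29.0, T_B=30.0)
print(b[-1] - b[-2])   # -> 29.0, B's steady interval equals T_A

# ... but not to a driver far outside the locking domain of this toy model
b2 = entrain(T_A=20.0, T_B=30.0)
print(b2[-1] - b2[-2])  # -> 30.0, B keeps free-running
```

In this toy model the lower limit of the locking domain is set by the threshold (T_A must exceed window × T_B), loosely mirroring the role of (∆τ) min = γ A in the circuit.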
IV. AN LCO COUPLED TO A BLIND LCO FLASHING LARGE PULSES
The experimental setup used to obtain Fig. 7 differs from the previous one only by the width of the flashes emitted by the blind LCO: they have been widened to increase the ratio γ A /γ B from 1/4 to 2, allowing the blind LCO flashes to overlap the synchronized LCO flashes.
The observation of this second type of synchronization is important because it looks more like that of an actual LCO pair in mutual interaction, for which differences between the γ alternations are the rule with occurrences of overlapping.
In the phase-locking with short pulses (1/4 ratio) one distinguishes five unambiguous situations of synchronization. However, when the alternation γ A is significantly larger than γ B , it is possible that it overrides γ B by illuminating the end λ + and the beginning λ − of the neighboring λ B intervals of γ B (Fig. 8, case 4).
The graphs of ∆τ and γ B as a function of T s = T dark A in Figs. 7(a) and (b) show a singular value T dark A = T break for which there is a change of slope. T break corresponds to a situation in which γ A covers γ B completely and is about to start the covering of λ − B , the start of the following interval λ B (Fig. 8, case 3).
We observe that:

a) For T A < T break , the interval γ A covers two alternations of LCO B (Fig. 8, case 2); the phase-control acts as in the case γ A ≪ γ B (Fig. 6, case 2). In the same way, the lower limit of the synchronization domain depends on the total width γ A .

b) For T A > T break , the interval γ A covers three alternations of LCO B (Fig. 8, case 4) and the phase-control functions differently:
In Fig. 7(a): γ B is constant because it remains entirely covered by γ A ; however γ A is sufficiently large to illuminate either side of γ B , that is to say to ensure the phase-control by truncating λ + (the end of alternation λ B ) while at the same time shortening λ − (the beginning of the following λ B ).
In Fig. 7(b): T s = T dark A varies more slowly, indicating a change of phase-control mode; indeed the increase of the period results from the combination of a constant extension of γ B and of a shortening of the two neighboring alternations λ − B and λ + B .

Finally, Fig. 7(a) shows that the upper limit of the synchronization domain is reached when ∆τ = 0 (Fig. 8, case 5), i.e. when the time-marks τ o A and τ o B are superposed. This observation is of major importance for the analysis of the mutual synchronization between two interacting LCO's, neither being blind. Indeed the loss of synchronization at ∆τ = 0 means that shortening λ − B at the start of an alternation is not sufficient to maintain the phase-control, the latter working only when it is the end of λ which is truncated, i.e. λ + B .
V. SYNCHRONIZATION BETWEEN TWO INTERACTING LCO'S. MEASURE AND ANALYSIS
When two interacting LCO's synchronize, their short alternations γ A and γ B cover each other mutually, including their time-marks τ o A and τ o B . As a consequence, the fractions of the alternations γ A and γ B which are not superposed illuminate the preceding λ + and the following λ − (Fig. 10, case 2 and its symmetrical case, which is not represented).
This situation is similar to that described at the end of the preceding paragraph (Fig. 8, cases 4 and 5); it allows us to deduce that of the two parts λ + and λ − of an alternation λ, it is only λ + that controls the synchronization in a decisive way. Indeed, Fig. 8, case 5 shows that phase-locking and synchronization stop as soon as λ + ceases to be illuminated.
The plots of ∆τ, γ A and γ B , and T s as a function of T dark A in Figs. 9(a), (b) and (c) show the binary structure of the interaction between two LCO's. In Fig. 9(a), ∆τ may be of two polarities in an equivalent way, indicating that the two LCO's are interchangeable. In Fig. 9(b), the time intervals γ A and γ B change in the same manner when they are overlapping. When γ A and γ B are exactly superimposed, i.e. when ∆τ = 0 (Fig. 10, case 3), their length is at a maximum, as is T s , the common period of the synchronized LCO's (Fig. 9(c)). On both sides of the maximum of T s , the light associated with the one or the other alternation γ truncates the corresponding λ + (Fig. 10, cases 1, 2, 4 and 5). As observed earlier, the photo-effect on λ − is not as strong as on λ + . In fact two interacting LCO's do not have the same status: one is a leader who cuts the λ + part of his coupled partner, the latter being the follower who cuts the λ − part of his leader. However this status may change, depending on the sign of ∆τ, i.e. the relative positions of τ o A and τ o B .
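A symmetric variant of the earlier toy phase model illustrates mutual locking. Again this is an illustrative sketch with assumed names and threshold, not the LCO circuit: in particular the real electronics also lengthens γ near ∆τ = 0, which this model ignores, so here the common period is simply set by the faster unit (the "leader").

```python
def mutual_lock(T1, T2, window=0.7, n_events=100):
    """Event-driven toy model of two mutually coupled oscillators:
    whichever unit flashes first resets its partner, provided the partner
    is already past window * T of its own cycle.  Returns flash histories."""
    periods = (T1, T2)
    last = [0.0, 0.0]
    history = ([0.0], [0.0])
    for _ in range(n_events):
        nxt = [last[0] + T1, last[1] + T2]
        i = 0 if nxt[0] <= nxt[1] else 1       # faster unit fires first
        t = nxt[i]
        last[i] = t
        history[i].append(t)
        j = 1 - i
        if t - last[j] > window * periods[j]:  # partner is pulled to fire too
            last[j] = t
            history[j].append(t)
    return history

h1, h2 = mutual_lock(30.0, 30.5)
print(h1[-1] - h1[-2], h2[-1] - h2[-2])  # common locked period of both units
```

The unit with the shorter free-run period plays the leader role, truncating its partner's cycle each period, which is the leader/follower asymmetry described above in its crudest form.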
VI. SYNCHRONIZATION BETWEEN SEVERAL LCO'S
As long as the interacting system has a binary symmetry, the choice of the reference LCO is irrelevant and is without importance for the quantitative observation of the synchronization: the reference LCO is simply the one whose period T dark A is modified manually. On the contrary, for sets of more than two LCO's it is mandatory to specify the position of the reference LCO within the set of interacting LCO's.
We have been able to observe phase-locking in sets made up of 5 LCO's, using two Tektronix TDS 3012 oscilloscopes triggered simultaneously by the output signal of the reference LCO. However those last measurements were rather difficult to perform, due to the intrinsic instability of the LM555 oscillators and/or the presence of multiple states.

Fig. 11(a) and (b) have been obtained with a set of 3 LCO's put in line: LCO B -LCO A -LCO C . The reference was LCO A in the mid position. Synchronization has been achieved for T dark A varying between 33.98 ms and 35.2 ms. The "periods in darkness" of the others are: T dark B = 33.98 ms and T dark C = 33.99 ms. Fig. 11(a) shows a bifurcation-like phenomenon: the 3-LCO system synchronizes according to two modes brought out by two significantly different values of T s .

There is obviously a symmetry in this 3-LCO synchronization: once the LCO's are synchronized, the phases may equally well have one or the other polarity (Fig. 11(b)). Moreover, a closer look at the data producing Fig. 11(b) shows that ∆τ o B and ∆τ o C are always of opposite polarity; this suggests that LCO B and LCO C are in some way interchangeable. However, before the synchronization has been settled, it is not possible to foresee the mutually exclusive polarities of τ B and τ C , confirming that a bifurcation-like phenomenon is present.
VII. CONCLUSION
Obviously, synchronization is tightly linked to phaselocking and the domain associated with it. This is why we considered as a criterion that coupled LCO's are synchronized only if they exhibit a stable dependency of their phase as a function of their period differences.
Our measurements show that LCO's synchronize insofar as the phases involved in the phase-locking feedback do not exceed values related to the widths of the coupling light pulses.
Finally we suggest using the criterion associating synchronization and phase-locking domain as a test to compare theoretical models of interacting LCO's to their experimental implementations. In a further publication, we analyze this situation.
FIG. 1: View of a single LCO.

FIG. 2: Group of nine LCO's.
FIG. 3: Block diagram of the LCO.
FIG. 4: Definition of parameters in an LCO.
FIG. 5: (a) Time difference as a function of the short impulsion oscillator period. (b) Duration of the LCO A 's short alternation as a function of the short impulsion oscillator period.
FIG. 6: Signal shape for several phase differences between LCO B and the short pulse blind LCO A .
FIG. 7: (a) Time difference as a function of the long impulsion blind LCO period. (b) Duration of the LCO B 's short alternation as a function of the long impulsion blind LCO period.
FIG. 8: Signal shape under different conditions for the coupling of LCO B with a long pulse blind LCO A .
FIG. 9: Variation of some magnitudes as a function of the LCO A period in the dark. (a) Time difference. (b) Short alternations γ A and γ B of both LCO's. (c) Synchronization period.
FIG. 10: Signal shape under different conditions for two coupled LCO's.
FIG. 11: Variation of some magnitudes as a function of the period in the dark of LCO A (reference, in the mid position) in the case of three LCO's interacting in line. (a) Synchronization period. (b) Time difference of LCO B and LCO C with respect to LCO A .
[1] A. Pikovsky, M. Rosenblum and J. Kurths, Synchronization: A Universal Concept in Nonlinear Sciences (Cambridge University Press, Cambridge, 2001).
[2] I. Blekhman, Synchronization in Science and Technology (Asme Press, New York, 1988).
[3] R. E. Mirollo and S. H. Strogatz, Synchronization of pulse-coupled biological oscillators, SIAM Journal of Applied Mathematics 50 (1990) 1645-1662.
[4] S. H. Strogatz, R. E. Mirollo and P. C. Matthews, Coupled nonlinear oscillators below the synchronization threshold: Relaxation by generalized Landau damping, Phys. Rev. Lett. 68 (1992) 2730-2733.
[5] H. G. Winful and L. Rahman, Synchronized chaos and spatiotemporal chaos in arrays of coupled lasers, Phys. Rev. Lett. 65 (1990) 1575-1578.
[6] K. Wiesenfeld, P. Colet and S. H. Strogatz, Synchronization transitions in a disordered Josephson series array, Phys. Rev. Lett. 76 (1996) 404-407.
[7] A. Hohl, A. Gavrielides, T. Erneux and V. Kovanis, Localized synchronization in two coupled nonidentical semiconductor lasers, Phys. Rev. Lett. 78 (1997) 4745-4748.
[8] J. Buck and E. Buck, Mechanism of rhythmic synchronous flashing of fireflies, Science 159 (1968) 1319-1327.
[9] J. Buck, E. Buck, J. F. Case and F. E. Hanson, Control of flashing in fireflies. V. Pacemaker synchronization in Pteroptyx cribellata, J. Comp. Physiol. A 144 (1981) 287-298.
[10] J. Buck and E. Buck, Synchronous fireflies, Sc. Am. 234 (1976) 74-85.
[11] J. Buck and E. Buck, Toward a functional interpretation of synchronous flashing by fireflies, Am. Naturalist 112 (1978) 471-492.
[12] J. E. Lloyd, Fireflies of Melanesia: Bioluminiscence, mating behavior and synchronous flashing (Coleoptera: Lampyridae), Env. Entom. 2 (1973) 991-1008.
[13] J. E. Lloyd, Model for the mating protocol of synchronously flashing fireflies, Nature 245 (1973) 268-270.
[14] A. Moiseff and J. Copeland, A new type of synchronized flashing in a North America firefly, J. Ins. Behavior 13 (2000) 597-612.
[15] G. B. Ermentrout, An adaptive model for synchrony in the firefly Pteroptyx malaccae, J. Math. Biol. 29 (1991) 571-585.
[16] S. Camazine, J. L. Deneubourg, S. Franks, J. Sneyd, G. Theraulaz and E. Bonabeau, Self-organization in Biological Systems (Princeton University Press, Princeton, 2001).
[17] W. Garver and F. Moss, Electronic fireflies, Sc. Am. 269 (1993) 128-130.
[18] LINEAR Databook, LM555/LM555C Timer, National Semiconductor Corporation (1982) (9): 33-38.
|
[] |
[
"Extending Utility Representations of Partial Orders",
"Extending Utility Representations of Partial Orders"
] |
[
"Pavel Chebotarev \nTrapeznikov Institute of Control and Management Sciences\n65 Profsoyuznaya117997MoscowRussia\n"
] |
[
"Trapeznikov Institute of Control and Management Sciences\n65 Profsoyuznaya117997MoscowRussia"
] |
[] |
The problem is considered as to whether a monotone function defined on a subset P of Euclidean space IR k can be strictly monotonically extended to IR k . It is proved that this is the case if and only if the function is separably increasing. Explicit formulas are given for a class of extensions which involves an arbitrary function. Similar results are obtained for utility functions that represent strict partial orders on abstract sets X. The special case where P is a Pareto subset of IR k (or of X) is considered.
|
10.1007/978-3-642-56038-5_4
|
[
"https://export.arxiv.org/pdf/math/0508199v1.pdf"
] | 3,263,893 |
math/0508199
|
263c32de6b0bdae93196c5f1ce97079ce5154bc3
|
Extending Utility Representations of Partial Orders
11 Aug 2005
Pavel Chebotarev
Trapeznikov Institute of Control and Management Sciences
65 Profsoyuznaya117997MoscowRussia
Extending Utility Representations of Partial Orders
11 Aug 2005. Keywords: Extension of utility functions; Monotonicity; Utility representation of partial orders; Pareto set
The problem is considered as to whether a monotone function defined on a subset P of Euclidean space IR k can be strictly monotonically extended to IR k . It is proved that this is the case if and only if the function is separably increasing. Explicit formulas are given for a class of extensions which involves an arbitrary function. Similar results are obtained for utility functions that represent strict partial orders on abstract sets X. The special case where P is a Pareto subset of IR k (or of X) is considered.
Introduction
Suppose that a decision maker defines his or her utility function on some subset P of the Euclidean space of alternatives IR k . Utility functions are usually assumed to be strictly increasing in the coordinates which correspond to partial criteria. Therefore, it is useful to specify the conditions under which a strictly monotone function defined on P can be strictly monotonically extended to IR k .
In this paper, we demonstrate that such an extension is possible if and only if the function defined on P is separably increasing. Explicit formulas for the extension are provided.
An interesting special case is where the structure of subset P does not permit any violation of strict monotonicity on P . This is the case where P is a Pareto set. A corollary given in Section 3 addresses this situation. The results are translated to the general case of utility functions that represent strict partial orders on arbitrary sets. We do not impose continuity requirements in these settings.
Notation, definitions, and main results
For any x, y ∈ IR k , x ≥ y means [x i ≥ y i for all i ∈ {1, . . . , k}]; x ≤ y means [x i ≤ y i for all i ∈ {1, . . . , k}]; x > y means [x ≥ y and not x = y ]; x < y means [x ≤ y and not x = y ]. These relations ≥, ≤, >, and < on IR k will be called Paretian.
Consider an arbitrary subset P of IR k and strictly increasing (with respect to the above > relation) real-valued functions f P (x) defined on P . The problem is to monotonically extend such a function f P (x) to IR k , provided that it is possible, and to indicate conditions under which this is possible.
An arbitrary function f P (x) defined on any P ⊆ IR k is said to be strictly increasing on P with respect to the > relation or simply strictly increasing on P if for every x, y ∈ P, x > y implies f P (x) > f P ( y ).
Definition 1 A real-valued function f P (x) defined on P ⊂ IR k is monotonically extendible to IR k if there exists a function f (x) : IR k → IR such that ( * ) the restriction of f (x) to P coincides with f P (x), and ( * * ) f (x) is strictly increasing with respect to >.
In this case, f (x) is a monotone extension of f P (x) to IR k .
The functions f P (x) and f (x) can be naturally considered as utility (or objective) functions. This means that f P (x) and f (x) can be interpreted as real-valued functions that represent the preferences of a decision maker on the corresponding sets of alternatives (the alternatives are identified with k-dimensional vectors of partial criteria values or vectors of goods).
The Paretian > relation is a strict partial order on IR k , i.e., it is transitive and irreflexive. That is why we deal with a specific problem on the monotonic extension of functions which are monotone with respect to a partial order on their domain. We discuss some connections of this problem with the classical results of utility theory in Section 4 and turn to a more general formulation in Section 5.
For technical convenience, let us add two extreme points to IR and denote the result by IR̄:

IR̄ = IR ∪ {−∞, +∞},

and extend the ordinary > relation to IR̄: for every x ∈ IR, set +∞ > x > −∞ and +∞ > −∞. This > relation will determine the values of the min and max functions on finite subsets of IR̄.

The functions sup and inf will be considered as maps from 2^IR to IR̄, defined for the empty set as follows: sup ∅ = −∞ and inf ∅ = +∞.
Given f P (x), define two auxiliary functions for every x ∈ IR k :
a(x) = sup {f P ( y ) | y ≤ x, y ∈ P },   (1)
b(x) = inf {f P (z) | z ≥ x, z ∈ P }.   (2)
By this definition, a(x) and b(x) can be infinite.
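To make the definitions concrete, here is a small computational sketch (not from the paper; the finite P, the sample f_P, and all names are ours) that evaluates a(x) and b(x) of (1)-(2) for a finite P ⊂ IR^k, with sup ∅ = −∞ and inf ∅ = +∞ represented by floating-point infinities:

```python
import math

def leq(y, x):
    """Componentwise y <= x (the Paretian <= relation on IR^k)."""
    return all(yi <= xi for yi, xi in zip(y, x))

def a(x, P, fP):
    """a(x) = sup { fP(y) : y in P, y <= x }; sup of the empty set is -inf."""
    vals = [fP(y) for y in P if leq(y, x)]
    return max(vals) if vals else -math.inf

def b(x, P, fP):
    """b(x) = inf { fP(z) : z in P, z >= x }; inf of the empty set is +inf."""
    vals = [fP(z) for z in P if leq(x, z)]
    return min(vals) if vals else math.inf

# A three-point P in IR^2 with fP equal to the sum of coordinates.
P = [(0.0, 0.0), (1.0, 2.0), (2.0, 1.0)]
fP = lambda p: p[0] + p[1]

print(a((1.5, 1.5), P, fP))  # only (0,0) <= (1.5,1.5), so a = 0.0
print(b((1.5, 1.5), P, fP))  # no point of P dominates (1.5,1.5), so b = inf
```

For finite P the sup and inf in (1)-(2) reduce to max and min, which is all this sketch relies on.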
It follows from the transitivity of the Paretian > relation that for every function f P (x), functions a(x) and b(x) are nonstrictly increasing with respect to >:
For all x, x ′ ∈ IR k , x ′ > x implies [a(x ′ ) ≥ a(x) and b(x ′ ) ≥ b(x)]. (3)
Moreover,
For any x ∈ P, a(x) ≥ f P (x) ≥ b(x).(4)
It can be easily shown that f P (x) is nonstrictly increasing if and only if
b(x) ≥ a(x) for all x ∈ IR k .(5)
The following definition involves a strengthening of (5).
Definition 2 A function f P (x) defined on P ⊂ IR k is separably increasing if for any x, x ′ ∈ IR k , x ′ > x implies b(x ′ ) > a(x).
Definition 3 Given P ⊆ IR k , let us say that P ′ ⊆ P is an upper set if for some a ∈ IR k ,
P ′ = { x | x ≥ a, x ∈ P }; P ′′ ⊆ P is a lower set if for some a ∈ IR k , P ′′ = { x | x ≤ a, x ∈ P }.
Proposition 4 If f P (x) defined on P ⊂ IR k is separably increasing, then (a) f P (x) is strictly increasing;
(b) f P (x) is upper-bounded on lower sets and lower-bounded on upper sets; in other terms, there are no x ∈ IR k such that a(x) = +∞ or b(x) = −∞;
(c) For every x ∈ IR k , b(x) ≥ a(x); (d) For every x ∈ P, b(x) = a(x) = f P (x).
All proofs are given in Section 6. Proposition 4 and other statements are proved there in the more general case of utility functions that represent strict partial orders on arbitrary sets (cf. Section 5).
Observe that there are functions f P (x) that are strictly increasing, upper-bounded on lower sets and lower-bounded on upper sets, but are not separably increasing. An example is

f P (x) =
  x 1 ,       x 1 ≤ 0,
  x 1 − 1,    x 1 > 1,   (6)

where P = ] − ∞, 0] ∪ ]1, +∞[ ⊂ IR 1 . This function satisfies (a) and (b) (and, as well as all nonstrictly increasing functions, it also satisfies (c) and (d)) of Proposition 4, but it is not separably increasing. Indeed, b(1) = 0 = a(0), while 1 > 0.
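The failure of separable increasingness in this example can be checked numerically; the sketch below (ours, with P approximated by a finite grid) shows a(0) = 0 while b(1) only approaches 0 from above, so b(1) = 0 = a(0) in the limit although 1 > 0:

```python
import math

def fP(x):                       # the function of (6), k = 1
    return x if x <= 0 else x - 1.0

# Finite grids approximating P = ]-inf, 0] U ]1, +inf[ near the critical points.
P = [i / 1000 for i in range(-10000, 1)]      # grid on [-10, 0]
P += [i / 1000 for i in range(1001, 10001)]   # grid on ]1, 10]

def a(x):
    vals = [fP(y) for y in P if y <= x]
    return max(vals) if vals else -math.inf

def b(x):
    vals = [fP(z) for z in P if z >= x]
    return min(vals) if vals else math.inf

print(a(0))             # 0.0
print(round(b(1), 3))   # 0.001 on this grid; tends to 0 as the grid is refined
```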
Suppose that f P (x) defined on P ⊂ IR k is separably increasing. Below we prove that this is a necessary and sufficient condition for the existence of monotone extensions of f P (x) to IR k and demonstrate how such extensions can be constructed.
Let u(x) : IR k → IR be any strictly increasing (with respect to the Paretian >) and bounded function defined on the whole space IR k . Suppose that α, β ∈ IR are such that

α < u(x) < β for all x ∈ IR k .   (7)
As an example of such a function, consider

u example (x) = ((β − α)/π) ( arctan( ∑_{i=1}^{k} x i ) + π/2 ) + α.   (8)
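A direct transcription of (8) (illustration only; the parameter values α = 0, β = 1 are ours) confirms the bounds α < u(x) < β and strict monotonicity in the coordinates:

```python
import math

def u_example(x, alpha=0.0, beta=1.0):
    """The bounded strictly increasing function of (8)."""
    return (beta - alpha) / math.pi * (math.atan(sum(x)) + math.pi / 2) + alpha

print(round(u_example((0.0, 0.0)), 6))                 # 0.5
print(u_example((1.0, 0.0)) > u_example((0.0, 0.0)))   # True
print(0.0 < u_example((1e6, 1e6)) < 1.0)               # True: bounds hold far out
```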
Consider also the special case of strictly increasing functions u 1 (x) such that
0 < u 1 (x) < 1.(9)
These functions can be obtained from the strictly increasing functions u(x) that satisfy (7) as follows:
u 1 (x) = (β − α) −1 (u(x) − α).(10)
For an arbitrary strictly increasing function u 1 (x) that satisfies (9), every α and β > α, and every x ∈ IR k , let us define

f (x) = max { a(x), min {b(x), β} − β + α } · (1 − u 1 (x)) + min { b(x), max {a(x), α} − α + β } · u 1 (x).   (11)
Note that for every separably increasing f P (x), function f (x) : IR k → IR is well defined, i.e., the two terms in the right-hand side of (11) are finite. This follows from item (b) of Proposition 4.
The main result of this section is
Theorem 5 Suppose that f P (x) is a real-valued function defined on some P ⊂ IR k . Then f P (x) is monotonically extendible to IR k if and only if f P (x) is separably increasing. Moreover, for every separably increasing f P (x), every strictly increasing u 1 (x) : IR k → IR that satisfies (9), and every α, β ∈ IR such that α < β,
the function f (x) defined by (11) is a strictly increasing extension of f P (x) to IR k .
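A small numerical sketch of the theorem (ours: the three-point antichain P, the choice of u1 built from (8)-(10), and α = 0, β = 10 are all assumptions for illustration), checking that (11) restricts to f_P on P and increases strictly at sample points:

```python
import math

P = {(0.0, 3.0): 1.0, (1.0, 1.0): 2.0, (3.0, 0.0): 3.0}   # fP on an antichain
ALPHA, BETA = 0.0, 10.0

def leq(y, x):
    return all(yi <= xi for yi, xi in zip(y, x))

def a(x):
    vals = [v for p, v in P.items() if leq(p, x)]
    return max(vals) if vals else -math.inf

def b(x):
    vals = [v for p, v in P.items() if leq(x, p)]
    return min(vals) if vals else math.inf

def u1(x):
    """Strictly increasing with 0 < u1 < 1, cf. (8)-(10)."""
    return (math.atan(sum(x)) + math.pi / 2) / math.pi

def f(x):
    """Formula (11); float infinities emulate a(x) = -inf and b(x) = +inf."""
    return (max(a(x), min(b(x), BETA) - BETA + ALPHA) * (1 - u1(x))
            + min(b(x), max(a(x), ALPHA) - ALPHA + BETA) * u1(x))

for p, v in P.items():
    assert abs(f(p) - v) < 1e-9         # f restricts to fP on P: condition (*)
assert f((2.0, 2.0)) > f((1.0, 1.0))    # sample strict increase: condition (**)
```

Off P, a(x) and b(x) may be ±∞, but the inner min/max in (11) keeps f(x) finite, exactly as item (b) of Proposition 4 guarantees.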
The same family of extensions f (x) can be generated using all strictly increasing functions u(x) satisfying (7).
Proposition 6 For every separably increasing f P (x) and every strictly increasing u(x) : IR k → IR that satisfies (7), the function
f (x) = (β − α)^{−1} [ max { a(x) − α, min {b(x) − β, 0} } (β − u(x)) + min { b(x) − β, max {a(x) − α, 0} } (u(x) − α) ] + u(x)   (12)
is a strictly increasing extension of f P (x) to IR k . This function coincides with (11) where u 1 (x) is defined by (10).
It is straightforward to verify that function (12) coincides with (11) where u 1 (x) is defined by (10). This fact will be employed in the proofs of Theorem 5 and some propositions, after which the remaining statement of Proposition 6 will follow from Theorem 5.
In (12) we assume −∞ − α = −∞ and +∞ − β = +∞ whenever a(x) = −∞ and b(x) = +∞, respectively, since α and β are finite by definition.
To simplify (12), we partition IR k \ P into four regions:

A = { x ∈ IR k \P | (∃ y ∈ P : y < x) & (∃ z ∈ P : x < z) },
L = { x ∈ IR k \P | (¬∃ y ∈ P : y < x) & (∃ z ∈ P : x < z) },
U = { x ∈ IR k \P | (∃ y ∈ P : y < x) & (¬∃ z ∈ P : x < z) },
N = { x ∈ IR k \P | (¬∃ y ∈ P : y < x) & (¬∃ z ∈ P : x < z) }.   (13)

Obviously, any two of these regions are disjoint, and IR k = P ∪ A ∪ L ∪ U ∪ N.
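A tiny classifier (illustration; the finite P and the sample points are ours) that places a point of IR^k into P, A, L, U, or N according to (13):

```python
def lt(y, x):
    """Strict Paretian y < x: componentwise <= and not equal."""
    return all(a <= b for a, b in zip(y, x)) and y != x

def region(x, P):
    if x in P:
        return "P"
    below = any(lt(y, x) for y in P)   # exists y in P with y < x
    above = any(lt(x, z) for z in P)   # exists z in P with x < z
    return {(True, True): "A", (False, True): "L",
            (True, False): "U", (False, False): "N"}[(below, above)]

P = [(0.0, 3.0), (1.0, 1.0), (3.0, 0.0)]
print(region((2.0, 2.0), P))    # dominates (1,1), dominated by none: "U"
print(region((0.0, 0.0), P))    # below (0,3), above nothing in P: "L"
print(region((-1.0, 5.0), P))   # comparable with nothing in P: "N"
```

For an antichain P such as this one, the region A is empty, which is exactly the Pareto-set situation of Section 3.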
Proposition 7 For every separably increasing f P (x), the function f (x) defined by (12) can be represented as follows:

f (x) =
  f P (x),                            x ∈ P,
  min {b(x) − β, 0} + u(x),           x ∈ L,
  max {a(x) − α, 0} + u(x),           x ∈ U,
  u(x),                               x ∈ N,
  the unsimplified expression (12),   x ∈ A.   (14)
Proposition 7 clarifies the role of u(x) in the definition of f (x). According to (14), u(x) determines the rate of growth of f (x) on L and U, and f (x) = u(x) on the set N, which consists of the >-neutral points with respect to the elements of P . Now let us give one more representation for f (x), which can be used in (14) when x ∈ A. Define four regions of a different nature in IR k :

S 1 = { x ∈ IR k | b(x) − a(x) ≤ β − α },
S 2 = { x ∈ IR k | b(x) − a(x) ≥ β − α and b(x) ≤ β },
S 3 = { x ∈ IR k | b(x) − a(x) ≥ β − α and a(x) ≥ α },
S 4 = { x ∈ IR k | a(x) ≤ α and b(x) ≥ β }.
It is easily seen that IR k = S 1 ∪ S 2 ∪ S 3 ∪ S 4 .
Proposition 8 For every separably increasing f P (x), the function f (x) defined by (12) or (11), with u 1 (x) and u(x) related by (10), can be represented as follows:

f (x) =
  a(x)(1 − u 1 (x)) + b(x) u 1 (x),   x ∈ S 1 ,
  b(x) + u(x) − β,                    x ∈ S 2 ,
  a(x) + u(x) − α,                    x ∈ S 3 ,
  u(x),                               x ∈ S 4 .   (15)
The regions S 1 , S 2 , S 3 , and S 4 intersect along parts of the border sets b(x) − a(x) = β − α, a(x) = α, and b(x) = β. Accordingly, the expressions for f (x) given by Proposition 8 agree on these intersections.
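The agreement of the case form (15) with (11) can be checked numerically. The one-dimensional sketch below (ours: P = {−1, 1} with f_P(x) = x, and α = 0.4, β = 0.6 chosen so that every region S1-S4 is hit) evaluates both formulas at one sample point per region:

```python
import math

ALPHA, BETA = 0.4, 0.6
Pf = {-1.0: -1.0, 1.0: 1.0}        # fP(x) = x on P = {-1, 1} in IR^1

def a(x):
    vals = [v for p, v in Pf.items() if p <= x]
    return max(vals) if vals else -math.inf

def b(x):
    vals = [v for p, v in Pf.items() if p >= x]
    return min(vals) if vals else math.inf

def u(x):                          # bounded strictly increasing, cf. (8)
    return (BETA - ALPHA) / math.pi * (math.atan(x) + math.pi / 2) + ALPHA

def u1(x):                         # the normalization (10)
    return (u(x) - ALPHA) / (BETA - ALPHA)

def f11(x):                        # formula (11)
    return (max(a(x), min(b(x), BETA) - BETA + ALPHA) * (1 - u1(x))
            + min(b(x), max(a(x), ALPHA) - ALPHA + BETA) * u1(x))

def f15(x):                        # the case representation (15)
    ax, bx = a(x), b(x)
    if bx - ax <= BETA - ALPHA:
        return ax * (1 - u1(x)) + bx * u1(x)   # S1
    if bx <= BETA:
        return bx + u(x) - BETA                # S2
    if ax >= ALPHA:
        return ax + u(x) - ALPHA               # S3
    return u(x)                                # S4

# x = -2 lies in S2, x = -1 and x = 1 in S1, x = 0 in S4, x = 2 in S3.
for x in (-2.0, -1.0, 0.0, 1.0, 2.0):
    assert abs(f11(x) - f15(x)) < 1e-9
```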
Extendibility of arbitrary functions defined on Pareto sets
Consider the case where P is a Pareto set.
Definition 9 A set P ⊂ IR k is a Pareto set in IR k if there are no x, x ′ ∈ P such that x ′ > x.
Observe that for every function f P (x) defined on a Pareto set P, the set A introduced in (13) is empty. It turns out that such a function f P (x) is separably increasing if and only if it is upper-bounded on lower sets and lower-bounded on upper sets (see Definition 3). Based on this, the following corollary of Theorem 5 and Proposition 7 holds.
Corollary 10 Suppose that f P (x) : P → IR is a function defined on a Pareto set P ⊂ IR k . Then f P (x) can be strictly monotonically extended to IR k if and only if f P (x) is upper-bounded on lower sets and lower-bounded on upper sets. Moreover, for such a function f P (x) and every strictly increasing function u(x) : IR k → IR that satisfies (7), the function

f (x) =
  f P (x),                       x ∈ P,
  min {b(x), β} − (β − u(x)),    x ∈ L,
  max {a(x), α} + u(x) − α,      x ∈ U,
  u(x),                          x ∈ N,   (16)
provides a monotone extension of f P (x) to IR k .
On regions L and U, f (x) is expressed through the "relative" functions β −u(x) and u(x) − α; this is equivalent to the form given in (14). A result closely related to Corollary 10 was used in Chebotarev and Shamis (1998) to construct an implicit form of monotonic scoring procedures for preference aggregation.
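A sketch of this corollary (our construction: P is a three-point antichain in IR^2 carrying an arbitrary bounded f_P, and u is the function (8) with α = 0, β = 100) that implements (16) directly via the region tests of (13):

```python
import math

P = {(0.0, 2.0): 5.0, (1.0, 1.0): 7.0, (2.0, 0.0): 4.0}   # any bounded fP works
ALPHA, BETA = 0.0, 100.0

def lt(y, x):
    return all(a <= b for a, b in zip(y, x)) and y != x

def u(x):                          # the bounded strictly increasing function (8)
    return (BETA - ALPHA) / math.pi * (math.atan(sum(x)) + math.pi / 2) + ALPHA

def f(x):                          # formula (16); A is empty for a Pareto set
    if x in P:
        return P[x]
    below = [v for p, v in P.items() if lt(p, x)]
    above = [v for p, v in P.items() if lt(x, p)]
    if above:                      # x in L: here b(x) = min(above), a(x) = -inf
        return min(min(above), BETA) - (BETA - u(x))
    if below:                      # x in U: here a(x) = max(below), b(x) = +inf
        return max(max(below), ALPHA) + u(x) - ALPHA
    return u(x)                    # x in N

assert f((1.0, 1.0)) == 7.0                 # restriction to P
assert f((2.0, 2.0)) > f((1.0, 1.0))        # (2,2) > (1,1) componentwise
assert f((0.0, 0.0)) < min(P.values())      # points below P get small values
```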
The extension problem in the context of utility theory
Extensions and utility representations of partial orders have been studied since Zorn's lemma and the Szpilrajn theorem, according to which every strict partial order extends to a strict linear order.
In general, neither strict partial orders nor their linear extensions must have utility representations. Let us discuss the connections between these extensions and utility representations in more detail. Recall that

Definition 11 A utility representation of a strict partial order ≻ on a set X is a function u : X → IR such that for every² x, y ∈ X,

x ≻ y ⇒ u(x) > u( y ),   (17)
x ≈ y ⇒ u(x) = u( y ),   (18)

where, by definition,

x ≈ y ⇔ [ ∀ z ∈ X : x ∼ z ⇔ y ∼ z ],
x ∼ y ⇔ [ neither x ≻ y nor y ≻ x ].
A sufficient condition for a strict partial order ≻ to have a utility representation is the existence of a countable and dense (w.r.t. the induced strict partial order) subset in the factor set X/ ≈ (see Debreu, 1964; Fishburn, 1970). Generally, this sufficient condition is not necessary, but if ≻ is a strict weak order, then it is necessary.

The Paretian > relation on IR k is a special strict partial order. Any lexicographic order on IR k is its strict linear extension. Such extensions have no utility representations, whereas the > relation has a wide class of utility representations.³ These are all functions strictly increasing in all coordinates.

Every such strictly increasing function induces a strict weak order on IR k that extends >. Naturally, not all strict weak orders that extend > can be obtained in this manner. A sufficient condition for such representability is the Archimedean property, which ensures the existence of a countable and dense (w.r.t. the strict weak order) subset in IR k .
Thus, the utility representations of > induce a special class of strict weak orders that extend >. Such a strict weak order determines its utility representation up to arbitrary monotone transformations (some related specific results are given in Morkeliūnas, 1986b).
In the previous sections, we considered a (utility) function f P that is defined on a subset P of IR k and represents the restriction of > to P . If P is a Pareto subset, then this imposes no constraints on f P . The problem was to find conditions under which there exist functions f that ( * ) reduce to f P on P and ( * * ) represent > on IR k , and to provide an explicit form of such functions.
Observe that every strictly increasing function f P induces some strict weak order ≻ on P that contains the restriction of > to P . For ≻, there exists a countable and dense (w.r.t. ≻) subset in the factor set P/ ∼, where ∼ corresponds to ≻. Combining this subset with the set of vectors in IR k that have rational coordinates, we obtain a countable and dense subset in the factor set IR k / ∼ (the ≈ relation that corresponds to > is the identity relation). Consequently, there exist utility functions g that represent T (> ∪ ≻) on IR k , where T (·) designates transitive closure.
The restriction of such a function g to P need not coincide with f P , but it is related with f P by a strictly increasing transformation, since they represent the same weak order ≻. Thus, we obtain
Proposition 12 For every strictly increasing function f P on P ⊂ IR k , there exists a strictly increasing map ϕ of the range of f P to IR such that ϕ(f P ) is monotonically extendible to IR k . If f P is not strictly increasing, then there are no such maps.
Proposition 12 elucidates a difference between separably increasing and strictly increasing functions f P with respect to the extendibility. The former functions are monotonically extendible to IR k (Theorem 5), whereas for the latter functions, only some their strictly increasing transformations are extendible in the general case.
Extending utility functions on arbitrary sets
A shortcoming of the definition (11) is that it provides discontinuous extensions even for continuous functions f P . On the other hand, the extension technique does not strongly rely on the structure of IR k . Indeed, the connection of f (x) with IR k , its domain, reduces to the dependence on a(x), b(x), and u(x), which have a rather general nature. This enables one to translate the above results to an abstract set X substituted for IR k and any strict partial order > on X substituted for the Paretian > relation on IR k . The author thanks Andrey Vladimirov for his suggestion to consider the problem in this general setting. To formulate a counterpart of Theorem 5, we will use the following notation:
X is a nonempty set; > is a fixed strict partial order on X; P ⊂ X; > P is the restriction of > to P ; z ∈ X is a maximal element of > iff x > z for no x ∈ X; y ∈ X is a minimal element of > iff x < y for no x ∈ X.
We will also use all the above definitions and formulas (except for (8) and Definition 2), where the substitution of X for IR k is implied.
For arbitrary strict partial orders, which may have maximal and minimal elements, a somewhat stronger condition should be used in the definition of separably increasing functions. Let

X̄ = X ∪ {−∞, +∞}.

Extend > to X̄ in the usual way to get a strict partial order on X̄: +∞ > −∞ and +∞ > x > −∞ for every x ∈ X. We use the same symbol > for a strict partial order on X, for its extension to X̄, and for the ordinary "greater" relation on IR, which should not lead to confusion.

Definition 13 A function f P (x) defined on P ⊂ X is separably increasing if for any x, x ′ ∈ X̄, x ′ > x implies b(x ′ ) > a(x).

Since for every f P , b(+∞) = +∞ and a(−∞) = −∞, Definition 13 implies that for a separably increasing function, a(x) < +∞ and b(x) > −∞ whenever x ∈ X is a maximal and a minimal element of >, respectively. If > has neither maximal nor minimal elements (as is the case for the Paretian > relation on IR k ), then the replacement of X̄ with X in Definition 13 does not alter the class of separably increasing functions.
Obviously, if > has a utility representation, then it has bounded utility representations (which can be constructed, say, by transformations like (8)).
Theorem 14 Suppose that a strict partial order > defined on X has a utility representation, P ⊂ X, and f P : P → IR is a utility representation of > P . Then f P is monotonically extendible to X if and only if f P is separably increasing. Moreover, if u 1 (x) is a utility representation of > that satisfies (9) and α < β, then (11) provides a monotone extension of f P to X.
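To illustrate Theorem 14 away from IR^k, the toy sketch below (entirely ours) takes X = {1,...,12} ordered by proper divisibility, f_P on the antichain P = {5, 6}, and u1(n) = (number of divisors of n)/13, which represents this order and lies in (0, 1); formula (11) then extends f_P monotonically to all of X:

```python
import math

X = list(range(1, 13))
def gt(n, m):                     # n > m iff m properly divides n
    return n != m and n % m == 0

P = {5: 1.0, 6: 2.0}              # fP on the antichain {5, 6}
ALPHA, BETA = 0.0, 10.0

def a(x):
    vals = [v for p, v in P.items() if p == x or gt(x, p)]
    return max(vals) if vals else -math.inf

def b(x):
    vals = [v for p, v in P.items() if p == x or gt(p, x)]
    return min(vals) if vals else math.inf

def u1(n):                        # divisor count is strictly increasing w.r.t. >
    return sum(1 for d in X if n % d == 0) / 13.0

def f(x):                         # formula (11)
    return (max(a(x), min(b(x), BETA) - BETA + ALPHA) * (1 - u1(x))
            + min(b(x), max(a(x), ALPHA) - ALPHA + BETA) * u1(x))

assert all(f(n) > f(m) for n in X for m in X if gt(n, m))   # condition (17)
assert all(abs(f(p) - v) < 1e-9 for p, v in P.items())      # restriction to P
```

Here the ≈-equivalent elements 7 and 11 have equal divisor counts, so u1 also satisfies (18), and f assigns them equal values.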
If a function f P is not separably increasing according to Definition 13 but satisfies this definition with X̄ replaced by X, then (11) can be used to obtain extensions of f P in terms of "quasiutility" functions f : X → IR̄.
Propositions 4, 6, 7, and 8 are preserved in the case of arbitrary X. Proposition 12, as well as Theorem 14, is valid for the strict partial orders > on X that have a utility representation. The proofs given in Section 6 are conducted for this general case.
Interesting further problems are describing the complete class of extensions of f P to IR k (and to X), specifying necessary and sufficient conditions for the existence of continuous extensions and constructing them, and considering the extension problem with an ordered extension of the field IR as the range of f P , u, and f . An interesting result on the existence of continuous utility representations for weak orders on IR k was obtained in Morkeliūnas (1986a).
Proofs
The proofs of all propositions and Corollary 10 are given in the general case of a strict partial order on an abstract set X (see Section 5, especially, Definition 13). Theorem 14 which generalizes Theorem 5 is proved separately.
Proof of Proposition 4. (a) Assume that f P (x) is not strictly increasing. Then there are x, x ′ ∈ P such that x ′ > x and f P (x ′ ) ≤ f P (x). Then, by (4), b(x ′ ) ≤ f P (x ′ ) ≤ f P (x) ≤ a(x) holds, i.e., f P (x) is not separably increasing.

(b) Let P ′ be a lower set. Then there exists a ∈ X such that P ′ = { x | x ≤ a, x ∈ P }. Consider any a ′ ∈ X̄ such that a ′ > a. Since f P (x) is separably increasing, b(a ′ ) > a(a). Therefore, a(a) < +∞. Since a(a) = sup {f P ( y ) | y ∈ P ′ }, f P (x) is upper-bounded on P ′ . Similarly, f P (x) is lower-bounded on upper sets.

(c) Let x ∈ X. By (a), if y , z ∈ P, z ≥ x, and y ≤ x, then f P (z) ≥ f P ( y ). Having in mind that sup ∅ = −∞ and inf ∅ = +∞, we obtain b(x) ≥ a(x).

(d) By (a), for every x ∈ P, b(x) = f P (x) and a(x) = f P (x) hold. This completes the proof.
Now we prove Proposition 8; thereafter Proposition 8 will be used to prove Proposition 7 and Theorem 5.
Proof of Proposition 8. Let x ∈ S 1 . Since b(x) − a(x) ≤ β − α, we have min {b(x), β} − a(x) ≤ β − α and b(x) − max {a(x), α} ≤ β − α, whence a(x) ≥ min {b(x), β} − β + α and b(x) ≤ max {a(x), α} − α + β. Therefore, (11) reduces to f (x) = a(x)(1 − u 1 (x)) + b(x)u 1 (x).

Let x ∈ S 2 . The inequalities b(x) − a(x) ≥ β − α and b(x) ≤ β imply a(x) ≤ α; therefore, (11) reduces to f (x) = b(x) + u(x) − β.

Let x ∈ S 3 . The inequalities b(x) − a(x) ≥ β − α and a(x) ≥ α imply b(x) ≥ β; therefore, (11) reduces to f (x) = a(x) + u(x) − α.

The proof for the case x ∈ S 4 is straightforward.
Proof of Proposition 7. Let x ∈ P. Then, by item (d) of Proposition 4, b(x) = a(x) = f P (x); therefore, by (7), b(x) − a(x) ≤ β − α, hence x ∈ S 1 . Using Proposition 8, we have f (x) = f P (x)(1 − u 1 (x)) + f P (x) u 1 (x) = f P (x).

Let x ∈ U. Then b(x) = +∞, hence (12) reduces to f (x) = max {a(x) − α, 0} + u(x). Similarly, if x ∈ L, then a(x) = −∞ and (12) reduces to f (x) = min {b(x) − β, 0} + u(x).

Finally, if x ∈ N, then a(x) = −∞ and b(x) = +∞, whence a(x) < α and b(x) > β, and Proposition 8 provides f (x) = u(x).
Proof of Theorem 5. Let f P (x) be separably increasing. By Proposition 7, the restriction of f (x) to P coincides with f P (x).

Let us prove that f (x) is strictly increasing on IR k . This can be demonstrated directly by analyzing equation (11). Here, we give another proof, which does not require any additional calculations with min and max. By Proposition 8, function (11) coincides with (15), where u(x) is related with u 1 (x) by (10).

Suppose that x, x ′ ∈ IR k and x ′ > x. Then, by (3) and the strict monotonicity of u(x) and u 1 (x), we have

a(x ′ ) ≥ a(x), b(x ′ ) ≥ b(x), u(x ′ ) > u(x), and u 1 (x ′ ) > u 1 (x).   (19)

Suppose first that x and x ′ belong to the same region: S 2 , S 3 , or S 4 . Then (19) yields

f (x ′ ) − f (x) ≥ u(x ′ ) − u(x) > 0,   (20)

hence, by (15), f (x) is strictly increasing on each of these regions.

If x, x ′ ∈ S 1 , then by (15), (19), (9), and (c) of Proposition 4,

f (x ′ ) − f (x) ≥ a(x)(1 − u 1 (x ′ )) + b(x)u 1 (x ′ ) − a(x)(1 − u 1 (x)) − b(x)u 1 (x) = (b(x) − a(x))(u 1 (x ′ ) − u 1 (x)) ≥ 0.   (21)

This implies that f (x ′ ) = f (x) is possible only if b(x ′ ) = b(x) and b(x) = a(x), i.e., only if b(x ′ ) = a(x). The last equality is impossible, since f P (x) is separably increasing by assumption. Therefore, f (x ′ ) > f (x), and f (x) is strictly increasing on S 1 .

Let now x and x ′ belong to different regions S i and S j . Consider the points that correspond to x and x ′ in the 3-dimensional space with coordinate axes a(·), b(·), and u(·) and connect these two points, (a(x), b(x), u(x)) and (a(x ′ ), b(x ′ ), u(x ′ )), by a line segment. The projection of this line segment and of the borders of the regions S 1 , S 2 , S 3 , and S 4 to the plane u = 0 is illustrated in Fig. 1.

[Fig. 1. An example of the line segment [(a(x), b(x), u(x)), (a(x ′ ), b(x ′ ), u(x ′ ))] in the space with coordinate axes a(·), b(·), and u(·), projected to the plane u = 0.]

Suppose that (a 1 , b 1 , u 1 ), . . . , (a p , b p , u p ), p ≤ 3, are the points where the line segment between (a(x), b(x), u(x)) and (a(x ′ ), b(x ′ ), u(x ′ )) crosses the borders of the regions. Then

a(x) ≤ a 1 ≤ · · · ≤ a p ≤ a(x ′ ),   (22)
b(x) ≤ b 1 ≤ · · · ≤ b p ≤ b(x ′ ),   (23)
u(x) < u 1 < · · · < u p < u(x ′ )   (24)

with strict inequalities in (22) or in (23) (or in both).

Consider f (x) represented by (15) as a function f (a, b, u) of a(x), b(x), and u(x). Then, using the fact that f (a, b, u) is nondecreasing in a and b on each region, strictly increasing in u on S 2 , S 3 , and S 4 , and strictly increasing in u on S 1 unless a(x) = b(x ′ ) (which is not the case, since f P (x) is separably increasing), we obtain

f (x) = f (a(x), b(x), u(x)) < f (a 1 , b 1 , u 1 ) < · · · < f (a p , b p , u p ) < f (a(x ′ ), b(x ′ ), u(x ′ )) = f (x ′ ).   (25)

This completes the proof that f (x) is strictly increasing.

It remains to prove that if f P (x) is not separably increasing, then it cannot be strictly monotonically extended to IR k . Indeed, if there are x, x ′ ∈ IR k such that x ′ > x and b(x ′ ) ≤ a(x), then strict monotonicity of f (x) requires f (x ′ ) ≤ b(x ′ ) and f (x) ≥ a(x) to hold, whence f (x) ≥ f (x ′ ), and strict monotonicity is violated. Theorem 5 is proved.

Proof of Corollary 10. Suppose that a strict partial order > on X has a utility representation. Let f P be a utility representation of > P , where > P is the restriction of > to a Pareto set P ⊂ X. Then, since the > relation is transitive, the set A is empty. It remains to prove that f P is separably increasing if and only if it is upper-bounded on lower sets and lower-bounded on upper sets. If f P is separably increasing, then these boundedness conditions are satisfied by Proposition 4.

Suppose now that f P is upper-bounded on lower sets and lower-bounded on upper sets, and assume that f P is not separably increasing. Then there exist x, x ′ ∈ X̄ such that x ′ > x and b(x ′ ) ≤ a(x). This is possible only if (a) b(x ′ ) = +∞ or (b) a(x) = −∞ or (c) there are y , z ∈ P such that y ≤ x and z ≥ x ′ . However, in (a), a(x) = +∞ and x ∈ X, hence f P is not upper-bounded on a lower set; in (b), b(x ′ ) = −∞ and x ′ ∈ X, hence f P is not lower-bounded on an upper set; in (c), by transitivity, z > y , which contradicts the definition of a Pareto set. Therefore, f P is separably increasing, and the corollary is proved.

Proof of Theorem 14. If f P is not separably increasing, then it is not monotonically extendible to X. Indeed, if the implication x ′ > x ⇒ b(x ′ ) > a(x) is violated for some x, x ′ ∈ X, then the argument is the same as for Theorem 5. Otherwise, if this implication is violated for x ∈ X̄ \ X, then x ′ > x implies x = −∞, and since b(x ′ ) ≤ a(x) = −∞, x ′ ∈ X holds, hence f (x ′ ) cannot be evaluated without violation of (17); the case of x ′ ∈ X̄ \ X is considered similarly.

The argument in the proof of Theorem 5 works here with no change to demonstrate that if > has a utility representation and f P is separably increasing, then every f defined by (11) satisfies condition (17) of utility representability. It remains to show that f satisfies (18). Let x, y ∈ X and x ≈ y . Then for every z ∈ X, [x ≻ z ⇔ y ≻ z and z ≻ x ⇔ z ≻ y ] (see, e.g., Fishburn, 1970). Consequently, a(x) = a( y ) and b(x) = b( y ). Since u represents >, u(x) = u( y ) holds. Therefore, f (x) = f ( y ). This completes the proof.

The author thanks Elena Yanovskaya, Elena Shamis, Andrey Vladimirov, and Fuad Aleskerov for helpful discussions.

² For uniformity, we designate the elements of X by boldface letters.
³ They do exist, say, because IR k contains countable and dense (w.r.t. >) subsets, one of which is the set of vectors with rational coordinates.
Chebotarev, P.Yu. and E. Shamis, 1998, Characterizations of scoring methods for preference aggregation, Annals of Operations Research 80, 299-332.

Debreu, G., 1964, Continuity properties of Paretian utility, International Economic Review 5, 285-293.

Fishburn, P., 1970, Utility Theory and Decision Making (Wiley, New York).

Morkeliūnas, A., 1986a, On the existence of a continuous superutility function, Lietuvos Matematikos Rinkinys / Litovskiy Matematicheskiy Sbornik 26, 292-297 (Russian).

Morkeliūnas, A., 1986b, On strictly increasing numerical transformations and the Pareto condition, Lietuvos Matematikos Rinkinys / Litovskiy Matematicheskiy Sbornik 26, 729-737 (Russian).
|
[] |
[
"Inferring Logical Forms From Denotations",
"Inferring Logical Forms From Denotations"
] |
[
"Panupong Pasupat [email protected] \nComputer Science Department\nComputer Science Department\nStanford University\nStanford University\n\n",
"Percy Liang [email protected] \nComputer Science Department\nComputer Science Department\nStanford University\nStanford University\n\n"
] |
[
"Computer Science Department\nComputer Science Department\nStanford University\nStanford University\n",
"Computer Science Department\nComputer Science Department\nStanford University\nStanford University\n"
] |
[
"Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics"
] |
A core problem in learning semantic parsers from denotations is picking out consistent logical forms-those that yield the correct denotation-from a combinatorially large space. To control the search space, previous work relied on restricted set of rules, which limits expressivity. In this paper, we consider a much more expressive class of logical forms, and show how to use dynamic programming to efficiently represent the complete set of consistent logical forms. Expressivity also introduces many more spurious logical forms which are consistent with the correct denotation but do not represent the meaning of the utterance. To address this, we generate fictitious worlds and use crowdsourced denotations on these worlds to filter out spurious logical forms. On the WIKITABLEQUESTIONS dataset, we increase the coverage of answerable questions from 53.5% to 76%, and the additional crowdsourced supervision lets us rule out 92.1% of spurious logical forms.
|
10.18653/v1/p16-1003
|
[
"https://www.aclweb.org/anthology/P16-1003.pdf"
] | 2,434,931 |
1606.06900
|
274086577deba029c594a4d3106c2a477f211fee
|
Inferring Logical Forms From Denotations
Association for Computational Linguistics. Copyright Association for Computational Linguistics, August 7-12, 2016.
Panupong Pasupat [email protected]
Computer Science Department
Stanford University
Percy Liang [email protected]
Computer Science Department
Stanford University
Inferring Logical Forms From Denotations
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany, August 7-12, 2016.
A core problem in learning semantic parsers from denotations is picking out consistent logical forms-those that yield the correct denotation-from a combinatorially large space. To control the search space, previous work relied on restricted set of rules, which limits expressivity. In this paper, we consider a much more expressive class of logical forms, and show how to use dynamic programming to efficiently represent the complete set of consistent logical forms. Expressivity also introduces many more spurious logical forms which are consistent with the correct denotation but do not represent the meaning of the utterance. To address this, we generate fictitious worlds and use crowdsourced denotations on these worlds to filter out spurious logical forms. On the WIKITABLEQUESTIONS dataset, we increase the coverage of answerable questions from 53.5% to 76%, and the additional crowdsourced supervision lets us rule out 92.1% of spurious logical forms.
Introduction
Consider the task of learning to answer complex natural language questions (e.g., "Where did the last 1st place finish occur?") using only question-answer pairs as supervision (Clarke et al., 2010; Liang et al., 2011; Berant et al., 2013; Pasupat and Liang, 2015). Semantic parsers map the question into a logical form (e.g., R[Venue].argmax(Position.1st, Index)) that can be executed on a knowledge source to obtain the answer (denotation). Logical forms are very expressive since they can be recursively composed, but this very expressivity makes it more difficult to search over the space of logical forms. Previous work sidesteps this obstacle by restricting the set of possible logical form compositions, but this is limiting. For instance, for the system in Pasupat and Liang (2015), in only 53.5% of the examples was the correct logical form even in the set of generated logical forms.
The goal of this paper is to solve two main challenges that prevent us from generating more expressive logical forms. The first challenge is computational: the number of logical forms grows exponentially as their size increases. Directly enumerating over all logical forms becomes infeasible, and pruning techniques such as beam search can inadvertently prune out correct logical forms.
The second challenge is the large increase in spurious logical forms, those that do not reflect the semantics of the question but coincidentally execute to the correct denotation. For example, while logical forms z1, . . . , z5 in Figure 1 are all consistent (they execute to the correct answer y), the logical forms z4 and z5 are spurious and would give incorrect answers if the table were to change.
We address these two challenges by solving two interconnected tasks. The first task, which addresses the computational challenge, is to enumerate the set Z of all consistent logical forms given a question x, a knowledge source w ("world"), and the target denotation y (Section 4). Observing that the space of possible denotations grows much more slowly than the space of logical forms, we perform dynamic programming on denotations (DPD) to make search feasible. Our method is guaranteed to find all consistent logical forms up to some bounded size.
Given the set Z of consistent logical forms, the second task is to filter out spurious logical forms from Z (Section 5). Using the property that spurious logical forms ultimately give a wrong answer when the data in the world w changes, we create fictitious worlds to test the denotations of the logical forms in Z. We use crowdsourcing to annotate the correct denotations on a subset of the generated worlds. To reduce the amount of annotation needed, we choose the subset that maximizes the expected information gain. The pruned set of logical forms would provide a stronger supervision signal for training a semantic parser.

Figure 1: Six logical forms generated from the question x. The first five are consistent: they execute to the correct answer y. Of those, correct logical forms z1, z2, and z3 are different ways to represent the semantics of x, while spurious logical forms z4 and z5 get the right answer y for the wrong reasons.
We test our methods on the WIKITABLEQUESTIONS dataset of complex questions on Wikipedia tables. We define a simple, general set of deduction rules (Section 3), and use DPD to confirm that the rules generate a correct logical form in 76% of the examples, up from the 53.5% in Pasupat and Liang (2015). Moreover, unlike beam search, DPD is guaranteed to find all consistent logical forms up to a bounded size. Finally, by using annotated data on fictitious worlds, we are able to prune out 92.1% of the spurious logical forms.

(Table accompanying Figure 1: rows r1 (. . . , Finland, 1st), r2 (. . . , Germany, 11th), r3 (. . . , Thailand, 1st).)
Setup
The overarching motivation of this work is allowing people to ask questions involving computation on semi-structured knowledge sources such as tables from the Web. This section introduces how the knowledge source is represented, how the computation is carried out using logical forms, and our task of inferring correct logical forms.
Worlds. We use the term world to refer to a collection of entities and relations between entities. One way to represent a world w is as a directed graph with nodes for entities and directed edges for relations. (For example, a world about geography would contain a node Europe with an edge Contains to another node Germany.)
In this paper, we use data tables from the Web as knowledge sources, such as the one in Figure 1. We follow the construction in Pasupat and Liang (2015) for converting a table into a directed graph (see Figure 2). Rows and cells become nodes (e.g., r 0 = first row and Finland) while columns become labeled directed edges between them (e.g., Venue maps r 1 to Finland). The graph is augmented with additional edges Next (from each row to the next) and Index (from each row to its index number). In addition, we add normalization edges to cell nodes, including Number (from the cell to the first number in the cell), Num2 (the second number), Date (interpretation as a date), and Part (each list item if the cell represents a list). For example, a cell with content "3-4" has a Number edge to the integer 3, a Num2 edge to 4, and a Date edge to XX-03-04.
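The table-to-graph construction above can be sketched in code. This is a simplified illustration (0-based row names, and it omits the normalization edges such as Number, Num2, Date, and Part); the function name and the triple-list encoding are my own, not the authors' code.

```python
def table_to_graph(columns, rows):
    """columns: list of column names; rows: list of cell-value lists."""
    edges = []  # (source, relation, target) triples
    for i, row in enumerate(rows):
        r = f"r{i}"
        edges.append((r, "Index", i))               # row -> its index number
        if i + 1 < len(rows):
            edges.append((r, "Next", f"r{i + 1}"))  # row -> the next row
        for col, cell in zip(columns, row):
            edges.append((r, col, cell))            # row -> cell via column edge
    return edges

graph = table_to_graph(["Venue", "Position"],
                       [["Finland", "1st"], ["Germany", "11th"], ["Thailand", "1st"]])
```

Each column label becomes a relation, mirroring how Venue maps a row node to its cell node in Figure 2.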
Logical forms. We can perform computation on a world w using a logical form z, a small program that can be executed on the world, resulting in a denotation ⟦z⟧w.

We use lambda DCS (Liang, 2013) as the language of logical forms. As a demonstration, we will use z1 in Figure 2 as an example. The smallest units of lambda DCS are entities (e.g., 1st) and relations (e.g., Position). Larger logical forms can be constructed using logical operations, and the denotation of the new logical form can be computed from the denotations of its constituents. For example, applying the join operation on Position and 1st gives Position.1st, whose denotation is the set of entities with relation Position pointing to 1st. With the world in Figure 2, the denotation is ⟦Position.1st⟧w = {r1, r3}, which corresponds to the 2nd and 4th rows in the table. The partial logical form Position.1st is then used to construct argmax(Position.1st, Index), the denotation of which can be computed by mapping the entities in ⟦Position.1st⟧w = {r1, r3} using the relation Index ({r0: 0, r1: 1, . . . }), and then picking the one with the largest mapped value (r3, which is mapped to 3). The resulting logical form is finally combined with R[Venue] with another join operation. The relation R[Venue] is the reverse of Venue, which corresponds to traversing Venue edges in the reverse direction.
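The join/argmax/reverse walk-through above can be reproduced on a toy edge list. This is an illustrative sketch of lambda DCS denotation computation, not the authors' implementation: the helper names are mine, and the graph is reduced to the three relevant rows.

```python
# Edge list for the relevant rows of the example table (row -> relation -> value)
EDGES = [("r1", "Venue", "Finland"), ("r1", "Position", "1st"), ("r1", "Index", 1),
         ("r2", "Venue", "Germany"), ("r2", "Position", "11th"), ("r2", "Index", 2),
         ("r3", "Venue", "Thailand"), ("r3", "Position", "1st"), ("r3", "Index", 3)]

def join(rel, values):
    """Denotation of rel.values: entities with a `rel` edge into `values`."""
    return {s for (s, r, t) in EDGES if r == rel and t in values}

def reverse_join(rel, values):
    """Denotation of R[rel].values: traverse `rel` edges backwards."""
    return {t for (s, r, t) in EDGES if r == rel and s in values}

def argmax(values, rel):
    """Entity in `values` whose (single) `rel` image is largest."""
    return {max(values, key=lambda v: next(iter(reverse_join(rel, {v}))))}

first_place = join("Position", {"1st"})       # {r1, r3}
last_first = argmax(first_place, "Index")     # {r3}
answer = reverse_join("Venue", last_first)    # {"Thailand"}
```

The three steps mirror the recursive execution of z1 = R[Venue].argmax(Position.1st, Index) described above.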
Semantic parsing. A semantic parser maps a natural language utterance x (e.g., "Where did the last 1st place finish occur?") into a logical form z. With denotations as supervision, a semantic parser is trained to put high probability on z's that are consistent-logical forms that execute to the correct denotation y (e.g., Thailand). When the space of logical forms is large, searching for consistent logical forms z can become a challenge.
As illustrated in Figure 1, consistent logical forms can be divided into two groups: correct logical forms represent valid ways for computing the answer, while spurious logical forms accidentally get the right answer for the wrong reasons (e.g., z 4 picks the row with the maximum time but gets the correct answer anyway).
Tasks. Denote by Z and Z c the sets of all consistent and correct logical forms, respectively. The first task is to efficiently compute Z given an utterance x, a world w, and the correct denotation y (Section 4). With the set Z, the second task is to infer Z c by pruning spurious logical forms from Z (Section 5).
Deduction rules
The space of logical forms given an utterance x and a world w is defined recursively by a set of deduction rules (Table 1). In this setting, each constructed logical form belongs to a category (Set, Rel, or Map). These categories are used for type checking in a similar fashion to categories in syntactic parsing. Each deduction rule specifies the categories of the arguments, category of the resulting logical form, and how the logical form is constructed from the arguments.
Deduction rules are divided into base rules and compositional rules. A base rule follows one of the following templates:
TokenSpan[span] → c[f(span)]    (1)
∅ → c[f()]    (2)
A rule of Template 1 is triggered by a span of tokens from x (e.g., to construct z 1 in Figure 2 from x in Figure 1, Rule B1 from Table 1 constructs 1st of category Set from the phrase "1st"). Meanwhile, a rule of Template 2 generates a logical form without any trigger (e.g., Rule B5 generates Position of category Rel from the graph edge Position without a specific trigger in x). Compositional rules then construct larger logical forms from smaller ones:
c1[z1] + c2[z2] → c[g(z1, z2)]    (3)
c1[z1] → c[g(z1)]    (4)
A rule of Template 3 combines partial logical forms z 1 and z 2 of categories c 1 and c 2 into g(z 1 , z 2 ) of category c (e.g., Rule C1 uses 1st of category Set and Position of category Rel to construct Position.1st of category Set). Template 4 works similarly.
Table 1: Deduction rules. Each rule maps its argument categories to a result category with the listed semantics.

Base Rules:
B1: TokenSpan → Set: fuzzymatch(span) (entity fuzzily matching the text: "chinese" → China)
B2: TokenSpan → Set: val(span) (interpreted value: "march 2015" → 2015-03-XX)
B3: ∅ → Set: Type.Row (the set of all rows)
B4: ∅ → Set: c ∈ ClosedClass (any entity from a column with few unique entities; e.g., 400m or relay from the Event column)
B5: ∅ → Rel: r ∈ GraphEdges (any relation in the graph: Venue, Next, Num2, . . . )
B6: ∅ → Rel: != | < | <= | > | >=

Compositional Rules:
C1: Set + Rel → Set: z2.z1 | R[z2].z1 (R[z] is the reverse of z; i.e., flip the arrow direction)
C2: Set → Set: a(z1) (a ∈ {count, max, min, sum, avg})
C3: Set + Set → Set: z1 ⊓ z2 | z1 ⊔ z2 | z1 − z2 (subtraction is only allowed on numbers)

Compositional Rules with Maps:
M1 (initialization): Set → Map: (z1, x) (identity map)
M2: Map + Rel → Map: (u1, z2.b1) | (u1, R[z2].b1)
M3: Map → Map: (u1, a(b1)) (a ∈ {count, max, min, sum, avg})
M4: Map + Set → Map: (u1, b1 ⊓ z2) | . . .
M5: Map + Map → Map: (u1, b1 ⊓ b2) | . . . (allowed only when u1 = u2; Rules M4 and M5 are repeated for ⊔ and −)
M6 (finalization): Map → Set: argmin(u1, R[λx.b1]) | argmax(u1, R[λx.b1])

Most rules construct logical forms without requiring a trigger from the utterance x. This is crucial for generating implicit relations (e.g., generating Year from "what's the venue in 2000?" without a trigger "year"), and generating operations without a lexicon (e.g., generating argmax from "where's the longest competition"). However, the downside is that the space of possible logical forms becomes very large.
The Map category. The technique in this paper requires execution of partial logical forms. This poses a challenge for argmin and argmax operations, which take a set and a binary relation as arguments. The binary could be a complex function (e.g., in z 3 from Figure 1). While it is possible to build the binary independently from the set, executing a complex binary is sometimes impossible (e.g., the denotation of λx.count(x) is impossible to write explicitly without knowledge of x).
We address this challenge with the Map category. A Map is a pair (u, b) of a finite set u (unary) and a binary relation b. The denotation of (u, b) is (⟦u⟧w, ⟦b⟧′w), where ⟦b⟧′w is ⟦b⟧w with the domain restricted to the set ⟦u⟧w. For example, consider the construction of argmax(Position.1st, Index). After constructing Position.1st with denotation {r1, r3}, Rule M1 initializes (Position.1st, x) with denotation ({r1, r3}, {r1: {r1}, r3: {r3}}). Rule M2 is then applied to generate (Position.1st, R[Index].x) with denotation ({r1, r3}, {r1: {1}, r3: {3}}). Finally, Rule M6 converts the Map into the desired argmax logical form with denotation {r3}.
Generality of deduction rules. Using domain knowledge, previous work restricted the space of logical forms by manually defining the categories c or the semantic functions f and g to fit the domain. For example, the category Set might be divided into Records, Values, and Atomic when the knowledge source is a table (Pasupat and Liang, 2015). Another example is when a compositional rule g (e.g., sum(z 1 )) must be triggered by some phrase in a lexicon (e.g., words like "total" that align to sum in the training data). Such restrictions make search more tractable but greatly limit the scope of questions that can be answered.
Here, we have increased the coverage of logical forms by making the deduction rules simple and general, essentially following the syntax of lambda DCS. The base rules generate only entities that approximately match the utterance, but all possible relations and all possible further combinations.
Beam search. Given the deduction rules, an utterance x and a world w, we would like to generate all derived logical forms Z. We first present the floating parser (Pasupat and Liang, 2015), which uses beam search to generate Z b ⊆ Z, a usually incomplete subset. Intuitively, the algorithm first constructs base logical forms based on spans of the utterance, and then builds larger logical forms of increasing size in a "floating" fashion-without requiring a trigger from the utterance.
Formally, partial logical forms with category c and size s are stored in a cell (c, s). The algorithm first generates base logical forms from base deduction rules and stores them in cells (c, 0) (e.g., the cell (Set, 0) contains 1st, Type.Row, and so on). Then for each size s = 1, . . . , smax, we populate the cells (c, s) by applying compositional rules on partial logical forms with size less than s. For instance, when s = 2, we can apply Rule C1 on logical forms Number.1 from cell (Set, s1 = 1) and Position from cell (Rel, s2 = 0) to create Position.Number.1 in cell (Set, s1 + s2 + 1 = 2). After populating each cell (c, s), the list of logical forms in the cell is pruned based on the model scores to a fixed beam size in order to control the search space. Finally, the set Zb is formed by collecting logical forms from all cells (Set, s) for s = 1, . . . , smax.

Figure 3: The first pass of DPD constructs cells (c, s, d) (square nodes) using denotationally invariant semantic functions (circle nodes). The second pass enumerates all logical forms along paths that lead to the correct denotation y (solid lines).
Due to the generality of our deduction rules, the number of logical forms grows quickly as the size s increases. As such, partial logical forms that are essential for building the desired logical forms might fall off the beam early on. In the next section, we present a new search method that compresses the search space using denotations.
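The cell-based beam search above can be sketched as follows. The `compose` and `score` interfaces are simplified stand-ins of my own, not the PL15 implementation.

```python
def beam_search(base_cells, compose, score, max_size, beam=5):
    """base_cells: {(category, 0): [forms]}.
    compose(cat1, form1, cat2, form2) -> (category, new_form) or None."""
    cells = dict(base_cells)
    for s in range(1, max_size + 1):
        new = {}
        for (c1, s1), forms1 in list(cells.items()):
            for (c2, s2), forms2 in list(cells.items()):
                if s1 + s2 + 1 != s:  # argument sizes must sum to s - 1
                    continue
                for f1 in forms1:
                    for f2 in forms2:
                        out = compose(c1, f1, c2, f2)
                        if out is not None:
                            new.setdefault((out[0], s), []).append(out[1])
        for cell, forms in new.items():
            # prune each cell to the beam width by model score; essential
            # partial forms can fall off the beam here
            cells[cell] = sorted(set(forms), key=score, reverse=True)[:beam]
    return cells

# Toy grammar: Rel + Set -> Set via a join, in the style of Rule C1
def compose(c1, f1, c2, f2):
    if (c1, c2) == ("Rel", "Set"):
        return ("Set", f"{f1}.{f2}")
    return None

cells = beam_search({("Set", 0): ["1st"], ("Rel", 0): ["Position"]},
                    compose, score=len, max_size=2)
```

The per-cell pruning step is exactly where correct partial logical forms can be lost, which is the failure mode DPD avoids.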
Dynamic programming on denotations
Our first step toward finding all correct logical forms is to represent all consistent logical forms (those that execute to the correct denotation). Formally, given x, w, and y, we wish to generate the set Z of all logical forms z such that ⟦z⟧w = y.
As mentioned in the previous section, beam search does not recover the full set Z due to pruning. Our key observation is that while the number of logical forms explodes, the number of distinct denotations of those logical forms is much more controlled, as multiple logical forms can share the same denotation. So instead of directly enumerating logical forms, we use dynamic programming on denotations (DPD), which is inspired by similar methods from program induction (Lau et al., 2003;Liang et al., 2010;Gulwani, 2011).
The main idea of DPD is to collapse logical forms with the same denotation together. Instead of using cells (c, s) as in beam search, we perform dynamic programming using cells (c, s, d) where d is a denotation. For instance, the logical form Position.Number.1 will now be stored in cell (Set, 2, {r 1 , r 3 }).
For DPD to work, each deduction rule must have a denotationally invariant semantic function g, meaning that the denotation of the resulting logical form g(z1, z2) depends only on the denotations of z1 and z2:

⟦z1⟧w = ⟦z1′⟧w ∧ ⟦z2⟧w = ⟦z2′⟧w ⇒ ⟦g(z1, z2)⟧w = ⟦g(z1′, z2′)⟧w
All of our deduction rules in Table 1 are denotationally invariant, but a rule that, for instance, returns the argument with the larger logical form size would not be. Applying a denotationally invariant deduction rule on any pair of logical forms from cells (c1, s1, d1) and (c2, s2, d2) always results in a logical form with the same denotation d in the same cell (c, s1 + s2 + 1, d). (For example, the cell (Set, 4, {r3}) contains z1 := argmax(Position.1st, Index) and z1′ := argmin(Event.Relay, Index). Combining each of these with Venue using Rule C1 gives R[Venue].z1 and R[Venue].z1′, which belong to the same cell (Set, 5, {Thailand}).)
Algorithm. DPD proceeds in two forward passes. The first pass finds the possible combinations of cells (c, s, d) that lead to the correct denotation y, while the second pass enumerates the logical forms in the cells found in the first pass. Figure 3 illustrates the DPD algorithm.
In the first pass, we are only concerned about finding relevant cell combinations and not the actual logical forms. Therefore, any logical form that belongs to a cell could be used as an argument of a deduction rule to generate further logical forms. Thus, we keep at most one logical form per cell; subsequent logical forms that are generated for that cell are discarded.
After populating all cells up to size s max , we list all cells (Set, s, y) with the correct denotation y, and then note all possible rule combinations (cell 1 , rule) or (cell 1 , cell 2 , rule) that lead to those final cells, including the combinations that yielded discarded logical forms.
The second pass retrieves the actual logical forms that yield the correct denotation. To do this, we simply populate the cells (c, s, d) with all logical forms, using only rule combinations that lead to final cells. This elimination of irrelevant rule combinations effectively reduces the search space. (In Section 6.2, we empirically show that the number of cells considered is reduced by 98.7%.)
The parsing chart is represented as a hypergraph as in Figure 3. After eliminating unused rule combinations, each of the remaining hyperpaths from base predicates to the target denotation corresponds to a single logical form, making the remaining parsing chart a compact implicit representation of all consistent logical forms. This representation is guaranteed to cover all possible logical forms under the size limit smax that can be constructed by the deduction rules.
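The two-pass structure can be made concrete on a toy domain. In the sketch below (my illustrative reduction, not the paper's table-parsing implementation), "logical forms" are arithmetic expressions over base numbers, a denotation is an expression's value, and size is the operator count: the first pass tracks only (size, denotation) cells plus the rule combinations that produced them, and the second pass enumerates expressions only through combinations reachable from cells carrying the target denotation.

```python
import itertools

def dpd(bases, target, max_size):
    """All arithmetic expressions over `bases` (reusable) with at most
    `max_size` operators whose value equals `target`."""
    cells = {(0, b) for b in bases}          # cell = (size, denotation)
    combos = {}                              # cell -> {(cell1, cell2, op)}
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    for s in range(1, max_size + 1):         # first pass: denotations only
        for c1, c2 in itertools.product(list(cells), repeat=2):
            (s1, d1), (s2, d2) = c1, c2
            if s1 + s2 + 1 != s:
                continue
            for name, fn in ops.items():
                cell = (s, fn(d1, d2))
                cells.add(cell)              # keep one "witness" per cell
                combos.setdefault(cell, set()).add((c1, c2, name))

    def expand(cell):                        # second pass: relevant combos only
        if cell[0] == 0:
            return [str(cell[1])] if cell in cells else []
        return [f"({e1}{op}{e2})"
                for c1, c2, op in combos.get(cell, ())
                for e1 in expand(c1) for e2 in expand(c2)]

    return sorted({e for s in range(max_size + 1) for e in expand((s, target))})

exprs = dpd([1, 2, 3], target=6, max_size=2)
```

As in the paper's chart, many expressions collapse into one cell (e.g., every size-1 expression with value 6 shares the cell (1, 6)), and the enumeration pass never touches combinations that cannot reach the target.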
In our experiments, we apply DPD on the deduction rules in Table 1 and explicitly enumerate the logical forms produced by the second pass. For efficiency, we prune logical forms that are clearly redundant (e.g., applying max on a set of size 1). We also restrict a few rules that might otherwise create too many denotations. For example, we restrict the union operation (⊔) except for unions of two entities (e.g., we allow Germany ⊔ Finland but not Venue.Hungary ⊔ . . . ), subtraction when building a Map, and count on a set of size 1.
Fictitious worlds
After finding the set Z of all consistent logical forms, we want to filter out spurious logical forms. To do so, we observe that semantically correct logical forms should also give the correct denotation in worlds w′ other than w. In contrast, spurious logical forms will fail to produce the correct denotation on some other world.

Generating fictitious worlds. With the observation above, we generate fictitious worlds w1, w2, . . . , where each world wi is a slight alteration of w. As we will be executing logical forms z ∈ Z on wi, we should ensure that all entities and relations in z ∈ Z appear in the fictitious world wi (e.g., z1 in Figure 1 would be meaningless if the entity 1st does not appear in wi). To this end, we impose that all predicates present in the original world w are also present in wi.

Figure 4: From the example in Figure 1, we generate a table for the fictitious world w1.
        w          w1       w2       · · ·   class
z1:  Thailand   China    Finland   · · ·    q1
z2:  Thailand   China    Finland   · · ·    q1
z3:  Thailand   China    Finland   · · ·    q1
z4:  Thailand   Germany  China     · · ·    q2
z5:  Thailand   China    China     · · ·    q3
z6:  Thailand   China    China     · · ·    q3

(Table accompanying Figure 5: denotation tuples of logical forms z1-z6 across worlds, grouped into equivalence classes q1, q2, q3.)
In our case where the world w comes from a data table t, we construct w i from a new table t i as follows: we go through each column of t and resample the cells in that column. The cells are sampled using random draws without replacement if the original cells are all distinct, and with replacement otherwise. Sorted columns are kept sorted. To ensure that predicates in w exist in w i , we use the same set of table columns and enforce that any entity fuzzily matching a span in the question x must be present in t i (e.g., for the example in Figure 1, the generated t i must contain "1st"). Figure 4 shows an example fictitious table generated from the table in Figure 1.
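The resampling procedure above can be sketched as follows. This is an illustrative sketch, not the authors' code: the fuzzy-match requirement is reduced to exact cell membership, and the helper names are my own.

```python
import random

def fictitious_table(columns, required=(), seed=0):
    """columns: {column_name: [cells]} for the original table t.
    required: cells (e.g., entities matching spans of the question x)
    that must also appear in the generated table."""
    rng = random.Random(seed)
    new = {}
    for name, cells in columns.items():
        if len(set(cells)) == len(cells):
            col = rng.sample(cells, len(cells))       # distinct cells: without replacement
        else:
            col = [rng.choice(cells) for _ in cells]  # duplicates: with replacement
        if cells == sorted(cells):
            col = sorted(col)                         # keep sorted columns sorted
        new[name] = col
    for cell in required:                             # force required cells to appear
        if all(cell not in col for col in new.values()):
            for name, cells in columns.items():
                if cell in cells:
                    new[name][rng.randrange(len(new[name]))] = cell
                    break
    return new

table = fictitious_table(
    {"Venue": ["Finland", "Germany", "Thailand"],
     "Position": ["1st", "11th", "1st"]},
    required=["1st"])
```

The same column names are reused, so every relation of the original world w also exists in the fictitious world.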
Fictitious worlds are similar to test suites for computer programs. However, unlike manually designed test suites, we do not yet know the correct answer for each fictitious world or whether a world is helpful for filtering out spurious logical forms. The next subsections introduce our method for choosing a subset of useful fictitious worlds to be annotated.
Equivalence classes. Let W = (w1, . . . , wk) be the list of all possible fictitious worlds. For each z ∈ Z, we define the denotation tuple ⟦z⟧W = (⟦z⟧w1, . . . , ⟦z⟧wk). We observe that some logical forms produce the same denotation across all fictitious worlds. This may be due to an algebraic equivalence in logical forms (e.g., z1 and z2 in Figure 1) or due to the constraints in the construction of fictitious worlds (e.g., z1 and z3 in Figure 1 are equivalent as long as the Year column is sorted). We group logical forms into equivalence classes based on their denotation tuples, as illustrated in Figure 5. When the question is unambiguous, we expect at most one equivalence class to contain correct logical forms.
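Grouping by denotation tuples can be sketched as follows. The `execute` callback and the toy "logical forms" below are stand-ins of my own; in the paper, execution runs a lambda DCS form on a world.

```python
def equivalence_classes(forms, worlds, execute):
    """Group logical forms whose denotation tuples across `worlds` agree."""
    classes = {}
    for z in forms:
        tup = tuple(frozenset(execute(z, w)) for w in worlds)  # denotation tuple
        classes.setdefault(tup, []).append(z)
    return classes

# Toy example: "logical forms" are functions over a world (a list of numbers)
forms = [("max", lambda w: {max(w)}),
         ("last-sorted", lambda w: {sorted(w)[-1]}),  # algebraically equal to max
         ("first", lambda w: {w[0]})]                 # agrees on the 1st world only
worlds = [[3, 1, 2], [5, 9, 4]]
classes = equivalence_classes(forms, worlds, lambda z, w: z[1](w))
```

Here "first" coincides with "max" on the first world but diverges on the second, which is precisely how a fictitious world exposes a spurious form.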
Annotation. To pin down the correct equivalence class, we acquire the correct answers to the question x on some subset W′ = (w′1, . . . , w′ℓ) ⊆ W of fictitious worlds, as it is impractical to obtain annotations on all fictitious worlds in W. We compile equivalence classes that agree with the annotations into a set Zc of correct logical forms.
We want to choose the W′ that gives us as much information about the correct equivalence class as possible. This is analogous to standard practices in active learning (Settles, 2010). Let Q be the set of all equivalence classes q, and let ⟦q⟧W′ be the denotation tuple computed by executing an arbitrary z ∈ q on W′. The subset W′ divides Q into partitions Ft = {q ∈ Q : ⟦q⟧W′ = t} based on the denotation tuples t (e.g., from Figure 5, if W′ contains just w2, then q2 and q3 will be in the same partition F(China)). The annotation t*, which is also a denotation tuple, will mark one of these partitions Ft* as correct. Thus, to prune out many spurious equivalence classes, the partitions should be as numerous and as small as possible.
More formally, we choose a subset W′ that maximizes the expected information gain (or equivalently, the reduction in entropy) about the correct equivalence class given the annotation. With random variables Q ∈ Q representing the correct equivalence class and T*W′ for the annotation on worlds W′, we seek arg min_W′ H(Q | T*W′). Assuming a uniform prior on Q (p(q) = 1/|Q|) and accurate annotation (p(t* | q) = I[q ∈ Ft*]):

H(Q | T*W′) = Σ_{q,t} p(q, t) log [p(t) / p(q, t)] = (1/|Q|) Σ_t |Ft| log |Ft|.    (*)
We exhaustively search for the W′ that minimizes (*). The objective value follows our intuition, since Σ_t |Ft| log |Ft| is small when the terms |Ft| are small and numerous.
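The subset selection can be sketched as an exhaustive search minimizing Σ_t |Ft| log |Ft| from (*). This is my illustrative setup: `denotation(q, w)` stands in for executing a representative of equivalence class q on world w, and the toy classes and worlds are invented.

```python
import itertools, math

def choose_worlds(classes, worlds, denotation, k):
    """Pick the k-world subset minimizing sum_t |F_t| log |F_t|."""
    best, best_cost = None, float("inf")
    for subset in itertools.combinations(worlds, k):
        partitions = {}                 # denotation tuple t -> |F_t|
        for q in classes:
            t = tuple(denotation(q, w) for w in subset)
            partitions[t] = partitions.get(t, 0) + 1
        cost = sum(n * math.log(n) for n in partitions.values())
        if cost < best_cost:
            best, best_cost = subset, cost
    return best

# Toy setup: world "a" distinguishes nothing, "b" splits the classes in two,
# and "c" separates all four classes, so "c" should be chosen.
chosen = choose_worlds(
    classes=[0, 1, 2, 3], worlds=["a", "b", "c"],
    denotation=lambda q, w: q if w == "c" else (q % 2 if w == "b" else 0),
    k=1)
```

A subset that shatters Q into many small partitions drives the cost toward zero, matching the intuition stated above.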
In our experiments, we approximate the full set W of fictitious worlds by generating k = 30 worlds to compute equivalence classes. We choose a subset of = 5 worlds to be annotated.
Experiments
For the experiments, we use the training portion of the WIKITABLEQUESTIONS dataset (Pasupat and Liang, 2015), which consists of 14,152 questions on 1,679 Wikipedia tables gathered by crowd workers. Answering these complex questions requires different types of operations. The same operation can be phrased in different ways (e.g., "best", "top ranking", or "lowest ranking number") and the interpretation of some phrases depend on the context (e.g., "number of " could be a table lookup or a count operation). The lexical content of the questions is also quite diverse: even excluding numbers and symbols, the 14,152 training examples contain 9,671 unique words, only 10% of which appear more than 10 times.
We attempted to manually annotate the first 300 examples with lambda DCS logical forms. We successfully constructed correct logical forms for 84% of these examples, which is a good number considering that the questions were created by humans who could use the table however they wanted. The remaining 16% reflect limitations in our setup: for example, non-canonical table layouts, answers appearing in running text or images, and common sense reasoning (e.g., knowing that "Quarterfinal" is better than "Round of 16").
Generality of deduction rules
We compare our set of deduction rules with the one given in Pasupat and Liang (2015) (henceforth PL15). PL15 reported generating the annotated logical form in 53.5% of the first 200 examples. With our more general deduction rules, we use DPD to verify that the rules are able to generate the annotated logical form in 76% of the first 300 examples, within the logical form size limit smax of 7. This is 90.5% of the examples that were successfully annotated. Figure 6 shows some examples of logical forms we cover that PL15 could not. Since DPD is guaranteed to find all consistent logical forms, we can be sure that the logical forms not covered are due to limitations of the deduction rules. Indeed, the remaining examples either have logical forms with size larger than 7 or require other operations such as addition, union of arbitrary sets, etc.

Figure 6: Several example logical forms our system can generate that are not covered by the deduction rules from the previous work PL15.
Dynamic programming on denotations
Search space. To demonstrate the savings gained by collapsing logical forms with the same denotation, we track the growth of the number of unique logical forms and denotations as the logical form size increases. The plot in Figure 7 shows that the space of logical forms explodes much more quickly than the space of denotations. The use of denotations also saves us from considering a significant amount of irrelevant partial logical forms. On average over 14,152 training examples, DPD generates approximately 25,000 consistent logical forms. The first pass of DPD generates ≈ 153,000 cells (c, s, d), while the second pass generates only ≈ 2,000 cells resulting from ≈ 8,000 rule combinations, resulting in a 98.7% reduction in the number of cells that have to be considered.
Comparison with beam search. We compare DPD to beam search on the ability to generate (but not rank) the annotated logical forms. We consider two settings: when the beam search parameters are uninitialized (i.e., the beams are pruned randomly), and when the parameters are trained using the system from PL15 (i.e., the beams are pruned based on model scores). The plot in Figure 8 shows that DPD generates more annotated logical forms (76%) compared to beam search (53.7%), even when beam search is guided heuristically by learned parameters. Note that DPD is an exact algorithm and does not require a heuristic.
Fictitious worlds
We now explore how fictitious worlds divide the set of logical forms into equivalence classes, and how the annotated denotations on the chosen worlds help us prune spurious logical forms.
Equivalence classes. Using 30 fictitious worlds per example, we produce an average of 1,237 equivalence classes. One possible concern with using a limited number of fictitious worlds is that we may fail to distinguish some pairs of nonequivalent logical forms. We verify the equivalence classes against the ones computed using 300 fictitious worlds. We found that only 5% of the logical forms are split from the original equivalence classes.
Ideal Annotation. After computing equivalence classes, we choose a subset W′ of 5 fictitious worlds to be annotated based on the information-theoretic objective. For each of the 252 examples with an annotated logical form z*, we use the denotation tuple t* = ⟦z*⟧W′ as the annotated answers on the chosen fictitious worlds. We are able to rule out 98.7% of the spurious equivalence classes and 98.3% of spurious logical forms. Furthermore, we are able to filter down to just one equivalence class in 32.7% of the examples, and to at most three equivalence classes in 51.3% of the examples. If we choose 5 fictitious worlds randomly instead of maximizing information gain, these statistics drop to 22.6% and 36.5%, respectively. When more than one equivalence class remains, usually only one is a dominant class with many equivalent logical forms, while the other classes are small and contain logical forms with unusual patterns (e.g., z5 in Figure 1). The average size of the correct equivalence class is ≈ 3,000 with a standard deviation of ≈ 8,000. Because we have an expressive logical language, there are fundamentally many equivalent ways of computing the same quantity.
Crowdsourced Annotation. Data from crowdsourcing is more susceptible to errors. From the 252 annotated examples, we use the 177 examples where at least two crowd workers agree on the answer of the original world w. When the crowdsourced data is used to rule out spurious logical forms, the entire set Z of consistent logical forms is pruned out in 11.3% of the examples, and the correct equivalence class is removed in 9% of the examples. These issues are due to annotation errors, inconsistent data (e.g., having a date of death before the birth date), and different interpretations of the question on the fictitious worlds. For the remaining examples, we are able to prune out 92.1% of spurious logical forms (or 92.6% of spurious equivalence classes).
To prevent the entire Z from being pruned, we can relax our assumption and keep logical forms z that disagree with the annotation in at most 1 fictitious world. The number of times Z is pruned out is reduced to 3%, but the number of spurious logical forms pruned also decreases to 78%.
Related Work and Discussion
This work evolved from a long tradition of learning executable semantic parsers, initially from annotated logical forms (Zelle and Mooney, 1996; Kate et al., 2005; Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2010), but more recently from denotations (Clarke et al., 2010; Liang et al., 2011; Berant et al., 2013; Kwiatkowski et al., 2013; Pasupat and Liang, 2015). A central challenge in learning from denotations is finding consistent logical forms (those that execute to a given denotation).
As Kwiatkowski et al. (2013) and Berant and Liang (2014) both noted, a chief difficulty with executable semantic parsing is the "schema mismatch": words in the utterance do not map cleanly onto the predicates in the logical form. This mismatch is especially pronounced in the WIKITABLEQUESTIONS dataset of Pasupat and Liang (2015). In the second example of Figure 6, "how long" is realized by a logical form that computes a difference between two dates. The ramification of this mismatch is that finding consistent logical forms cannot proceed solely from the language side. This paper is about using annotated denotations to drive the search over logical forms.
This takes us into the realm of program induction, where the goal is to infer a program (logical form) from input-output pairs (for us, world-denotation pairs). Here, previous work has also leveraged the idea of dynamic programming on denotations (Lau et al., 2003; Liang et al., 2010; Gulwani, 2011), though for more constrained spaces of programs. Continuing the program analogy, generating fictitious worlds is similar in spirit to fuzz testing for generating new test cases (Miller et al., 1990), but the goal there is coverage in a single program rather than identifying the correct (equivalence class of) programs. This connection can potentially improve the flow of ideas between the two fields.
Finally, the effectiveness of dynamic programming on denotations relies on having a manageable set of denotations. For more complex logical forms and larger knowledge graphs, there are many possible angles worth exploring: performing abstract interpretation to collapse denotations into equivalence classes (Cousot and Cousot, 1977), relaxing the notion of getting the correct denotation (Steinhardt and Liang, 2015), or working in a continuous space and relying on gradient descent (Guu et al., 2015; Neelakantan et al., 2016; Yin et al., 2016; Reed and de Freitas, 2016). This paper, by virtue of exact dynamic programming, sets the standard.
Figure 2: The table in Figure 1 is converted into a graph. The recursive execution of logical form z1 = R[Venue].argmax(Position.1st, Index) is shown via the different colors and styles.
Figure 5: We execute consistent logical forms zi ∈ Z on fictitious worlds to get denotation tuples. Logical forms with the same denotation tuple are grouped into the same equivalence class qj.
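The grouping described in this caption can be sketched directly: execute each logical form on all fictitious worlds and key on the resulting denotation tuple. This is an illustrative sketch, not the paper's code; `execute` is an assumed interface returning the denotation of a logical form on a world:

```python
from collections import defaultdict

def equivalence_classes(logical_forms, fictitious_worlds, execute):
    """Group logical forms by their denotation tuple over the fictitious
    worlds; forms in the same class cannot be distinguished by these worlds."""
    classes = defaultdict(list)
    for z in logical_forms:
        denotation_tuple = tuple(execute(z, w) for w in fictitious_worlds)
        classes[denotation_tuple].append(z)
    return dict(classes)
```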
Figure 7: The median of the number of logical forms (dashed) and denotations (solid) as the formula size increases. The space of logical forms grows much faster than the space of denotations.

Figure 8: The number of annotated logical forms that can be generated by beam search, both uninitialized (dashed) and initialized.
Figure 1:

Year  Venue     Position  Event  Time
2001  Hungary   2nd       400m   47.12
2003  Finland   1st       400m   46.69
2005  Germany   11th      400m   46.62
2007  Thailand  1st       relay  182.05
2008  China     7th       relay  180.32

x: "Where did the last 1st place finish occur?"
y: Thailand

Consistent, correct:
z1: R[Venue].argmax(Position.1st, Index)
    Among rows with Position = 1st, pick the one with maximum index, then return the Venue of that row.
z2: R[Venue].Index.max(R[Index].Position.1st)
    Find the maximum index of rows with Position = 1st, then return the Venue of the row with that index.
z3: R[Venue].argmax(Position.Number.1, R[λx.R[Date].R[Year].x])
    Among rows with Position number 1, pick the one with the latest date in the Year column and return the Venue.

Consistent, spurious:
z4: R[Venue].argmax(Position.Number.1, R[λx.R[Number].R[Time].x])
    Among rows with Position number 1, pick the one with maximum Time number. Return the Venue.
z5: R[Venue].Year.Number.(R[Number].R[Year].argmax(Type.Row, Index)−1)
    Subtract 1 from the Year in the last row, then return the Venue of the row with that Year.

Inconsistent:
z: R[Venue].argmin(Position.1st, Index)
    Among rows with Position = 1st, pick the one with minimum index, then return the Venue. (= Finland)
Table 1: Deduction rules define the space of logical forms by specifying how partial logical forms are constructed. The logical form of the i-th argument is denoted by zi (or (ui, bi) if the argument is a Map). The set of final logical forms contains any logical form with category Set.
"which opponent has the most wins" z = argmax(R[Opponent].Type.Row, R[λx.count(Opponent.x Result.Lost]) "how long did ian armstrong serve?" z = R[Num2].R[Term].Member.IanArmstrong − R[Number].R[Term].Member.IanArmstrong "which players came in a place before lukas bauer?" z = R[Name].Index.<.R[Index].Name.LukasBauer "which players played the same position as ardo kreek?" z = R[Player].Position.R[Position].Player.Ardo !=.Ardo
Semantic functions f with one argument work similarly.
While we technically can apply count on sets of size 1, the number of spurious logical forms explodes as there are too many sets of size 1 generated.
The difference is that we are obtaining partial information about an individual example rather than partial information about the parameters.
Acknowledgments. We gratefully acknowledge the support of the Google Natural Language Understanding Focused Program. In addition, we would like to thank anonymous reviewers for their helpful comments.

Reproducibility. Code and experiments for this paper are available on the CodaLab platform at https://worksheets.codalab.org/worksheets/0x47cc64d9c8ba4a878807c7c35bb22a42/.
References

Y. Artzi and L. Zettlemoyer. 2013. UW SPF: The University of Washington semantic parsing framework. arXiv preprint arXiv:1311.3011.
J. Berant and P. Liang. 2014. Semantic parsing via paraphrasing. In Association for Computational Linguistics (ACL).
J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP).
J. Clarke, D. Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world's response. In Computational Natural Language Learning (CoNLL), pages 18-27.
P. Cousot and R. Cousot. 1977. Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In Principles of Programming Languages (POPL), pages 238-252.
S. Gulwani. 2011. Automating string processing in spreadsheets using input-output examples. ACM SIGPLAN Notices, 46(1):317-330.
K. Guu, J. Miller, and P. Liang. 2015. Traversing knowledge graphs in vector space. In Empirical Methods in Natural Language Processing (EMNLP).
R. J. Kate, Y. W. Wong, and R. J. Mooney. 2005. Learning to transform natural to formal languages. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1062-1068.
T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Empirical Methods in Natural Language Processing (EMNLP), pages 1223-1233.
T. Kwiatkowski, E. Choi, Y. Artzi, and L. Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Empirical Methods in Natural Language Processing (EMNLP).
T. Lau, S. Wolfman, P. Domingos, and D. S. Weld. 2003. Programming by demonstration using version space algebra. Machine Learning, 53:111-156.
P. Liang, M. I. Jordan, and D. Klein. 2010. Learning programs: A hierarchical Bayesian approach. In International Conference on Machine Learning (ICML), pages 639-646.
P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL), pages 590-599.
P. Liang. 2013. Lambda dependency-based compositional semantics. arXiv.
B. P. Miller, L. Fredriksen, and B. So. 1990. An empirical study of the reliability of UNIX utilities. Communications of the ACM, 33(12):32-44.
A. Neelakantan, Q. V. Le, and I. Sutskever. 2016. Neural programmer: Inducing latent programs with gradient descent. In International Conference on Learning Representations (ICLR).
P. Pasupat and P. Liang. 2015. Compositional semantic parsing on semi-structured tables. In Association for Computational Linguistics (ACL).
S. Reed and N. de Freitas. 2016. Neural programmer-interpreters. In International Conference on Learning Representations (ICLR).
B. Settles. 2010. Active learning literature survey. Technical report, University of Wisconsin, Madison.
J. Steinhardt and P. Liang. 2015. Learning with relaxed supervision. In Advances in Neural Information Processing Systems (NIPS).
P. Yin, Z. Lu, H. Li, and B. Kao. 2016. Neural enquirer: Learning to query tables with natural language. arXiv.
M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1050-1055.
L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI), pages 658-666.
L. S. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL), pages 678-687.
* * *

KIC 9246715: THE DOUBLE RED GIANT ECLIPSING BINARY WITH ODD OSCILLATIONS

Meredith L. Rawls (1) ([email protected]), Patrick Gaulme (1,2), Jean McKeever (1), Jason Jackiewicz (1), Jerome A. Orosz (3), Enrico Corsaro (4,5,6), Paul Beck (4), Benoît Mosser (7), David W. Latham (8), and Christian A. Latham (8)

(1) Department of Astronomy, New Mexico State University, MSC 4500, P.O. Box 30001, Las Cruces, NM 88003, USA
(2) Apache Point Observatory, 2001 Apache Point Road, P.O. Box 59, Sunspot, NM 88349, USA
(3) Department of Astronomy, San Diego State University, 5500 Campanile Drive, San Diego, CA 91945, USA
(4) Laboratoire AIM, CEA/DSM - CNRS - Univ. Paris Diderot - IRFU/SAp, Centre de Saclay, 91191 Gif-sur-Yvette Cedex, France
(5) Instituto de Astrofísica de Canarias, 38205 La Laguna, Tenerife, Spain
(6) Departamento de Astrofísica, Universidad de La Laguna, 38206 La Laguna, Tenerife, Spain
(7) LESIA, Observatoire de Paris, PSL Research University, CNRS, Université Pierre et Marie Curie, Université Paris Diderot, 92195 Meudon, France
(8) Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA

Accepted for publication in ApJ, 2015 December 31. doi:10.3847/0004-637x/818/2/108. arXiv:1601.00038 (https://arxiv.org/pdf/1601.00038v1.pdf).
Subject headings: stars: activity - binaries: eclipsing - stars: evolution - stars: fundamental parameters - stars: individual (KIC 9246715) - stars: oscillations
ABSTRACT

We combine Kepler photometry with ground-based spectra to present a comprehensive dynamical model of the double red giant eclipsing binary KIC 9246715. While the two stars are very similar in mass (M1 = 2.171 +0.006 −0.008 M⊙, M2 = 2.149 +0.006 −0.008 M⊙) and radius (R1 = 8.37 +0.03 −0.07 R⊙, R2 = 8.30 +0.04 −0.03 R⊙), an asteroseismic analysis finds one main set of solar-like oscillations with unusually low-amplitude, wide modes. A second set of oscillations from the other star may exist, but this marginal detection is extremely faint. Because the two stars are nearly twins, KIC 9246715 is a difficult target for a precise test of the asteroseismic scaling relations, which yield M = 2.17 ± 0.14 M⊙ and R = 8.26 ± 0.18 R⊙. Both stars are consistent with the inferred asteroseismic properties, but we suspect the main oscillator is Star 2 because it is less active than Star 1. We find evidence for stellar activity and modest tidal forces acting over the 171-day eccentric orbit, which are likely responsible for the essential lack of solar-like oscillations in one star and weak oscillations in the other. Mixed modes indicate the main oscillating star is on the secondary red clump (a core-He-burning star), and stellar evolution modeling supports this with a coeval history for a pair of red clump stars. This system is a useful case study and paves the way for a detailed analysis of more red giants in eclipsing binaries, an important benchmark for asteroseismology.
1. INTRODUCTION
Mass and radius are often-elusive stellar properties that are critical to understanding a star's past, present, and future. Eclipsing binaries are the only astrophysical laboratories that allow for a direct measurement of these and other fundamental physical parameters. Recently, however, observing solar-like oscillations in stars with convective envelopes has opened a window to stellar interiors and provided a new way to measure global stellar properties. A pair of asteroseismic scaling relations use the Sun as a benchmark between these oscillations and a star's effective temperature to yield mass and radius (Kjeldsen & Bedding 1995; Huber et al. 2010; Mosser et al. 2013).
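In their commonly used form, the relations invert ν_max ∝ g/√T_eff and Δν ∝ √ρ̄ to give M/M⊙ = (ν_max/ν_max,⊙)³ (Δν/Δν_⊙)⁻⁴ (T_eff/T_eff,⊙)^(3/2) and R/R⊙ = (ν_max/ν_max,⊙) (Δν/Δν_⊙)⁻² (T_eff/T_eff,⊙)^(1/2). A minimal sketch in Python; the solar reference values below are common round numbers from the literature, not necessarily the calibration adopted in this paper:

```python
# Solar reference values (calibrations vary slightly between studies)
NU_MAX_SUN = 3090.0   # muHz
DELTA_NU_SUN = 135.1  # muHz
TEFF_SUN = 5777.0     # K

def scaling_mass_radius(nu_max, delta_nu, teff):
    """Asteroseismic scaling relations; returns (mass, radius) in solar units."""
    m = ((nu_max / NU_MAX_SUN) ** 3
         * (delta_nu / DELTA_NU_SUN) ** -4
         * (teff / TEFF_SUN) ** 1.5)
    r = ((nu_max / NU_MAX_SUN)
         * (delta_nu / DELTA_NU_SUN) ** -2
         * (teff / TEFF_SUN) ** 0.5)
    return m, r
```

By construction the relations return exactly solar values for solar inputs; typical red giant inputs (ν_max of order 100 µHz, Δν of order 10 µHz) yield masses of a few M⊙ and radii of several R⊙.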
While both the mass and radius scaling relations are useful, it is important to test their validity. Recent work has investigated the radius relation by comparing the asteroseismic large-frequency separation ∆ν and stellar radius between models and simulated data (e.g. Stello et al. 2009; White et al. 2011; Miglio et al. 2013), and by comparing asteroseismic radii with independent radius measurements such as interferometry or binary star modeling (e.g. Huber et al. 2011; Huber et al. 2012; Silva Aguirre et al. 2012). All of these find that radius estimates from asteroseismology are precise within a few percent, with greater scatter for red giants than main sequence stars. The mass scaling relation remains relatively untested. Most studies test the ∆ν scaling with average stellar density and not the scaling of ν max (the asteroseismic frequency of maximum oscillation power) with stellar surface gravity, because the latter has a less-secure theoretical basis. It is not yet possible to reliably predict oscillation mode amplitudes as a function of frequency (Christensen-Dalsgaard 2012). One study by Frandsen et al. (2013) did test both scaling laws with the red giant eclipsing binary KIC 8410637. They found good agreement between Keplerian and asteroseismic mass and radius, but a more recent analysis from Huber (2014) indicates that the asteroseismic density of KIC 8410637 is underestimated by ∼7% (1.8 σ, accounting for the density uncertainties), which results in an overestimate of the radius by ∼9% (2.7 σ) and mass by ∼17% (1.9 σ). Additional benchmarks for the asteroseismic scaling relations are clearly needed.
Evolved red giants are straightforward to characterize through pressure-mode solar-like oscillations in their convective zones, and red giant asteroseismology is quickly becoming an important tool to study stellar populations throughout the Milky Way (for a review of this topic, see Chaplin & Miglio 2013). Compared to main-sequence stars, red giants oscillate with larger amplitudes and longer periods: several hours to days instead of minutes. Oscillations appear as spikes in the amplitude spectrum of a light curve that is sampled both frequently enough and for a sufficiently long duration. Therefore, observations from the Kepler space telescope taken every 29.4 minutes (long-cadence) over many 90-day quarters are ideal for asteroseismic studies of red giant stars.
Kepler's primary science goal is to find Earth-like exoplanets orbiting sun-like stars (Borucki et al. 2010). However, in addition to successes in planet-hunting and suitability for red giant asteroseismology, Kepler is also incredibly useful for studies of eclipsing binary stars. Kepler has discovered numerous long-period eclipsing systems from consistent target monitoring over several years (Slawson et al. 2011). Eclipsing binaries are important tools for understanding fundamental stellar properties, testing stellar evolutionary models, and determining distances. When radial velocity curves exist for both stars in an eclipsing binary, along with a well-sampled light curve, the inclination is precisely constrained and a full orbital solution with masses and radii can be found. Kepler's third law applied in this way is the only direct method for measuring stellar masses.
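This route can be made concrete: for a double-lined spectroscopic binary, the two radial-velocity semi-amplitudes K1 and K2 together with the period, eccentricity, and inclination give the component masses through M_{1,2} sin³i = 1.036149×10⁻⁷ (1 − e²)^(3/2) (K1 + K2)² K_{2,1} P, with K in km/s, P in days, and masses in solar units (the numerical constant is the commonly tabulated one). A sketch, illustrative rather than this paper's actual orbit-modeling code:

```python
import math

def sb2_masses(K1, K2, P_days, ecc, incl_deg):
    """Component masses (solar units) of a double-lined spectroscopic
    binary from semi-amplitudes K1, K2 (km/s), orbital period (days),
    eccentricity, and inclination (degrees)."""
    sin3i = math.sin(math.radians(incl_deg)) ** 3
    common = 1.036149e-7 * (1.0 - ecc ** 2) ** 1.5 * (K1 + K2) ** 2 * P_days
    m1 = common * K2 / sin3i  # the star with the SMALLER K is the more massive
    m2 = common * K1 / sin3i
    return m1, m2
```

Note that each mass scales with the other star's semi-amplitude, and that masses grow as 1/sin³i for inclinations below 90 degrees, which is why a light-curve constraint on i is essential.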
Taken together, red giants in eclipsing binaries (hereafter RG/EBs) that exhibit solar-like oscillations are ideal testbeds for asteroseismology. There are presently 18 known RG/EBs that show solar-like oscillations (Hekker et al. 2010; Gaulme et al. 2013, 2014; Beck et al. 2014, 2015) with orbital periods ranging from 19 to 1058 days, all in the Kepler field of view.
In this paper, we present physical parameters for the unique RG/EB KIC 9246715 with a combination of dynamical modeling, stellar atmosphere modeling, and asteroseismology. KIC 9246715 contains two nearly identical red giants in a 171-day eccentric orbit with a single main set of solar-like oscillations. A second set of oscillations, potentially attributable to the other star, is marginally detected. In §2, we describe how we acquire and process photometric and spectroscopic data, and §3 explains our radial velocity extraction process. In §4, we disentangle each star's contribution to the spectra to perform stellar atmosphere modeling. We then present our final orbital solution and physical parameters for KIC 9246715 in §5. Finally, §6 compares our results with global asteroseismology and discusses the connection among solar-like oscillations, stellar evolution, and effects such as star spots and tidal forces, as well as implications for future RG/EB studies.
2. OBSERVATIONS

2.1. Kepler light curves

Our light curves are from the Kepler Space Telescope in long-cadence mode (one data point every 29.4 minutes), and span 17 quarters, roughly four years, with only occasional gaps. These light curves are well-suited for red giant asteroseismology, as main sequence stars with convective envelopes oscillate too rapidly to be measured with Kepler long-cadence data.
When studying long-period eclipsing binaries, it is important to remove instrumental effects in the light curve while preserving the astrophysically interesting signal. In this work, we prioritize preserving eclipses. Our detrending algorithm uses the simple aperture photometry (SAP) long-cadence Kepler data for quarters 0-17. First, any observations with NaNs are removed, and observations from different quarters are put onto the same median level so that the eclipses line up. The out-of-eclipse portions of the light curve are flattened, which removes any out-of-eclipse variability. For eclipse modeling, we use only the portions of the light curve that lie within one eclipse duration of the start and end of each eclipse. This differs from the light curve processing needed for asteroseismology, which typically "fills" the eclipses to minimize their effect on the power spectrum (Gaulme et al. 2014).
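The quarter-stitching step (drop NaNs, then put each quarter on a common median level) can be sketched as follows. This is a toy illustration of the idea only; the actual pipeline also flattens out-of-eclipse variability and trims to windows around each eclipse:

```python
import numpy as np

def stitch_quarters(times, fluxes, quarters):
    """Drop NaN observations, then shift each Kepler quarter so its
    median flux matches the global median (arrays must be 1-D and aligned)."""
    good = ~np.isnan(fluxes)
    times, fluxes, quarters = times[good], fluxes[good], quarters[good]
    global_median = np.median(fluxes)
    stitched = fluxes.copy()
    for q in np.unique(quarters):
        mask = quarters == q
        stitched[mask] += global_median - np.median(fluxes[mask])
    return times, stitched
```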
The processed light curve is presented in Figure 1. The top panel shows the entire detrended light curve, while the middle and bottom panels indicate the regions near each eclipse used in this work. We adopt the convention that the "primary" eclipse is the deeper of the two, when Star 1 is eclipsing Star 2. The geometry of the system creates partial eclipses with different depths due to similarly-sized stars in an eccentric orbit viewed with an inclination less than 90 degrees. For comparison, we present the detrended light curve with eclipses removed in Figure 2. The system shows out-of-eclipse photometric modulations on the order of 2%.
2.2. Ground-based spectroscopy
We have a total of 25 high-resolution spectra from three spectrographs. At many orbital phases, prominent absorption lines show a clear double-lined signature when inspected by eye. We find that KIC 9246715 is an excellent target for obtaining radial velocity curves for both stars in the binary as the stellar flux ratio is close to unity. A long time span of observations was necessary due to the 171.277-day orbital period and visibility of the Kepler field from the observing sites.
2.2.1. TRES echelle from FLWO
We obtained 13 high-resolution optical spectra from the Fred Lawrence Whipple Observatory (FLWO) 1.5m telescope in Arizona using the Tillinghast Reflector Echelle Spectrograph (TRES) from 2012 March through 2013 April. The wavelength range for TRES is 3900-9100Å, and the resolution for the medium fiber used is 44,000. The spectra were extracted and blaze-corrected with the pipeline developed by Buchhave et al. (2010).
2.2.2. ARCES echelle from APO
We also obtained ten high-resolution optical spectra from the Apache Point Observatory (APO) 3.5-m telescope in New Mexico using the Astrophysical Research Consortium Echelle Spectrograph (ARCES) from 2012 June through 2013 September. The wavelength range for ARCES is 3200-10,000Å with no gaps, and the average resolution is 31,000. We reduced the data using standard echelle reduction techniques and Karen Kinemuchi's ARCES cookbook (private communication) 1 .
2.2.3. APOGEE spectra from APO
We finally obtained two near-IR spectra of KIC 9246715 from the Sloan Digital Sky Survey-III (SDSS-III) Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey (Alam et al. 2015). The wavelength range for APOGEE is 1.5-1.7 µm with a nominal resolution of 22,500. The pair of spectra were reduced with the standard APOGEE pipeline, but not combined.

[Figure 1 caption, beginning not recovered: "... quarters. The detrending process is described in Section 2.1. Middle: Folded version of the above over one orbit. The dotted lines indicate the portion of the light curve used in subsequent modeling. Bottom: A zoomed view of secondary and primary eclipses corresponding to the dotted lines above. To avoid overlaps, each observed eclipse is offset in magnitude from the previous one. The colored disks illustrate the eclipse configuration, with the red disk representing Star 1 and the yellow disk representing Star 2."]
2.3. Global wavelength solution
Because the observations come from three different spectrographs at two different observatory sites, it is critical to apply a consistent wavelength solution that yields the same radial velocity zeropoint for all observations. This zeropoint is a function of the atmospheric conditions at the observatory and the instrument being used. Typically such a correction can be done with RV standard stars after a wavelength solution has been applied based on ThAr lamp observations. However, we lacked RV standard star observations, and some of the earlier ARCES observations had insufficiently frequent ThAr calibration images to arrive at a reliable wavelength solution. (We subsequently took ThAr images more frequently to address the latter issue.) To arrive at a consistent velocity zeropoint for all spectra, we use TelFit (Gullikson et al. 2014) to generate a telluric line model of the O2 A-band (7595-7638 Å) with R = 31,000 at STP. We then shift the ARCES and TRES spectra in velocity space using the broadening function technique (see Section 3.1) so they all line up with the TelFit model. The shifts range from −0.88 to 2.18 km s −1, with the majority having a magnitude < 0.3 km s −1.
3. RADIAL VELOCITIES

3.1. The broadening function

To extract radial velocities from the spectra, we use the broadening function (BF) technique as outlined by Rucinski (2002). In the simplest terms, the BF is a function that transforms any sharp-line spectrum into a Doppler-broadened spectrum. The BF technique involves solving a convolution equation for the Doppler broadening kernel B,

P(x) = ∫ B(x′) T(x − x′) dx′,

where P is an observed spectrum of a binary and T is a spectral template spanning the same wavelength window (Rucinski 2015). In practice, the BF can be used to characterize any deviation of an observed spectrum from an idealized sharp-line spectrum: various forms of line broadening, shifted lines due to Doppler radial velocity shifts, two sets of lines in the case of a spectroscopic binary, etc. The BF deconvolution is solved with singular value decomposition. This technique is generally preferred over the more familiar cross-correlation function (CCF), because the BF is a true linear deconvolution while the CCF is a non-linear proxy and is less suitable for double-lined spectra. The BF technique normalizes the result so that the velocity integral ∫ B(v) dv = 1 for an exact spectral match of the observed and template spectra. For this analysis, we adapt the IDL routines provided by Rucinski [2] into python [3].
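In discretized form (on a logarithmic wavelength grid, so that a fixed pixel shift corresponds to a fixed velocity shift), the convolution becomes a linear system P ≈ D B, where the columns of D are shifted copies of the template, and the BF is the least-squares solution obtained via SVD. A bare-bones sketch of this idea; Rucinski's actual routines additionally handle continuum normalization and velocity calibration:

```python
import numpy as np

def broadening_function(spectrum, template, window, svd_cutoff=1e-3):
    """Solve spectrum ~ D @ bf by truncated-SVD least squares, where
    column j of D is the template shifted by j pixels. `window` is the
    BF length in pixels; both inputs should be continuum-subtracted and
    resampled onto a common log-wavelength grid."""
    n = len(template) - window + 1
    # Design matrix: each column is a pixel-shifted copy of the template
    D = np.column_stack([template[j:j + n] for j in range(window)])
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    # Truncate small singular values to stabilize the deconvolution
    s_inv = np.where(s > svd_cutoff * s.max(), 1.0 / s, 0.0)
    target = spectrum[window // 2 : window // 2 + n]
    return Vt.T @ (s_inv * (U.T @ target))
```

For a double-lined binary, the returned array shows two peaks whose pixel offsets from the center convert to radial velocities through the log-wavelength step size.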
We use a PHOENIX BT-Settl model atmosphere spectrum as a BF template (Allard et al. 2003). This particular model uses Asplund et al. (2009) solar abundance values for a star with T eff = 4800 K, log g = 2.5, and solar metallicity, selected based on revised KIC values [reference truncated in extraction]. Since the BF handles line broadening between template and target robustly, we do not adjust the resolution of the template.

[2] http://www.astro.utoronto.ca/~rucinski/SVDcookbook.html
[3] https://github.com/mrawls/BF-rvplotter
For the optical spectra, we consider the wavelength range 5400-6700Å. This region is chosen because it has a high signal-to-noise ratio and minimal telluric features. For the near-IR APOGEE spectra, we consider the wavelength range 15150-16950Å. We smooth the BF with a Gaussian to remove un-correlated, small-scale noise below the size of the spectrograph slit, and then fit Gaussian profiles with a least-squares technique to measure the location of the BF peaks in velocity space. The geocentric (uncorrected) results from the BF technique are shown for the optical spectra in Figure 3. The results look similar for the near-IR spectra. The final derived radial velocity points with barycentric corrections are presented in Table 1 and Figure 4. The radial velocities vary from about −50 to 40 km s −1 , with uncertainties on the order of 0.02 km s −1 . Uncertainties are assigned based on the error in position from the least-squares best-fit Gaussian to each BF peak.
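The smoothing-plus-peak-fit step described above can be sketched as follows. The helper `measure_rv` is hypothetical (not the authors' code), with SciPy standing in for the least-squares Gaussian fitter; the peak-position uncertainty is read from the fit covariance matrix, mirroring how the text assigns RV uncertainties.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

def gaussian(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

def measure_rv(velocity, bf, smooth_sigma=2.0):
    """Smooth a broadening function and fit a Gaussian to its peak.

    Returns the peak velocity and its 1-sigma uncertainty from the
    least-squares covariance matrix (illustrative sketch only).
    """
    bf_smooth = gaussian_filter1d(bf, smooth_sigma)  # remove sub-slit noise
    p0 = [bf_smooth.max(), velocity[np.argmax(bf_smooth)], 5.0]
    popt, pcov = curve_fit(gaussian, velocity, bf_smooth, p0=p0)
    return popt[1], float(np.sqrt(pcov[1, 1]))
```

For a double-lined spectrum one would fit two Gaussians, one per BF peak, but the single-peak version shows the essential step.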
3.2. Comparison with TODCOR
To confirm that the BF-extracted radial velocities are accurate, we also use TODCOR (Zucker & Mazeh 1994) to extract radial velocities for the TRES spectra. TODCOR, which stands for two-dimensional cross-correlation, uses a template spectrum from a library with a narrow spectral range (5050-5350Å) to make a two-component radial velocity curve for spectroscopic binaries. It is commonly used with TRES spectra for eclipsing binary studies. From the radial velocity curve, TODCOR subsequently calculates an orbital solution. We use the full TODCOR RV extractor + orbital solution calculator for the TRES spectra, and compare this with the TODCOR orbital solution calculator for the combined ARCES, TRES, and APOGEE RV points which were extracted with the BF technique. We find that the two orbital solutions are in excellent agreement. The TODCOR RVs (available for TRES spectra only) are on average 0.22 ± 0.25 km s −1 systematically lower than the BF RVs, which we attribute to a physically unimportant difference in RV zeropoint.
4. STELLAR ATMOSPHERE MODEL
4.1. Spectral disentangling

Before the two stars' atmospheres can be modeled, it is necessary to extract each star's spectrum from the observed binary spectra. While the location of a set of absorption lines in wavelength space is the only requirement for radial velocity studies, using an atmosphere model to measure T eff , log g, and metallicity [Fe/H] for each star requires precise equivalent widths of particular absorption lines.
To this end, we use the FDBinary tool (Ilijic et al. 2004) on the spectral window 4900-7130Å to perform spectral decomposition. Following the approach in Beck et al. (2014), we break the window into 222 pieces that each span about 10Å. FDBinary does not require a template, and instead uses the orbital parameters of a binary to separate a set of double-lined spectral observations in Fourier space. We test FDBinary's capabilities by creating a set of simulated double-lined spectra from a weighted sum of two identical spectra of Arcturus. When the orbital solution and flux ratio is correctly specified, the program returns a pair of single-lined spectra that are indistinguishable from the original.
FDBinary requires a set of double-lined spectral observations re-sampled evenly in ln λ. For each input spectrum, it is important to apply barycentric corrections and subtract the binary's systemic velocity (−4.48 km s −1 in this case, see Section 5 and Table 2). FDBinary further requires six parameters to define the shape of the radial velocity curve: orbital period, time of periastron passage (zeropoint), eccentricity, longitude of periastron, and amplitudes of each star's radial velocity curve. We set these to 171.277 days, 319.7 days 5 , 0.35, 17.3 deg, 33.1 km s −1 , and 33.4 km s −1 , respectively. While FDBinary does include an optimization algorithm for any subset of these parameters, we use more robust fixed values from a preliminary dynamical model similar to the ones in Section 5. Finally, FDBinary requires a light ratio for each observation. Because the two stars are so similar, and none of our spectra were taken during eclipse, we set all light ratios to 1. This is further justified by the nearly-equal amplitude of each star's broadening function (see Figure 3). We tried adjusting the light ratio and found that the result is qualitatively similar, but systematically increases the strength of all features in one spectrum while systematically decreasing the strength of all features in the other.
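The required re-sampling onto a grid that is uniform in ln λ can be done with simple interpolation. This is an illustrative sketch of the pre-processing step, not FDBinary's own code:

```python
import numpy as np

def resample_loglinear(wave, flux, n_points=None):
    """Re-sample a spectrum onto a grid uniform in ln(lambda), as required
    by Fourier-space disentangling tools such as FDBinary (sketch only)."""
    if n_points is None:
        n_points = len(wave)
    log_wave = np.log(wave)
    # Evenly spaced grid in ln(lambda): constant velocity step per pixel.
    log_grid = np.linspace(log_wave[0], log_wave[-1], n_points)
    new_flux = np.interp(log_grid, log_wave, flux)
    return np.exp(log_grid), new_flux
```

Barycentric and systemic-velocity corrections (here, subtracting −4.48 km s−1) would be applied as a multiplicative wavelength shift before this re-sampling.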
All 23 optical spectra of KIC 9246715 are processed together in FDBinary, and the result is a pair of disentangled spectra with zero radial velocity. A portion of the resulting individual spectra are shown in Figure 5 with a characteristic ARCES spectrum containing signals from both stars for comparison.
4.2. Parameters from atmosphere modeling
We use the radiative transfer code MOOG (Sneden 1973) to estimate T eff , log g, and metallicity [Fe/H] for the disentangled spectrum of each star in KIC 9246715. First, we use ARES (Automatic Routine for line Equivalent widths in stellar Spectra, Sousa et al. 2007) with a modified Fe i and Fe ii linelist from Tsantaki et al. (2013). ARES automatically measures equivalent widths for spectral lines which can then be used by MOOG. An excellent outline of the process is given by Sousa (2014).
We use ARES to identify 66 Fe i and 9 Fe ii lines in the spectrum of Star 1, and 74 Fe i and 10 Fe ii lines in the spectrum of Star 2, all in the 4900-7130Å region. To arrive at a best-fit stellar atmosphere model with MOOG, we follow the approach of Magrini et al. (2013). Error bars are determined based on the standard deviation of the derived abundances and the range spanned in excitation potential or equivalent width. For Star 1, we find T eff = 4990±90 K, log g = 3.21±0.45, and [Fe/H] = −0.22±0.12, with a microturbulence velocity of 1.86±0.09 km s −1 . For Star 2, we find T eff = 5030±80 K, log g = 3.33 ± 0.37, and [Fe/H] = −0.10 ± 0.09, with a microturbulence velocity of 1.44 ± 0.09 km s −1 .
[Figure 5 caption: The zoom panel is a clearer view of individual spectral features, including Hα, and clearly shows that the observed double-lined spectrum has been decomposed into two single-lined components. The full decomposed spectra span 4900-7130Å; only a portion is shown here.]

Projected rotational velocities can also be measured from stellar spectra. To estimate this, we compare the disentangled spectra to a grid of rotationally broadened spectra. We find both stars have v broad ≈ 8 km s −1 . It is important to consider that this observed broadening is a combination of each star's rotational velocity and macroturbulence, v broad = v rot sin i + ζ RT , where ζ RT is the radial-tangential macroturbulence dispersion (Gray 1978). We note that rotational broadening is Gaussian while broadening due to macroturbulence is cuspier, but these subtle line profile differences are not distinguishable here. Carney et al. (2008) find a large range of macroturbulence dispersions for giant stars which may vary as a function of luminosity, gravity, and temperature, and introduce a non-physically-motivated empirical relation, v broad = [(v rot sin i)^2 + 0.95 ζ RT ^2]^(1/2), with ζ RT expected to be on order 10% of the observed broadening. In any case, at least some of the observed line broadening is attributable to macroturbulence, and we conclude neither star in KIC 9246715 is a particularly fast rotator.
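The quoted empirical broadening relation can be rearranged to bound the projected rotation for an assumed macroturbulence dispersion. The helper name below is ours, not from the paper:

```python
import numpy as np

def vrot_sini_from_broadening(v_broad, zeta_rt):
    """Invert the Carney et al. (2008) empirical relation quoted in the
    text, v_broad^2 = (v_rot sin i)^2 + 0.95 * zeta_RT^2, to bound the
    projected rotation given an assumed macroturbulence (sketch only)."""
    term = v_broad ** 2 - 0.95 * zeta_rt ** 2
    # If macroturbulence alone exceeds the observed broadening, the
    # implied rotation is consistent with zero.
    return float(np.sqrt(term)) if term > 0.0 else 0.0
```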
5. PHYSICAL PARAMETERS FROM LIGHT CURVE & RADIAL VELOCITIES

To derive physical and orbital parameters for KIC 9246715, we use the Eclipsing Light Curve (ELC) code (Orosz & Hauschildt 2000). ELC computes model light and velocity curves and uses a Differential Evolution Markov Chain Monte Carlo optimizing algorithm (Ter Braak 2006) to simultaneously solve for a suite of stellar parameters. It is able to consider any set of input constraints simultaneously, i.e., a combination of light curves and radial velocities, and can use a full treatment of Roche geometry (Kopal 1969; Avni & Bahcall 1975). ELC uses a grid of NextGen model atmospheres integrated over the Kepler bandpass to assign an intensity at the surface normal of each star. Intensities for the other portions of each star's visible surface are then computed with a quadratic limb darkening law. By including the temperature of Star 1 as a fit parameter, ELC will try different model atmospheres, thereby indirectly computing stellar temperature. ELC uses χ² as a measure of fitness to refine a best-fit model:
χ² = Σ_i [f_mod(φ_i ; a) − f_obs(Kepler)]² / σ_i²(Kepler)
   + Σ_i [f_mod(φ_i ; a) − f_obs(RV1)]² / σ_i²(RV1)        (1)
   + Σ_i [f_mod(φ_i ; a) − f_obs(RV2)]² / σ_i²(RV2),

where f_mod(φ_i ; a)
is the ELC model flux at a given phase φ i for a set of parameters a, f obs is the observed value at the same phase, and σ i is the associated uncertainty. We compute two sets of ELC models: the first uses all eclipses from the light curve together with all radial velocity points, and the second breaks the light curves into segments to investigate how photometric variations from one orbit to another affect the results. Both sets of models employ ELC's "fast analytic mode." This uses the equations in Mandel & Agol (2002) to treat both stars as spheres, which is reasonable for a well-detached binary like KIC 9246715 (R/a < 0.04 for both stars). The results from both sets of models are presented in Table 2. We adopt the "All-eclipse model" as the accepted solution, for reasons described below.
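The fitness function of Equation 1 amounts to summing a per-dataset chi-squared term over the Kepler light curve and the two radial velocity curves. A schematic version (the interface is hypothetical; this is not ELC itself):

```python
import numpy as np

def combined_chi2(model, datasets):
    """Total chi-squared over several datasets, mirroring Equation 1.

    model    : callable (phase_array, dataset_name) -> model values
    datasets : dict name -> (phase, observed, sigma) arrays
    """
    chi2 = 0.0
    for name, (phase, obs, sigma) in datasets.items():
        resid = (model(phase, name) - obs) / sigma
        chi2 += np.sum(resid ** 2)
    return float(chi2)
```

An MCMC optimizer like ELC's then proposes parameter sets a and keeps those that lower this combined χ².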
5.1. All-eclipse ELC model
We use ELC to compute more than 2 million models which fit 16 parameters: orbital period P orb , zeropoint T conj (this sets the primary eclipse to orbital phase φ ELC = 0.5 instead of φ = 0), orbital inclination i, e sin ω and e cos ω (where e is eccentricity and ω is the longitude of periastron), the temperature of the primary star T 1 , the mass of the primary star M 1 , the amplitude of the primary star's radial velocity curve K 1 , the fractional radii of each star R 1 /a and R 2 /a, the temperature ratio T 2 /T 1 , the Kepler contamination factor, and stellar limb darkening parameters for the triangular limb darkening law (Kipping 2013). The scale of the system (and hence the component masses and radii) is uniquely determined given the primary star mass, the amplitude of its radial velocity curve, and the orbital period. Error bars are determined from the cumulative distribution function of each fit parameter after the first 10,000 models are excluded to allow for an appropriate MCMC burn-in period. Quoted values are 50% of the cumulative distribution function with the one-sigma upper error at 84.25% and one-sigma lower error at 15.75%. The results are in Figure 6 and Table 2.

[Figure caption: While one primary eclipse epoch suffers from increased contamination due to a nearby star (see Section 5.2), the overall scatter in the eclipse residuals is greater during primary eclipse than during secondary eclipse. This suggests Star 1 is more active than Star 2, and is discussed further in Section 6.3.]
5.2. Light curve segment ELC models
To investigate secular changes in KIC 9246715, we split the Kepler light curve into seven segments such that each contains one primary and one secondary eclipse. This is particularly motivated by the photometric variability seen in Figure 2 and the residuals of the primary eclipse in the all-eclipse model, as shown in Figure 6. Of all the observed primary eclipses, the one in the seventh light curve segment is slightly shallower than the others by about 0.004 magnitudes. To learn why, we examine the Kepler Target Pixel Files, which reveal the aperture used for KIC 9246715 includes a larger portion of a nearby contaminating star every fourth quarter. This higher contamination is coincident with the secondary eclipse in the fifth light curve segment and both eclipses in the seventh light curve segment. Higher contamination results in shallower eclipses because there is an overall increase in flux, and we conclude that the shallower primary eclipse is a result of this contamination rather than a star spot or other astrophysical signal.

[Table 2 fragment: triangular limb darkening coefficients, including q 1 = 0.52 ± 0.05 and q 2 = 0.33 ± 0.02, 0.41 ± 0.02. Notes: (a) The uncertainties reported for γ vel are based on the internal consistency of the model using relative velocities. The true error is on the order of 0.2-0.3 km s −1 (Section 3.2). (b) The triangular limb darkening law (Kipping 2013) re-parameterizes the quadratic limb darkening law, I(µ)/I(1) = 1 − u 1 (1 − µ) − u 2 (1 − µ)², with new coefficients q 1 ≡ (u 1 + u 2 )² and q 2 ≡ 0.5 u 1 (u 1 + u 2 )⁻¹.]
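The triangular re-parameterization of the quadratic limb darkening law is easy to invert. A small sketch of the (q1, q2) ↔ (u1, u2) mapping following Kipping (2013):

```python
import numpy as np

def u_to_q(u1, u2):
    """Quadratic (u1, u2) -> triangular (q1, q2): q1 = (u1+u2)^2,
    q2 = 0.5*u1/(u1+u2), per Kipping (2013)."""
    s = u1 + u2
    return s ** 2, 0.5 * u1 / s

def q_to_u(q1, q2):
    """Inverse mapping: u1 = 2*sqrt(q1)*q2, u2 = sqrt(q1)*(1 - 2*q2)."""
    root = np.sqrt(q1)
    return 2.0 * root * q2, root * (1.0 - 2.0 * q2)
```

Sampling q1 and q2 uniformly on [0, 1] guarantees physically valid quadratic coefficients, which is why ELC-style fits use them as free parameters.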
We therefore calculate a second set of parameters based on the root-mean-square (RMS) of six ELC models, one for each light curve segment, excluding the seventh segment which has significantly higher contamination in both eclipses. Each segment still includes the full set of radial velocity data. The values reported are the RMS of these six models, a RMS = [(1/n) Σ_{i=1}^{n} a_i²]^(1/2), plus or minus the RMS error, [(1/n) Σ_{i=1}^{n} (a_i − a RMS )²]^(1/2). These are reported in Table 2. Temperature is not reported because the white-light Kepler bandpass is not well-suited to constrain stellar temperatures, and the RMS errors among the light curve segments are artificially small.
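The RMS statistics used for the segment models are straightforward; a minimal implementation of the two formulas quoted above:

```python
import numpy as np

def rms_and_error(values):
    """RMS of parameter values from the light-curve-segment models, and
    the RMS scatter about that value, as defined in the text."""
    a = np.asarray(values, dtype=float)
    a_rms = np.sqrt(np.mean(a ** 2))
    err = np.sqrt(np.mean((a - a_rms) ** 2))
    return float(a_rms), float(err)
```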
For all parameters, the all-eclipse model and the LC segment model agree within 2σ. We note that ω, the Kepler contamination, and R 1 all have significantly larger uncertainties in the LC segment results than the all-eclipse results. This reflects an inherent degeneracy between viewing angle and stellar radius in a binary with grazing eclipses, which is exacerbated by uncertainties in limb darkening and temperature, as well as varying contamination between quarters. When we hold both stars' limb darkening coefficients fixed with theoretical values q 1 = 0.49 and q 2 = 0.37 (Claret et al. 2013), we find an ELC solution that gives R 1 ≈ 7.9 R⊙, R 2 ≈ 8.2 R⊙, ω ≈ 17.4 deg, and contamination as high as 5%. However, this solution has a higher χ² than the models which allow triangularly sampled quadratic limb darkening coefficients (Kipping 2013) to be free parameters, and it is important to consider that theoretical limb darkening values are poorly constrained for both giant stars and wide bandpasses. We therefore adopt the all-eclipse ELC solution in this work because it has the lowest χ² and uses all available data to constrain the system.

6. DISCUSSION

6.1. Comparison with asteroseismology

We expect both evolved giants in KIC 9246715 to exhibit solar-like oscillations. These should be observable as pure p-modes for radial oscillations (ℓ = 0), mixed p- and g-modes for dipolar oscillations (ℓ = 1), and p-dominated modes for quadrupolar oscillations (ℓ = 2) in Kepler long-cadence data. For solar-like oscillators, the average large frequency separation between consecutive p-modes of the same spherical degree ℓ, ∆ν, has been shown to scale with the square root of the mean density of the star. The frequency of maximum oscillation power, ν max , carries information about the physical conditions near the stellar surface and is a function of surface gravity and effective temperature (Kjeldsen & Bedding 1995).
These scaling relations may be used to estimate a star's mean density and surface gravity:
ρ/ρ⊙ ≈ (∆ν/∆ν⊙)²    (2)

and

g/g⊙ ≈ (ν max /ν max,⊙ )(T eff /T eff,⊙ )^(−1/2).    (3)
Equation 2 is valid only for oscillation modes of large radial order n, where pressure modes can be mathematically described in the frame of the asymptotic development (Tassoul 1980). Even though red giants do not perfectly match these conditions (their observed oscillation modes have small radial orders, on the order of n ∼ 10), the scaling relations do appear to work. Quantifying how well they work and in what conditions is more challenging. This is why measuring oscillating stars' masses and radii independently from seismology is so important.
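The scaling relations of Equations 2 and 3 are simple to evaluate. A minimal sketch, assuming commonly used solar reference values (∆ν⊙ ≈ 135.1 µHz, ν max,⊙ ≈ 3090 µHz, T eff,⊙ = 5777 K; these reference values are our assumption, not quoted in the paper, and calibrations differ slightly in the literature):

```python
# Assumed solar reference values for the asteroseismic scaling relations.
DNU_SUN = 135.1      # large frequency separation, muHz
NUMAX_SUN = 3090.0   # frequency of maximum power, muHz
TEFF_SUN = 5777.0    # effective temperature, K

def density_ratio(dnu):
    """rho/rho_sun ~ (dnu/dnu_sun)^2  (Equation 2)."""
    return (dnu / DNU_SUN) ** 2

def gravity_ratio(numax, teff):
    """g/g_sun ~ (numax/numax_sun) * (teff/teff_sun)^(-1/2)  (Equation 3)."""
    return (numax / NUMAX_SUN) * (teff / TEFF_SUN) ** -0.5
```

With ∆ν_obs = 8.31 µHz this gives ρ/ρ⊙ ≈ 3.8 × 10⁻³ before the red-giant correction to ∆ν discussed in Section 6.1.1, broadly consistent with the values quoted there.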
Surprisingly, when Gaulme et al. (2013) and Gaulme et al. (2014) analyzed the oscillation modes of KIC 9246715 to estimate global asteroseismic parameters, only one set of modes corresponding to a single oscillating star was found. Of the 18 oscillating RG/EBs in the Kepler field, KIC 9246715 is the only one with a pair of giant stars (the rest are composed of a giant star and a main sequence star).
In addition, the light curve displays photometric variability as large as 2% peak-to-peak, as shown in Figure 2, which is typical of the signal created by spots on stellar surfaces. The pseudo-period of this variability was observed to be about half the orbital period, which suggests resonances in the system. Gaulme et al. (2014) speculated that star spots may be responsible for inhibiting oscillations on the smaller star, and a similar behavior was observed in other RG/EB systems. In this section, we re-estimate the global seismic parameters of the oscillation spectrum that was previously identified (Section 6.1.1), analyze the mixed oscillation modes to determine the oscillating star's evolutionary state (Section 6.1.2), investigate which star is more likely to be exhibiting oscillations (Section 6.1.3), and address the discrepancy between different surface gravity measurements (Section 6.1.4).
6.1.1. Global asteroseismic parameters of the oscillating star
We now re-estimate ν max and ∆ν for the oscillation spectrum in the same way as Gaulme et al. (2014), but by using the whole Kepler dataset (Q0-Q17). The frequency at maximum amplitude of solar-like oscillations ν max is measured by fitting the mode envelope with a Gaussian function and the background stellar activity with a sum of two semi-Lorentzians. The large frequency separation ∆ν is obtained from the filtered autocorrelation of the time series (Mosser & Appourchaux 2009). Differences with respect to previous estimates are negligible, as we find ν max = 106.4 ± 0.8 µHz and ∆ν = 8.31 ± 0.02 µHz. Because the ELC results yield T 2 /T 1 = 0.989 (Table 2) and the stellar atmosphere analysis gives T 1 = 4990 ± 90 K and T 2 = 5030 ± 80 K (Section 4.2), we use an effective temperature of T eff = 5000 ± 100 K in the asteroseismic scaling equations. Assuming a single oscillating star, the mode amplitudes are only ∼ 60% as high as expected (A max (ℓ = 0) ≈ 15 ppm, and not 6.6 ppm as erroneously reported by Gaulme et al. 2014) when compared to the ∼ 24 ppm predicted from mode amplitude scaling relations (Corsaro et al. 2013). The modes are four times wider than expected as well, with ℓ = 0 linewidths of ≈ 0.4 µHz near ν max rather than a value closer to 0.1 µHz as predicted for stars with similar ν max , ∆ν, and T eff (Corsaro et al. 2015).
To determine mass, radius, surface gravity, and mean density, we use the scaling relations after correcting ∆ν for the red giant regime. In essence, instead of directly plugging the observed ∆ν obs into Equations 2 and 3, we estimate the asymptotic large spacing via ∆ν as = ∆ν obs (1 + ζ), where ζ = 0.038. With this correction of the large spacing, we obtain M = 2.17 ± 0.14 M⊙ and R = 8.26 ± 0.18 R⊙. In terms of mean density and surface gravity, which independently test the ∆ν and ν max relations, respectively, we find ρ/ρ⊙ = (3.86 ± 0.02) × 10 −3 and log g = 2.942 ± 0.008. A comparison of key parameters determined from all our different modeling techniques is in Table 3.
6.1.2. Mixed oscillation modes
Based on the distribution of mixed ℓ = 1 modes, Gaulme et al. (2014) reported that the oscillation pattern period spacing was typical of that of a star from the secondary red clump, i.e., a core-He-burning star that has not experienced a helium flash. This was based on a dipole gravity mode period spacing of ∆Π 1 ≈ 150 s. Red giant branch stars have smaller period spacings than red clump stars, and (∆Π 1 = 150 s, ∆ν = 8.31 µHz) puts the oscillating star on the very edge of the asteroseismic parameter space that defines the secondary red clump. Due to noise and damped oscillations, it is difficult to unambiguously determine the mixed mode pattern described by Mosser et al. (2012). To more accurately assess the evolutionary stage of the oscillating star in KIC 9246715, we employ three different techniques to identify and characterize mixed modes.

[Table 3 (excerpt), columns M (M⊙), R (R⊙), log g, ρ/ρ⊙ (×10⁻³), T eff (K):
MOOG Stellar Atmosphere, Star 1: log g = 3.21 ± 0.45; T eff = 4990 ± 90
MOOG Stellar Atmosphere, Star 2: log g = 3.33 ± 0.37; T eff = 5030 ± 80
Global Asteroseismology, Star 1 (a): ρ/ρ⊙ = 4.14 ± 0.02
Global Asteroseismology, Star 2 (a): M = 2.17 ± 0.14; R = 8.26 ± 0.18; log g = 2.942 ± 0.008; ρ/ρ⊙ = 3.86 ± 0.02 (b)
Notes: (a) As discussed in Sections 6.1.3 and 6.2, we tentatively assign Star 2 to the main set of oscillations and Star 1 to the marginally detected oscillations. (b) A fixed temperature of 5000 ± 100 K was assumed to calculate the other asteroseismic parameters.]

First, we perform a Bayesian fit to the individual oscillation modes of the star using the Diamonds code (Corsaro & De Ridder 2014) and the methodology for the peak bagging analysis of a red giant star in Corsaro et al. (2015). We then compare the set of the obtained frequencies of mixed dipole modes with those from the asymptotic relation proposed by Mosser et al. (2012), which we compute using different values of ∆Π 1 . The result shows a significantly better match when values of ∆Π 1 around 200 s are used.
This confirms that the oscillating star is settled on the core-He-burning phase of stellar evolution. The results of the Diamonds fit are in the Appendix.
Second, we search for stars with a power density spectrum that resembles the oscillation spectrum of KIC 9246715. As shown in Figure 7, a good match is found with the star KIC 11725564, which exhibits very similar radial and quadrupole modes as well as the mixed mode pattern. To find this "twin," we calculate the autocorrelation of the KIC 9246715 oscillation spectrum, pre-whiten its radial and quadrupole modes, and convert it into period. We find a weak, broad peak at about ∆P obs = 80 s. A similar result of ∆P obs = 87 s is found for KIC 11725564, with a notably cleaner signal thanks to higher mode amplitudes. This corresponds to the observed period spacing as defined by Bedding et al. (2011) and Mosser et al. (2011), and indicates that the star is indeed likely to be a secondary clump star.
Finally, we measure the asymptotic period spacing with the new method developed by Mosser et al. (2015). The signature ∆Π 1 = 150.4 ± 1.4 s is very clear, despite binarity. In fact, the presence of a second oscillation spectrum cannot mimic a mixed-mode pattern because its global amplitude is too small for us to observe a mode disturbance. Only one signature of an oscillating star is visible in a period spacing diagram.
We conclude that the mixed oscillation modes in KIC 9246715 are indicative of a secondary red clump star. This result is supported statistically by Miglio et al. (2014), who report it is more likely to find red clump stars than red giant branch stars in asteroseismic binaries in Kepler data. This is largely due to the fact that evolved stars spend more time on the horizontal branch than the red giant branch. Due to the large noise level of the mixed modes, we are unable to measure a core rotation rate in the manner of Beck et al. (2012) and Mosser et al. (2012). However, the mixed modes appear to be doublets which support an inclination near 90 degrees.
6.1.3. Identifying the oscillating star
The asteroseismic mass and radius are consistent with those from the ELC model for both stars. The surface gravity of the two stars from ELC are nearly identical, and both agree with the asteroseismic value. While neither star's mean density agrees with the asteroseismic value, Star 2 is slightly closer than Star 1. Since one of the scaling equations gives mean density independent of temperature and ν max (Equation 2), one might naïvely expect a better asteroseismic estimation of density compared to surface gravity. It is therefore important to consider the temperature dependence of Equation 3. From Gaulme et al. (2013), Gaulme et al. (2014), and the present work, asteroseismic masses and radii were reported to be (1.7 ± 0.3 M⊙, 7.7 ± 0.4 R⊙), (2.06 ± 0.13 M⊙, 8.10 ± 0.18 R⊙), and (2.17 ± 0.14 M⊙, 8.26 ± 0.18 R⊙), respectively. Among these, ν max does not vary much (102.2, 106.4, 106.4 µHz), and ∆ν varies even less (8.3, 8.32, 8.31 µHz), while the assumed temperatures were 4699 K (from the KIC), 4857 K (from Huber et al. 2014), and 5000 K (this work). Even if temperature is the least influential parameter in the asteroseismic scalings, we are at a level of precision where errors on temperature dominate the global asteroseismic results. In this case, while Star 2 appears to be a better candidate for the main oscillator at a glance, scaling relations alone cannot be used to prefer one star over the other. However, in Section 6.3 we demonstrate that Star 2 is likely less active than Star 1. Based on this, we tentatively assign Star 2 as the main oscillator.
6.1.4. Surface gravity disagreement
The asteroseismic log g measurement nearly agrees with those from ELC, yet all three are some 0.3 dex lower than the spectroscopic log g values, as can be seen in Table 3. This discrepancy is similar to the difference found for giant stars by Holtzman et al. (2015). They investigate a large sample of stars from the ASPCAP (APOGEE Stellar Parameters and Chemical Abundances Pipeline) which have log g measured via spectroscopy and asteroseismology. They find that spectroscopic surface gravity measurements are roughly 0.2-0.3 dex too high for core-He-burning (red clump) stars and roughly 0.1-0.2 dex too high for shell-H-burning (red giant branch) stars. Holtzman et al. (2015) speculate the difference may be partially due to a lack of treatment of stellar rotation, and derive an empirical calibration relation for a "correct" log g for red giant branch stars only. However, the stars in KIC 9246715 do not rotate particularly fast (v rot sin i ≲ 8 km s −1 , which includes a contribution from macroturbulence as discussed in Section 4.2), so we cannot dismiss this discrepancy so readily.
[Figure 8 caption (échelle diagram; axes: frequency in µHz vs. frequency modulo ∆ν = 8.31 µHz; mode degrees ℓ = 0, 1, 2 labeled): The power density spectrum is smoothed by a boxcar over seven bins and cut into 8.31-µHz chunks; each is then stacked on top of its lower-frequency neighbor. This representation allows for visual identification of the modes. Lines are plotted to guide the eye toward a theoretical mode distribution according to the red giant universal pattern. It illustrates how we expect the modes to appear, but is not the result of a fit. Solid blue and dashed red lines are associated with the main (nominally Star 2) and marginally-detected (nominally Star 1) oscillations, respectively. The variable ℓ labels each mode by its spherical degree. Large spacing is ∆ν = 8.31 µHz for the main (blue) lines and 8.60 µHz for the marginal (red) lines (see Section 6.2).]
6.2. A hint of a second set of oscillations
Given that the giants in KIC 9246715 are nearly twins, we test whether it is possible that we see only one set of oscillation modes because both stars are oscillating with virtually identical frequencies. The predicted ν max values for these not-quite-identical stars are 103.4 +1.6/−1.1 and 104.1 +1.1/−1.2 µHz for Star 1 and Star 2, respectively (from an inversion of Equations 2 and 3), and the predicted ∆ν obs are 8.14 +0.06/−0.03 and 8.20 +0.03/−0.04 µHz for Star 1 and Star 2, respectively. As described in Section 6.1.1, the intrinsic observed mode linewidth is ≈ 0.4 µHz, which is about four times wider than expected. To quantify how likely it is for oscillation modes like this to overlap one another, we use the ELC model results from Section 5.1 to calculate distributions of expected ∆ν for each star. We find that in 89% of the cases, |∆ν 1 − ∆ν 2 | < 0.4 µHz. This suggests that, if both stars do indeed exhibit solar-like oscillations, some degree of mode overlap is likely.
Searching for a second set of oscillations is motivated by the broad, mixed-mode-like appearance of the = 0 modes in Figure 8, where mixed modes are not physically possible, and by the faint diagonal structure mostly present on the upper left side of the = 1 mode ridge. Even though oscillation modes from the two stars should not perfectly overlap, modes of degree = 0, 1 of one star can almost overlap modes of degree = 1, 0 of the other star.
The universal red giant oscillation pattern yields ∆ν = 8.31 ± 0.02 µHz for this system (Section 6.1.1). However, it appears that the asymptotic relations for pressure-modes and mixed modes from the main oscillating star alone may not reproduce the position of all the peaks in the power spectrum. We therefore test the hypothesis of a binary companion. The universal oscillation pattern allows us to tentatively allocate the extra peaks to a pressure-mode oscillation pattern based on ∆ν = 8.60 ± 0.04 µHz. This putative oscillation spectrum is globally interlaced with the main oscillations, with the dipole modes of one component close to the radial modes of the other component, and vice versa.
This value aligns the diagonal structure seen in the échelle diagram and satisfies the (ℓ = 0, 1 − ℓ = 1, 0) near-overlap evident in Figure 8. However, because these peaks are only marginally detected, ν max cannot be measured. The asteroseismic scaling connecting ∆ν with the mean density yields ρ/ρ⊙ = (4.14 ± 0.02) × 10 −3 . This density is larger than we expect; in fact, we expect Star 1 to be less dense than Star 2, the suspected main oscillator. This casts further doubt on the second set of oscillations, and it may be a spurious detection.
Finally, we investigate whether the modes show any frequency modulation as a function of orbital phase by examining portions of the power spectrum spanning less than the orbital period. However, the solar-like oscillation modes are short-lived (about 23 days, from an average 0.5 µHz width of ℓ = 0 modes), so it is difficult to clearly resolve Doppler-shifted modes in a power spectrum of a light curve segment. At ν max = 106 µHz, the maximum frequency shift expected from a 60 km s −1 difference in radial velocity is 0.02 µHz. This is less than the intrinsic mode line width, and therefore not observable.
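The quoted 0.02 µHz figure follows directly from the first-order Doppler relation δν = ν max · v/c; a quick arithmetic check:

```python
# Back-of-the-envelope check that orbital Doppler shifts are unresolvable
# in the oscillation spectrum: a mode at nu_max shifted by a radial
# velocity difference v moves by nu_max * v / c.
C_KM_S = 299_792.458  # speed of light, km/s

def doppler_shift_uHz(nu_max_uHz, delta_v_km_s):
    return nu_max_uHz * delta_v_km_s / C_KM_S

shift = doppler_shift_uHz(106.0, 60.0)
print(f"max mode shift: {shift:.3f} uHz")  # ~0.021 uHz, well below the
# ~0.4-0.5 uHz mode linewidths, so the modulation cannot be resolved
```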
6.3. Signatures of stellar activity

KIC 9246715 is an interesting pair of well-separated red giants that exhibit photometric variations from stellar activity, weak or absent solar-like oscillations, and a notably eccentric orbit. In this and the following section, we discuss how stellar activity and tidal forces have acted over the binary's lifetime to arrive at the system we see today. The first confirmed case of activity and/or tides suppressing convection-driven oscillations was Derekas et al. (2011), and as Gaulme et al. (2014) showed, stellar activity and tides likely play an important role in many RG/EBs.
In this system, the light curve residuals discussed in Section 5.2 and Figure 6 show significant scatter during both eclipses, and especially primary eclipse (when Star 1 is in front). This means at least Star 1 is magnetically active, and activity in the system is further supported by photometric variability of up to 2% on a timescale approximately equal to half the orbital period (Gaulme et al. 2014). A magnetically active Star 1 is also consistent with Star 2 as the suspected main oscillator, because strong magnetic fields may be responsible for damping solar-like oscillations, as described in Fuller et al. (2015).
Figures 9 and 10 investigate whether magnetic activity has any appreciable effect on absorption lines in either star. Following the approach of Fröhlich et al. (2012), we plot each target spectrum (solid colored line) on top of a model (dotted line), and show the difference below (solid black line). The model spectrum is a PHOENIX BT-Settl stellar atmosphere like the one described in Section 3.1 (Allard et al. 2003; Asplund et al. 2009), with T eff = 5000 K and log g = 3.0. It has been convolved to a lower resolution much closer to that of the ARCES and TRES spectrographs.
We examine a selection of the strongest Fe I lines which fall in the disentangled wavelength region and are either prone to Zeeman splitting in the presence of strong magnetic fields (Harvey 1973), or not (Sistla & Harvey 1970). The non-magnetic lines serve as a control. We find that none of the six panels of Fe I absorption lines in either star show any significant deviation from the model spectrum. Thus, there is no apparent Zeeman broadening, which is unsurprising for evolved red giants: magnetic fields must be quite strong to produce this effect. However, the Hα and Ca II absorption lines, which can be indicators of chromospheric activity, are somewhat more interesting. The Hα line appears significantly deeper and broader than the model in both stars. While net emission is typically associated with activity, Robinson et al. (1990) show several examples of main sequence stars with increased Hα absorption due to chromospheric heating, although they caution it is difficult to separate the photospheric and chromospheric contributions to the line. Still, the increased Hα absorption equivalent width is slightly more pronounced in Star 1 than Star 2. While this may not be a significant difference on its own, taken together with the increased scatter in the primary eclipse residuals from Figure 6, it also suggests Star 1 is the more magnetically active of the pair. It is unclear whether the Ca II doublet shows signs of excess broadening or increased equivalent width, but these lines certainly do not have smaller equivalent widths than the model.
The overall photometric variability from Figure 2 and the increased scatter in the primary eclipse residuals from Figure 6 indicate that both stars are moderately magnetically active, with Star 1 more so than Star 2. This is consistent with increased Hα absorption in both stars (and especially Star 1), and supports our suspicion that Star 2 is the main oscillator and that stellar activity is suppressing solar-like oscillations in Star 1.
6.4. Stellar evolution and tidal forces
Over the course of KIC 9246715's life, both stars have evolved in tandem to reach the configuration we see today. We quantify this with simple stellar evolution models created using the Modules for Experiments in Stellar Astrophysics (MESA) code (Paxton et al. 2011, 2013, 2015). Figure 11 presents a suite of models with various initial stellar masses. All the models include overshooting for all the convective zone boundaries with an efficiency of f = 0.016 (Herwig 2000), assume no mass loss, and set the mixing-length parameter α = 2.5. The standard solar value of α = 2 does not allow for sufficiently small stars beyond the red giant branch. The stage of each model star's life as it ages in Figure 11 is color-coded, and curved lines of constant radii corresponding to R 1 ± σ R1 (gray) and R 2 ± σ R2 (white), within the ranges of M 1 ± σ M1 and M 2 ± σ M2, respectively, are shown. There are two instances in each pair of model stars' lives when they have the same radii as the stars in KIC 9246715: once on the red giant branch, and again on the secondary red clump (horizontal branch).
In general, coeval stars on the red giant branch must have masses within 1% of each other, whereas masses can differ more on the horizontal branch due to its longer evolutionary lifetime. Both model stars in Figure 11 can be the same age on the horizontal branch, but not on the red giant branch. Stars 1 and 2 in Figure 11 have red giant branch ages of 8.13 +0.08 −0.06 × 10 8 yr and 8.36 +0.08 −0.06 × 10 8 yr, respectively, and horizontal branch ages of 9.17 ± 0.17 × 10 8 yr and 9.42 +0.20 −0.13 × 10 8 yr, respectively. Without α > 2, the MESA model stars on the horizontal branch are always larger than those in KIC 9246715. We consider several ideas as to why the MESA models and the evolutionary stage determined from asteroseismic mixed-mode period spacing in Section 6.1.2 may differ:
• Mass loss: Adding a prescription for red-giant-branch mass loss (η = 0.4, a commonly adopted value of the parameter describing mass-loss efficiency; see Miglio et al. 2012) to the MESA model does not appreciably change stellar radius as a function of evolutionary stage. Even a more extreme mass-loss rate (η = 0.7) does not significantly affect the radii, essentially because the star is too low-mass to lose much mass.
• He abundance: Increasing the initial He fraction in the MESA model does not allow for smaller stars in the red clump phase, because shell-H burning is very efficient with additional He present. As a result, the star maintains a high luminosity and therefore a larger radius as it evolves from the tip of the red giant branch to the red clump.
• Convective overshoot: The MESA models in this work assume a reasonable overshoot efficiency as described above (f = 0.016). We tried varying this from 0 to 0.03, and can barely make a red clump star as small as 8.3 R⊙ when f = 0.01. With less overshoot, the RGB phase as shown in Figure 11 increases in duration, which allows a higher probability for stars of M 1 and M 2 to both be on the RGB.
• Period spacing: The period spacing ∆Π 1 = 150 s may not be measuring what we expect due to rotational splitting of mixed oscillation modes. If the true period spacing is closer to ∆Π 1 ≈ 80 s, this would put the oscillating star on the red giant branch. However, as demonstrated in Section 6.1.2, the mixed modes do agree best with a secondary red clump star. A detailed discussion of rotational splitting behavior in slowly rotating red giants is explored in Goupil et al. (2013).
• Mixing length: As discussed above, increasing the mixing-length parameter from the standard solar value of α = 2 to α = 2.5 in the MESA model, which effectively increases the efficiency of convection, produces a red clump star small enough to agree with both measured radii. This is because it reduces the temperature gradient in the near-surface layers, increasing the effective temperature while reducing the radius at constant luminosity. This is what we employ to make horizontal branch stars that agree with R 1 and R 2 .
Beyond a stellar evolution model, it is important to consider how each star has affected the other over time. When the two stars in KIC 9246715 reach the tip of the red giant branch, they have radii of approximately 25 R⊙, which is still significantly smaller than the periastron separation (r peri = (1 − e) a = 137 R⊙). We never expect the stars to experience a common envelope phase, so this cannot be used to constrain the present evolutionary state.
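The periastron separation follows directly from the orbital elements; with the ELC values from Table 2 it evaluates to roughly 136 R⊙, consistent with the value quoted above after rounding:

```python
def periastron_separation(a, e):
    """Closest approach of a two-body orbit: r_peri = (1 - e) * a."""
    return (1.0 - e) * a

r_peri = periastron_separation(211.3, 0.3559)  # semi-major axis and e from Table 2
r_rgb_tip = 25.0  # approximate stellar radius at the tip of the RGB, solar radii

print(f"r_peri = {r_peri:.1f} Rsun")
# Even at closest approach the stars remain well separated: no common envelope.
assert r_rgb_tip < r_peri
```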
To estimate how tidal forces change orbital eccentricity, we follow the approach of Verbunt & Phinney (1995). They use a theory of the equilibrium tide first proposed by Zahn (1977) to calculate a timescale for orbit circularization as a star evolves. It is important to note that Verbunt & Phinney (1995) assumed circularization would proceed by a small secondary star (main sequence or white dwarf) imposing an equilibrium tide on a large giant, while the situation with KIC 9246715 is more complicated. For a thorough review of tidal forces in stars, see Ogilvie (2014).
From Equation 2 in Verbunt & Phinney (1995), the timescale τ c on which orbital circularization occurs is given by
1/τ_c ≡ −d ln e/dt ≈ 1.7 (T_eff/4500 K)^{4/3} (M_env/M⊙)^{2/3} (M/M⊙)^{−1} (M_2/M) [(M + M_2)/M] (R/a)^8 yr^{−1},   (4)
where M , R, and T eff are the mass, radius, and temperature of a giant star with dissipative tides, M env is the mass of its convective envelope, M 2 is the mass of the companion star, and a is the semi-major axis of the binary orbit. We integrate this expression over the lifetime of KIC 9246715 to estimate the total expected change in orbital eccentricity, ∆ ln e. We assume a is constant and that there is no mass loss. Because KIC 9246715 is a detached binary, we can separate the integral into a part that is independent of the orbit and a part that must be integrated over time:
∆ ln e = −∫₀ᵗ dt′/τ_c(t′) ≈ −1.7 × 10^−5 (M/M⊙)^{−11/3} q (1 + q)^{−5/3} I(t) (P_orb/day)^{−16/3},   (5)
where q is the mass ratio and
I(t) ≡ ∫₀ᵗ (T_eff(t′)/4500 K)^{4/3} (M_env(t′)/M⊙)^{2/3} (R(t′)/R⊙)^8 dt′.
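To see why circularization is so slow in the present configuration, Equation 4 can be evaluated with today's parameters. This is only an illustrative sketch: the convective-envelope mass M_env is not quoted in the text, so the value below is an assumed placeholder.

```python
def circularization_timescale_yr(teff_k, m_env, m, m2, r_over_a):
    """Equation 4 (Verbunt & Phinney 1995), inverted to give tau_c in years.
    Masses are in solar units; r_over_a is the fractional stellar radius."""
    inv_tau = (1.7
               * (teff_k / 4500.0) ** (4.0 / 3.0)
               * m_env ** (2.0 / 3.0)
               * (1.0 / m) * (m2 / m) * ((m + m2) / m)
               * r_over_a ** 8)
    return 1.0 / inv_tau

# Present-day values from Table 2; M_env ~ 1.5 Msun is an illustrative guess.
tau_c = circularization_timescale_yr(4930.0, 1.5, 2.17, 2.15, 0.0393)
print(f"tau_c ~ {tau_c:.1e} yr")  # vastly longer than the system's age
```

The steep (R/a)^8 dependence means essentially all of the integrated eccentricity damping accumulates near the tip of the red giant branch, which is why the time integral I(t) matters.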
For the MESA model described above with M = 2.15 M⊙, we compute ∆ ln e = −2.3 × 10^−5 up until t = 8.3 × 10^8 and ∆ ln e = −0.17 up until t = 9.4 × 10^8 years (the ages corresponding to R ≈ 8.3 R⊙). Rewriting these as log[−∆ ln e] = −4.6 and log[−∆ ln e] = −0.77, which are both less than zero, indicates that the binary has not had sufficient time to circularize its orbit, though it is possible the system's initial eccentricity was higher than the e = 0.35 we observe today.
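These logarithm conversions can be checked directly:

```python
import math

# Integrated eccentricity change up to each candidate age (Equation 5).
dlne_rgb = -2.3e-5  # up to t = 8.3e8 yr (red giant branch)
dlne_hb = -0.17     # up to t = 9.4e8 yr (secondary red clump)

for dlne in (dlne_rgb, dlne_hb):
    log_dlne = math.log10(-dlne)
    print(f"log[-dln e] = {log_dlne:.2f}")
    # log[-dln e] < 0 means |dln e| < 1: tides have not had time to circularize.
    assert log_dlne < 0
```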
The two stars in KIC 9246715 have very similar masses, radii, and temperatures, so this rough calculation is valid both for Star 1 acting on Star 2 and vice versa. Given more time to evolve past the tip of the red giant branch and well onto the red clump (with R ≈ 25 R⊙ for the second time), log[−∆ ln e] becomes greater than zero and the expectation is a circular orbit. Therefore, the observed eccentricity is consistent both with a red giant branch star aged approximately 8.3 × 10^8 years and with a secondary red clump star just past the tip of the red giant branch aged approximately 9.4 × 10^8 years.
Tidal forces also tend to synchronize a binary star's orbit with the stellar rotation period, generally on shorter timescales than required for circularization (Ogilvie 2014). Hints of KIC 9246715's stellar rotation behavior are present throughout this study: quasi-periodic light curve variability on the order of half the orbital period, residual scatter between light curve observations and the best-fit model during both eclipses, a constraint on v rot sin i from spectra, and an asteroseismic period spacing consistent with a red clump star yet not clear enough to measure a robust core rotation rate.
While full tidal circularization has not occurred, it is clear that modest tidal forces have played a role in the evolution of KIC 9246715, and may be linked to the absence or weakness of solar-like oscillations. Future studies of RG/EBs with different evolutionary histories and orbital configurations will help explore this connection further.
CONCLUSIONS
We have characterized the double red giant eclipsing binary KIC 9246715 with a combination of dynamical modeling, stellar atmosphere modeling, and global asteroseismology, and have investigated the roles of magnetic activity, tidal forces, and stellar evolution in creating the system we observe today. KIC 9246715 represents a likely future state of similar-mass RG/EB systems and raises interesting questions about the interactions among stellar activity, tides, and solar-like oscillations.
The two stars in KIC 9246715 are nearly twins (M 1 = 2.171 +0.006 −0.008 M⊙, M 2 = 2.149 +0.006 −0.008 M⊙, R 1 = 8.37 +0.03 −0.07 R⊙, R 2 = 8.30 +0.04 −0.03 R⊙), yet we find only one set of solar-like oscillations strong enough to measure robustly (M = 2.17 ± 0.14 M⊙, R = 8.26 ± 0.18 R⊙). The asteroseismic mass and radius agree with both Star 1 and Star 2, as does the surface gravity derived from asteroseismology (log g = 2.942 ± 0.008; compare with log g 1 = 2.929 +0.007 −0.003 and log g 2 = 2.932 +0.003 −0.004). The asteroseismic density, which is not a function of effective temperature, is systematically larger than that of Star 1 and Star 2, but is a slightly closer match with Star 2 (ρ̄/ρ⊙ = (3.86 ± 0.02) × 10^−3; compare with ρ̄ 1/ρ⊙ = (3.70 +0.04 −0.09) × 10^−3 and ρ̄ 2/ρ⊙ = (3.76 +0.06 −0.04) × 10^−3). As a result, we cannot conclude which star is the source of the main oscillations from asteroseismology alone. However, Star 2 appears to be less active than Star 1, and we therefore tentatively assign the main oscillations to Star 2. The modes are four times wider than expected with amplitudes only ∼ 60% as high as those in red giants with similar global oscillation properties, likely due to a combination of overlapping adjacent modes and magnetic damping. We identify a second set of marginally detectable oscillations potentially attributable to Star 1, for which only ∆ν can be estimated, yielding a higher average density than the main oscillation spectrum. This is not consistent with the expected density of Star 1, however, which is less than that of Star 2. These extra modes may represent a spurious detection.
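For reference, the uncorrected asteroseismic scaling relations already reproduce numbers close to these. The sketch below assumes standard solar reference values (ν max,⊙ ≈ 3090 µHz, ∆ν⊙ ≈ 135.1 µHz, T eff,⊙ = 5777 K); the values quoted above additionally include the Mosser et al. (2013) correction, so they differ slightly.

```python
# Assumed solar reference values for the scaling relations.
NU_MAX_SUN, DNU_SUN, TEFF_SUN = 3090.0, 135.1, 5777.0

def scaling_mass_radius(nu_max, dnu, teff):
    """Uncorrected asteroseismic scaling relations (solar units).
    M ~ nu_max^3 dnu^-4 Teff^1.5 ; R ~ nu_max dnu^-2 Teff^0.5"""
    m = (nu_max / NU_MAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5
    r = (nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5
    return m, r

# Global seismic parameters of the main oscillator plus the ELC temperature.
m, r = scaling_mass_radius(106.0, 8.31, 4930.0)
print(f"M ~ {m:.2f} Msun, R ~ {r:.2f} Rsun")  # near the dynamical values
```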
Surface gravities from dynamical modeling and asteroseismology nearly agree, while surface gravities from stellar atmosphere modeling are higher (log g 1 = 3.21 ± 0.45, log g 2 = 3.33 ± 0.37). A similar discrepancy has been found between the asteroseismic and spectroscopic surface gravities of other giant stars, but the physical cause is unknown. Radii from stellar evolution models are consistent with a pair of nearly-coeval stars either on the red giant branch with an age of approximately 8.3 × 10 8 years, or coeval stars on the horizontal branch with an age of about 9.4 × 10 8 years. However, the period spacing of mixed oscillation modes clearly indicates that the main oscillator in KIC 9246715 is on the secondary red clump, and we conclude that KIC 9246715 is a pair of secondary red clump stars.
Red giants are ideal tools for probing the Milky Way Galaxy via asteroseismology, so it is crucial that we understand the accuracy and precision of asteroseismically derived physical parameters. Along the same lines, more than half of cool stars should be in binary or multiple systems, so galactic studies must be done carefully due to external influences of binarity on solar-like oscillations. Detailed studies of the handful of known RG/EBs are crucial to ensure we understand these galactic beacons. Future work will characterize the other known oscillating RG/EBs as well as several non-oscillating RG/EBs. These have the potential to become some of the best-studied stars while simultaneously helping us better understand the structure of the Milky Way.

This paper was written collaboratively on the web with Authorea at authorea.com/2409. M. L. R. thanks the New Mexico Space Grant Council for support, D. Chojnowski for assistance with APOGEE, D. Muna for the SciCoder workshop, and L. C. Mayorga for programming assistance. J. J. acknowledges support from NASA ADAP grant NNX14AR85G. E. C. & P. B. received funding from the European Community's Seventh Framework Programme [FP7/2007-2013] under grant agreements 312844 (SPACEINN) and 269194 (IRSES/ASK). P. B. also received funding from CNES grants at CEA. D. W. L. acknowledges partial support from NASA's Kepler Mission under Cooperative Agreement NNX13AB58A with the Smithsonian Astrophysical Observatory.
We thank Leo Girardi and the anonymous referee for valuable feedback and Kresimir Pavlovski for useful discussions. This paper uses data from the Apache Point Observatory 3.5-meter telescope, which is owned and operated by the Astrophysical Research Consortium, and data collected by the Kepler mission, which is funded by the NASA Science Mission directorate. This research made use of Astropy (Robitaille et al. 2013), PyAstronomy (github.com/sczesla/PyAstronomy), PyKE (Still & Barclay 2012), NASA's ADS Bibliographic Services, and the AstroBetter blog and wiki.

In this appendix, we present the frequencies fit by Diamonds (Corsaro & De Ridder 2014), as described in Section 6.1.2. We follow the methodology for the peak bagging analysis of a red giant star in Corsaro et al. (2015). Each fit mode's frequency together with its angular degree ℓ, azimuthal order m, amplitude or height, linewidth (when applicable), and probability of detection is listed in Table A1. Figure A1 shows these modes superimposed on the power density spectrum of KIC 9246715, which is split up like an échelle diagram for clarity. For comparison, we also plot the locations of where modes should fall according to the asymptotic relation (Mosser et al. 2012) for the main set of oscillations (∆ν = 8.31 µHz) and the marginally detected second set of oscillations (∆ν = 8.60 µHz). The power spectrum is quite noisy overall, exhibits wide modes with low amplitudes, and is challenging to interpret unambiguously. For a full discussion, see Section 6.
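The folding that produces an échelle diagram is straightforward to sketch: each frequency is plotted at its value modulo ∆ν, so modes of equal degree line up in vertical ridges. The frequencies below are invented for illustration only.

```python
import numpy as np

def echelle_coordinates(freqs, dnu):
    """Fold mode frequencies modulo the large separation: each mode is
    plotted at (freq mod dnu, freq), so equal-degree ridges align."""
    freqs = np.asarray(freqs, dtype=float)
    return freqs % dnu, freqs

# Hypothetical l = 0 frequencies spaced exactly by dnu: one vertical ridge.
dnu = 8.31
modes = 90.0 + dnu * np.arange(5)
x, y = echelle_coordinates(modes, dnu)
print(x)  # all (nearly) identical folded frequencies
```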
Fig. 1. Kepler light curve of the eclipsing binary KIC 9246715 with out-of-eclipse points flattened. Top: Detrended SAP flux over 17
Fig. 2. Kepler light curve of the eclipsing binary KIC 9246715 with eclipses removed, but retaining out-of-eclipse variability. The times of eclipses are indicated with dotted lines.
Fig. 3. Radial velocities extracted for 23 ARCES and TRES observations of KIC 9246715 with the broadening function (BF) technique. Each panel represents one spectral observation, ordered chronologically, for which the BF convolution of the target star with a template PHOENIX model spectrum is shown in black. To identify the location of each BF in radial velocity space, we fit a pair of Gaussians, which are plotted in red. The date of observation, orbital phase, and instrument used are printed in the upper corners of each panel. Barycentric corrections have not yet been applied to these velocities.
Fig. 4. Radial velocity curves for both stars in KIC 9246715. The top panel shows the velocities as a function of time, with a light dotted line to guide the eye. The bottom panel shows the folded radial velocity curve over one orbit. Symbol shape indicates which spectrograph took each observation.
Fig. 5. Disentangled spectra from FDBinary for the two stars in KIC 9246715. The y-axis is offset by an arbitrary amount for clarity. For comparison, a typical observation from the ARCES spectrograph taken close to primary eclipse (φ = 0.982) on 2013-09-02 is in black.
Fig. 6. ELC model for all eclipses of KIC 9246715 taken together. The top two panels show the folded radial velocities, while the middle two panels show the folded light curve. A single full orbit is shown. The bottom four panels are a zoom of each eclipse. Residuals are indicated by a ∆ symbol. Red and yellow points are observations and the black line is the all-eclipse ELC model fit. The primary and secondary eclipses are the same configurations as illustrated in
Fig. 7. Power density spectrum of KIC 11725564 (gray), a seismic "twin" of KIC 9246715 (red). Both power spectra are smoothed with a boxcar of 1/50 of the large separation. The modes in KIC 11725564, a secondary red clump star, have ∆ν and ν max which very nearly match KIC 9246715. They are less noisy and have larger amplitudes than the modes in KIC 9246715, making this star a useful asteroseismic comparison.
Fig. 8. Échelle diagram of KIC 9246715's power density spectrum. Darker regions correspond to larger peaks in power density.
Fig. 9. Top of each panel: observed FDBinary-extracted spectrum of Star 1 (red) together with a stellar template (dotted black line). Bottom of each panel: difference between the observed and model spectra. Vertical lines show the position of each absorption line. Broadened magnetic-sensitive lines would indicate Zeeman splitting, but this is not observed. Net emission in the Hα and Ca II lines is a characteristic signature of chromospheric magnetic activity, but this is not observed either. Instead, the Hα line is deeper and broader than the model.

Fig. 10. The same as in Figure 9, but for Star 2 (yellow). No signatures of Zeeman broadening or chromospheric emission are present. The Hα absorption is slightly deeper and broader than expected, but not as much as that of Star 1.
Fig. 11. Ages for a suite of MESA stellar evolution models for stars of different masses. Color indicates the evolutionary state of a star as it moves from the Main Sequence (MS) → Red Giant Branch (RGB) → Secondary Red Clump/Horizontal Branch (HB) → Asymptotic Giant Branch and beyond (AGB+). Lines of constant radius equal to R 1 and R 2 that fall within the one-sigma errors in mass are shown (gray, R 1 ± σ R1 corresponding to M 1 ± σ M1; white, R 2 ± σ R2 corresponding to M 2 ± σ M2). All models assume a mixing-length parameter of α = 2.5. It is possible for both stars in KIC 9246715 to be the same age on the HB, but not on the RGB.
Facilities: Kepler, APO:3.5m (ARCES), FLWO:1.5m (TRES), Sloan (APOGEE)

APPENDIX

A. OSCILLATION MODES FIT WITH DIAMONDS
Fig. A1. Power spectral density (PSD) of KIC 9246715 as a function of frequency, in the form of an échelle diagram (compare to Figure 8). The PSD has been whitened, or divided by the background fitting, which casts the y-axis in terms of sigma. Solid blue lines indicate the universal pattern for an oscillator with ∆ν = 8.31 µHz (main oscillator), while red lines indicate the same for ∆ν = 8.60 µHz (marginal detection). Dark green triangles correspond to the location of fit peaks from Diamonds (Table A1). The width of each triangle's base is the mode linewidth from the fit, and taller triangles represent higher detection confidence.
TABLE 1
Radial velocities for KIC 9246715 extracted from spectra with the broadening function technique.

UTC Date     Midpoint^a     Phase    v 1 (km s^−1)    v 2 (km s^−1)    Inst^b
2012-03-01   5988.047280    0.773      20.72(14)       −29.88(14)       T
2012-03-11   5998.009344    0.831      34.91(14)       −44.26(14)       T
2012-04-02   6020.026793    0.960      20.25(15)       −29.77(15)       T
2012-05-08   6055.977358    0.170     −22.49(14)        13.69(14)       T
2012-05-26   6073.937068    0.275     −26.35(14)        17.53(14)       T
2012-06-02   6080.976302    0.316     −26.37(14)        17.64(14)       T
2012-06-12   6090.904683    0.374     −25.55(15)        16.67(15)       A
2012-06-27   6105.752943    0.460     −22.83(15)        12.51(15)       A
2012-06-30   6108.894850    0.479     −21.01(14)        12.20(14)       T
2012-07-24   6132.758456    0.618      −8.72(31)        −0.55(32)       T
2012-08-26   6165.786902    0.811      29.96(15)       −39.77(15)       A
2012-08-26   6165.947831    0.812      28.86(15)       −41.26(15)       A
2012-08-27   6166.889910    0.817      33.01(15)       −39.84(15)       A
2012-09-04   6174.917425    0.864      40.45(15)       −48.07(15)       A
2012-09-05   6175.777945    0.869      39.85(14)       −49.22(14)       T
2012-09-30   6200.689766    0.015       2.35(18)       −11.53(20)       T
2012-10-24   6224.736100    0.155     −21.22(14)        12.80(14)       T
2012-11-21   6252.572982    0.318     −26.39(14)        17.67(14)       T
2013-04-02   6384.991673    0.091     −14.11(15)         5.13(14)       T
2013-04-20   6402.975545    0.196     −23.98(15)        15.28(15)       A
2013-06-13   6456.959033    0.511     −17.91(14)        11.00(15)       A
2013-09-02   6537.599166    0.982      12.23(15)       −22.55(15)       A
2013-09-09   6544.591214    0.022      −0.18(25)        −9.94(26)       A
2014-04-23   6770.897695    0.344     −25.70(15)        17.52(15)       E
2014-05-17   6794.863326    0.484     −20.44(15)        11.92(15)       E

a Exposure midpoint timestamp (BJD − 2450000)
b T = TRES, A = ARCES, E = APOGEE
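The Phase column can be reproduced from the ELC ephemeris in Table 2 (P orb = 171.27688 d and T conj = 337.51644 in BJD − 2454833), keeping in mind that the midpoints above are quoted as BJD − 2450000; a minimal sketch:

```python
P_ORB = 171.27688   # orbital period, days (Table 2)
T_CONJ = 337.51644  # time of conjunction, BJD - 2454833 (Table 2)

def orbital_phase(t_mid_bjd_2450000):
    """Orbital phase of an exposure midpoint quoted as BJD - 2450000."""
    t = t_mid_bjd_2450000 - 4833.0  # shift to the BJD - 2454833 zero point
    return ((t - T_CONJ) / P_ORB) % 1.0

print(f"{orbital_phase(5988.047280):.3f}")  # first row: expected 0.773
```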
TABLE 2
Physical parameters of KIC 9246715 from ELC modeling.

Parameter         All-eclipse model            LC segment RMS       Comment
P orb [d]         171.27688 ± 0.00001          171.276 ± 0.001
T conj [d]        337.51644 ± 0.00005          337.519 ± 0.002      0 d ≡ 2454833 BJD
i [deg]           87.051 +0.009 −0.003         87.08 ± 0.03
e                 0.3559 +0.0002 −0.0003       0.355 ± 0.001
ω [deg]           18.4 +0.1 −0.2               17.7 ± 0.7
e cos ω           0.33773 +0.00005 −0.00003    0.3379 ± 0.0001
e sin ω           0.1123 +0.0007 −0.0012       0.108 ± 0.004
T 2/T 1           1.001 +0.001 −0.002          0.993 ± 0.008
a [R⊙]            211.3 +0.2 −0.3              211.0 ± 0.3
contam            0.002 +0.004 −0.001          0.02 ± 0.01          Kepler contamination
γ vel [km s^−1]   −4.4779 ± 0.002              −4.4797 ± 0.0007     systemic velocity^a

Star 1
M [M⊙]            2.171 +0.006 −0.008          2.162 ± 0.008
R [R⊙]            8.37 +0.03 −0.07             8.27 ± 0.09
R/a               0.0396 +0.0001 −0.0003       0.0392 ± 0.0004
T [K]             4930 +140 −230               · · ·
K [km s^−1]       33.19 +0.04 −0.05            33.13 ± 0.06
log g [cgs]       2.929 +0.007 −0.003          2.938 ± 0.008
q 1               0.66 +0.02 −0.04             0.72 ± 0.02          triangular limb darkening^b
q 2               0.25 +0.02 −0.01             0.31 ± 0.02          triangular limb darkening^b

Star 2
M [M⊙]            2.149 +0.006 −0.008          2.140 ± 0.008
R [R⊙]            8.30 +0.04 −0.03             8.29 ± 0.01
R/a               0.0393 ± 0.0001              0.03928 ± 0.00002
T [K]             4930 +140 −230               · · ·
K [km s^−1]       33.53 +0.04 −0.05            33.47 ± 0.06
log g [cgs]       2.932 +0.003 −0.004          2.9315 ± 0.0005
q 1               0.55 +0.03 −0.04
TABLE 3
Physical parameter comparisons for KIC 9246715 with different modeling techniques.

Model                             Mass (M⊙)             Radius (R⊙)        log g (cgs)           ρ̄ (ρ⊙ × 10^−3)    T eff (K)
ELC (Light Curve + RV), Star 1    2.171 +0.006 −0.008   8.37 +0.03 −0.07   2.929 +0.007 −0.003   3.70 +0.04 −0.09   4930 +140 −230
ELC (Light Curve + RV), Star 2    2.149 +0.006 −0.008   8.30 +0.04 −0.03   2.932 +0.003 −0.004   3.76 +0.06 −0.04   4930 +140 −230
TABLE A1
Oscillation modes in KIC 9246715 fit with Diamonds.
We later confirm that the RV results are indistinguishable from those measured with a more accurate BF model template (T eff = 5000 K, log g = 3.0; see Table 3).
Units of BJD-2454833
Other scaling relation applications, such as Chaplin et al. (2011) and Kallinger et al. (2010), assume the observed ∆ν is equal to the asymptotic ∆ν. Mosser et al. (2013) uses a correction factor to account for the fact that oscillating red giants are not in the asymptotic regime, which we apply here.
The quoted uncertainty here is an "internal" error bar which assumes an underlying distribution of modes that corresponds to the red giant universal pattern.
Frequency (µHz)    (ℓ, m)    Amplitude or Height^a (ppm) or (ppm^2 µHz^−1)    Linewidth    Detection Probability^b
…                  (3, 0)    3.1 ± 0.1                                        0.27 ± 0.02  0.68

a An amplitude is measured when the peak is a resolved Lorentzian, while height is measured instead when the peak is an unresolved Sinc^2 function. Linewidth is not defined in the latter case.
b Values of 0.99 and above are ensured to be significant.
REFERENCES

Alam, S., Albareti, F. D., Allende Prieto, C., et al. 2015, ApJS, 219, 12
Allard, F., Guillot, T., Ludwig, H.-G., et al. 2003, Brown Dwarfs, IAU Symposium, 211, 325
Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481
Avni, Y., & Bahcall, J. N. 1975, ApJ, 197, 675
Beck, P. G., Hambleton, K., Vos, J., et al. 2015, The Space Photometry Revolution - CoRoT Symposium 3, 101, 06004
Beck, P. G., Montalbán, J., Kallinger, T., et al. 2012, Nature, 481, 55
Beck, P. G., Hambleton, K., Vos, J., et al. 2014, A&A, 564, A36
Bedding, T. R., Mosser, B., Huber, D., et al. 2011, Nature, 471, 608
Belkacem, K., Goupil, M. J., Dupret, M. A., et al. 2011, A&A, 530, A142
Borucki, W. J., Koch, D., Basri, G., et al. 2010, Science, 327, 977
Buchhave, L. A., Bakos, G. Á., Hartman, J. D., et al. 2010, ApJ, 720, 1118
Carney, B. W., Gray, D. F., Yong, D., et al. 2008, AJ, 135, 892
Chaplin, W. J., & Miglio, A. 2013, ARA&A, 51, 353
Chaplin, W. J., Kjeldsen, H., Christensen-Dalsgaard, J., et al. 2011, Science, 332, 213
Christensen-Dalsgaard, J. 2012, AN, 333, 914
Claret, A., Hauschildt, P. H., & Witte, S. 2013, A&A, 552, A16
Corsaro, E., & De Ridder, J. 2014, A&A, 571, A71
Corsaro, E., De Ridder, J., & Garcia, R. A. 2015, A&A, 579, A83
Corsaro, E., Fröhlich, H. E., Bonanno, A., et al. 2013, MNRAS, 430, 2313
Derekas, A., Kiss, L. L., Borkovits, T., et al. 2011, Science, 332, 216
Frandsen, S., Lehmann, H., Hekker, S., et al. 2013, A&A, 556, A138
Fröhlich, H. E., Frasca, A., Catanzaro, G., et al. 2012, A&A, 543, A146
Fuller, J., Cantiello, M., Stello, D., García, R. A., & Bildsten, L. 2015, Science, 350, 423
Gaulme, P., Jackiewicz, J., Appourchaux, T., & Mosser, B. 2014, ApJ, 785, 5
Gaulme, P., McKeever, J., Rawls, M. L., et al. 2013, ApJ, 767, 82
Goupil, M. J., Mosser, B., Marques, J. P., et al. 2013, A&A, 549, A75
Gray, D. F. 1978, SoPh, 59, 193
Gullikson, K., Dodson-Robinson, S., & Kraus, A. 2014, AJ, 148, 53
Harvey, J. W. 1973, SoPh, 28, 9
Hekker, S., Debosscher, J., Huber, D., et al. 2010, ApJL, 713, L187
Herwig, F. 2000, A&A, 360, 952
Holtzman, J. A., Shetrone, M., Johnson, J. A., et al. 2015, AJ, 150, 148
Huber, D. 2014, arXiv:1404.7501
Huber, D., Bedding, T. R., Stello, D., et al. 2010, ApJ, 723, 1607
Huber, D., Bedding, T. R., Stello, D., et al. 2011, ApJ, 743, 143
Huber, D., Ireland, M. J., Bedding, T. R., et al. 2012, ApJ, 760, 32
Huber, D., Silva Aguirre, V., Matthews, J. M., et al. 2014, ApJS, 211, 2
Ilijic, S., Hensberge, H., Pavlovski, K., & Freyhammer, L. M. 2004, in ASP Conference Series, Vol. 318, Spectroscopically and Spatially Resolving the Components of the Close Binary Stars, ed. R. W. Hilditch, H. Hensberge, & K. Pavlovski, 111-113
Kallinger, T., Weiss, W. W., Barban, C., et al. 2010, A&A, 509, A77
Kipping, D. M. 2013, MNRAS, 435, 2152
Kjeldsen, H., & Bedding, T. R. 1995, A&A, 293, 87
Kopal, Z. 1969, Ap&SS, 5, 360
Magrini, L., Randich, S., Friel, E., et al. 2013, A&A, 558, A38
Mandel, K., & Agol, E. 2002, ApJL, 580, L171
Miglio, A., Chaplin, W. J., Farmer, R., et al. 2014, ApJL, 784, L3
Miglio, A., Brogaard, K., Stello, D., et al. 2012, MNRAS, 419, 2077
Miglio, A., Chiappini, C., Morel, T., et al. 2013, 40th Liège International Astrophysical Colloquium. Ageing Low Mass Stars: From Red Giants to White Dwarfs, 43, 03004
Mosser, B., & Appourchaux, T. 2009, A&A, 508, 877
Mosser, B., Vrard, M., Belkacem, K., Deheuvels, S., & Goupil, M. J. 2015, arXiv:1509.06193
Mosser, B., Belkacem, K., Goupil, M. J., et al. 2011, A&A, 525, L9
Mosser, B., Goupil, M. J., Belkacem, K., et al. 2012, A&A, 540, A143
Mosser, B., Michel, E., Belkacem, K., et al. 2013, A&A, 550, A126
Mosser, B., Benomar, O., Belkacem, K., et al. 2014, A&A, 572, L5
Ogilvie, G. I. 2014, ARA&A, 52, 171
Orosz, J. A., & Hauschildt, P. H. 2000, A&A, 364, 265
Paxton, B., Bildsten, L., Dotter, A., et al. 2011, ApJS, 192, 3
Paxton, B., Cantiello, M., Arras, P., et al. 2013, ApJS, 208, 4
Paxton, B., Marchant, P., Schwab, J., et al. 2015, ApJS, 220, 15
Prša, A., Batalha, N., Slawson, R. W., et al. 2011, AJ, 141, 83
Robinson, R. D., Cram, L. E., & Giampapa, M. S. 1990, ApJS, 74, 891
Robitaille, T. P., Tollerud, E. J., Greenfield, P., et al. 2013, A&A, 558, A33
Rucinski, S. M. 2002, AJ, 124, 1746
Rucinski, S. M. 2015, AJ, 149, 49
Silva Aguirre, V., Casagrande, L., Basu, S., et al. 2012, ApJ, 757, 99
Sistla, G., & Harvey, J. W. 1970, SoPh, 12, 66
Slawson, R. W., Prša, A., Welsh, W. F., et al. 2011, AJ, 142, 160
Sneden, C. 1973, ApJ, 184, 839
Sousa, S. G. 2014, in Determination of Atmospheric Parameters of B-, A-, F- and G-Type Stars (Springer Science & Business Media), 297-310
Sousa, S. G., Santos, N. C., Israelian, G., Mayor, M., & Monteiro, M. J. P. F. G. 2007, A&A, 469, 783
Stello, D., Chaplin, W. J., Bruntt, H., et al. 2009, ApJ, 700, 1589
Still, M., & Barclay, T. 2012, PyKE: Reduction and analysis of Kepler Simple Aperture Photometry data, ascl:1208.004
. M Tassoul, ApJS. 43469Tassoul, M. 1980, ApJS, 43, 469
. J Tayar, T Ceillier, D A Garcia-Hernández, ApJ. 80782Tayar, J., Ceillier, T., Garcia-Hernández, D. A., et al. 2015, ApJ, 807, 82
. C J F Ter Braak, Statistics and Computing. 16239Ter Braak, C. J. F. 2006, Statistics and Computing, 16, 239
. M Tsantaki, S G Sousa, V Z Adibekyan, A&A. 555150Tsantaki, M., Sousa, S. G., Adibekyan, V. Z., et al. 2013, A&A, 555, A150
. F Verbunt, E S Phinney, A&A. 296709Verbunt, F., & Phinney, E. S. 1995, A&A, 296, 709
. T R White, T R Bedding, D Stello, ApJ. 743161White, T. R., Bedding, T. R., Stello, D., et al. 2011, ApJ, 743, 161
. J P Zahn, A&A. 57383Zahn, J. P. 1977, A&A, 57, 383
. S Zucker, T Mazeh, ApJ. 420806Zucker, S., & Mazeh, T. 1994, ApJ, 420, 806
|
[
"https://github.com/mrawls/BF-rvplotter"
] |
[
"Forecasting SQL Query Cost at Twitter",
"Forecasting SQL Query Cost at Twitter"
] |
[
"Chunxu Tang [email protected] \nTwitter, Inc. San Francisco\nUSA\n",
"Beinan Wang [email protected] \nTwitter, Inc. San Francisco\nUSA\n",
"Zhenxiao Luo [email protected] \nTwitter, Inc. San Francisco\nUSA\n",
"Huijun Wu [email protected] \nTwitter, Inc. San Francisco\nUSA\n",
"Shajan Dasan [email protected] \nTwitter, Inc. San Francisco\nUSA\n",
"Maosong Fu \nTwitter, Inc. San Francisco\nUSA\n",
"Yao Li [email protected] \nTwitter, Inc. San Francisco\nUSA\n",
"Mainak Ghosh [email protected] \nTwitter, Inc. San Francisco\nUSA\n",
"Ruchin Kabra [email protected] \nTwitter, Inc. San Francisco\nUSA\n",
"Nikhil Kantibhai Navadiya [email protected] \nTwitter, Inc. San Francisco\nUSA\n",
"Da Cheng \nTwitter, Inc. San Francisco\nUSA\n",
"Fred Dai [email protected] \nTwitter, Inc. San Francisco\nUSA\n",
"Vrushali Channapattan [email protected] \nTwitter, Inc. San Francisco\nUSA\n",
"Prachi Mishra [email protected] \nTwitter, Inc. San Francisco\nUSA\n"
] |
[
"Twitter, Inc. San Francisco\nUSA",
"Twitter, Inc. San Francisco\nUSA",
"Twitter, Inc. San Francisco\nUSA",
"Twitter, Inc. San Francisco\nUSA",
"Twitter, Inc. San Francisco\nUSA",
"Twitter, Inc. San Francisco\nUSA",
"Twitter, Inc. San Francisco\nUSA",
"Twitter, Inc. San Francisco\nUSA",
"Twitter, Inc. San Francisco\nUSA",
"Twitter, Inc. San Francisco\nUSA",
"Twitter, Inc. San Francisco\nUSA",
"Twitter, Inc. San Francisco\nUSA",
"Twitter, Inc. San Francisco\nUSA",
"Twitter, Inc. San Francisco\nUSA"
] |
[] |
With the advent of the Big Data era, it is usually computationally expensive to calculate the resource usages of a SQL query with traditional DBMS approaches. Can we estimate the cost of each query more efficiently without any computation in a SQL engine kernel? Can machine learning techniques help to estimate SQL query resource utilization? The answers are yes. We propose a SQL query cost predictor service, which employs machine learning techniques to train models from historical query request logs and rapidly forecasts the CPU and memory resource usages of online queries without any computation in a SQL engine. At Twitter, infrastructure engineers are maintaining a large-scale SQL federation system across on-premises and cloud data centers for serving ad-hoc queries. The proposed service can help to improve query scheduling by relieving the issue of imbalanced online analytical processing (OLAP) workloads in the SQL engine clusters. It can also assist in enabling preemptive scaling. Additionally, the proposed approach uses plain SQL statements for the model training and online prediction, indicating it is both hardware and software-agnostic. The method can be generalized to broader SQL systems and heterogeneous environments. The models can achieve 97.9% accuracy for CPU usage prediction and 97% accuracy for memory usage prediction.
|
10.1109/ic2e52221.2021.00030
|
[
"https://arxiv.org/pdf/2204.05529v1.pdf"
] | 244,497,713 |
2204.05529
|
7b72f39fd5fcf2f12db5744c4f9eec8817119050
|
Forecasting SQL Query Cost at Twitter
Chunxu Tang [email protected]
Twitter, Inc. San Francisco
USA
Beinan Wang [email protected]
Twitter, Inc. San Francisco
USA
Zhenxiao Luo [email protected]
Twitter, Inc. San Francisco
USA
Huijun Wu [email protected]
Twitter, Inc. San Francisco
USA
Shajan Dasan [email protected]
Twitter, Inc. San Francisco
USA
Maosong Fu
Twitter, Inc. San Francisco
USA
Yao Li [email protected]
Twitter, Inc. San Francisco
USA
Mainak Ghosh [email protected]
Twitter, Inc. San Francisco
USA
Ruchin Kabra [email protected]
Twitter, Inc. San Francisco
USA
Nikhil Kantibhai Navadiya [email protected]
Twitter, Inc. San Francisco
USA
Da Cheng
Twitter, Inc. San Francisco
USA
Fred Dai [email protected]
Twitter, Inc. San Francisco
USA
Vrushali Channapattan [email protected]
Twitter, Inc. San Francisco
USA
Prachi Mishra [email protected]
Twitter, Inc. San Francisco
USA
Forecasting SQL Query Cost at Twitter
Index Terms-sql, database, machine learning
With the advent of the Big Data era, it is usually computationally expensive to calculate the resource usages of a SQL query with traditional DBMS approaches. Can we estimate the cost of each query more efficiently without any computation in a SQL engine kernel? Can machine learning techniques help to estimate SQL query resource utilization? The answers are yes. We propose a SQL query cost predictor service, which employs machine learning techniques to train models from historical query request logs and rapidly forecasts the CPU and memory resource usages of online queries without any computation in a SQL engine. At Twitter, infrastructure engineers are maintaining a large-scale SQL federation system across on-premises and cloud data centers for serving ad-hoc queries. The proposed service can help to improve query scheduling by relieving the issue of imbalanced online analytical processing (OLAP) workloads in the SQL engine clusters. It can also assist in enabling preemptive scaling. Additionally, the proposed approach uses plain SQL statements for the model training and online prediction, indicating it is both hardware and software-agnostic. The method can be generalized to broader SQL systems and heterogeneous environments. The models can achieve 97.9% accuracy for CPU usage prediction and 97% accuracy for memory usage prediction.
I. INTRODUCTION
In the data platform at Twitter, there is a large effort in pursuing high scalability and availability to fulfill the increasing need for data analytics over a sea of data. Twitter runs multiple large Hadoop clusters with over 300PB of data [1], which are among the biggest in the world. To overcome the performance issues in developing and maintaining SQL systems with increasing volumes of data, we designed a large-scale SQL federation system across on-premises and cloud Hadoop clusters, paving the way for democratizing data analytics and improving productivity at Twitter. The SQL federation system processes around 10PB of data daily.
During the daily operation of large-scale SQL systems, we found that the inability to forecast SQL query resource usage is problematic. First, data system customers would like an estimate of the resource consumption of their queries. We received complaints that users wasted a considerable amount of time waiting for ad-hoc queries to complete, only to cancel them in frustration. We found a correlation between the CPU time and the wall clock time of ad-hoc queries in the Twitter OLAP workload, so once a query's resource usage is predicted, users know approximately how long they will wait. Second, because of the convoy effect, query scheduling requires an estimate of the immediate workload in the SQL system. Without proper query scheduling, a cluster can be overwhelmed by resource-consuming queries, which can occupy most of a cluster's resources in as little as 10 seconds. Third, elastic scaling needs query resource usage forecasting: due to the rapid impact of resource-consuming queries, a SQL system has to scale ahead of actually processing them.
A query cost prediction system can provide the following benefits for large-scale SQL systems:
• Rapid estimates of the CPU and memory usage of a query for customers, who gain an intuitive idea of the resources a query may consume and the anticipated billing associated with it.
• Improved query scheduling. With the forecasted resource usage of a query, we can distinguish resource-consuming queries from lightweight ones, which helps balance the workloads across clusters before any query executes in a SQL engine kernel.
• Enabling preemptive scaling. Because resource-intensive queries can occupy most of a cluster's resources in a short time, resource usage prediction can trigger preemptive scaling when the projected immediate workload exceeds a preset configurable threshold.
As the query cost needs to be predicted before the query is executed in a SQL engine, we cannot leverage traditional DBMS approaches such as the cost model [2]-[4]. Instead, we propose to use machine learning techniques to train two models from historical SQL query request logs, one for CPU time and one for peak memory prediction. We categorize queries into ranges according to their CPU time and peak memory. Unlike some prior work [5]-[7] that generates features from query plans, we extract features from plain SQL statements. Our evaluation shows that an XGBoost classifier with TF-IDF vectorization achieves high overall accuracy as well as high precision and recall for each category.
This paper makes the following contributions:
1) Motivated by challenges that customers and engineers face in a large-scale SQL system, we introduce a SQL query cost prediction system for resource usage estimation that is independent of SQL engines.
2) We harness supervised machine learning techniques to forecast both the CPU and memory resource usage of individual SQL queries. As the proposed approach has no specific hardware or software requirements, it can be generalized to broader SQL systems and heterogeneous environments. Moreover, from our observations, the P95 (95th percentile) of query planning latency in the SQL federation system is 9s, and the latency can exceed 30s. By contrast, the proposed method requires only constant time for prediction (around 200ms) for all types of SQL queries.
3) We deployed the trained models in the online production environment for resource usage prediction. We observed that model accuracy may drop to around 92% within four weeks and verified the concept drift. Moreover, we implemented a quantitative analysis of the monitoring results.
The remainder of this paper is organized as follows. We discuss related work in Section II, describe the design of the SQL federation system in Section III, demonstrate the architecture of the query cost prediction system in Section IV, explain the data preprocessing in Section V, present training and evaluation in Section VI, and discuss model serving and monitoring in Section VII. Section VIII concludes the paper.
II. RELATED WORK
With the increasing volume of data, many distributed SQL engines targeting Big Data analytics emerged in the recent decade. For example, Apache Hive [8] is a data warehouse built on top of Hadoop, providing a SQL-like interface for data querying. Spark SQL [9] is a module integrated with Apache Spark, bringing relational processing to Spark data structures. Presto [10] is a distributed SQL engine targeting "SQL on everything"; it can query data from multiple sources, which is a major advantage. At the same time, there are commercial products such as Google BigQuery [11] (a public implementation of Dremel [12], [13]), Amazon Redshift [14], and Snowflake [15], which act as fully managed cloud data warehouses providing an out-of-the-box data analytics experience.
Some prior work pioneers the study of resource prediction and scaling in a database system. For example, Narayanan et al. [16] proposed a Resource Advisor to answer "whatif" questions about resource upgrades to predict the buffer pool size on online transaction processing (OLTP) workloads. Rogers et al. [17] described a "white-box" and a "black-box" approach to build a resource provisioning framework on top of an Infrastructure-As-A-Service cloud. Similarly, Das et al. [18] presented an automated demand-driven resource scaling approach by deriving a set of robust signals from the database engine and combining them for better scaling.
Recently, some researchers began to utilize machine learning techniques to solve resource prediction problems. Some work concentrated on optimized planning through identifying and forecasting workload patterns. For example, Mozafari et al. [19] developed the DBSeer system, employing statistical models for resource and performance prediction. Ma et al. [20] created a robust forecasting framework to predict the expected arrival rate of queries based on historical data. They studied Linear Regression, Recurrent Neural Network, and Kernel Regression. Higginson et al. [21] utilized time series analysis and supervised machine learning to identify traits for database workload capacity planning.
Some other work focused on predicting metrics of a query, such as query performance prediction, usually with the help of query plans. For example, Gupta et al. [5] introduced a classification tree structure obtained from historical data of queries with the query plan to predict the execution time of a query. Ganapathi et al. [6] built machine learning models with query plan feature vectors to predict metrics such as elapsed time, disk I/Os, and message bytes. Akdere et al. [7] evaluated predictive modeling techniques, ranging from plan-level models to operator-level models. Marcus et al. [22] introduced a plan-structured neural network to predict query performance. Some work, such as [23], [24], and [25], also extended the prediction to concurrent query performance prediction.
The existing plan-based approaches rely on query planning in SQL engines, which limits them when query resource usage must be predicted for query scheduling and preemptive scaling without involving a SQL engine. To resolve this problem, we propose a machine learning approach that learns from the features of plain SQL statements to forecast both CPU time and peak memory, without depending on any SQL engine or query plan, thereby fulfilling the requirements of query scheduling and preemptive scaling.

III. SQL FEDERATION SYSTEM

In the SQL federation system shown in Figure 1, a query processing flow includes the following steps:
1) A client, such as a notebook or BI tool, sends a SQL query to the router.
2) The router obtains the predicted resource usage of the query from the query cost predictor.
3) Using both the projected resource usage and the cluster statistics, the router determines a SQL engine cluster to route the request to.
4) The router sends the cluster's URL endpoint, usually the endpoint of the coordinator node in that cluster, back to the client.
5) The client sends the query to the specific cluster.
6) The cluster plans the query, fetches metadata from the metadata storage, and scans the necessary datasets from the data storage.
7) Data results are aggregated and returned to the client.
The SQL federation system and peripherals contain the following components:
Notebook/Business Intelligence (BI) tools. At Twitter, data analysts and data scientists are employing various notebook tools (e.g. Jupyter notebook and Apache Zeppelin) and BI tools (e.g. Tableau and Looker) for data analytics and visualization to gain insights from datasets. These tools send queries to the SQL federation system to get the corresponding data results.
SQL engine clusters. We utilize Presto as the core of each SQL engine cluster. Meanwhile, Twitter engineering is embarking on an effort to migrate ad-hoc clusters to the Google Cloud Platform (GCP), aka "Partly Cloudy" [26]. Some of the SQL engine clusters have been migrated to GCP. For now, we are maintaining a hybrid-cloud SQL federation system with both on-premises and cloud SQL engine clusters.
Data storage. At Twitter, the Hadoop Distributed File System (HDFS) [27] is well-acknowledged and widely used. Each SQL engine worker interacts with the HDFS for querying data, which is usually stored in an Apache Parquet [28] format. With the Partly Cloudy strategy, the data storage has also been extended to the Google Cloud Storage (GCS).
Metadata storage [29]. The metadata storage is used to provide metadata information, for example, how data files are mapped to schemas and tables.
Router. The router sits between the clients and SQL engine clusters, exposes a unified interface to the clients, hides cluster configuration details from the clients, and routes requests to specific clusters. To more efficiently utilize the resources on these clusters, the router also aids in balancing the workloads, with the help of the query cost predictor. Besides collecting real-time operational statistics such as the P90 (90th percentile) of query latency from SQL engine clusters, the router leverages the prediction results from the query cost predictor to improve query scheduling and enable preemptive scaling.
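As a toy illustration of such a routing policy, the sketch below picks the least-loaded cluster and charges the query's predicted cost bucket against it. The bucket names, weights, and function are our own illustrative assumptions, not Twitter's actual router implementation.

```python
# Illustrative only: bucket names, weights, and the load model are
# assumptions, not the production router's implementation.
CPU_BUCKET_WEIGHT = {"low": 1, "medium": 10, "high": 100}

def pick_cluster(predicted_bucket, cluster_load):
    """Pick the cluster with the lowest projected load, then charge the
    query's predicted weight against it so later decisions see the
    updated load."""
    target = min(cluster_load, key=cluster_load.get)
    cluster_load[target] += CPU_BUCKET_WEIGHT[predicted_bucket]
    return target
```

Charging the predicted weight back into the load table is what prevents a burst of heavy queries from all landing on the same cluster before real utilization metrics catch up.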
Query cost prediction serving cluster. This is the serving cluster of the query cost prediction system. The predictor's job is to rapidly estimate the CPU and memory resources anticipated for a SQL query. Section IV discusses the architectural design of this system.

IV. QUERY COST PREDICTION SYSTEM

Figure 2 illustrates the architectural design of the query cost prediction system:
Request logs. The raw dataset is collected from query request logs stored in the HDFS. For each SQL query sent by a notebook/BI tool and processed in the SQL federation system, the query engine produces a request log. As Table I shows, each SQL request log sample contains query-related information including the unique identifier, user name, environment, query statement, etc. From our experiments, the logs from the most recent three months (90 days) are a good basis for forecasting the cost of online queries. Such a typical dataset consists of around 1.2 million records and more than 20 columns.
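A minimal sketch of reducing one request log record to the fields the trainer needs; the JSON key names used here are hypothetical placeholders, since the real logs carry more than 20 columns under their own schema.

```python
import json

def extract_training_row(log_line):
    # Key names ("query", "cpu_time_ms", "peak_memory_bytes") are
    # hypothetical placeholders for the actual log schema.
    record = json.loads(log_line)
    return {
        "statement": record["query"],
        "cpu_time_ms": int(record["cpu_time_ms"]),
        "peak_memory_bytes": int(record["peak_memory_bytes"]),
    }
```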
In Twitter's data platform, most SQL statements in a typical OLAP workload are SELECT statements, used to query datasets stored in various data sources such as HDFS and GCS. Figure 3 illustrates the distribution of SQL statements by type. Besides writing SELECT statements, customers also use CREATE statements to create temporary tables or materialized views and OTHER statements mainly for metadata querying. DELETE statements are rarely used.
Training cluster. The training cluster takes charge of the computation with machine learning techniques. We train two machine learning models, a CPU model and a memory model, from historical request logs for CPU time and peak memory prediction. First, we perform data cleaning and discretization on the raw dataset, converting the continuous CPU time and peak memory into buckets. Then, we apply vectorization techniques to extract features from the raw SQL statements and employ classification algorithms to obtain the models.

TABLE I: Sample request log records (identifier, user name, environment, query statement, resource usage, and timestamp columns):
Uh0u6ScxuJ  alice    cluster a  sql1  10143681  1204117281  2020021013
HSJb3hSEe9  bob      cluster b  sql2  5903987   9038118972  2020021411
y2cysjWzKC  bob      cluster a  sql3  284392    1204117281  2020021719
YqtRmXL8Gy  alice    cluster a  sql4  53        45056       2020091516
oMawUdJuHA  charley  cluster a  sql5  179972    118783230

Model repository. The model repository manages the models, including model storage and versioning. The models are stored in a central repository such as GCS buckets.
Serving cluster. The serving cluster fetches models from the model repository and wraps models into a web predictor service, exposing RESTful APIs for external usages and forecasting the CPU time and peak memory for online SQL queries from the notebook/BI tools (for customers to have a sense of the estimate of resource usages of their queries) and the router (for query scheduling and preemptive scaling).
V. DATA PREPROCESSING
A. Data Cleaning and Discretization
Prior DBMS statistical approaches apply regression techniques such as time series analysis to solve DBMS problems. However, as the distribution of SQL query resource usage follows a power law, it poses challenges to conventional regression approaches such as [30] and [31]. Moreover, we noticed that the quick estimate of resource usage for customers, and for the router's scheduling and scaling, does not require an accurately predicted value. For example, from our study, a customer does not care much whether a query will cost 60 or 61 minutes of CPU time, but rather whether it is a CPU-consuming query or a lightweight one that completes within an acceptable range of wall clock time. For query scheduling or scaling, we likewise do not need an exact value; we only need to know whether a query's resource consumption is low, medium, or high. This judgment leads to the application of data discretization to the raw dataset, transforming the continuous data into discrete data. We also propose that this approach can be generalized to other DBMS problems where an accurately predicted value can be replaced by an approximate range.
Some prior work, such as [6] and [20], leverages clustering to predict DBMS metrics. We tried applying the k-means clustering algorithm used in [20] to the dataset collected from three months of request logs. Figure 4 shows an example of categorizing 10,000 queries by peak memory and CPU time into three clusters. However, we found that this clustering approach did not work well. First, for query scheduling and preemptive scaling we usually plan CPU and memory resources separately rather than jointly. Moreover, the correlation between peak memory and CPU time is only 0.256, indicating that a query consuming a large amount of memory may require only a small amount of CPU time; yet the algorithm tended to group queries with low CPU time together with those with low peak memory.

How, then, should we partition the queries? First, we select 5h and 1TB as the boundaries for high CPU usage and high memory usage respectively, because these are the thresholds already applied in our SQL federation system, obtained from our operational experience in running analytical queries. For CPU time, based on our DevOps experience, we consider queries whose CPU time is less than 30s to be lightweight. This also captures the large proportion of queries in the range [0, 30s), which accounts for more than 70% of queries; by contrast, only 1% of queries fall in the range [30s, 1min). For peak memory, the distribution is more even, so we roughly evenly split the queries whose peak memory is below 1TB, selecting 1MB as the boundary between low and medium-memory-consuming queries. In summary, we categorize CPU time into three ranges, [0, 30s), [30s, 5h), and [5h, ∞), and peak memory into three ranges, [0, 1MB), [1MB, 1TB), and [1TB, ∞). Figure 5 shows the resulting distribution of queries.
As the queries are not evenly distributed, especially across the CPU time ranges, the classification may suffer from an imbalanced-classes issue that we need to account for. More evaluation details are discussed in Sections VI and VII. After the dataset is transformed, we partition it into a training dataset (80%) and a testing dataset (20%).
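The discretization step can be sketched as follows. The thresholds are the ones stated in the text; the function and label names are ours.

```python
# Boundaries from the text: CPU time in [0, 30s), [30s, 5h), [5h, inf);
# peak memory in [0, 1MB), [1MB, 1TB), [1TB, inf).
CPU_BOUNDS_MS = (30 * 1000, 5 * 3600 * 1000)
MEM_BOUNDS_BYTES = (1024**2, 1024**4)

def bucketize(value, bounds):
    low, high = bounds
    if value < low:
        return "low"
    if value < high:
        return "medium"
    return "high"

def label_query(cpu_time_ms, peak_memory_bytes):
    """Map a query's measured usage to its (CPU, memory) class pair."""
    return (bucketize(cpu_time_ms, CPU_BOUNDS_MS),
            bucketize(peak_memory_bytes, MEM_BOUNDS_BYTES))
```

Note that the two labels are assigned independently, reflecting the observation above that CPU time and peak memory are only weakly correlated.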
B. Feature Extraction
For the transformed dataset, with peak memory and CPU time in categories, we apply vectorization techniques from Natural Language Processing (NLP) to generate the essential features. Each SQL statement is mapped to a vector of numbers for subsequent processing, making it easier to run classification algorithms on text-based data. We employ bag-of-words (BoW) models, in which each word is represented by a number, so that a SQL statement can be represented by a sequence of numbers. Word frequencies are a typical representation; term frequency-inverse document frequency (TF-IDF) is another popular vectorization approach. BoW models are known for their high flexibility and can be generalized to a variety of text data domains.
Word embedding is another widely used NLP approach in which words with similar meanings are represented by similar vectors, usually computed from the joint probability of words. In word embedding, each word is mapped to a high-dimensional vector, so that various deep learning algorithms can be applied to NLP tasks. However, word embedding poses challenges in domain-specific contexts [32], meaning it may not fit the context of Twitter SQL statements without a pre-trained domain-specific embedding model. In our work, considering the extra training cost required for a word embedding model, and given that our BoW-based models already offer high prediction accuracy and interpretability, we proceed with the BoW-based models in our machine learning pipeline.
BoW models are well suited to feature engineering for resource usage prediction: they produce features without any computation in a SQL engine or communication with a metadata store. We do not use table-specific statistics for feature engineering, as this type of data requires additional cost to analyze SQL statements and fetch table-related metadata. We also observed that with tree-based machine learning models, whose feature importance is easily interpreted, some SQL-related information, such as accesses to specific tables and limits on time ranges, can be captured and reflected. This implies that machine learning techniques can also help developers gain insights into large-scale SQL systems.
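To make the TF-IDF weighting concrete, here is a toy pure-Python version over whitespace-tokenized SQL. A production pipeline would use a library vectorizer with SQL-aware tokenization; the IDF variant shown (log N/df, no smoothing) is just one common choice.

```python
import math
from collections import Counter

def tfidf_vectors(statements):
    """Toy TF-IDF: normalized term frequency times inverse document
    frequency log(N / df). Whitespace tokenization only."""
    docs = [s.lower().split() for s in statements]
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # count documents, not occurrences
    vocab = sorted(df)
    idf = {t: math.log(n / df[t]) for t in vocab}
    vectors = [[Counter(doc)[t] / len(doc) * idf[t] for t in vocab]
               for doc in docs]
    return vocab, vectors
```

Tokens that appear in every statement (e.g. SELECT, FROM) get an IDF of zero, so the weighting automatically discounts SQL boilerplate and emphasizes the table names and predicates that distinguish queries.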
VI. MODEL TRAINING & EVALUATION
After preparing the features, we trained three types of classifiers on the training dataset: Random Forest (RF), an ensemble learning method; XGBoost, a tree-based gradient boosting approach; and Logistic Regression, which predicts class probabilities with a logistic function. All are well known for high computational scalability and interpretability and have been widely used for classification tasks. 3-fold cross-validation is used to find the optimal hyperparameters. From our experiments, a typical training job on around 1.2 million query logs can be completed in less than 5 hours on a machine with 8 CPU cores and 64GB of memory. We then tested the trained classifiers on the testing dataset. As the classes are imbalanced and resource-consuming queries matter more than lightweight ones, we studied the precision and recall of each class together with the overall accuracy for both the CPU and memory models. For feature extraction, we compared the word count approach and the TF-IDF approach. Table II shows the comparison of these approaches.
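The 3-fold split itself reduces to simple index bookkeeping; the sketch below shows that bookkeeping only, not the library CV utilities a real pipeline would use.

```python
def kfold_indices(n, k=3):
    """Yield (train_indices, val_indices) pairs for k-fold cross-validation
    over n samples. Each sample appears in exactly one validation fold."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, val
        start += size
```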
From the evaluation results, the XGBoost model with the TF-IDF approach outperforms the other CPU models by up to 3% and the other memory models by up to 4% in overall accuracy, achieving 97.9% accuracy for the CPU model and 97.0% for the memory model. The Random Forest and XGBoost models perform similarly, and both outperform the Logistic Regression models. Most TF-IDF-based models have slightly higher accuracy than their word-count counterparts. Accuracy is a popular metric for model performance, but it is not the only one. In particular, given the imbalanced classes in the training dataset, shown in Figures 5a and 5b, a high overall accuracy does not always indicate that a model predicts all classes well: strong performance on a class with a dominant number of samples can conceal poor performance on classes with fewer samples. To address this, we also considered the precision and recall of each class, especially the classes representing CPU or memory-intensive queries. In our work, precision is defined as the proportion of returned results that are relevant; recall (also known as sensitivity) is defined as the proportion of relevant cases that are returned. Details of the precision and recall of the XGBoost classifier with TF-IDF are shown in Tables III and IV. From the tables, XGBoost achieves high precision and recall for all classes in addition to high overall accuracy. In particular, it reaches precision and recall of no less than 0.95 for the resource-consuming ranges [5h, ∞) and [1TB, ∞). Since XGBoost has the best performance, we decided to use it in our online environment. Moreover, Twitter has an internal distribution of the open-source XGBoost JVM package that fits Twitter's technical stack, an additional advantage in making this decision.
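The per-class metrics can be computed as follows, matching the definitions given above (precision: relevant among returned; recall: returned among relevant); the helper name is ours.

```python
from collections import Counter

def per_class_precision_recall(y_true, y_pred):
    """Return {class: (precision, recall)} for each class label."""
    tp = Counter()                       # correctly predicted per class
    pred_counts = Counter(y_pred)        # "returned" per class
    true_counts = Counter(y_true)        # "relevant" per class
    for truth, pred in zip(y_true, y_pred):
        if truth == pred:
            tp[truth] += 1
    return {
        c: (tp[c] / pred_counts[c] if pred_counts[c] else 0.0,
            tp[c] / true_counts[c] if true_counts[c] else 0.0)
        for c in set(y_true) | set(y_pred)
    }
```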
VII. MODEL SERVING & MONITORING
After the models are trained and tested, we encapsulate them in a web application for real-time serving. The service, hosted in the serving cluster, is deployed in Aurora containers on Mesos [33], the stack widely used at Twitter. As each deployment unit is stateless, the application's scalability can be enhanced by increasing the number of deployment replicas. The models are stored in a central repository and fetched by the application when a deployment unit starts. The service exposes two RESTful API endpoints to forecast the CPU time and peak memory of a SQL query. The inference time is around 200ms.
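A minimal sketch of what a prediction endpoint does once the web framework has handed it a request body; the payload shape and function names are assumptions, and the model arguments stand in for the trained XGBoost pipelines.

```python
import json

def predict_endpoint(request_body, cpu_model, mem_model):
    """Parse a JSON request carrying a SQL statement and return the two
    predicted cost buckets. Models are any callables statement -> label."""
    statement = json.loads(request_body)["statement"]
    return json.dumps({
        "cpu_bucket": cpu_model(statement),
        "memory_bucket": mem_model(statement),
    })
```

Because the handler holds no per-request state, replicating it behind a load balancer scales horizontally, which matches the stateless deployment units described above.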
When the models serve online requests, concept drift is observed: unseen features emerge over time, deteriorating model performance. Hence, model monitoring and retraining are required at a given time granularity. From our DevOps experience with the SQL federation system, query counts have high daily variance but are quite stable weekly. For example, we usually observe many more queries on Mondays than on Fridays, and a large number of queries are scheduled to run weekly. To reduce the influence of outliers or peaks on specific workdays, we re-evaluate the online models weekly. Figure 6 illustrates the drift of the models in a real-time online environment. Because resource-consuming queries weigh more than lightweight queries, we also track the precision and recall for the resource-consuming ranges [5h, ∞) and [1TB, ∞). As time goes on, the model metrics gradually decrease. For example, the overall accuracy of the CPU model starts as high as around 98%; after four weeks, it shrank to around 93%, and the recall of the CPU time class [5h, ∞) dropped from 0.96 to 0.85. This is usually caused by unseen features in the latest queries that the trained models have not captured. To mitigate the concept drift, we retrain the models when both the precision and recall for CPU or memory-intensive queries are lower than 0.9. In Figure 6, at week 3, the precision of the class [5h, ∞) is 0.88 and the recall is 0.87; as both are below 0.9, this triggers retraining of the CPU model. To keep the training timestamps of the CPU and memory models consistent, we also retrain and redeploy the memory model. After retraining with the same hyperparameters, the metrics return to a high level, indicating that our models adapt well to new features.
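The retraining policy above reduces to a small predicate: retrain only when both metrics for the resource-intensive class fall below the threshold. The 0.9 threshold comes from the text; the function name is ours.

```python
def should_retrain(precision, recall, threshold=0.9):
    """Trigger retraining when BOTH precision and recall for the
    resource-intensive class fall below the threshold."""
    return precision < threshold and recall < threshold
```

With this rule, the week-3 CPU readings from the text (precision 0.88, recall 0.87) trigger retraining, while a week where only one metric dips does not.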
We also observed that even though the CPU model's accuracy is higher than the memory model's, its precision and recall are lower and fall faster. For example, the precision of both the CPU range [5h, ∞) and the memory range [1TB, ∞) is initially higher than 0.95, but after four weeks the precision of the CPU range has decayed to 0.86 while that of the memory range is still 0.92. This is probably because the CPU range [5h, ∞) is set too high, causing the model to fail to capture the features of CPU-consuming queries. As a lesson learned from this study, we are considering tuning the CPU range for this type of query in the future.
VIII. CONCLUSION
In this paper, we introduced the SQL federation system in the Twitter data platform and proposed a SQL query cost prediction system. Unlike prior work, the proposed system learns from plain SQL statements, builds machine learning models from historical query request logs, and forecasts CPU time and peak memory ranges. The evaluation shows that an XGBoost classifier with TF-IDF vectorization achieves 97.9% accuracy for CPU time prediction and 97.0% accuracy for memory usage prediction. We deployed the models in the production environment and conducted a quantitative analysis of the observed concept drift. This work can help provide customers with an estimate of resource usage, improve query scheduling, and enable preemptive scaling, without depending on SQL engines or query plans.
ACKNOWLEDGMENT
We would like to express our gratitude to everyone who has served on Twitter's Interactive Query team, including former team members Hao Luo and Yaliang Wang. We are also grateful to Daniel Lipkin and Derek Lyon for their strategic vision, direction, and support to the team. Finally, we thank Alyson Pavela, Julian Moore, and the anonymous IC2E reviewers for their insightful suggestions that helped us significantly improve this paper.
Fig. 1: SQL federation system at Twitter.
Fig. 2: The architectural design of the query cost prediction system.
Fig. 3: Distribution of SQL statements in a typical Twitter OLAP workload, based on types.
Fig. 4: Clustering of 10,000 samples of peak memory and CPU time. Some samples with very high CPU time or peak memory are removed from the figure.
Fig. 5: Distribution of queries.

Fig. 6: Changes of model performance over time, including model recall for the [5h, ∞) and [1TB, ∞) classes. RT: Retrained models.
TABLE I: Samples of SQL query request logs. Sensitive information such as user names and concrete SQL statements has been replaced. Some unrelated columns have been removed from the table.

query_id | user | cluster | query* | cpu_time_ms* | peak_memory_bytes* | datehour

*: These are the columns utilized for model training.
TABLE II: The model accuracy (%) of each classifier.

Classifier                        | CPU  | Memory
Random Forest - Word Count        | 96.9 | 95.5
Random Forest - TF-IDF            | 97.2 | 95.4
XGBoost - Word Count              | 97.4 | 96.8
XGBoost - TF-IDF                  | 97.9 | 97.0
Logistic Regression - Word Count  | 94.8 | 92.9
Logistic Regression - TF-IDF      | 95.0 | 92.9
TABLE III: The precision and recall for each class of the CPU time model.

CPU time  | Precision | Recall
[0, 30s)  | 0.98      | 0.99
[30s, 5h) | 0.96      | 0.95
[5h, ∞)   | 0.96      | 0.95
TABLE IV: The precision and recall for each class of the peak memory model.

Peak memory | Precision | Recall
[0, 1MB)    | 0.98      | 0.99
[1MB, 1TB)  | 0.92      | 0.91
[1TB, ∞)    | 0.97      | 0.98
From an analysis of query request logs over a three-month period, 16.2% of uncompleted queries are related to user cancellation.
REFERENCES

[1] P. Agrawal, "A new collaboration with Google Cloud," 2018. [Online]. Available: https://blog.twitter.com/engineering/en_us/topics/infrastructure/2018/a-new-collaboration-with-google-cloud.html
[2] W. Wu, Y. Chi, S. Zhu, J. Tatemura, H. Hacigümüs, and J. F. Naughton, "Predicting query execution time: Are optimizer cost models really unusable?" in 2013 IEEE 29th International Conference on Data Engineering (ICDE). IEEE, 2013, pp. 1081-1092.
[3] V. Leis, A. Gubichev, A. Mirchev, P. Boncz, A. Kemper, and T. Neumann, "How good are query optimizers, really?" Proceedings of the VLDB Endowment, vol. 9, no. 3, pp. 204-215, 2015.
[4] L. Baldacci and M. Golfarelli, "A cost model for Spark SQL," IEEE Transactions on Knowledge and Data Engineering, vol. 31, no. 5, pp. 819-832, 2018.
[5] C. Gupta, A. Mehta, and U. Dayal, "PQR: Predicting query execution times for autonomous workload management," in 2008 International Conference on Autonomic Computing. IEEE, 2008, pp. 13-22.
[6] A. Ganapathi, H. Kuno, U. Dayal, J. L. Wiener, A. Fox, M. Jordan, and D. Patterson, "Predicting multiple metrics for queries: Better decisions enabled by machine learning," in 2009 IEEE 25th International Conference on Data Engineering. IEEE, 2009, pp. 592-603.
[7] M. Akdere, U. Cetintemel, M. Riondato, E. Upfal, and S. B. Zdonik, "Learning-based query performance modeling and prediction," in 2012 IEEE 28th International Conference on Data Engineering. IEEE, 2012, pp. 390-401.
[8] A. Thusoo, J. S. Sarma, N. Jain, Z. Shao, P. Chakka, S. Anthony, H. Liu, P. Wyckoff, and R. Murthy, "Hive: A warehousing solution over a map-reduce framework," Proceedings of the VLDB Endowment, vol. 2, no. 2, pp. 1626-1629, 2009.
[9] M. Armbrust, R. S. Xin, C. Lian, Y. Huai, D. Liu, J. K. Bradley, X. Meng, T. Kaftan, M. J. Franklin, A. Ghodsi et al., "Spark SQL: Relational data processing in Spark," in Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, 2015, pp. 1383-1394.
[10] R. Sethi, M. Traverso, D. Sundstrom, D. Phillips, W. Xie, Y. Sun, N. Yegitbasi, H. Jin, E. Hwang, N. Shingte et al., "Presto: SQL on everything," in 2019 IEEE 35th International Conference on Data Engineering (ICDE). IEEE, 2019, pp. 1802-1813.
[11] K. Sato, "An inside look at Google BigQuery," White paper, 2012. [Online]. Available: https://cloud.google.com/files/BigQueryTechnicalWP.pdf
[12] S. Melnik, A. Gubarev, J. J. Long, G. Romer, S. Shivakumar, M. Tolton, and T. Vassilakis, "Dremel: Interactive analysis of web-scale datasets," Proceedings of the VLDB Endowment, vol. 3, no. 1-2, pp. 330-339, 2010.
[13] S. Melnik, A. Gubarev, J. J. Long, G. Romer, S. Shivakumar, M. Tolton, T. Vassilakis, H. Ahmadi, D. Delorey, S. Min et al., "Dremel: A decade of interactive SQL analysis at web scale," Proceedings of the VLDB Endowment, vol. 13, no. 12, pp. 3461-3472, 2020.
[14] A. Gupta, D. Agarwal, D. Tan, J. Kulesza, R. Pathak, S. Stefani, and V. Srinivasan, "Amazon Redshift and the case for simpler data warehouses," in Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, 2015, pp. 1917-1923.
[15] B. Dageville, T. Cruanes, M. Zukowski, V. Antonov, A. Avanes, J. Bock, J. Claybaugh, D. Engovatov, M. Hentschel, J. Huang et al., "The Snowflake elastic data warehouse," in Proceedings of the 2016 International Conference on Management of Data, 2016, pp. 215-226.
[16] D. Narayanan, E. Thereska, and A. Ailamaki, "Continuous resource monitoring for self-predicting DBMS," in 13th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems. IEEE, 2005, pp. 239-248.
[17] J. Rogers, O. Papaemmanouil, and U. Cetintemel, "A generic auto-provisioning framework for cloud databases," in 2010 IEEE 26th International Conference on Data Engineering Workshops (ICDEW 2010). IEEE, 2010, pp. 63-68.
[18] S. Das, F. Li, V. R. Narasayya, and A. C. König, "Automated demand-driven resource scaling in relational database-as-a-service," in Proceedings of the 2016 International Conference on Management of Data, 2016, pp. 1923-1934.
[19] B. Mozafari, C. Curino, A. Jindal, and S. Madden, "Performance and resource modeling in highly-concurrent OLTP workloads," in Proceedings of the 2013 ACM SIGMOD International Conference on Management of Data, 2013, pp. 301-312.
[20] L. Ma, D. Van Aken, A. Hefny, G. Mezerhane, A. Pavlo, and G. J. Gordon, "Query-based workload forecasting for self-driving database management systems," in Proceedings of the 2018 International Conference on Management of Data, 2018, pp. 631-645.
[21] A. S. Higginson, M. Dediu, O. Arsene, N. W. Paton, and S. M. Embury, "Database workload capacity planning using time series analysis and machine learning," in Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, 2020, pp. 769-783.
[22] R. Marcus and O. Papaemmanouil, "Plan-structured deep neural network models for query performance prediction," Proceedings of the VLDB Endowment, vol. 12, no. 11, pp. 1733-1746, 2019.
[23] J. Duggan, U. Cetintemel, O. Papaemmanouil, and E. Upfal, "Performance prediction for concurrent database workloads," in Proceedings of the 2011 ACM SIGMOD International Conference on Management of Data, 2011, pp. 337-348.
[24] J. Duggan, O. Papaemmanouil, U. Cetintemel, and E. Upfal, "Contender: A resource modeling approach for concurrent query performance prediction," in EDBT, 2014, pp. 109-120.
[25] S. Venkataraman, Z. Yang, M. Franklin, B. Recht, and I. Stoica, "Ernest: Efficient performance prediction for large-scale advanced analytics," in 13th USENIX Symposium on Networked Systems Design and Implementation (NSDI 16), 2016, pp. 363-378.
[26] J. Rottinghuis, "Partly Cloudy: The start of a journey into the cloud," 2019. [Online]. Available: https://blog.twitter.com/engineering/en_us/topics/infrastructure/2019/the-start-of-a-journey-into-the-cloud.html
[27] K. Shvachko, H. Kuang, S. Radia, and R. Chansler, "The Hadoop distributed file system," in 2010 IEEE 26th Symposium on Mass Storage Systems and Technologies (MSST). IEEE, 2010, pp. 1-10.
[28] J. L. Dem, "Graduating Apache Parquet," 2015. [Online]. Available: https://blog.twitter.com/engineering/en_us/a/2015/graduating-apache-parquet.html
[29] S. Krishnan, "Discovery and consumption of analytics data at Twitter," 2016. [Online]. Available: https://blog.twitter.com/engineering/en_us/topics/insights/2016/discovery-and-consumption-of-analytics-data-at-twitter.html
[30] A. Clauset, C. R. Shalizi, and M. E. Newman, "Power-law distributions in empirical data," SIAM Review, vol. 51, no. 4, pp. 661-703, 2009.
[31] Q. Sun, W.-X. Zhou, and J. Fan, "Adaptive Huber regression," Journal of the American Statistical Association, vol. 115, no. 529, pp. 254-265, 2020.
[32] F. Nooralahzadeh, L. Øvrelid, and J. T. Lønning, "Evaluation of domain-specific word embeddings using knowledge resources," in Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), 2018.
[33] B. Hindman, A. Konwinski, M. Zaharia, A. Ghodsi, A. D. Joseph, R. H. Katz, S. Shenker, and I. Stoica, "Mesos: A platform for fine-grained resource sharing in the data center," in NSDI, vol. 11, no. 2011, 2011, pp. 22-22.
Robust Imitation of Diverse Behaviors

Ziyu Wang, Josh Merel, Scott Reed, Greg Wayne, Nando de Freitas, Nicolas Heess
DeepMind
arXiv:1707.02747

Abstract

Deep generative models have recently shown great promise in imitation learning for motor control. Given enough data, even supervised approaches can do one-shot imitation learning; however, they are vulnerable to cascading failures when the agent trajectory diverges from the demonstrations. Compared to purely supervised methods, Generative Adversarial Imitation Learning (GAIL) can learn more robust controllers from fewer demonstrations, but is inherently mode-seeking and more difficult to train. In this paper, we show how to combine the favourable aspects of these two approaches. The base of our model is a new type of variational autoencoder on demonstration trajectories that learns semantic policy embeddings. We show that these embeddings can be learned on a 9 DoF Jaco robot arm in reaching tasks, and then smoothly interpolated with a resulting smooth interpolation of reaching behavior. Leveraging these policy representations, we develop a new version of GAIL that (1) is much more robust than the purely-supervised controller, especially with few demonstrations, and (2) avoids mode collapse, capturing many diverse behaviors when GAIL on its own does not. We demonstrate our approach on learning diverse gaits from demonstration on a 2D biped and a 62 DoF 3D humanoid in the MuJoCo physics environment.
Introduction
Building versatile embodied agents, both in the form of real robots and animated avatars, capable of a wide and diverse set of behaviors is one of the long-standing challenges of AI. State-of-the-art robots cannot compete with the effortless variety and adaptive flexibility of motor behaviors produced by toddlers. Towards addressing this challenge, in this work we combine several deep generative approaches to imitation learning in a way that accentuates their individual strengths and addresses their limitations. The end product of this is a robust neural network policy that can imitate a large and diverse set of behaviors using few training demonstrations.
We first introduce a variational autoencoder (VAE) [15,26] for supervised imitation, consisting of a bi-directional LSTM [13,31,9] encoder mapping demonstration sequences to embedding vectors, and two decoders. The first decoder is a multi-layer perceptron (MLP) policy mapping a trajectory embedding and the current state to a continuous action vector. The second is a dynamics model mapping the embedding and previous state to the present state, while modelling correlations among states with a WaveNet [38]. Experiments with a 9 DoF Jaco robot arm and a 9 DoF 2D biped walker, implemented in the MuJoCo physics engine [37], show that the VAE learns a structured semantic embedding space, which allows for smooth policy interpolation.
While supervised policies that condition on demonstrations (such as our VAE or the recent approach of Duan et al. [6]) are powerful models for one-shot imitation, they require large training datasets in order to work for non-trivial tasks. They also tend to be brittle and fail when the agent diverges too much from the demonstration trajectories. These limitations of supervised learning for imitation, also known as behavioral cloning (BC) [24], are well known [27,28].
Recently, Ho and Ermon [12] showed a way to overcome the brittleness of supervised imitation using another type of deep generative model called Generative Adversarial Networks (GANs) [8]. Their technique, called Generative Adversarial Imitation Learning (GAIL) uses reinforcement learning, allowing the agent to interact with the environment during training. GAIL allows one to learn more robust policies with fewer demonstrations, but adversarial training introduces another difficulty called mode collapse [7]. This refers to the tendency of adversarial generative models to cover only a subset of modes of a probability distribution, resulting in a failure to produce adequately diverse samples. This will cause the learned policy to capture only a subset of control behaviors (which can be viewed as modes of a distribution), rather than allocating capacity to cover all modes.
Roughly speaking, VAEs can model diverse behaviors without dropping modes, but do not learn robust policies, while GANs give us robust policies but insufficiently diverse behaviors. In section 3, we show how to engineer an objective function that takes advantage of both GANs and VAEs to obtain robust policies capturing diverse behaviors. In section 4, we show that our combined approach enables us to learn diverse behaviors for a 9 DoF 2D biped and a 62 DoF humanoid, where the VAE policy alone is brittle and GAIL alone does not capture all of the diverse behaviors.
Background and Related Work
We begin our brief review with generative models. One canonical way of training generative models is to maximize the likelihood of the data: $\max_\theta \sum_i \log p_\theta(x_i)$. This is equivalent to minimizing the Kullback-Leibler divergence between the distribution of the data and the model: $D_{KL}(p_{data}(\cdot)\,\|\,p_\theta(\cdot))$. For highly-expressive generative models, however, optimizing the log-likelihood is often intractable.
One class of highly-expressive yet tractable models are the auto-regressive models, which decompose the log-likelihood as $\log p_\theta(x) = \sum_i \log p_\theta(x_i \mid x_{<i})$. Auto-regressive models have been highly effective in both image and audio generation [39, 38].
Instead of optimizing the log-likelihood directly, one can introduce a parametric inference model over the latent variables, $q_\phi(z \mid x)$, and optimize a lower bound of the log-likelihood:
$$\mathbb{E}_{q_\phi(z \mid x_i)}\left[\log p_\theta(x_i \mid z)\right] - D_{KL}\left(q_\phi(z \mid x_i) \,\|\, p(z)\right) \le \log p_\theta(x_i). \tag{1}$$
For continuous latent variables, this bound can be optimized efficiently via the re-parameterization trick [15, 26]. This class of models is often referred to as VAEs.
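As a concrete illustration of the bound in (1) for a one-dimensional Gaussian posterior and a standard normal prior (a toy sketch of our own, not the models used later in the paper), the KL term has a closed form and the expectation can be estimated with the re-parameterization trick:

```python
import math
import random

def kl_gauss_std_normal(mu, sigma):
    """Closed-form D_KL(N(mu, sigma^2) || N(0, 1)), the KL term in (1)."""
    return 0.5 * (mu ** 2 + sigma ** 2 - 2.0 * math.log(sigma) - 1.0)

def elbo_estimate(log_lik, mu, sigma, n_samples=1000, seed=0):
    """Monte Carlo estimate of the bound in (1) with the re-parameterization
    trick z = mu + sigma * eps, eps ~ N(0, 1); `log_lik(z)` plays the role
    of log p_theta(x | z)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        z = mu + sigma * rng.gauss(0.0, 1.0)
        total += log_lik(z)
    return total / n_samples - kl_gauss_std_normal(mu, sigma)
```

When the posterior equals the prior ($\mu = 0$, $\sigma = 1$) the KL term vanishes and the bound reduces to the expected log-likelihood alone.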
GANs, introduced by Goodfellow et al. [8], have become very popular. GANs use two networks: a generator G and a discriminator D. The generator attempts to generate samples that are indistinguishable from real data. The job of the discriminator is then to tell apart the data and the samples, predicting 1 with high probability if the sample is real and 0 otherwise. More precisely, GANs optimize the following objective function:
$$\min_G \max_D \; \mathbb{E}_{p_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{p(z)}\left[\log(1 - D(G(z)))\right]. \tag{2}$$
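A quick sanity check of (2): when the generator matches the data distribution, the optimal discriminator $D^*(x) = p_{data}(x)/(p_{data}(x) + p_g(x))$ equals 1/2 everywhere, and the value of the game is $-\log 4$. The following toy discrete-support computation (our own illustration) verifies this:

```python
import math

# Discrete data distribution; the generator has learned it exactly.
p_data = {"a": 0.5, "b": 0.3, "c": 0.2}
p_g = dict(p_data)

def gan_value(p_data, p_g):
    """Value of objective (2) under the optimal discriminator D*."""
    value = 0.0
    for x in p_data:
        d_star = p_data[x] / (p_data[x] + p_g[x])  # optimal discriminator
        value += p_data[x] * math.log(d_star) + p_g[x] * math.log(1.0 - d_star)
    return value
```

With $p_g = p_{data}$, every $D^*(x) = 1/2$, so the value is $\log\frac{1}{2} + \log\frac{1}{2} = -\log 4$, the global optimum of the GAN game.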
Auto-regressive models, VAEs and GANs are all highly effective generative models, but have different trade-offs. GANs were noted for their ability to produce sharp image samples, unlike the blurrier samples from contemporary VAE models [8]. However, unlike VAEs and autoregressive models trained via maximum likelihood, they suffer from the mode collapse problem [7]. Recent work has focused on alleviating mode collapse in image modeling [2,4,19,25,41,11], but so far these have not been demonstrated in the control domain. Like GANs, autoregressive models produce sharp and at times realistic image samples [39], but they tend to be slow to sample from and unlike VAEs do not immediately provide a latent vector representation of the data. This is why we used VAEs to learn representations of demonstration trajectories.
We turn our attention to imitation. Imitation is the problem of learning a control policy that mimics a behavior provided via a demonstration. It is natural to view imitation learning from the perspective of generative modeling. However, unlike in image and audio modeling, in imitation the generation process is constrained by the environment and the agent's actions, with observations becoming accessible through interaction. Imitation learning brings its own unique challenges.
In this paper, we assume that we have been provided with demonstrations $\{\tau_i\}_i$, where the $i$-th trajectory of state-action pairs is $\tau_i = \{x^i_1, a^i_1, \ldots, x^i_{T_i}, a^i_{T_i}\}$. These trajectories may have been produced by either an artificial or natural agent.
As in generative modeling, we can easily apply maximum likelihood to imitation learning. For instance, if the dynamics are tractable, we can maximize the likelihood of the states directly: $\max_\theta \sum_i \sum_{t=1}^{T_i} \log p\big(x^i_{t+1} \mid x^i_t, \pi_\theta(x^i_t)\big)$. If a model of the dynamics is unavailable, we can instead maximize the likelihood of the actions: $\max_\theta \sum_i \sum_{t=1}^{T_i} \log \pi_\theta(a^i_t \mid x^i_t)$. The latter approach is what we referred to as behavioral cloning (BC) in the introduction.
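To make the action-likelihood objective concrete: for a Gaussian policy with fixed variance and a linear mean, maximizing $\sum_t \log \pi_\theta(a_t \mid x_t)$ reduces to least squares, so BC has a closed-form solution. The one-dimensional toy below is our own illustration, not the architectures used in the experiments:

```python
# For a Gaussian policy pi_theta(a | x) = N(theta * x, sigma^2) with fixed
# sigma, maximizing sum_t log pi_theta(a_t | x_t) is ordinary least squares,
# so behavioral cloning admits the closed-form estimate below.
def bc_fit(states, actions):
    return sum(x * a for x, a in zip(states, actions)) / sum(x * x for x in states)

# Demonstrations generated by a noiseless "expert" with theta = 2.0:
states = [1.0, 2.0, 3.0, 4.0]
actions = [2.0 * x for x in states]
theta = bc_fit(states, actions)
```

The fitted policy reproduces the expert exactly on the demonstrated states; the brittleness discussed next arises off those states, where no supervision was available.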
When demonstrations are plentiful, BC is effective [24,29,6]. Without abundant data, BC is known to be inadequate [27,28,12]. The inefficiencies of BC stem from the sequential nature of the problem. When using BC, even the slightest errors in mimicking the demonstration behavior can quickly accumulate as the policy is unrolled. A good policy should correct for mistakes made previously, but for BC to achieve this, the corrective behaviors have to appear frequently in the training data.
GAIL [12] avoids some of the pitfalls of BC by allowing the agent to interact with the environment and learn from these interactions. It constructs a reward function using GANs to measure the similarity between the policy-generated trajectories and the expert trajectories. As in GANs, GAIL adopts the following objective function
$$\min_\theta \max_\psi \; \mathbb{E}_{\pi_E}\left[\log D_\psi(x, a)\right] + \mathbb{E}_{\pi_\theta}\left[\log(1 - D_\psi(x, a))\right], \tag{3}$$
where $\pi_E$ denotes the expert policy that generated the demonstration trajectories.
To avoid differentiating through the system dynamics, policy gradient algorithms are used to train the policy by maximizing the discounted sum of rewards $r_\psi(x_t, a_t) = -\log(1 - D_\psi(x_t, a_t))$.
Maximizing this reward, which may differ from the expert reward, drives $\pi_\theta$ to expert-like regions of the state-action space. In practice, trust region policy optimization (TRPO) is used to stabilize the learning process [30]. GAIL has become a popular choice for imitation learning [16] and there already exist model-based [3] and third-person [35] extensions. Two recent GAIL-based approaches [17, 10] introduce additional reward signals that encourage the policy to make use of latent variables which would correspond to different types of demonstrations after training. These approaches are complementary to ours. Neither paper, however, demonstrates the ability to do one-shot imitation.
The literature on imitation including BC, apprenticeship learning and inverse reinforcement learning is vast. We cannot cover this literature at the level of detail it deserves, and instead refer readers to recent authoritative surveys on the topic [5,1,14]. Inspired by recent works, including [12,35,6], we focus on taking advantage of the dramatic recent advances in deep generative modelling to learn high-dimensional policies capable of learning a diverse set of behaviors from few demonstrations.
In graphics, a significant effort has been devoted to the design of physics controllers that take advantage of motion capture data, or key-frames and other inputs provided by animators [32, 34, 42, 22]. Yet, as pointed out in a recent hierarchical control paper [23], the design of such controllers often requires significant human insight. Our focus is on flexible, general imitation methods.
A Generative Modeling Approach to Imitating Diverse Behaviors
Behavioral cloning with variational autoencoders suited for control
In this section, we follow a similar approach to Duan et al. [6], but opt for stochastic VAEs, whose approximate posterior $q_\phi(z \mid x_{1:T})$ better regularizes the latent space.
In our VAE, an encoder maps a demonstration sequence to an embedding vector z. Given z, we decode both the state and action trajectories as shown in Figure 1. To train the model, we minimize the following loss:
$$\mathcal{L}(\alpha, w, \phi; \tau_i) = -\mathbb{E}_{q_\phi(z \mid x^i_{1:T_i})}\left[\sum_{t=1}^{T_i} \log \pi_\alpha(a^i_t \mid x^i_t, z) + \log p_w(x^i_{t+1} \mid x^i_t, z)\right] + D_{KL}\left(q_\phi(z \mid x^i_{1:T_i}) \,\|\, p(z)\right)$$
Our encoder $q_\phi$ uses a bi-directional LSTM. To produce the final embedding, it calculates the average of all the outputs of the second layer of this LSTM before applying a final linear transformation to generate the mean and standard deviation of a Gaussian. We take one sample from this Gaussian as our demonstration encoding.
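A minimal sketch of this encoder head, with the LSTM outputs abstracted as a list of scalars and the two linear maps reduced to scalar weights (all of these simplifications are ours):

```python
import math
import random

def encode(outputs, w_mu, w_sigma, seed=0):
    """Average the per-step LSTM outputs, apply (scalar) linear maps to get
    the Gaussian parameters, and draw one sample via z = mu + sigma * eps.
    The scalar weights w_mu, w_sigma stand in for full linear layers."""
    pooled = sum(outputs) / len(outputs)       # average over time steps
    mu = w_mu * pooled
    sigma = math.exp(w_sigma * pooled)         # parameterize sigma > 0
    eps = random.Random(seed).gauss(0.0, 1.0)  # reparameterization noise
    return mu + sigma * eps

z = encode([1.0, 3.0], w_mu=2.0, w_sigma=0.1)
```

Driving $\sigma$ to zero recovers a deterministic embedding equal to the mean, which is the limit in which this stochastic encoder degenerates to the one used by purely supervised approaches.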
The action decoder is an MLP that maps both the state and the embedding to the parameters of a Gaussian. The state decoder is similar to a conditional WaveNet model [38]. In particular, it conditions on the embedding z and the previous state $x_{t-1}$ to generate the vector $x_t$ autoregressively; that is, the autoregression is over the components of the vector $x_t$. WaveNet lessens the load of the encoder, which no longer has to carry information that can be captured by modeling auto-correlations between components of the state vector. Finally, instead of a softmax, we use a mixture of Gaussians as the output of the WaveNet.
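The mixture-of-Gaussians output can be evaluated with a standard log-sum-exp; the sketch below is our own scalar illustration of that density, not the WaveNet implementation:

```python
import math

def mog_log_prob(x, weights, mus, sigmas):
    """Log-density of a mixture of Gaussians, used here as the per-component
    output distribution in place of a softmax. Computed with log-sum-exp
    for numerical stability."""
    terms = []
    for w, mu, s in zip(weights, mus, sigmas):
        terms.append(math.log(w)
                     - 0.5 * math.log(2.0 * math.pi * s * s)
                     - (x - mu) ** 2 / (2.0 * s * s))
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

# With a single component the mixture reduces to a plain Gaussian:
lp = mog_log_prob(0.0, [1.0], [0.0], [1.0])
```

In the single-component check, the log-density at the mean of a standard normal is $-\tfrac{1}{2}\log(2\pi)$, which the function reproduces.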
Diverse generative adversarial imitation learning
As pointed out earlier, it is hard for BC policies to mimic experts under environmental perturbations. Our solution to obtain more robust policies from few demonstrations, which are also capable of diverse behaviors, is to build on GAIL. Specifically, to enable GAIL to produce diverse solutions, we condition the discriminator on the embeddings generated by the VAE encoder and integrate out the GAIL objective with respect to the variational posterior $q_\phi(z \mid x_{1:T})$. Concretely, we train the discriminator by optimizing the following objective:
$$\max_\psi \; \mathbb{E}_{\tau_i \sim \pi_E}\, \mathbb{E}_{q_\phi(z \mid x^i_{1:T_i})}\left[\frac{1}{T_i}\sum_{t=1}^{T_i} \log D_\psi(x^i_t, a^i_t \mid z)\right] + \mathbb{E}_{\pi_\theta}\left[\log(1 - D_\psi(x, a \mid z))\right]. \tag{4}$$
A related work [20] introduces a conditional GAIL objective to learn controllers for multiple behaviors from state trajectories, but the discriminator conditions on an annotated class label, as in conditional GANs [21].
We condition on unlabeled trajectories, which have been passed through a powerful encoder, and hence our approach is capable of one-shot imitation learning. Moreover, the VAE encoder enables us to obtain a continuous latent embedding space where interpolation is possible, as shown in Figure 3.
Since our discriminator is conditional, the reward function is also conditional: $r^\psi_t(x_t, a_t \mid z) = -\log(1 - D_\psi(x_t, a_t \mid z))$. We also clip the reward so that it is upper-bounded. Conditioning on z allows us to generate an infinite number of reward functions, each of them tailored to imitating a different trajectory. Policy gradients, though mode seeking, will not cause collapse into one particular mode due to the diversity of reward functions.
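A sketch of this conditional, clipped reward; the clipping threshold is an illustrative assumption, since the paper only states that the reward is upper-bounded:

```python
import numpy as np

def reward(d_value, clip_max=10.0):
    """r = -log(1 - D(x_t, a_t | z)), clipped so it is upper-bounded.
    clip_max is an illustrative choice, not taken from the paper."""
    return min(-np.log(1.0 - d_value), clip_max)

assert reward(0.5) < reward(0.9)    # more expert-like -> larger reward
assert reward(1.0 - 1e-12) == 10.0  # clipping keeps the reward bounded
```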
To better motivate our objective, let us temporarily leave the context of imitation learning and consider the following alternative value function for training GANs
$$\min_G \max_D V(G, D) = \int_y p(y) \int_z q(z \mid y) \left[\log D(y \mid z) + \int_{\hat{y}} G(\hat{y} \mid z) \log(1 - D(\hat{y} \mid z))\, d\hat{y}\right] dz\, dy.$$
This function is a simplification of our objective function. Furthermore, it satisfies the following property.

Algorithm 1 (Diverse generative adversarial imitation learning), main loop:

    for j ∈ {1, ..., n} do
        Sample trajectory τ_j from the demonstration set and sample z_j ∼ q(·|x^j_{1:T_j}).
        Run policy π_θ(·|z_j) to obtain the trajectory τ̂_j.
    end for
    Update policy parameters via TRPO with rewards r^j_t(x̂^j_t, â^j_t|z_j) = −log(1 − D_ψ(x̂^j_t, â^j_t|z_j)).
    Update discriminator parameters from ψ_i to ψ_{i+1} with gradient:
        ∇_ψ (1/n) Σ_{j=1}^{n} [ (1/T_j) Σ_{t=1}^{T_j} log D_ψ(x^j_t, a^j_t|z_j) + (1/T̂_j) Σ_{t=1}^{T̂_j} log(1 − D_ψ(x̂^j_t, â^j_t|z_j)) ]
    until max iteration or time reached.
If we further assume an optimal discriminator [8], the cost optimized by the generator then becomes
$$C(G) = 2\int_z p(z)\, \mathrm{JSD}\left[\,p(\,\cdot \mid z) \,\|\, G(\,\cdot \mid z)\,\right] dz - \log 4, \tag{5}$$
where JSD stands for the Jensen-Shannon divergence. We know that GANs approximately optimize this divergence, and it is well documented that optimizing it leads to mode seeking behavior [36].
The objective defined in (5) alleviates this problem. Consider an example where p(x) is a mixture of Gaussians and p(z) describes the distribution over the mixture components. In this case, the conditional distribution p(x|z) is not multi-modal, and therefore minimizing the Jensen-Shannon divergence is no longer problematic. In general, if the latent variable z removes most of the ambiguity, we can expect the conditional distributions to be close to uni-modal and therefore our generators to be non-degenerate. In light of this analysis, we would like q to be as close to the posterior as possible and hence our choice of training q with VAEs.
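This argument can be checked numerically with discrete stand-ins for the distributions: a generator collapsed onto one mixture component incurs a large JSD against the full mixture, but zero conditional JSD for the component it matches (and a conditional generator can match each component separately). All distributions below are illustrative:

```python
import numpy as np

def kl(p, q):
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def jsd(p, q):
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Two well-separated "modes" over 4 outcomes, mixed with equal weight.
p_given_z0 = np.array([0.9, 0.1, 0.0, 0.0])
p_given_z1 = np.array([0.0, 0.0, 0.1, 0.9])
p_marginal = 0.5 * p_given_z0 + 0.5 * p_given_z1

# A collapsed generator that only reproduces mode z=0:
g = p_given_z0

# Unconditional objective: large divergence against the mixture ...
print(jsd(g, p_marginal) > 0.1)  # → True
# ... conditional objective (5): zero for the matched component.
print(jsd(g, p_given_z0))        # → 0.0
```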
We now turn our attention to some algorithmic considerations. We can use the VAE policy π α (a t |x t , z) to accelerate the training of π θ (a t |x t , z). One possible route is to initialize the weights θ to α. However, before the policy behaves reasonably, the noise injected into the policy for exploration (when using stochastic policy gradients) can cause poor initial performance. Instead, we fix α and structure the conditional policy as follows
$$\pi_\theta(\,\cdot \mid x, z) = \mathcal{N}\left(\,\cdot \mid \mu_\theta(x, z) + \mu_\alpha(x, z),\, \sigma_\theta(x, z)\right),$$
where µ α is the mean of the VAE policy. Finally, the policy parameterized by θ is optimized with TRPO [30] while holding parameters α fixed, as shown in Algorithm 1.
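A sketch of this residual parameterization, with illustrative stand-ins for μ_α and μ_θ; at initialization (μ_θ ≡ 0) the conditional policy mean coincides with the frozen VAE policy mean, so exploration noise starts from sensible behavior:

```python
import numpy as np

rng = np.random.default_rng(2)

def mu_alpha(x, z):
    """Frozen VAE policy mean (illustrative stand-in)."""
    return 0.5 * x + 0.1 * z

def mu_theta(x, z, scale=0.0):
    """Learned residual mean; zero at initialization."""
    return scale * (x + z)

def sample_action(x, z, log_std=-1.0):
    mean = mu_theta(x, z) + mu_alpha(x, z)
    return mean + np.exp(log_std) * rng.normal(size=x.shape)

x, z = np.ones(3), np.ones(3)
# At initialization the mean equals the VAE policy mean exactly.
print(np.allclose(mu_theta(x, z) + mu_alpha(x, z), mu_alpha(x, z)))  # → True
```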
Experiments
The primary focus of our experimental evaluation is to demonstrate that the architecture enables learning of robust controllers capable of producing the full spectrum of demonstration behaviors for a diverse range of challenging control problems. We consider three bodies: a 9 DoF robotic arm, a 9 DoF planar walker, and a 62 DoF complex humanoid (56 actuated joint angles, and a freely translating and rotating 3d root joint). While for the reaching task BC is sufficient to obtain a working controller, for the other two problems our full learning procedure is critical.
We analyze the resulting embedding spaces and demonstrate that they exhibit rich and sensible structure that can be exploited for control. Finally, we show that the encoder can be used to capture the gist of novel demonstration trajectories, which can then be reproduced by the controller.
All experiments are conducted with the MuJoCo physics engine [37]. For details of the simulation and the experimental setup, please see the appendix.
Robotic arm reaching
We first demonstrate the effectiveness of our VAE architecture and investigate the nature of the learned embedding space on a reaching task with a simulated Jaco arm. The physical Jaco is a robotic arm developed by Kinova Robotics. To obtain demonstrations, we trained 60 independent policies to reach to random target locations 2 in the workspace starting from the same initial configuration. We generated 30 trajectories from each of the first 50 policies; these serve as training data for the VAE model (1500 training trajectories in total). The remaining 10 policies were used to generate test data. The reaching task is relatively simple, so with this amount of data the VAE policy is fairly robust. After training, the VAE encodes and reproduces the demonstrations as shown in Figure 2. Representative examples can be found in the video in the supplemental material.
To further investigate the nature of the embedding space we encode two trajectories. Next, we construct the embeddings of interpolating policies by taking convex combinations of the embedding vectors of the two trajectories. We condition the VAE policy on these interpolating embeddings and execute it. The results of this experiment are illustrated with a representative pair in Figure 3. We observe that interpolating in the latent space indeed corresponds to interpolation in task (trajectory endpoint) space, highlighting the semantic meaningfulness of the discovered latent space.
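The interpolation itself is just a convex combination in the embedding space; a minimal sketch:

```python
import numpy as np

def interpolate(z_a, z_b, lam):
    """Convex combination of two demonstration embeddings, 0 <= lam <= 1."""
    return (1.0 - lam) * z_a + lam * z_b

z_a, z_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
# Five interpolating embeddings to condition the VAE policy on:
path = [interpolate(z_a, z_b, lam) for lam in np.linspace(0.0, 1.0, 5)]
print(path[2])  # → [0.5 0.5]
```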
2D Walker
We found reaching behavior to be relatively easy to imitate, presumably because it does not involve much physical contact. As a more challenging test we consider bipedal locomotion. We train 60 neural network policies for a 2d walker to serve as demonstrations 3 . These policies are each trained to move at different speeds both forward and backward depending on a label provided as additional input to the policy. Target speeds for training were chosen from a set of four different speeds (m/s): -1, 0, 1, 3. For the distribution of speeds that the trained policies actually achieve, see Figure 4 (top right). Besides the target speed, the reward function imposes few constraints on the behavior. The resulting policies thus form a diverse set with several rather idiosyncratic movement styles. While for most purposes this diversity is undesirable, for the present experiment we consider it a feature. We trained our model with 20 episodes per policy (1200 demonstration trajectories in total, each with a length of 400 steps or 10s of simulated time). In this experiment our full approach is required: training the VAE with BC alone can imitate some of the trajectories, but it performs poorly in general, presumably because our relatively small training set does not cover the space of trajectories sufficiently densely. On this generated dataset, we also train policies with GAIL using the same architecture and hyper-parameters. Due to the lack of conditioning, GAIL does not coherently reproduce individual trajectories. Instead, it simply meshes different behaviors together. In addition, the policies trained with GAIL also exhibit dramatically less diversity; see video.
A general problem of adversarial training is that there is no easy way to quantitatively assess the quality of learned models. Here, since we aim to imitate particular demonstration trajectories that were trained to achieve particular target speed(s), we can use the difference between the speed of the demonstration trajectory and that of the trajectory produced by the decoder as a surrogate measure of the quality of the imitation (cf. also [12]).
The general quality of the learned model and the improvement achieved by the adversarial stage of our training procedure are quantified in Fig. 4. We draw 660 trajectories (11 trajectories for each of the 60 policies) from the training set, compute the corresponding embedding vectors using the encoder, and use both the VAE policy and the improved policy from the adversarial stage to imitate each of the trajectories. We determine the absolute value of the difference between the average speeds of the demonstration and the imitation trajectories (measured in m/s). As shown in Fig. 4, the adversarial training greatly improves the reliability of the controller as well as the ability of the model to accurately match the speed of the demonstration. Video of our agent imitating a diverse set of behaviors can be found in the supplemental material.
To assess generalization to novel trajectories we encode and subsequently imitate trajectories not contained in the training set. The supplemental video contains several representative examples, demonstrating that the style of movement is successfully imitated for previously unseen trajectories.
Finally, we analyze the structure of the embedding space. We embed training trajectories and perform dimensionality reduction with t-SNE [40]. The result is shown in Fig. 4. It reveals a clear clustering according to movement speeds thus recovering the nature of the task context for the demonstration trajectories. We further find that trajectories that are nearby in embedding space tend to correspond to similar movement styles even when differing in speed.
Complex humanoid
We consider a humanoid body of high dimensionality that poses a hard control problem. The construction of this body and the associated control policies is described in [20] and briefly summarized in the appendix (section A.3) for completeness. We generate training trajectories with the existing controllers, which can produce instances of one of six different movement styles (see section A.3). Examples of such trajectories are shown in Fig. 5 and in the supplemental video.
The training set consists of 250 random trajectories from 6 different neural network controllers that were trained to match 6 different movement styles from the CMU motion capture database 4 . Each trajectory is 334 steps or 10s long. We use a second set of 5 controllers from which we generate trajectories for evaluation (3 of these policies were trained on the same movement styles as the policies used for generating training data).
Surprisingly, despite the complexity of the body, supervised learning is quite effective at producing sensible controllers: the VAE policy is reasonably good at imitating the demonstration trajectories, although it lacks the robustness to be practically useful. Adversarial training dramatically improves the stability of the controller. We analyze the improvement quantitatively by computing the percentage of episodes in which the humanoid falls down before the end of the episode while imitating either training or test policies. The results are summarized in Figure 5 (right). The figure further shows sequences of frames of representative demonstration and associated imitation trajectories. Videos of demonstration and imitation behaviors can be found in the supplemental video.
For practical purposes it is desirable to allow the controller to transition from one behavior to another. We test this possibility in an experiment similar to the one for the Jaco arm: We determine the embedding vectors of pairs of demonstration trajectories, start the trajectory by conditioning on the first embedding vector, and then transition from one behavior to the other half-way through the episode by blending their embeddings over a window of 20 control steps. Although not always successful the learned controller often transitions robustly, despite not having been trained to do so. Representative examples of these transitions can be found in the supplemental video.
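The transition mechanism can be sketched as a per-step linear blend of the two embeddings over the 20-step window; the switch step and embedding sizes below are illustrative:

```python
import numpy as np

def blended_embedding(t, t_switch, z_a, z_b, window=20):
    """Embedding used at control step t: z_a before the switch, z_b
    after the blend window, and a linear blend in between."""
    w = np.clip((t - t_switch) / window, 0.0, 1.0)
    return (1.0 - w) * z_a + w * z_b

z_a, z_b = np.zeros(2), np.ones(2)
assert np.allclose(blended_embedding(0, 167, z_a, z_b), z_a)       # first behavior
assert np.allclose(blended_embedding(177, 167, z_a, z_b),          # mid-blend
                   0.5 * (z_a + z_b))
assert np.allclose(blended_embedding(333, 167, z_a, z_b), z_b)     # second behavior
```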
Conclusions
We have proposed an approach for imitation learning that combines the favorable properties of techniques for density modeling with latent variables (VAEs) with those of GAIL. The result is a model that learns, from a moderate number of demonstration trajectories (1) a semantically well structured embedding of behaviors, (2) a corresponding multi-task controller that allows to robustly execute diverse behaviors from this embedding space, as well as (3) an encoder that can map new trajectories into the embedding space and hence allows for one-shot imitation.
Our experimental results demonstrate that our approach can work on a variety of control problems, and that it scales even to very challenging ones such as the control of a simulated humanoid with a large number of degrees of freedom.

We trained the random reaching policies with deep deterministic policy gradients (DDPG, [33, 18]) to reach random positions in the workspace. Simulations were run for 2.5 s or 50 steps. For more details on the hyper-parameters and network configuration, please refer to Table 1.
A.2 Walker
The demonstration policies were trained to reach different speeds. Target speeds were chosen from a set of four different speeds (m/s): -1, 0, 1, 3. For each target speed in {-1, 0, 1, 3}, we trained 12 policies. Another 12 policies were each trained to achieve three target speeds (-1, 0, and 1) depending on a context label. Finally, 12 policies were each trained to achieve three target speeds (-1, 0, and 3) depending on a context label. For each target-speed group, a grid search over two parameters is performed: the initial log sigma for the policy and the random seed. We use 4 initial log-sigma values (0, -1, -2, -3) and three seeds.
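The 12 policies per target-speed group then correspond to the full grid over the two searched parameters; a sketch (seed values are illustrative):

```python
from itertools import product

init_log_sigmas = [0, -1, -2, -3]
seeds = [0, 1, 2]  # three seeds (actual values not given in the paper)

grid = list(product(init_log_sigmas, seeds))
print(len(grid))  # → 12
```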
For more details on the hyper-parameters and network configurations used, see Tables 1, 3, and 4.
A.3 Humanoid
The model of the humanoid body was generated from subject 8 from the CMU database, also used in [20]. It has 56 actuated joint-angles and a freely translating and rotating root. The actions specified for the body correspond to torques of the joint angles.
We generate training trajectories from six different neural network controllers trained to imitate six different movement styles (simple walk, cat style, chicken style, drunk style, and old style). Policies were produced which demonstrate robust, generalized behavior in the style of a simple walk or single motion capture clip from various styles [20]. For evaluation we use a second set of five different policies that had been independently trained on a partially overlapping set of movement styles (drunk style, normal style, old style, sexy-swagger style, strong style).
For more details on the hyper-parameters and network configurations used, see Tables 1, 3, and 4. To assist the training of the discriminator, we only use a subset of features as inputs to our discriminator, following [20]. This subset mostly includes features pertaining to end-effector positions.
Figure 1: Schematic of the encoder-decoder architecture. LEFT: Bidirectional LSTM on demonstration states, followed by action and state decoders at each time step. RIGHT: State decoder model within a single time step, which is autoregressive over the state dimensions.
Lemma 1. Assuming that q computes the true posterior distribution, that is, q(z|y) = p(y|z)p(z)/p(y), then
$$V(G, D) = \int_z p(z) \left[\int_y p(y \mid z) \log D(y \mid z)\, dy + \int_{\hat{y}} G(\hat{y} \mid z) \log(1 - D(\hat{y} \mid z))\, d\hat{y}\right] dz.$$

Algorithm 1 Diverse generative adversarial imitation learning.
INPUT: Demonstration trajectories {τ_i}_i and VAE encoder q.
repeat
Figure 3: Interpolation in the latent space for the Jaco arm. Each column shows three frames of a target-reach trajectory (time increases across rows). The left-most and right-most columns correspond to the demonstration trajectories between which we interpolate. Intermediate columns show trajectories generated by our VAE policy conditioned on embeddings which are convex combinations of the embeddings of the demonstration trajectories. Interpolating in the latent space indeed corresponds to interpolation in the physical dimensions.
Figure 2: Trajectories of the Jaco arm's end-effector on test-set demonstrations. The trajectories produced by the VAE policy and the corresponding demonstrations are plotted in the same color, illustrating that the policy can imitate well.
Figure 4: LEFT: t-SNE plot of the embedding vectors of the training trajectories; marker color indicates average speed. The plot reveals a clear clustering according to speed. Insets show pairs of frames from selected example trajectories. Trajectories nearby in the plot tend to correspond to similar movement styles even when differing in speed (e.g., see the pair of trajectories on the right-hand side of the plot). RIGHT, TOP: Distribution of walker speeds for the demonstration trajectories. RIGHT, BOTTOM: Difference in speed between the demonstration and imitation trajectories. Measured against the demonstration trajectories, we observe that the fine-tuned controllers tend to have a smaller difference in speed than controllers without fine-tuning.
Figure 5: LEFT: Examples of the demonstration trajectories in the CMU humanoid domain. The top row shows demonstrations from both the training and test sets. The bottom row shows the corresponding imitations. RIGHT: Percentage of episodes in which the humanoid falls down before the end of the episode, with and without fine-tuning.
Table 1: VAE network specifications.

Environment  LSTM size  Latent size  Action decoder sizes  No. of channels  No. of WaveNet layers
Jaco         500        30           (400, 300, 200)       32               10
Walker       200        20           (200, 200)            32               6
Humanoid     500        30           (400, 300, 200)       32               14
A Details of the experiments
A.1 Jaco
Table 2: CMU database motion capture clips per behavior style.

Type          Subject  Clips
simple walk   8        1-11
cat           137      4
chicken       137      8
drunk         137      16
graceful      137      24
normal        137      29
old           137      33
sexy swagger  137      38
strong        137      42
Table 3: Fine-tuning phase network specifications.

Environment  Policy sizes     Discriminator sizes  Critic network sizes
Walker       (200, 100)       (100, 64)            (200, 100)
Humanoid     (300, 200, 100)  (300, 200)           (300, 200, 100)
Table 4: Fine-tuning phase hyper-parameter specifications.

Environment  No. of iterations  Initial policy std.  Discriminator learning rate  No. of discriminator update steps
Walker       30000              exp(-1)              1e-4                         10
Humanoid     100000             0.1                  1e-4                         10
See appendix for details.
References

[1] B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469-483, 2009.
[2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. Preprint arXiv:1701.07875, 2017.
[3] N. Baram, O. Anschel, and S. Mannor. Model-based adversarial imitation learning. Preprint arXiv:1612.02179, 2016.
[4] D. Berthelot, T. Schumm, and L. Metz. BEGAN: Boundary equilibrium generative adversarial networks. Preprint arXiv:1703.10717, 2017.
[5] A. Billard, S. Calinon, R. Dillmann, and S. Schaal. Robot programming by demonstration. In Springer Handbook of Robotics, pages 1371-1394, 2008.
[6] Y. Duan, M. Andrychowicz, B. Stadie, J. Ho, J. Schneider, I. Sutskever, P. Abbeel, and W. Zaremba. One-shot imitation learning. Preprint arXiv:1703.07326, 2017.
[7] I. Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. Preprint arXiv:1701.00160, 2016.
[8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, pages 2672-2680, 2014.
[9] A. Graves, S. Fernández, and J. Schmidhuber. Bidirectional LSTM networks for improved phoneme classification and recognition. In Artificial Neural Networks: Formal Models and Their Applications (ICANN 2005), pages 753-753, 2005.
[10] K. Hausman, Y. Chebotar, S. Schaal, G. Sukhatme, and J. Lim. Multi-modal imitation learning from unstructured demonstrations using generative adversarial nets. Preprint arXiv:1705.10479, 2017.
[11] R. D. Hjelm, A. P. Jacob, T. Che, K. Cho, and Y. Bengio. Boundary-seeking generative adversarial networks. Preprint arXiv:1702.08431, 2017.
[12] J. Ho and S. Ermon. Generative adversarial imitation learning. In NIPS, pages 4565-4573, 2016.
[13] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[14] A. Hussein, M. M. Gaber, E. Elyan, and C. Jayne. Imitation learning: A survey of learning methods. ACM Computing Surveys, 50(2):21, 2017.
[15] D. Kingma and M. Welling. Auto-encoding variational Bayes. Preprint arXiv:1312.6114, 2013.
[16] A. Kuefler, J. Morton, T. Wheeler, and M. Kochenderfer. Imitating driver behavior with generative adversarial networks. Preprint arXiv:1701.06699, 2017.
[17] Y. Li, J. Song, and S. Ermon. Inferring the latent structure of human decision-making from raw visual inputs. Preprint arXiv:1703.08840, 2017.
[18] T. Lillicrap, J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. Preprint arXiv:1509.02971, 2015.
[19] X. Mao, Q. Li, H. Xie, R. Y. K. Lau, Z. Wang, and S. P. Smolley. Least squares generative adversarial networks. Preprint arXiv:1611.04076, 2016.
[20] J. Merel, Y. Tassa, T. B. Dhruva, S. Srinivasan, J. Lemmon, Z. Wang, G. Wayne, and N. Heess. Learning human behaviors from motion capture by adversarial imitation. Preprint arXiv:1707.02201, 2017.
[21] M. Mirza and S. Osindero. Conditional generative adversarial nets. Preprint arXiv:1411.1784, 2014.
[22] U. Muico, Y. Lee, J. Popović, and Z. Popović. Contact-aware nonlinear control of dynamic characters. In SIGGRAPH, 2009.
[23] X. B. Peng, G. Berseth, K. Yin, and M. van de Panne. DeepLoco: Dynamic locomotion skills using hierarchical deep reinforcement learning. In SIGGRAPH, 2017.
[24] D. A. Pomerleau. Efficient training of artificial neural networks for autonomous navigation. Neural Computation, 3(1):88-97, 1991.
[25] G. J. Qi. Loss-sensitive generative adversarial networks on Lipschitz densities. Preprint arXiv:1701.06264, 2017.
[26] D. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
[27] S. Ross and A. Bagnell. Efficient reductions for imitation learning. In AIStats, 2010.
[28] S. Ross, G. J. Gordon, and D. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AIStats, 2011.
[29] A. Rusu, S. Colmenarejo, C. Gulcehre, G. Desjardins, J. Kirkpatrick, R. Pascanu, V. Mnih, K. Kavukcuoglu, and R. Hadsell. Policy distillation. Preprint arXiv:1511.06295, 2015.
[30] J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust region policy optimization. In ICML, 2015.
[31] M. Schuster and K. K. Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681, 1997.
[32] D. Sharon and M. van de Panne. Synthesis of controllers for stylized planar bipedal walking. In ICRA, pages 2387-2392, 2005.
[33] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic policy gradient algorithms. In ICML, 2014.
[34] K. W. Sok, M. Kim, and J. Lee. Simulating biped behaviors from human motion data. 2007.
[35] B. C. Stadie, P. Abbeel, and I. Sutskever. Third-person imitation learning. Preprint arXiv:1703.01703, 2017.
[36] L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. Preprint arXiv:1511.01844, 2015.
[37] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In IROS, pages 5026-5033, 2012.
[38] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu. WaveNet: A generative model for raw audio. Preprint arXiv:1609.03499, 2016.
[39] A. van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, and A. Graves. Conditional image generation with PixelCNN decoders. In NIPS, 2016.
[40] L. van der Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605, 2008.
[41] R. Wang, A. Cully, H. Jin Chang, and Y. Demiris. MAGAN: Margin adaptation for generative adversarial networks. Preprint arXiv:1704.03817, 2017.
[42] K. Yin, K. Loken, and M. van de Panne. SIMBICON: Simple biped locomotion control. In SIGGRAPH, 2007.
|
[] |
[
"Dynamics of Entanglement and 'Attractor' states in The Tavis-Cummings Model",
"Dynamics of Entanglement and 'Attractor' states in The Tavis-Cummings Model"
] |
[
"C E A Jarvis \nH H Wills Physics Laboratory\nUniversity of Bristol\nBS8 1TLBristolUnited Kingdom\n",
"D A Rodrigues \nSchool of Physics and Astronomy\nUniversity of Nottingham\nNG7 2RDNottinghamUnited Kingdom\n",
"B L Györffy \nH H Wills Physics Laboratory\nUniversity of Bristol\nBS8 1TLBristolUnited Kingdom\n",
"T P Spiller \nHewlett Packard Laboratories\nFilton RoadBS34 8QZBristolUnited Kingdom\n",
"A J Short \nDAMPT\nCenter of Mathematical Sciences\nWilberforce RoadCB3 0WACambridgeUnited Kingdom\n",
"J F Annett \nH H Wills Physics Laboratory\nUniversity of Bristol\nBS8 1TLBristolUnited Kingdom\n"
] |
[
"H H Wills Physics Laboratory\nUniversity of Bristol\nBS8 1TLBristolUnited Kingdom",
"School of Physics and Astronomy\nUniversity of Nottingham\nNG7 2RDNottinghamUnited Kingdom",
"H H Wills Physics Laboratory\nUniversity of Bristol\nBS8 1TLBristolUnited Kingdom",
"Hewlett Packard Laboratories\nFilton RoadBS34 8QZBristolUnited Kingdom",
"DAMPT\nCenter of Mathematical Sciences\nWilberforce RoadCB3 0WACambridgeUnited Kingdom",
"H H Wills Physics Laboratory\nUniversity of Bristol\nBS8 1TLBristolUnited Kingdom"
] |
[] |
We study the time evolution of N q two-level atoms (or qubits) interacting with a single mode of the quantised radiation field. In the case of two qubits, we show that for a set of initial conditions the reduced density matrix of the atomic system approaches that of a pure state at tr 4 , halfway between that start of the collapse and the first mini revival peak, where t r is the time of the main revival. The pure state approached is the same for a set of initial conditions and is thus termed an 'attractor state'. The set itself is termed the basin of attraction and the features are at the center of our attention. Extending to more qubits, we find that attractors are a generic feature of the multi qubit Jaynes Cummings model (JCM) and we therefore generalise the discovery by Gea-Banacloche for the one qubit case. We give the 'basin of attraction' for N q qubits and discuss the implications of the 'attractor' state in terms of the dynamics of N q -body entanglement. We observe both collapse and revival and sudden birth/death of entanglement depending on the initial conditions.I. INTRODUCTIONThe dynamics of two level quantum systems (also known as qubits) such as spins in a magnetic field, Rydberg atoms or superconducting qubits, coupled to a single mode of an electromagnetic cavity are of considerable interest in connection with NMR studies of atomic nuclei [1], cavity quantum electrodynamics [2] and the physics of quantum computing[3]. The simplest model that captures the salient features of the physics in these fields is the Jaynes-Cummings model (JCM)[4], which deals with only one qubit and its generalisation by Tavis and Cummings to the case of multiple qubits [5].These models have many interesting features [6], but perhaps the most surprising prediction is * Electronic address: [email protected]
|
10.1088/1367-2630/11/10/103047
|
[
"https://arxiv.org/pdf/0906.4005v1.pdf"
] | 119,098,182 |
0906.4005
|
3fe212547f390589e3e121192b12c0c9e5bb7d70
|
Dynamics of Entanglement and 'Attractor' States in the Tavis-Cummings Model

22 Jun 2009

C. E. A. Jarvis* (H H Wills Physics Laboratory, University of Bristol, Bristol BS8 1TL, United Kingdom)
D. A. Rodrigues (School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD, United Kingdom)
B. L. Györffy (H H Wills Physics Laboratory, University of Bristol, Bristol BS8 1TL, United Kingdom)
T. P. Spiller (Hewlett Packard Laboratories, Filton Road, Bristol BS34 8QZ, United Kingdom)
A. J. Short (DAMTP, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA, United Kingdom)
J. F. Annett (H H Wills Physics Laboratory, University of Bristol, Bristol BS8 1TL, United Kingdom)

* Electronic address: [email protected]

Abstract. We study the time evolution of N_q two-level atoms (or qubits) interacting with a single mode of the quantised radiation field. In the case of two qubits, we show that for a set of initial conditions the reduced density matrix of the atomic system approaches that of a pure state at t_r/4, halfway between the start of the collapse and the first mini revival peak, where t_r is the time of the main revival. The pure state approached is the same for a whole set of initial conditions and is thus termed an 'attractor state'. The set itself is termed the basin of attraction, and its features are at the centre of our attention. Extending to more qubits, we find that attractors are a generic feature of the multi-qubit Jaynes-Cummings model (JCM), and we therefore generalise the discovery by Gea-Banacloche for the one qubit case. We give the 'basin of attraction' for N_q qubits and discuss the implications of the 'attractor' state in terms of the dynamics of N_q-body entanglement. We observe both collapse and revival and sudden birth/death of entanglement depending on the initial conditions.

I. INTRODUCTION

The dynamics of two level quantum systems (also known as qubits) such as spins in a magnetic field, Rydberg atoms or superconducting qubits, coupled to a single mode of an electromagnetic cavity are of considerable interest in connection with NMR studies of atomic nuclei [1], cavity quantum electrodynamics [2] and the physics of quantum computing [3]. The simplest model that captures the salient features of the physics in these fields is the Jaynes-Cummings model (JCM) [4], which deals with only one qubit, and its generalisation by Tavis and Cummings to the case of multiple qubits [5]. These models have many interesting features [6], but perhaps the most surprising prediction is
the phenomenon of 'collapse and revival' of the Rabi oscillations in the qubit system when the field is started in a coherent state |α⟩ [7],

|α⟩ = e^{−|α|²/2} Σ_{n=0}^{∞} (α^n / √(n!)) |n⟩,  α = |α| e^{−iθ}.  (1)
Here the state |n⟩ is the eigenstate of the photon number operator n̂ = â†â with eigenvalue n, and θ is the initial phase of the radiation field. |α| determines the size of the field, with n̄ = ⟨α|n̂|α⟩ = |α|².
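As a quick numerical illustration of Eq. (1) (our own sketch, not part of the paper; the recursion, the cutoff n_max = 200 and the value α = √50 are our choices), the amplitudes C_n can be generated recursively and used to confirm the normalisation and the mean photon number n̄ = |α|²:

```python
import numpy as np

# Fock-space amplitudes of the coherent state |alpha>, Eq. (1):
# C_n = exp(-|alpha|^2/2) * alpha^n / sqrt(n!).
# Built recursively (C_n = C_{n-1} * alpha / sqrt(n)) to avoid overflow in n!.
alpha = np.sqrt(50)          # nbar = |alpha|^2 = 50, as in the paper's figures
n_max = 200                  # truncation, same cutoff as the paper's numerics
C = np.zeros(n_max + 1, dtype=complex)
C[0] = np.exp(-abs(alpha) ** 2 / 2)
for n in range(1, n_max + 1):
    C[n] = C[n - 1] * alpha / np.sqrt(n)

norm = np.sum(np.abs(C) ** 2)                          # should be ~1
nbar = np.sum(np.arange(n_max + 1) * np.abs(C) ** 2)   # should be ~|alpha|^2
```

The recursion also shows why the truncation n_max = 200 is adequate: the Poisson weights |C_n|² are negligible far above n̄ = 50.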
In short, what is found is that the initial Rabi oscillations of the probability of being in a given qubit state decay on a time scale called the collapse time, t_c, but then revive after a much longer time, t_r, due to the quantum 'graininess' of the radiation field [8, 9, 10]. Whilst the nature of this remarkable phenomenon in the case of one qubit is now well understood [6], both experimentally [2] and theoretically [11], the multi qubit case [5, 12] has not been explored to the same extent.

For instance, the occurrence between t_c and t_r of what we shall call the 'attractor' state has only been fully investigated in the one qubit limit. This intriguing aspect of quantum dynamics was discovered and highlighted by Gea-Banacloche [13, 14] in his study of the one qubit case. In this paper we report on our investigation of analogous 'attractors' in cases of more than one qubit.
The organization of this paper is as follows. In Sec. II, the dynamics of the resonant JCM (a single qubit coupled to a cavity) is presented. This sets up our notation, analytic tools and methods of data presentation. In Sec. III we extend the model to two qubits and show that the idea of an 'attractor' state still exists. We investigate the dynamics in depth and quantify the set of initial states that approach the 'attractor', giving the 'basin of attraction'. We show that the atomic system exhibits interesting entanglement properties, in particular that the entanglement can vanish and then return at a later time. Furthermore, the manner in which this occurs depends on whether we are in or outside this basin of attraction. In Sec. IV we develop these ideas further from those of the two-qubit case and find a set of initial states (the basin of attraction) from which the system always goes to the 'attractor' state for an arbitrary number of qubits. In the limit of large qubit number we rewrite our qubit equations in terms of spin coherent states and demonstrate some intriguing phenomena associated with the N_q-qubit JCM, especially the collapse and revival of N_q-body entanglement. Whilst our method of analysis, and some of our results, are similar to those of Chumakov et al. [15] and Meunier et al. [16], our focus is on the quantum information aspects of the problem and in particular the dynamics of entanglement.
II. FEATURES OF THE JAYNES-CUMMINGS MODEL
We first review the JCM and the appearance of 'attractor' states in the one qubit case [17, 18, 19].
The JCM consists of a single qubit coupled to a single mode of a quantised electromagnetic field. The two possible states of the qubit are the ground state |g⟩, with energy ε_g, or its excited state |e⟩, with energy ε_e. We consider the following Hamiltonian:

H = ω â†â + (Ω/2) σ_z + λ (â σ_+ + â† σ_−),  (2)

σ_z = |e⟩⟨e| − |g⟩⟨g|,  σ_+ = |e⟩⟨g|,  σ_− = |g⟩⟨e|,  (3)

where â† and â are the creation and annihilation operators of photons with frequency ω, Ω = ε_e − ε_g, and λ is the cavity-qubit coupling constant. The model is only valid close to resonance and we shall only consider the resonant case ω = Ω.
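To make the structure of Eq. (2) concrete, the following sketch (our own construction; the truncation n_ph = 30 and the parameter values are illustrative choices) builds the Hamiltonian in a truncated Fock basis and checks that it commutes with the excitation number N̂ = â†â + |e⟩⟨e|, the conservation law behind the exact solubility of the model:

```python
import numpy as np

n_ph = 30                      # Fock-space truncation (illustrative choice)
omega, lam = 1.0, 0.1          # resonant case omega = Omega; lam is the coupling

a = np.diag(np.sqrt(np.arange(1, n_ph)), 1)        # annihilation operator
ad = a.conj().T
sz = np.diag([1.0, -1.0])                          # |e><e| - |g><g|, Eq. (3)
sp = np.array([[0.0, 1.0], [0.0, 0.0]])            # sigma_+ = |e><g|
sm = sp.T                                          # sigma_- = |g><e|
I2, If = np.eye(2), np.eye(n_ph)

# H = omega a^dag a + (Omega/2) sigma_z + lam (a sigma_+ + a^dag sigma_-), Eq. (2)
# with ordering qubit (x) field
H = np.kron(I2, omega * ad @ a) + np.kron(omega / 2 * sz, If) \
    + lam * (np.kron(sp, a) + np.kron(sm, ad))

# Excitation number N = a^dag a + |e><e| is conserved in the RWA,
# so H splits into 2x2 blocks spanned by |e,n> and |g,n+1>
N = np.kron(I2, ad @ a) + np.kron(np.diag([1.0, 0.0]), If)
comm_norm = np.linalg.norm(H @ N - N @ H)
```

Because [H, N̂] = 0, the dynamics only mixes |e, n⟩ with |g, n + 1⟩, which is exactly the block structure of the solution quoted below.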
We have an initial state:
|Ψ_1(t = 0)⟩ = |ψ_1⟩|α⟩ = (C_g|g⟩ + C_e|e⟩) Σ_{n=0}^{∞} C_n|n⟩,  (4)
where the state of the qubit is normalised so |C_e|² + |C_g|² = 1 and the field is started in a coherent state. In the interaction picture (i.e. in a rotating reference frame) we obtain the solution:
|Ψ_1(t)⟩ = Σ_{n=0}^{∞} [ (C_e C_n cos(λ√(n+1) t) − i C_g C_{n+1} sin(λ√(n+1) t)) |e, n⟩ + (C_g C_{n+1} cos(λ√(n+1) t) − i C_e C_n sin(λ√(n+1) t)) |g, n+1⟩ ] + C_g C_0 |g, 0⟩.  (5)
In spite of its relative simplicity, Eq. (5) exhibits a wide range of interesting phenomena. A much studied example is the collapse and revival of Rabi oscillations. If the qubit is started in the ground state then initially the probability of the qubit being in the ground state oscillates from one to zero at the Rabi frequency. These oscillations then decay on a time scale called the collapse time, t_c = 2/λ, but then revive after a much longer time:

t_r = 2π√n̄ / λ.  (6)
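For the qubit started in |g⟩ (so C_e = 0, C_g = 1), Eq. (5) gives the ground-state probability P_g(t) = |C_0|² + Σ_{m≥1} |C_m|² cos²(λ√m t). The sketch below (our own numerics; λ = 1 and n̄ = 50 are illustrative choices matching the paper's figures) reproduces the collapse of the Rabi oscillations and their revival near t_r = 2π√n̄/λ:

```python
import numpy as np

lam, nbar, n_max = 1.0, 50.0, 300
t_r = 2 * np.pi * np.sqrt(nbar) / lam              # revival time, Eq. (6)

# Poisson photon statistics of the coherent state, p_n = |C_n|^2
n = np.arange(n_max + 1)
log_fact = np.cumsum(np.concatenate(([0.0], np.log(np.arange(1, n_max + 1)))))
p = np.exp(-nbar + n * np.log(nbar) - log_fact)

def P_g(t):
    # qubit started in |g>: P_g(t) = p_0 + sum_{m>=1} p_m cos^2(lam sqrt(m) t)
    return p[0] + np.sum(p[1:] * np.cos(lam * np.sqrt(n[1:]) * t) ** 2)

ts_quiet = np.linspace(0.35 * t_r, 0.45 * t_r, 400)   # between collapse and revival
ts_rev = np.linspace(0.90 * t_r, 1.10 * t_r, 400)     # around the revival
dev_quiet = max(abs(P_g(t) - 0.5) for t in ts_quiet)
dev_rev = max(abs(P_g(t) - 0.5) for t in ts_rev)
```

In the quiet window P_g stays pinned near 1/2, while around t_r the oscillations return with a much larger amplitude, as in Fig. 1.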
When n̄ ≫ λt we can approximate |Ψ_1(t)⟩ in Eq. (5) by |Ψ̄_1(t)⟩:

|Ψ̄_1(t)⟩ = β_{+1/2}(t) |D_{+1/2}(t)⟩ |Φ_{+1/2}(t)⟩ + β_{−1/2}(t) |D_{−1/2}(t)⟩ |Φ_{−1/2}(t)⟩,  (7)
where β_{±1/2}(t) is a normalisation factor, |D_{±1/2}(t)⟩ is a state of the qubit and |Φ_{±1/2}(t)⟩ is a state of the field:

β_{±1/2}(t) = (1/√2) e^{±iπ(t/t_r)(n̄+1)} (e^{iθ} C_e ∓ C_g),  (8)

|D_{±1/2}(t)⟩ = (1/√2) (e^{−iθ}|e⟩ ∓ e^{∓iπt/t_r}|g⟩),  (9)

|Φ_{±1/2}(t)⟩ = |e^{±iπt/t_r} α⟩.  (10)
With the solution written in this form it is clear there are two distinct parts to the wavefunction. In fact the field just consists of two different coherent states that are out of phase by λt/√n̄. In a penetrating study of this model Gea-Banacloche [14] noted that at a time t_r/2, in the very large n̄ approximation, the qubit disentangles itself completely from the field and the wavefunction factors into a product state of the radiation field and the qubit.
To make this more apparent we rewrite Eq. (8)-Eq. (10) at the time t = t_r/2:

β_{±1/2}(t_r/2) = ±(i/√2) e^{±iπn̄/2} (e^{iθ} C_e ∓ C_g),  (11)

|D_{±1/2}(t_r/2)⟩ = (1/√2) (e^{−iθ}|e⟩ + i|g⟩),  (12)

|Φ_{±1/2}(t_r/2)⟩ = |±iα⟩.  (13)
With the wavefunction written in this way it can be seen that the two states |D_{+1/2}(t_r/2)⟩ and |D_{−1/2}(t_r/2)⟩ are in fact the same at the time t_r/2. We label this state |ψ^+_{1att}⟩:

|ψ^±_{1att}⟩ = (1/√2) (e^{−iθ}|e⟩ ± i|g⟩).  (14)
This state is independent of the initial coefficients C_e and C_g and only depends on the phase θ of the initial coherent state, Eq. (1). Throughout this paper we follow Phoenix and Knight [20] and call this state an 'attractor' state. Note that we use the term 'attractor' to refer to |ψ^±_{1att}⟩ even though in the theory of non-linear differential equations the term attractor is reserved for solutions which, when reached after evolving from an arbitrary initial condition, stop changing with time. As a consequence the qubit state can be factorised out of the wavefunction, meaning the qubit and the field are in a product state:
|Ψ̄_1(t_r/2)⟩ = (|ψ^+_{1att}⟩/√2) [ i e^{iπn̄/2} (e^{iθ}C_e − C_g) |iα⟩ − i e^{−iπn̄/2} (e^{iθ}C_e + C_g) |−iα⟩ ].  (15)
It is clear that all the information about the initial state of the qubit is stored in the radiation field, which happens to be in a "Schrödinger cat" state. At a later time, t = 3t_r/2, the qubit again disentangles itself completely from the field and goes to a pure state. The qubit state at this time is orthogonal to |ψ^+_{1att}⟩ and we label it |ψ^−_{1att}⟩. This remarkable behaviour is most unlike the consequences of a linear Schrödinger equation satisfied by a qubit without the cavity field or with a classical field and, as was stressed by Gea-Banacloche, is a natural route to quantum state preparation. All initial qubit states tend to the same 'attractor' state (which is determined by the initial phase of the field state) and the quantum information about the initial qubit state has effectively been swapped out and encoded into the field at the 'attractor times'.
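The attractor behaviour is easy to check numerically from the exact solution, Eq. (5). The sketch below (our own check; θ = 0 and n̄ = 50 are illustrative choices) evolves two orthogonal initial qubit states, |e⟩ and |g⟩, forms the reduced qubit density matrix at t = t_r/2, and confirms that both are then nearly pure and nearly identical, as expected if both have been pulled to the same attractor state:

```python
import numpy as np

lam, nbar, n_max = 1.0, 50.0, 250
alpha = np.sqrt(nbar)                 # theta = 0
t_half = np.pi * np.sqrt(nbar) / lam  # t_r / 2

# coherent-state amplitudes C_n
C = np.zeros(n_max + 2)
C[0] = np.exp(-nbar / 2)
for m in range(1, n_max + 2):
    C[m] = C[m - 1] * alpha / np.sqrt(m)

def rho_qubit(Ce, Cg, t):
    # Exact solution Eq. (5): amplitudes on |e,n> and |g,m>, then trace the field
    n = np.arange(n_max + 1)
    w = lam * np.sqrt(n + 1) * t
    ae = Ce * C[n] * np.cos(w) - 1j * Cg * C[n + 1] * np.sin(w)      # <e,n|Psi>
    ag = np.zeros(n_max + 2, dtype=complex)                          # <g,m|Psi>
    ag[1:] = Cg * C[n + 1] * np.cos(w) - 1j * Ce * C[n] * np.sin(w)
    ag[0] = Cg * C[0]
    ree = np.sum(np.abs(ae) ** 2)
    rgg = np.sum(np.abs(ag) ** 2)
    reg = np.sum(ae * np.conj(ag[:n_max + 1]))
    rho = np.array([[ree, reg], [np.conj(reg), rgg]])
    return rho / np.trace(rho).real

rho_e = rho_qubit(1.0, 0.0, t_half)   # qubit started in |e>
rho_g = rho_qubit(0.0, 1.0, t_half)   # qubit started in |g>
purity_e = np.trace(rho_e @ rho_e).real
purity_g = np.trace(rho_g @ rho_g).real
overlap = np.trace(rho_e @ rho_g).real   # ~1 if both reached the same pure state
```

At t = t_r/2 both reduced states have high purity and a large mutual overlap, even though the two initial states were orthogonal.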
Before we move on to describe the results of numerical simulations we consider the field, Eq. (10), at two revival times, t = t_r and t = 2t_r. At both these times the field states |Φ_{±1/2}(t)⟩ become equal, with |Φ_{+1/2}(t_r)⟩ = |Φ_{−1/2}(t_r)⟩ = |−α⟩ and |Φ_{+1/2}(2t_r)⟩ = |Φ_{−1/2}(2t_r)⟩ = |α⟩. As a consequence the system is once again in a product state, as the field can be factorised out of the wavefunction for the whole state. The information about the initial state of the qubit has been transferred from the field back to the qubit.
By considering the average number of photons in the field, n̄, to be large, many interesting observations have been made of the JCM. It was shown that the system repeatedly evolves from a product state to an entangled state and back again. Although the analytical equations for large values of n̄ clearly show this behaviour, we now consider the exact solution |Ψ_1(t)⟩ of Eq. (5) for modest n̄ (n̄ = 50), and comment on the results of numerical simulations for the qubit and the photon sector separately.
We consider the state of the qubit after tracing out the field state. The probability of the qubit being in the state |g⟩ is

P_{1g}(t) = ⟨g|ρ_q(t)|g⟩ = Σ_{n=0}^{∞} |⟨g, n|Ψ_1(t)⟩|²,  (16)
where ρ_q(t) = Tr_f(|Ψ_1(t)⟩⟨Ψ_1(t)|) is the reduced density matrix of the qubit at a given time, with the field traced out. Throughout this paper all instances of the subscript q refer to the qubit system and f refers to the field. The results are depicted in Fig. 1 for the initial state |Ψ_1(t = 0)⟩ = |g⟩|α⟩, and the well known phenomenon of collapse and revival can be seen. For numerical reasons the sum over n is truncated at n_max = 200, as this is adequate for the average number of photons in the mode of n̄ = 50.
An entropy measurement of the system can be made at any point in time. Entropy is a well defined quantity that measures the disorder of a system and the purity of a quantum state. In cases where the time-dependent Schrödinger equation (as opposed to some master equation for reduced dynamics) determines the time evolution of a system, the value of the entropy remains constant. The complete systems (qubits plus field) we study are always started in a pure state and follow Schrödinger dynamics; therefore the entropy of the complete systems always remains zero as we have a pure state. In these systems the individual entropies, of the qubits or the field, are of more interest. We calculate the partial entropy from the reduced density matrix ρ q using the rescaled von Neumann entropy [21,22],
S_q(t) = −Tr(ρ_q(t) log₂ ρ_q(t)) / N_q,  (17)
which is rescaled such that the entropy of the qubits ranges from zero, for a pure state, to one, for a maximally mixed state. For the bipartite case (the one qubit, N_q = 1, JCM) this reduces to the standard von Neumann entropy. As was discovered by Araki and Lieb [23], if the system is started in a pure state then S_q(t) = S_f(t) for all time, where S_f(t) is the entropy of the field system calculated from the reduced density matrix ρ_f(t) = Tr_q(|Ψ_1(t)⟩⟨Ψ_1(t)|). Therefore for the rest of the paper we shall refer to the value of S_q(t) only when we mention entropy.
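The rescaled entropy of Eq. (17) and the Araki-Lieb equality S_q(t) = S_f(t) for a globally pure state can be illustrated on a small example (our own toy state, chosen only so that the reduced density matrices are easy to read off):

```python
import numpy as np

def entropy(rho, N_q=1):
    """Rescaled von Neumann entropy of Eq. (17): S = -Tr(rho log2 rho) / N_q."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]                      # convention: 0 log 0 -> 0
    return float(-np.sum(ev * np.log2(ev)) / N_q)

# Toy bipartite pure state |chi> = sqrt(p)|e,0> + sqrt(1-p)|g,1>
# written as a matrix psi[qubit, field]
p = 0.3
psi = np.zeros((2, 2))
psi[0, 0] = np.sqrt(p)
psi[1, 1] = np.sqrt(1 - p)

rho_q = psi @ psi.conj().T       # Tr_f |chi><chi|
rho_f = psi.conj().T @ psi       # Tr_q |chi><chi|

S_q = entropy(rho_q)
S_f = entropy(rho_f)
S_pure = entropy(np.diag([1.0, 0.0]))      # pure state -> 0
S_mixed = entropy(np.eye(2) / 2)           # maximally mixed qubit -> 1
```

For any globally pure state the two partial entropies agree, which is why the text can speak of "the" entropy S_q(t) of the system.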
In quantum information these partial entropies provide a measure of the entanglement present between the two systems, known as the entropy of entanglement [24, 25]. If the entropy is zero then the state being measured is pure and there is no entanglement present. If the value of the entropy is non zero then there is some entanglement present between the two parts of the system and the reduced density matrix is mixed. Returning to the JCM we note that the 'attractor' state, Eq. (14), is a pure state and hence the value S_q(t_r/2) will be zero. So the qubit has disentangled itself from the field and there is no entanglement present at this time. The reduced entropy is a powerful tool in understanding entanglement in the JCM and it is worthwhile recalling an example of its power to elucidate the quantum dynamics [20, 26].
We have numerically evaluated S_q for the qubit started in the initial state |Ψ_1(t = 0)⟩ = |g⟩|α⟩ using the exact solution for |Ψ_1(t)⟩, and the results are shown in Fig. 1 by the curve labelled (a). We also plot the probability P_{1att}(t) = ⟨ψ^+_{1att}|ρ_q(t)|ψ^+_{1att}⟩ that the qubit is in the 'attractor' state |ψ^+_{1att}⟩, as defined in Eq. (14), shown by the curve labelled (c). Evidently at the time t_r/2 the reduced entropy goes very close to zero and P_{1att}(t_r/2) goes very close to one. As mentioned above, this means there is very little field-qubit entanglement present and the qubit is essentially in the 'attractor' state |ψ^+_{1att}⟩. At 3t_r/2 the probability of being in this 'attractor' state approaches zero, as the qubit is in the state |ψ^−_{1att}⟩, which is orthogonal to |ψ^+_{1att}⟩. These results all agree with the discussion of Eq. (15). At future 'attractor' times the system alternates between |ψ^+_{1att}⟩ and |ψ^−_{1att}⟩. A big difference between the numerical results and the large-n̄ solution is noticed at the time t = t_r.

Earlier we showed that the solution in the large-n̄ approximation will return to a pure state at this time, but in Fig. 1 the entropy has a large value of 0.7, wedged between two maxima (indicating large entanglement between qubit and field). This is because the width of this dip is on a time scale similar to the collapse time, much narrower than the entropy dip at t_r/2. As the value of n̄ is increased in numerical simulations, the value of the entropy at t = t_r gets closer to zero, but we see that even for fairly large coherent states the large-n̄ approximation can deviate significantly from the full solution.
So far we have reviewed the time evolution of the reduced qubit density matrix in the one-qubit, one-mode system. Before we move on to address the two-qubit case we calculate the Q function [27] to investigate the radiation field,

Q(α, t) = ⟨α|ρ_f(t)|α⟩.  (18)
This is shown in Fig. 2 in the rotating reference frame for the exact solution as contour plots.
We give an example of a 3D plot of Fig. 2(b) in Fig. 3, but for the rest of the paper we will use the 2D contour plots.
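Eq. (18) is straightforward to evaluate from the Fock representation. As a sanity check (our own sketch; α₀ = 2 and a 40-photon cutoff are illustrative choices), the Q function of a coherent state |α₀⟩ should be the Gaussian |⟨α|α₀⟩|² = e^{−|α−α₀|²}, peaked at α = α₀:

```python
import numpy as np

n_max = 40

def coh(alpha):
    """Fock amplitudes of |alpha>, Eq. (1)."""
    c = np.zeros(n_max + 1, dtype=complex)
    c[0] = np.exp(-abs(alpha) ** 2 / 2)
    for m in range(1, n_max + 1):
        c[m] = c[m - 1] * alpha / np.sqrt(m)
    return c

alpha0 = 2.0
rho_f = np.outer(coh(alpha0), coh(alpha0).conj())   # field state |a0><a0|

def Q(alpha):
    # Q(alpha) = <alpha| rho_f |alpha>, Eq. (18)
    # (the 1/pi Husimi normalisation is omitted, as in the text)
    v = coh(alpha)
    return (v.conj() @ rho_f @ v).real

Q_peak = Q(2.0)               # centre of the wavepacket
Q_far = Q(-2.0)               # far side of phase space, ~exp(-16)
Q_side = Q(2.0 + 1j)          # one unit away, ~exp(-1)
```

The single Gaussian 'blob' seen here is what splits into two counter-rotating Gea-Banacloche wavepackets once the qubit is coupled in.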
The Q function shows a set of two wavepackets of the cavity field, each of which can be thought of as a Gea-Banacloche state [14] in the large-n̄ solution. As a function of time the two 'blobs' move around the complex plane and follow the circular path of radius √n̄. These 'blobs' start to smear out and become distorted from the original coherent state. Although the wavepackets all begin in the same place, they evolve with different frequencies, so the states begin to separate.

After a period of time, depending on the differences between the frequencies, the different 'blobs' are separated by more than their diameter and can be easily distinguished. Eventually these two wavepackets will again overlap, and this occurs at the revival time, t = t_r. When these 'blobs' have some overlap in phase space there will be Rabi oscillations in the qubit probability, giving the periodic sequence of revivals seen in Fig. 1. However, as can be seen in Fig. 2(d), at the revival time, t_r, the 'blobs' have undergone some distortion and so the wave packets are no longer simple coherent states. This distortion elongates the wave packets, and so they overlap for a longer time than in the ideal coherent state case. We therefore understand the lack of a full amplitude revival, P_{1g}(t_r) = 1, as due to the spreading, and hence distinguishability, of the field states. When the value of n̄ gets very large there is less distortion and the Gea-Banacloche states stay exact coherent states throughout their evolution.
Also included in Fig. 2 is an arrow for each 'blob' which represents the average atomic polarisation, d⃗_{±1/2}(t) = ⟨D_{±1/2}(t)| σ⃗ |D_{±1/2}(t)⟩.

III. THE TWO QUBIT CASE

The N_q qubit Hamiltonian, which in this section we treat for N_q = 2 so i = 1, 2, is a generalisation of Eq. (2). It is called the Tavis-Cummings Hamiltonian [5]:

H = ω â†â + Σ_{i=1}^{N_q} (Ω_i/2) σ_z^{(i)} + Σ_{i=1}^{N_q} λ_i (â σ_+^{(i)} + â† σ_−^{(i)}).  (19)
For brevity we only consider the solution for the case λ_1 = λ_2 = λ and ω = Ω_1 = Ω_2. This model has been studied in detail by Chumakov et al. [28] and the large-n̄ solution has been found [15, 16], with similar results to our analysis. Adding another qubit to the system yields a more complex system which can exhibit interesting entanglement properties, as not only can the two qubits be entangled with the field, but they can also be entangled between themselves.
We have investigated the evolution of the system over time, with analytical solutions in the large-n̄ approximation [29] and the numerically evaluated exact solution. To summarise these we start with the initial state:

|Ψ_2(t = 0)⟩ = |ψ_2⟩|α⟩ = (C_ee|ee⟩ + C_eg|eg⟩ + C_ge|ge⟩ + C_gg|gg⟩)|α⟩,  (20)
where the state is normalised so |C_ee|² + |C_eg|² + |C_ge|² + |C_gg|² = 1. Following the same procedure as in the one qubit case we solve the Hamiltonian and approximate for large n̄. The time evolution of this state takes the following form:

|Ψ̄_2(t)⟩ = Σ_{k=−1}^{1} β_k(t) |D_k(t)⟩ ⊗ |Φ_k(t)⟩,  (21)
where k = −1, 0 or 1 and:

β_{±1}(t) = (1/2) e^{±2iπ(t/t_r)(n̄+2)} (e^{2iθ}C_ee ∓ e^{iθ}(C_eg + C_ge) + C_gg),  (22)

β_0 = (1/√2) √( |e^{2iθ}C_ee − C_gg|² + |C_eg − C_ge|² ),  (23)

|D_0(t)⟩ = |D_0⟩ = [ (e^{2iθ}C_ee − C_gg)(e^{−2iθ}|ee⟩ − |gg⟩) + (C_eg − C_ge)(|eg⟩ − |ge⟩) ] / [ √2 √( |e^{2iθ}C_ee − C_gg|² + |C_eg − C_ge|² ) ],  (24)

|D_{±1}(t)⟩ = (1/2) [ e^{−2iθ}|ee⟩ ∓ e^{∓2iπt/t_r} e^{−iθ} (|eg⟩ + |ge⟩) + e^{∓4iπt/t_r}|gg⟩ ]  (25)

= |D_{±1/2}(2t)⟩ ⊗ |D_{±1/2}(2t)⟩,  (26)

|Φ_k(t)⟩ = |e^{2ikπt/t_r} α⟩.  (27)
Note that the state |D_0(t)⟩ = |D_0⟩ is time-independent, and is thus a maximally entangled qubit state for all times if C_eg = C_ge [29]. The field states are called Gea-Banacloche states and once again have a specific qubit state associated with each of them.

By making the observation shown in Eq. (26) it is easy to see that the two qubit states |D_{±1}(t)⟩ are direct products of the one qubit states |D_{±1/2}(t)⟩ up to a mapping t → 2t. This can be used to predict the behaviour of |D_{±1}(t)⟩. As stated in Sec. II, |D_{±1/2}(t_r/2)⟩ = |ψ^+_{1att}⟩; therefore we can conclude that |D_{±1}(t)⟩ will go to a direct product of |ψ^+_{1att}⟩ at t = t_r/4:
|D_{±1}(t_r/4)⟩ = |D_{±1/2}(t_r/2)⟩ ⊗ |D_{±1/2}(t_r/2)⟩  (28)
= |ψ^+_{1att}⟩ ⊗ |ψ^+_{1att}⟩  (29)
= |ψ^+_{2att}⟩.  (30)
So we have found a term that is similar to the one qubit 'attractor' state, |ψ^+_{1att}⟩, which we shall label the two qubit 'attractor' state, |ψ^+_{2att}⟩:

|ψ^±_{2att}⟩ = (1/2) [ e^{2iθ}|ee⟩ ± i e^{iθ}(|eg⟩ + |ge⟩) − |gg⟩ ].  (31)
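The chain of equalities (28)-(30) can be verified directly. The sketch below (our own check; the field phase θ = 0 is an illustrative choice) builds |ψ^+_{1att}⟩ from Eq. (14), takes its tensor product with itself, and compares the result with the two qubit attractor of Eq. (31):

```python
import numpy as np

theta = 0.0
e, g = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# one qubit attractor, Eq. (14), with the + sign
psi1 = (np.exp(-1j * theta) * e + 1j * g) / np.sqrt(2)

# two qubit attractor, Eq. (31), with the + sign, in the basis (ee, eg, ge, gg)
psi2 = 0.5 * np.array([np.exp(2j * theta),
                       1j * np.exp(1j * theta),
                       1j * np.exp(1j * theta),
                       -1.0])

prod = np.kron(psi1, psi1)                    # Eqs. (28)-(30)
diff = np.linalg.norm(prod - psi2)            # ~0 for theta = 0
```

For θ = 0 the two constructions agree exactly; for other phases they agree up to the phase conventions of Eqs. (14) and (31).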
In the one qubit case the qubit system went to a second 'attractor' state, |ψ^−_{1att}⟩, at the time 3t_r/2, which means that |D_{±1}(3t_r/4)⟩ = |ψ^−_{1att}⟩ ⊗ |ψ^−_{1att}⟩ = |ψ^−_{2att}⟩.
A. Basin of Attraction
The two qubit JCM is more complicated than the original one qubit case and shows some very interesting properties. Firstly, to understand the two qubit case in more detail, we rewrite the wavefunction for the whole system at the time t = t_r/4:

|Ψ̄_2(t_r/4)⟩ = −(1/2) |ψ^+_{2att}⟩ [ (e^{2iθ}C_ee + C_gg)(e^{iπn̄/2}|iα⟩ + e^{−iπn̄/2}|−iα⟩) − e^{iθ}(C_eg + C_ge)(e^{iπn̄/2}|iα⟩ − e^{−iπn̄/2}|−iα⟩) ] + β_0|D_0⟩|α⟩.  (32)
With the wavefunction written in this form we can see which initial conditions result in the entropy going to zero. Indeed, when β_0 = 0 the qubit and field parts are each in a product state, so there is no qubit-resonator entanglement present. There is also no qubit-qubit entanglement, as the qubits are in the unentangled 'attractor' state, and this means there is absolutely no entanglement in the system at t = t_r/4. Interestingly, the information on the initial state of the qubits is again stored in the radiation field, and the qubits are in a spin coherent 'attractor' state (see appendix A).

The initial conditions of the qubits that result in β_0 = 0 are e^{iθ}C_ee = e^{−iθ}C_gg = a and C_eg = C_ge = √(1/2 − |a|²). So the initial state of the qubits will be of the form:
|ψ_2⟩ = a (e^{−iθ}|ee⟩ + e^{iθ}|gg⟩) + √(1/2 − |a|²) (|eg⟩ + |ge⟩),  (33)

where θ is the initial phase of the radiation field and 0 ≤ |a| ≤ 1/√2. These states lie in the symmetric subspace and have some interesting properties that will be discussed in Sec. III B.
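For a pure two-qubit state the tangle reduces to the square of the concurrence, τ = (2|C_ee C_gg − C_eg C_ge|)². Using this shortcut (ours; the mixed-state formula of Sec. III B is not needed for pure states), the sketch below checks the properties of the basin Eq. (33) quoted in the text: normalisation for any admissible a, product states exactly at a = ±1/2, and maximal entanglement at a = e^{iφ}/√2 and at a = ir:

```python
import numpy as np

def basin_state(a, theta=0.0):
    """Basin-of-attraction state of Eq. (33) in the basis (ee, eg, ge, gg)."""
    s = np.sqrt(0.5 - abs(a) ** 2)
    return np.array([a * np.exp(-1j * theta), s, s, a * np.exp(1j * theta)])

def pure_tangle(psi):
    # tangle of a pure two-qubit state: tau = (2|C_ee C_gg - C_eg C_ge|)^2
    return (2 * abs(psi[0] * psi[3] - psi[1] * psi[2])) ** 2

for a in [0.2, 0.5, 1 / np.sqrt(2), 0.3j]:
    assert abs(np.linalg.norm(basin_state(a)) - 1) < 1e-12   # normalised for any a

tau_product = pure_tangle(basin_state(0.5))            # a = 1/2 -> product state
tau_max1 = pure_tangle(basin_state(1 / np.sqrt(2)))    # a = 1/sqrt(2) -> tau = 1
tau_max2 = pure_tangle(basin_state(0.37j))             # a = ir -> tau = 1, any r
```

For a basin state τ = (2|a² − (1/2 − |a|²)|)², which makes the special points quoted in Sec. III B easy to verify by hand as well.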
While the qubit state is within this basin of attraction its state is parameterized by the single complex variable a, and the wavefunction only contains two field states, |Φ_{±1}(t)⟩. At the time t_r/4 these two field states are macroscopically distinct coherent states:

|Ψ̄_2(t_r/4)⟩ = −e^{iθ} |ψ^+_{2att}⟩ [ e^{iπn̄/2} (a − √(1/2 − |a|²)) |iα⟩ + e^{−iπn̄/2} (a + √(1/2 − |a|²)) |−iα⟩ ].  (34)
We now see that the initial state of the qubits alters the relative proportions of |iα⟩ with respect to |−iα⟩. This is also seen in the one qubit case and is the mechanism whereby the quantum information of the initial qubit state is encoded in the field. State preparation of a Schrödinger cat state utilizing this phenomenon has been discussed [14, 30]. With the two qubit case the same protocol can be used, but it is not so useful, as the initial qubit state has to be within the basin of attraction, in contrast to the one qubit case where all initial qubit states lead to a Schrödinger cat state at the 'attractor' times.
There is also a connection between the field states |Φ_{±1}(t)⟩ in the two qubit case and the field states |Φ_{±1/2}(t)⟩ from the one qubit case: |Φ_{±1}(t)⟩ = |Φ_{±1/2}(2t)⟩. Again there is a mapping t → 2t. In Sec. II it was shown that the field and qubit become unentangled at t_r and 2t_r, as |Φ_{+1/2}(t_r)⟩ = |Φ_{−1/2}(t_r)⟩ and |Φ_{+1/2}(2t_r)⟩ = |Φ_{−1/2}(2t_r)⟩. Using this information from the one qubit case we can predict that |Φ_{+1}(t_r/2)⟩ = |Φ_{−1}(t_r/2)⟩ and |Φ_{+1}(t_r)⟩ = |Φ_{−1}(t_r)⟩, which is indeed the case. As a consequence the field part of the wavefunction can be factorised out and there is no entanglement present between the qubits and the field at the times t_r/2 and t_r.

We now consider the numerical results to test the analytical predictions and see how the system behaves when n̄ = 50, shown in Fig. 4. We consider the probability of the qubits being in the state |gg⟩ and the probability P_{2att}(t) = ⟨ψ^+_{2att}|ρ_q(t)|ψ^+_{2att}⟩ that |ψ^+_{2att}⟩ occurs at a particular time, for two sets of initial conditions, one satisfying β_0 = 0 (Fig. 4(b)) and one not (Fig. 4(a)). Evidently the pattern of collapse and revival is more complicated than in the one qubit case [31], but nevertheless we still see characteristic dips in the entropy. The value of the entropy at t_r/4 is not the same in the two cases shown in Fig. 4, and in fact they show two completely different behaviours. In Fig. 4(a) we notice that the dip in the entropy goes to a value of 0.35, very different to the value of the entropy in Fig. 1, but in Fig. 4(b) we do indeed find that the entropy is very small at t_r/4. The initial state used for Fig. 4(a) is not in the basin of attraction, whereas the initial state used for Fig. 4(b) is.
A second dip in the entropy can be seen in Fig. 4 at the time t = 3t_r/4 which, in the large-n̄ approximation, goes to zero. The probability of being in the 'attractor' state |ψ^+_{2att}⟩ goes to zero at this time. The qubits have disentangled themselves from the field again, so at this point they are again in a pure state, the orthogonal 'attractor' state |ψ^−_{2att}⟩. It can be seen from the analytic solution that, in the large-n̄ approximation, the field and qubits also disentangle themselves at the time t_r, for all initial conditions of the qubits, and also at t_r/2 if the initial qubit state is in the basin of attraction. This is not seen in the diagrams in Fig. 4; instead the values of the entropy at these times are fairly high, which means the numerically evaluated exact system is far from the analytic results in the large-n̄ approximation. This is similar to the one qubit case and can be explained by considering the dynamics of the field.
We can once again plot the Q function for the numerical solution to display pictorially what is occurring in the cavity field. The values of Q(α, t) in the complex α plane at fixed times are shown in Fig. 5 when the qubits are initially in the state |gg⟩, which is outside the basin of attraction.
This time the Q function shows a set of three wavepackets of the cavity field (compared to the two for the one-qubit case, Fig. 2). As a function of time the three 'blobs' separate, but only two move around the complex plane and follow the circular path of radius √n̄. Again the arrows show the dipole moment of the qubit state associated with each wavepacket, d⃗_k(t) = ⟨D_k(t)| J⃗_2 |D_k(t)⟩. The 'blob' that does not move over time has a dipole moment of zero, so only the two moving Gea-Banacloche states have an arrow.
For the one-qubit case the revival time occurred when the two wavepackets overlapped. In this case we have two 'blobs' that move around phase space and overlap at the time t = t_r/2, leaving the stationary wavepacket at the initial position. This causes a rather weak revival peak, as only two of the three wavepackets overlap. Moving from the one qubit case to the two qubit case involves a mapping of t → 2t, and a similar mapping can be used to understand the equations for more qubits.
B. Collapse and Revival of Entanglement
An interesting new feature of the two-qubit case, as opposed to the one-qubit case, is that there can be entanglement present between the two qubits themselves, not just between the qubits and the field. So far we have used the entropy to measure the amount of entanglement between the field and the qubit system, as the whole system is in a pure state. Unfortunately this measure cannot be used to calculate the entanglement between the two qubits, as they are sometimes in a mixed state themselves. To find the amount of entanglement between the two qubits we use the mixed state tangle τ, defined as [32, 33]:

τ = [max{λ_1 − λ_2 − λ_3 − λ_4, 0}]².  (35)
The λ's are the square roots of the eigenvalues, in decreasing order, of the matrix ρ_q (σ_y ⊗ σ_y) ρ*_q (σ_y ⊗ σ_y), where σ_y = i(σ_− − σ_+) (see Eq. (3)) and ρ*_q represents the complex conjugate of ρ_q in the |e⟩, |g⟩ basis. A tangle of unity indicates that there is maximum entanglement between the two qubits. If the tangle takes the value of zero, then there is no entanglement present between the two qubits. To clarify: the entropy will be used to measure the amount of entanglement between the field and the qubit system, whereas the tangle will be used to measure the entanglement between the qubits.
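The recipe of Eq. (35) can be implemented in a few lines. The sketch below (our own implementation) computes the tangle of an arbitrary two-qubit density matrix and checks it on a Bell state (τ = 1) and a product state (τ = 0):

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])          # sigma_y = i(sigma_- - sigma_+)
YY = np.kron(sy, sy)

def tangle(rho):
    """Mixed-state tangle, Eq. (35): tau = [max(l1 - l2 - l3 - l4, 0)]^2,
    where l_i are the square roots of the eigenvalues of
    rho (sy x sy) rho* (sy x sy), in decreasing order."""
    R = rho @ YY @ rho.conj() @ YY
    ev = np.linalg.eigvals(R).real           # eigenvalues are real and >= 0
    l = np.sqrt(np.clip(ev, 0, None))
    l = np.sort(l)[::-1]
    return max(l[0] - l[1] - l[2] - l[3], 0.0) ** 2

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)               # (|ee> + |gg>)/sqrt(2)
product = np.array([1, 0, 0, 0], dtype=complex)          # |ee>
tau_bell = tangle(np.outer(bell, bell.conj()))           # maximally entangled
tau_prod = tangle(np.outer(product, product.conj()))     # unentangled
```

For pure states this reduces to the squared concurrence, so it agrees with the shortcut used for the basin states above.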
In this subsection we focus on the initial states inside the basin of attraction. The value of the tangle for the states given by Eq. (33) can be seen in Fig. 6. This diagram shows only two points where τ = 0, at a = ±1/2, which demonstrates that there are only two product states in the 'basin of attraction'. In contrast, there are many points at which τ = 1: a = e^{iφ}/√2, where φ is an arbitrary phase, or a = ir, where r is a real number. This means that all possible levels of entanglement are represented in the basin of attraction, Eq. (33).
Whilst almost all states in the basin of attraction given in Eq. (33) describe entangled states, these all evolve into |ψ^+_{2att}⟩ at t = t_r/4 and |ψ^−_{2att}⟩ at t = 3t_r/4, at which times they are pure and unentangled. So if the system is started off in one of these entangled states defined by Eq. (33), at the time t_r/4 the system goes to a state which has zero entanglement. Therefore we have gone from having entanglement in the system to having absolutely no entanglement, either between the qubits and the resonator or between the qubits themselves. At a later time this entanglement returns to the system and can be said to 'revive'. The tangle returns to its initial value at t_r/2 and t_r, regardless of the phase of the coherent state. This can be seen when we consider the wavefunction for the system at this time:
|Ψ̄_2(t_r/2)⟩ = −(1/2)|−α⟩ [ e^{iπn̄} (a − √(1/2 − |a|²)) (e^{−iθ}|ee⟩ + |eg⟩ + |ge⟩ + e^{iθ}|gg⟩) + e^{−iπn̄} (a + √(1/2 − |a|²)) (e^{−iθ}|ee⟩ − |eg⟩ − |ge⟩ + e^{iθ}|gg⟩) ].  (36)
As stated in Sec. III A, this state shows no entanglement between the qubits and the field, and therefore the entropy of the reduced qubit density matrix will be zero. The value of the tangle for the qubit state at t = t_r/2 is found to be exactly the same as the tangle for the initial state of the qubits, therefore τ(0) = τ(t_r/2). We have called this phenomenon 'Collapse and Revival of Entanglement' [34]. Analogous to the single qubit case, the quantum information in the initial state of the qubits (for states within the basin of attraction) is swapped out and encoded into the field state at the 'attractor' times. Calculations demonstrate that the value of the tangle remains near zero for long periods between revivals. Therefore we are dealing with a phenomenon which has been described as the 'death of entanglement' by Yu and Eberly [35, 36], which forms the centre of much current interest [37]. Qing et al. [38] have found a similar collapse and revival for the same model we have studied here but for very different initial conditions. Even just within the system we consider here, detailed studies reveal interesting links between the dynamics of the disappearance (and reappearance) of entanglement and the basin of attraction, as we discuss in this and the subsequent subsection.
So if the qubit system starts in the basin of attraction with τ ≠ 0, over time the initial entanglement in the qubits is exchanged for entanglement between the qubits and the field, and then for a superposition of field states. At the time of the 'attractor' state, t_r/4, there is absolutely no entanglement present in the system. Then the process starts to reverse: firstly there is again entanglement between the qubits and the field, and at the two qubit revival time, t_r/2, τ returns to its initial value and only qubit entanglement is present.
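The full collapse-and-revival cycle described above can be reproduced numerically. The sketch below (our own simulation of the resonant, equal-coupling Tavis-Cummings model of Eq. (19) in the interaction picture; λ = 1, n̄ = 50, the Fock cutoff and the sampling windows are illustrative choices) starts the qubits in the basin state with a = 1/√2, i.e. the Bell state (|ee⟩ + |gg⟩)/√2, and tracks the tangle through the collapse at t_r/4 and the revival near t_r/2:

```python
import numpy as np

lam, nbar = 1.0, 50.0
n_ph = 180                                  # Fock cutoff, ample for Poisson(50)
t_r = 2 * np.pi * np.sqrt(nbar) / lam

a = np.diag(np.sqrt(np.arange(1, n_ph)), 1)
sp = np.array([[0.0, 1.0], [0.0, 0.0]])
I2, If = np.eye(2), np.eye(n_ph)

# interaction-picture Tavis-Cummings Hamiltonian: H = lam (a J+ + a^dag J-)
Jp = np.kron(np.kron(sp, I2), If) + np.kron(np.kron(I2, sp), If)
A = np.kron(np.kron(I2, I2), a)
H = lam * (A @ Jp + A.conj().T @ Jp.T)

evals, V = np.linalg.eigh(H)

# initial state: basin state a = 1/sqrt(2), theta = 0, field |alpha>, alpha = sqrt(nbar)
c = np.zeros(n_ph)
c[0] = np.exp(-nbar / 2)
for m in range(1, n_ph):
    c[m] = c[m - 1] * np.sqrt(nbar) / np.sqrt(m)
qub = np.zeros(4)
qub[0] = qub[3] = 1 / np.sqrt(2)            # (ee, eg, ge, gg)
psi0 = np.kron(qub, c)

sy = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sy, sy)

def tangle_at(t):
    psi = V @ (np.exp(-1j * evals * t) * (V.conj().T @ psi0))
    M = psi.reshape(4, n_ph)                # qubit index x field index
    rho = M @ M.conj().T                    # reduced two-qubit density matrix
    R = rho @ YY @ rho.conj() @ YY
    l = np.sort(np.sqrt(np.clip(np.linalg.eigvals(R).real, 0, None)))[::-1]
    return max(l[0] - l[1] - l[2] - l[3], 0.0) ** 2

tau0 = tangle_at(0.0)                       # initial Bell state
tau_att = tangle_at(t_r / 4)                # attractor time: entanglement collapsed
ts_win = np.linspace(0.45 * t_r, 0.55 * t_r, 21)
tau_rev = max(tangle_at(t) for t in ts_win) # first entanglement revival
```

The tangle starts at one, collapses to essentially zero by the attractor time t_r/4, and partially revives around t_r/2, in line with Fig. 7.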
The actual extent of the entanglement revival for the exact solution can be seen in Fig. 7(a), where we show the tangle for different initial states within the basin of attraction. On first glance we see that the amount of tangle present at t = t_r/2 is nowhere near the initial amount of tangle present at t = 0, which is far from the prediction in the large-n̄ approximation. The value of the entropy at t_r/2 is around 0.5 for the case when a = 1/√2, rather than the predicted value of zero, Fig. 7(b). In order for there to be maximum possible entanglement in the system there has to be the least amount of entropy, so there is a trade-off of tangle for entropy [25].
Another very interesting feature illustrated in Fig. 7(a) is the manner in which the entanglement vanishes. For all states within the basin of attraction, although the initial entanglement decays rapidly (before reviving at t_r/2 and later times), it does so with a Gaussian envelope (as for the Rabi oscillation collapse). It therefore goes smoothly to zero; this is reflected in the further observation that the eigenvalue expression in Eq. (35) never goes negative, so the max operation never needs to be invoked for the tangle. This can be explained by considering the rank of the matrix used to calculate the tangle, ρ_q(σ_y ⊗ σ_y)ρ*_q(σ_y ⊗ σ_y). The rank of a matrix is equal to its number of non-zero eigenvalues. The matrix used to calculate the tangle for this system has rank at most 2 when the qubits start in the basin of attraction, so λ_3 = λ_4 = 0. By the definition of the tangle, λ_1 ≥ λ_2, and Eq. (35) becomes:
τ = (λ_1 − λ_2)² ≥ 0.    (37)
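As a concrete check of this rank argument, the tangle can be evaluated numerically from the square roots of the eigenvalues of ρ_q(σ_y ⊗ σ_y)ρ*_q(σ_y ⊗ σ_y). The following is a minimal NumPy sketch (our own illustration, not code from the original work); the function and state names are ours:

```python
import numpy as np

def tangle(rho):
    """Tangle tau = C^2 from the square roots of the eigenvalues of
    rho (sigma_y x sigma_y) rho* (sigma_y x sigma_y), sorted descending."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    r = rho @ yy @ rho.conj() @ yy
    # clip guards against tiny negative eigenvalues from floating-point noise
    lam = np.sort(np.sqrt(np.clip(np.linalg.eigvals(r).real, 0, None)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3]) ** 2

# (|ee> + |gg>)/sqrt(2): maximally entangled, so the tangle is 1
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_bell = np.outer(bell, bell.conj())
print(tangle(rho_bell))  # ≈ 1.0
```

For a rank-2 density matrix only λ_1 and λ_2 are non-zero, so the max operation in the sketch is never triggered, in line with Eq. (37).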
This characteristic 'Collapse and Revival' behaviour is to be contrasted with behaviour where entanglement vanishes or appears with finite gradient and where the max operation is needed in Eq. (35) because the eigenvalue expression does go negative. To distinguish these behaviours, we use the terminology 'sudden death/birth' of entanglement [35] for the latter. This phenomenon does not occur for initial qubit states within the basin of attraction (which all show collapse and revival of entanglement) but can occur for those outside, as will be demonstrated in the next subsection.

FIG. 7 (Bottom): 'Collapse and Revival of Entanglement' for many different initial qubit states in the basin of attraction with n̄ = 50, shown in different colours according to the initial entanglement measure. The entanglement in the qubits is initially lost and then returns at the first revival peak, t = t_r/2. As the initial state is in the basin of attraction, the whole system has zero entanglement at t = t_r/4, so there is not even entanglement between the field and the qubits. The amount of entanglement therefore varies a great deal over time.
C. Outside the Basin of Attraction
The previous subsections discussed interesting features of the entanglement dynamics of the two-qubit JCM for initial conditions inside the basin of attraction. In this subsection we highlight interesting features that emerge in all other cases, when the qubit Hilbert space is not restricted and, for example, terms such as b|eg⟩ + √(1 − |b|²)|ge⟩, where b is a complex number, are present in the initial state, as well as unequal superpositions of |gg⟩ and |ee⟩.
To date, other works have found that the two-qubit JCM can exhibit an interesting phenomenon called 'Sudden Death of Entanglement' for certain initial conditions of the qubits. Some investigations start with the qubits and the field in a mixed state and others do not [38]. In our work we consider the case when the radiation field starts in a coherent state. Qubit states inside the basin of attraction demonstrate collapse and revival of entanglement, but not sudden death or birth. However, if we venture outside the basin of attraction and include other initial conditions of the two-qubit system, we find that there is indeed sudden death of entanglement.
This means that we do not impose the condition β_0 = 0 in Eq. (23). Without this restriction the qubit system will not go towards the 'attractor' state at t_r/4 and 3t_r/4 in the large-n̄ approximation, as β_0 ≠ 0. Furthermore, the entropy will be non-zero at these times, as the qubit state cannot be factorised out of the wavefunction. However, the system will still have zero entropy at the 'full' revival time t_r, as |Φ_{−1}(t_r)⟩ = |Φ_0(t_r)⟩ = |Φ_{+1}(t_r)⟩ = |α⟩ and the field state can be factorised out of the wavefunction. As before, τ(0) = τ(t_r), so the amount of entanglement between the qubits is the same at the main revival time as at t = 0. For n̄ = 50 this revival will not be complete, and so the tangle will not return to its initial value. Fig. 8 shows examples of the qubit entanglement dynamics for initial states outside the basin of attraction, for the numerically evaluated exact solution. In this figure a different measure of qubit entanglement, the concurrence ζ = √τ, has been used for direct comparison with previous work [36,37,39].
Remarkably, Fig. 8 shows that a field initially in a coherent state can give sudden death and birth of entanglement, with initial qubit states of various forms lying outside the basin of attraction.
A case where the initial conditions are in the basin of attraction is shown for reference. Clearly, peaks in the entanglement measure arise (with death/birth at finite gradient) at both t_r/4 and 3t_r/4, times when there would be no entanglement if the initial qubit state were in the basin of attraction.
There are still peaks at the times t_r/2 and t_r, but these have a lower maximum and now exhibit birth/death behaviour rather than Gaussian collapse and revival. One initial condition that yields 'Sudden Death of Entanglement' is shown in Fig. 8 alongside an entropy measurement.
To recap, we find that our system can display both sudden birth/death of entanglement and Gaussian collapse and revival of entanglement, depending on the initial conditions. Furthermore, we understand that, in this system, sudden death occurs when λ_1 − λ_2 − λ_3 − λ_4 becomes negative, so that, due to the max operation used in Eq. (35), the tangle goes to zero with a finite gradient. In contrast, within the basin of attraction there are only two non-zero eigenvalues, λ_1 ≥ λ_2, so the tangle can only display a smooth collapse.
These additional peaks in the qubit entanglement seen in Fig. 8 are due to the inclusion of |D_0⟩ and |Φ_0(t)⟩ in the wavefunction. From the analysis in Sec. III we showed that |D_{±1}(t_r/4)⟩ = |ψ⁺_{2 att}⟩, the unentangled two-qubit 'attractor' state, so all entanglement present at t = t_r/4 is due to |D_0⟩. If |D_0⟩ is a product state then there will be no entanglement in the qubit system. However, even if |D_0⟩ is entangled there need not be qubit entanglement, as a mixture of entangled and product states need not be entangled. For example, curve (b) in the top diagram of Fig. 8 shows zero concurrence at t_r/2, even though |D_0⟩ contains some entanglement.
IV. N_q QUBIT JAYNES-CUMMINGS MODEL
Having found a sense in which the one-qubit 'attractor' state has analogues for two qubits, it is natural to ask whether a similar generalisation holds for more than two qubits. We use the Hamiltonian shown in Eq. (19) for N_q qubits. We have found that there are indeed cases where the system goes to a state similar to the one-qubit 'attractor' state, and we have a general form for the basin of attraction for N_q qubits.
In the large-n̄ approximation the state of the system takes the following form [16]:

|Ψ_{N_q}(t)⟩ = Σ_{k=−N_q/2}^{N_q/2} β_k(t) |D_k(t)⟩ ⊗ |Φ_k(t)⟩.    (38)
|D_k(t)⟩ has a complicated form, given in the paper by Meunier et al. [16]. For this work we are most concerned with the states that appear in the expression for the basin of attraction. These take the following simple form:

|D_{±N_q/2}(t)⟩ = |D_{±1/2}(N_q t)⟩ ⊗ … ⊗ |D_{±1/2}(N_q t)⟩    (39)
             = |D_{±1/2}(N_q t)⟩^{⊗N_q}.    (40)
Note that:

|Φ_k(t)⟩ = |e^{iπkN_q t/t_r} α⟩,    (41)
and we can use this equation to understand the simple dynamics of the N_q solution. The field initially starts in a coherent state that then splits into N_q + 1 different 'blobs', and a revival peak occurs every time at least two of these wavepackets overlap. The strength of the revival is determined by the number of wavepackets that overlap, and these revival peaks occur at definite intervals. Using Eq. (41) we can write a general form for the revival times:

(k + 1/p) t_r,  (k + (p − 1)/p) t_r,    p = 1, 2, ..., N_q and k = 1, 2, ..., ∞.    (42)
Eq. (41) also shows that when there is an even number of qubits the field returns to its initial state |α⟩ after a time t_r. The time until the wavepackets return to their original positions is determined by the slowest-moving wavepackets, which are |e^{±2iπt/t_r} α⟩ in the even-qubit case and |e^{±iπt/t_r} α⟩ in the odd-qubit case. When N_q is odd the wavepackets return to their starting position after a time 2t_r, and there is never a stationary wavepacket at the start position.
A sketch of the Q function of the field for three qubits is shown in Fig. 9 for large values of n̄, and we once again show the dipole moments as arrows. Using the |D_k⟩ of reference [16] we can compute a general expression for the dipole moments in the large-n̄ approximation, shown below.
d⃗_k = ⟨D_k| J⃗_{N_q} |D_k⟩ = ⟨D_k| σ⃗_{N_q} |D_k⟩ = |k| [± cos(θ + 2|k|πt/t_r) î + sin(θ + 2|k|πt/t_r) ĵ]    (43)
where σ⃗_{N_q} = σ̂ˣ_{N_q} î + σ̂ʸ_{N_q} ĵ + σ̂ᶻ_{N_q} k̂ is the Pauli vector for N_q qubits and

σ̂^{x,y,z}_{N_q} = Σ_{i=1}^{N_q} σ̂^{x,y,z}_i.    (44)
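A minimal numerical sketch of these collective operators (our own illustration, built with NumPy Kronecker products) constructs σ^{x,y,z}_{N_q} as in Eq. (44) and checks that the single-site commutation relation [σˣ_i, σʸ_i] = 2iσᶻ_i carries over to the sums:

```python
import numpy as np
from functools import reduce

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]])
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
ID = np.eye(2, dtype=complex)

def collective(op, nq):
    """sigma^op_{N_q} = sum_i sigma^op_i, Eq. (44), acting on nq qubits."""
    total = np.zeros((2 ** nq, 2 ** nq), dtype=complex)
    for i in range(nq):
        total += reduce(np.kron, [op if j == i else ID for j in range(nq)])
    return total

nq = 3
sx, sy, sz = (collective(p, nq) for p in (SX, SY, SZ))
# cross-site terms commute, so [sx, sy] = 2i sz survives the sum over sites
print(np.allclose(sx @ sy - sy @ sx, 2j * sz))  # True
```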
We notice again that the pattern of the arrows changes over time and that the wavepackets return to their original starting positions after a time 2t_r.
A. Basin of Attraction
Using the above experience with one and two qubits, we investigate whether a basin of attraction exists for N_q > 2. Using the expansion pioneered by Chumakov et al. [15] and developed
independently by Meunier et al. [16], we have indeed found a basin of attraction for N_q qubits. As |D_{±1/2}(t_r/2)⟩ = |ψ⁺_{1 att}⟩ and using the observation made in Eq. (40), we can conclude that |D_{±N_q/2}(t)⟩ will go to a direct product of |ψ⁺_{1 att}⟩ at t = t_r/(2N_q). If only the |D_{±N_q/2}(t)⟩ terms are present in the wavefunction, the qubits will therefore go to |ψ⁺_{N_q att}⟩ at t = t_r/(2N_q). Using this information we have found a basin of attraction for N_q qubits.
|D_{±N_q/2}(t_r/(2N_q))⟩ = |D_{±1/2}(t_r/2)⟩^{⊗N_q}    (45)

|ψ_{N_q}⟩ = Σ_{m=−N_q/2}^{N_q/2} A(N_q, a, m) e^{−i(N_q/2 − m)θ} √( N_q! / [ (N_q/2 + m)! (N_q/2 − m)! ] ) |N_q, m⟩

A(N_q, a, m) = a                                  if N_q/2 − m is even
             = e^{iφ} √( 1/2^{N_q−1} − |a|² )     if N_q/2 − m is odd    (48)
As before, φ is an arbitrary phase and θ is the initial phase of the radiation field, with 0 ≤ |a| ≤ 1/√(2^{N_q−1}). The basin of attraction lies in the symmetric subspace, and |N_q, m⟩ is the symmetrised state with N_e excited qubits, N_g qubits in the ground state, and m = (N_e − N_g)/2.
For example, in the three-qubit case, |N_q = 3, m = 1/2⟩ = (1/√3)(|eeg⟩ + |ege⟩ + |gee⟩), as N_e = 2 and N_g = 1. We note that although the state space of the qubits increases with increasing N_q, the basin of attraction is still parameterised by the single complex value a. The significance of this will become apparent in Sec. IV B.
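The symmetrised states |N_q, m⟩ can be built explicitly by summing over all placements of the excitations. The sketch below is our own illustration (the dicke helper is our naming, not from the original work), and it reproduces the three-qubit example above:

```python
import numpy as np
from itertools import permutations

E, G = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # |e>, |g>

def dicke(nq, m):
    """Symmetrised state |nq, m> with nq/2 + m excited qubits: an
    equal-weight sum over all excitation placements, then normalised."""
    ne = round(nq / 2 + m)
    state = np.zeros(2 ** nq)
    for pattern in set(permutations([1] * ne + [0] * (nq - ne))):
        term = np.array([1.0])
        for bit in pattern:
            term = np.kron(term, E if bit else G)
        state += term
    return state / np.linalg.norm(state)

s = dicke(3, 0.5)  # (|eeg> + |ege> + |gee>)/sqrt(3)
print(np.nonzero(s)[0])  # basis indices 1, 2, 4 (|e> mapped to index 0)
```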
We emphasise that when the qubits start in the basin of attraction there are only two wavepackets in the phase space of the photon field for all time. The initial conditions restrict the system to behave like the one-qubit case, as is highlighted by considering the wavefunction at t = t_r/(2N_q):

|Ψ_{N_q}(t_r/(2N_q))⟩ = |ψ⁺_{N_q att}⟩ ⊗ [ β_{N_q/2}(t_r/(2N_q)) |iα⟩ + β_{−N_q/2}(t_r/(2N_q)) |−iα⟩ ]    (49)
In the one-qubit case the qubit system also goes to a second 'attractor' state |ψ⁻_{1 att}⟩ at the time 3t_r/2; correspondingly, |ψ⁻_{1 att}⟩^{⊗N_q} = |ψ⁻_{N_q att}⟩. Qubits started in the basin of attraction, Eq. (48), will go to these 'attractor' states and hence the entropy will go to zero.
We can construct a series of times at which these 'attractor' states occur for N_q qubits, shown in Table I together with the revival times from Eq. (42). The general equation for these times is:

(k + (2p − 1)/(2N_q)) t_r,    p = 1, 2, ..., N_q and k = 1, 2, ..., ∞.    (50)
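The revival and 'attractor' time series can be enumerated directly. The following sketch is our own illustration (taking k = 0, which keeps the times within the first interval 0 < t ≤ t_r); for N_q = 3 it reproduces the Table I-style values quoted in the text:

```python
from fractions import Fraction

def revival_times(nq):
    """Revival times (as fractions of t_r) in 0 < t <= t_r, from
    t = (k + 1/p) t_r and t = (k + (p-1)/p) t_r with k = 0, p = 1..nq."""
    ts = set()
    for p in range(1, nq + 1):
        ts.add(Fraction(1, p))
        ts.add(Fraction(p - 1, p))
    ts.discard(Fraction(0))  # (p-1)/p = 0 for p = 1 lies outside the interval
    return sorted(ts)

def attractor_times(nq):
    """'Attractor' times in 0 < t <= t_r: t = (k + (2p-1)/(2 nq)) t_r, k = 0."""
    return sorted(Fraction(2 * p - 1, 2 * nq) for p in range(1, nq + 1))

print(revival_times(3))    # t/t_r values 1/3, 1/2, 2/3, 1
print(attractor_times(3))  # t/t_r values 1/6, 1/2, 5/6
```

For N_q = 2 the attractor times come out as t_r/4 and 3t_r/4, matching the two-qubit discussion above.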
In Fig. 10 we show the three- and four-qubit evolution starting from a state in the basin of attraction (Eq. (48)). In general we see that there are dips in the entropy that go close to zero while the probability of being in the 'attractor' state goes close to one. These features become more pronounced as n̄ is increased. As for the two-qubit case, when this probability goes to zero the system is in the second 'attractor' state, |ψ⁻_{N_q att}⟩, which is orthogonal to the first.
B. Collapse and Revival of 'Schrödinger Cat' States
Surprisingly, the basin of attraction has a simple form when written in terms of spin coherent states, which are analogous to the coherent states of a photon field. As described in Appendix A, such states are given by:

|z, N_q⟩ = (1 + |z|²)^{−N_q/2} Σ_{m=−N_q/2}^{N_q/2} √( N_q! / [ (N_q/2 − m)! (N_q/2 + m)! ] ) z^{(N_q/2 − m)} |N_q, m⟩    (51)
        = (1 + |z|²)^{−N_q/2} (|e⟩ + z|g⟩)^{⊗N_q}    (52)
where |N_q, m⟩ is defined in Sec. IV A. A spin coherent state is a product state, as is easily seen when it is written in the Majorana representation [40], Eq. (52). Rewriting Eq. (48) as a superposition of two spin coherent states gives the wavefunction at t = 0, when the qubits are in the basin of attraction:

|Ψ_{N_q}(0)⟩ = |ψ_{N_q}⟩|α⟩ = [ √(2^{N_q−2}) (a + √(1/2^{N_q−1} − |a|²)) |z = e^{−iθ}, N_q⟩ + √(2^{N_q−2}) (a − √(1/2^{N_q−1} − |a|²)) |z = −e^{−iθ}, N_q⟩ ] |α⟩    (53)
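The product form of Eq. (52) makes these states easy to build numerically. The sketch below is our own illustration: it constructs |z, N_q⟩ and verifies that the two cat components |z = e^{−iθ}, N_q⟩ and |z = −e^{−iθ}, N_q⟩ are orthogonal:

```python
import numpy as np

def spin_coherent(z, nq):
    """|z, nq> from the product form of Eq. (52): (|e> + z|g>)^{x nq}, normalised."""
    state = np.array([1.0], dtype=complex)
    for _ in range(nq):
        state = np.kron(state, np.array([1.0, z], dtype=complex))
    return state / (1 + abs(z) ** 2) ** (nq / 2)

theta = 0.7  # an arbitrary field phase for the demonstration
plus = spin_coherent(np.exp(-1j * theta), 4)
minus = spin_coherent(-np.exp(-1j * theta), 4)
# per-factor overlap 1 + conj(z)(-z) = 1 - |z|^2 = 0 for |z| = 1
print(abs(np.vdot(plus, minus)))  # ~0: the two cat components are orthogonal
```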
From Eq. (52) it is clear that |z = e^{−iθ}, N_q⟩ is orthogonal to |z = −e^{−iθ}, N_q⟩, so |ψ_{N_q}⟩ is a collection of spin 'Schrödinger cat' states. We also write the state of the system at the time of the first dip in the entropy for an arbitrary number of qubits, t = t_r/(2N_q):
|Ψ_{N_q}(t_r/(2N_q))⟩ = |ψ⁺_{N_q att}⟩ ⊗ [ √(2^{N_q−2}) (a − √(1/2^{N_q−1} − |a|²)) e^{iπn̄/2} |iα⟩ − √(2^{N_q−2}) (a + √(1/2^{N_q−1} − |a|²)) e^{−iπn̄/2} |−iα⟩ ]    (54)
To illustrate this migration we use a function for the qubits, Q_q, similar to the field Q function, defined using the completeness relation in Appendix A:

Q(α, t) = ⟨α|ρ_f(t)|α⟩ / π,    Q_q(z, t) = ⟨z|ρ_q(t)|z⟩ / (1 + |z|²)²    (55)
The function Q_q(z, t) corresponding to |ψ_{N_q}⟩ in Eq. (53) is depicted in Fig. 11. There are indeed two separate peaks, corresponding to two macroscopically different spin coherent states.
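The two peaks can be reproduced with a few lines of NumPy. The sketch below is our own illustration, using Q_q(z) = ⟨z|ρ_q|z⟩/(1 + |z|²)²: for a spin cat state the function is large at the two component values z = ±z_0 and small in between:

```python
import numpy as np

def spin_coh(z, nq):
    """Normalised spin coherent state (|e> + z|g>)^{x nq}."""
    s = np.array([1.0], dtype=complex)
    for _ in range(nq):
        s = np.kron(s, np.array([1.0, z], dtype=complex))
    return s / (1 + abs(z) ** 2) ** (nq / 2)

def q_spin(z, rho, nq):
    """Spin Q function Q_q(z) = <z|rho|z> / (1 + |z|^2)^2."""
    v = spin_coh(z, nq)
    return np.vdot(v, rho @ v).real / (1 + abs(z) ** 2) ** 2

nq = 8
cat = (spin_coh(1.0, nq) + spin_coh(-1.0, nq)) / np.sqrt(2)  # spin cat state
rho = np.outer(cat, cat.conj())
# pronounced, equal peaks at the two cat components, little weight at z = 0
print(q_spin(1.0, rho, nq), q_spin(-1.0, rho, nq), q_spin(0.0, rho, nq))
```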
The 'attractor' states for N_q qubits are also spin coherent states (see Appendix A):

|ψ^±_{N_q att}⟩ = (1/√(2^{N_q})) (e^{−iθ}|e⟩ ± i|g⟩) ⊗ ... ⊗ (e^{−iθ}|e⟩ ± i|g⟩) = (1/√(2^{N_q})) (e^{−iθ}|e⟩ ± i|g⟩)^{⊗N_q} = |z = ±ie^{iθ}, N_q⟩    (56)
In Fig. 12 we plot the spin Q_q function for the 'attractor' state.
So far we have shown, for large n̄, that the qubit states in the basin of attraction are spin Schrödinger cat states (except when a = ±1/√(2^{N_q}), which give product states). At a later time, t = t_r/(2N_q), the qubits are in the spin coherent 'attractor' state and the field is in a Schrödinger cat state. The qubits then return to a spin Schrödinger cat state at the time t = t_r/N_q. This is shown in Table II. We conclude that the 'Schrödinger cat' state migrates from the qubits to the field and back again.
TABLE II: the location of the 'Schrödinger cat' state at t = 0 (qubits), t = t_r/(2N_q) (field), and t = t_r/N_q (qubits).
C. Collapse and Revival of Entanglement
Whilst the basin of attraction for the N_q-qubit case is a very small part of the Hilbert space of all initial states, it contains many different degrees of entanglement, just like the two-qubit case. Some values of a and φ give a product state of the qubits and other values give some degree of entanglement between them. For more than two qubits there is no universally accepted measure of entanglement; indeed, for N_q ≥ 4 and multilevel systems there exist infinitely many inequivalent kinds of entanglement [41]. GHZ-type entanglement exists for all numbers of qubits and, up to local unitaries, a maximally entangled GHZ state has the form:
|GHZ⟩ = (1/√2) (|b⟩^{⊗N_q} + |b^⊥⟩^{⊗N_q})    (57)
where ⟨b|b^⊥⟩ = 0. In the previous section it was stated that the basin of attraction can be rewritten as a spin 'Schrödinger cat' state, a superposition of two macroscopically distinguishable states. Expanding the states in the basin of attraction using Eq. (52), we see that they have GHZ form:
|ψ_{N_q}⟩ = (1/2) (a + √(1/2^{N_q−1} − |a|²)) (|e⟩ + e^{−iθ}|g⟩)^{⊗N_q} + (1/2) (a − √(1/2^{N_q−1} − |a|²)) (|e⟩ − e^{−iθ}|g⟩)^{⊗N_q}    (58)
although it is only a maximal GHZ state when a = 0 and a = 1/√(2^{N_q−1}). For three qubits there are two different types of entanglement, GHZ [42] and W [43]. |GHZ⟩ = (1/√2)(|eee⟩ + |ggg⟩) can be regarded as a maximally entangled state, but if one of the qubits is traced out the remaining state has no entanglement at all. The entanglement of |W⟩ = (1/√3)(|eeg⟩ + |ege⟩ + |gee⟩) is maximally robust and is not as fragile to particle loss (there is still entanglement present after tracing out any individual qubit). It is not possible to transform |GHZ⟩ into |W⟩, even with a small probability of success, and vice versa; the two states thus represent fundamentally different types of entanglement. In particular, the GHZ state represents a truly 3-body entanglement. Similarly, any N_q-qubit GHZ state contains specifically N_q-body entanglement.
We have used the pure-state GHZ entanglement measure introduced by Coffman et al. [44]:

τ_ABC = C²_{A(BC)} − C²_{AC} − C²_{AB}    (59)
where τ_ABC is the amount of GHZ entanglement, C²_{A(BC)} = 4 det ρ_A, and C²_{AC} is the squared concurrence after qubit B has been traced out. The results for different values of a in the three-qubit version of Eq. (48) are shown in Fig. 13. This measure only works for three-qubit states that are pure, so we only perform the calculation on the initial state. The plot has the same shape as Fig. 6, but the values of a are now limited to a smaller range. On the basis of Figs. 6 and 13, and using Eq. (58), we conjecture that for more qubits the basin of attraction will always contain maximally entangled states involving all the qubits and two unentangled states, but that the size of the basin will shrink as the number of qubits increases.
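This measure is straightforward to evaluate for pure three-qubit states. The sketch below is our own illustration (using the Wootters concurrence and partial traces via einsum); it recovers the standard values τ_ABC = 1 for |GHZ⟩ and τ_ABC = 0 for |W⟩:

```python
import numpy as np

SY = np.array([[0, -1j], [1j, 0]])
YY = np.kron(SY, SY)

def concurrence(rho):
    """Wootters concurrence of a two-qubit (possibly mixed) density matrix."""
    r = rho @ YY @ rho.conj() @ YY
    lam = np.sort(np.sqrt(np.clip(np.linalg.eigvals(r).real, 0, None)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def three_tangle(psi):
    """tau_ABC = C^2_A(BC) - C^2_AB - C^2_AC for a pure 3-qubit state psi."""
    t = psi.reshape(2, 2, 2)
    rho_a = np.einsum('abc,dbc->ad', t, t.conj())                  # trace B, C
    rho_ab = np.einsum('abc,dec->abde', t, t.conj()).reshape(4, 4)  # trace C
    rho_ac = np.einsum('abc,dbf->acdf', t, t.conj()).reshape(4, 4)  # trace B
    return (4 * np.linalg.det(rho_a).real
            - concurrence(rho_ab) ** 2 - concurrence(rho_ac) ** 2)

ghz = np.zeros(8); ghz[[0, 7]] = 1 / np.sqrt(2)  # (|eee> + |ggg>)/sqrt(2)
w = np.zeros(8); w[[1, 2, 4]] = 1 / np.sqrt(3)   # W state
print(three_tangle(ghz), three_tangle(w))  # ~1 and ~0
```

The W state's entanglement shows up entirely in the pairwise concurrences (each 2/3), so its three-tangle vanishes, which is exactly the distinction between the two entanglement classes described above.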
In the large-n̄ approximation it was shown in Sec. III B that if a state starts in the basin of attraction then τ(0) = τ(t_r/4). This extends to the three-qubit case, with τ_ABC(0) = τ_ABC(t_r/6). Therefore, if the initial qubit state is in the basin of attraction and τ_ABC ≠ 0, the whole entanglement will return to the qubit system at t = t_r/6. To see whether 'Collapse and Revival' of entanglement also occurs in the exact solution for three qubits, we start the qubits in an initial state, |ψ_3⟩, that is a maximally entangled GHZ state (τ_ABC = 1).
We then calculated the probability P_init(t) = ⟨ψ_3|ρ_q(t)|ψ_3⟩ that the qubits are in the initial state. If this probability collapses and revives over time then we can conclude that the three-qubit system also exhibits 'Collapse and Revival' of entanglement. The results are shown in Fig. 14. The qubits are started in a maximally entangled GHZ state (τ_ABC = 1) that lies in the basin of attraction and are then allowed to evolve. At a later time the entropy goes close to zero and the qubits are close to the product form shown in Eq. (47). This is a completely unentangled state, and as the entropy is also zero in the large-n̄ approximation, there is no entanglement in the entire system. After the 'attractor' time the entropy increases and is no longer close to zero. Following the green curve in Fig. 14, we note that P_init(t) increases to a value near 0.8, and therefore
we propose that some three-qubit mixed-state entanglement is present in the qubit system. In the large-n̄ approximation this value does go to unity, so the system returns to the initial state up to a phase. In this approximation we therefore necessarily get a full revival of three-body GHZ entanglement.
As further confirmation that the entanglement is of 3-body form, we also calculated the entanglement present between just two of the qubits after tracing out the third. We found that the tangle between any two qubits remained zero throughout the evolution. We can therefore conclude that the entanglement present is purely GHZ entanglement with no W component (a W component would lead to non-zero two-qubit entanglement).
We have demonstrated the revival of entanglement for the numerically evaluated exact solution for two and three qubits, and shown it to hold for all values of N_q in the large-n̄ approximation. Of particular note is that the type of entanglement we observe is of the GHZ form and is thus a true N_q-qubit entanglement that cannot be reduced to a simpler form. Remarkably, this implies that the quantum information encoded in an N_q-qubit GHZ entangled state, for any value of N_q, can be encoded into the state of a single field mode.
V. CONCLUSION
We have studied the 'Collapse and Revival of Entanglement' in the two-, three- and many-qubit JCM. We have shown that the 'attractor' state discovered by Gea-Banacloche [13] for one qubit does indeed have counterparts in the multi-qubit cases described by the Tavis-Cummings model.
Unlike the one-qubit case, the qubits have to be within a basin of attraction in order for the qubit system to disentangle itself from the field at certain times. When the qubits start in a state in the basin of attraction, the system behaves in a similar way to the one-qubit case.
The basin of attraction contains mostly entangled states, and there are interesting entanglement properties associated with the whole system when the qubits start in this basin. As the system evolves there is entanglement between the field and the qubit system, but when the system reaches the 'attractor' state all entanglement has completely left the system: the field and the qubits are both in pure states, with the qubit state being the completely unentangled 'attractor' state. At a later time the entanglement returns to the system, and the process then repeats. We presented explicit examples of entanglement between the qubits and also between the qubit system and the field. We have called this phenomenon 'Collapse and Revival of Entanglement'. For initial qubit states outside the basin of attraction, sudden death and rebirth [37] of entanglement can occur.
APPENDIX A: SPIN COHERENT STATES
There exist spin states analogous to the coherent state of the harmonic oscillator. For the one-dimensional harmonic oscillator the coherent states are functions of a variable α which ranges over the entire complex plane:

|α⟩ = (1/N) e^{αa†} |0⟩    (A1)
    = (1/N) Σ_{n=0}^{∞} (αa†)^n / n! |0⟩    (A2)
    = e^{−|α|²/2} Σ_{n=0}^{∞} (α^n / √(n!)) |n⟩    (A3)
where the normalisation factor is N = e^{|α|²/2}.
These states form a complete set, in the sense that [7]

(1/π) ∫ |α⟩⟨α| d²α = Σ_{n=0}^{∞} |n⟩⟨n| = 1    (A4)
where the right hand side is the unit matrix, or identity operator.
To find the analogue for spin-half particles we consider a single particle of spin S = N_q/2. We define the ground state |0⟩ as the state such that Ŝ_z|0⟩ = S|0⟩, where Ŝ_z = (1/2) Σ_{i=1}^{N_q} σ̂ᶻ_i is the operator of the z component of spin. The operator Ŝ₊ = Ŝ_x + iŜ_y then creates spin deviations. In a similar way to the field coherent state, the spin coherent state for many spin-half particles is [45,46,47]:

|z, N_q⟩ = (1/N) e^{zŜ₊} |0⟩    (A5)
        = (1 + |z|²)^{−N_q/2} Σ_{m=−N_q/2}^{N_q/2} √( N_q! / [ (N_q/2 − m)! (N_q/2 + m)! ] ) z^{(N_q/2 − m)} |N_q, m⟩    (A6)
        = (1 + |z|²)^{−N_q/2} (|e⟩ + z|g⟩)^{⊗N_q}    (A7)

where N = (1 + |z|²)^{N_q/2}. In our work we present the 'attractor' states for many qubits in Eq. (56) as spin coherent states.
These states are indeed spin coherent states: setting z = ie^{iθ} in Eq. (A7) gives |ψ⁺_{N_q att}⟩, and setting z = −ie^{iθ} gives |ψ⁻_{N_q att}⟩. For the coherent state of a harmonic oscillator there exists a set of states, called Schrödinger cat states, formed from the addition of two macroscopically different coherent states, e.g. (1/√2)(|α⟩ + |−α⟩). There also exist spin coherent states analogous to these Schrödinger cat states, again defined as the sum of two macroscopically different states, i.e. (1/√2)(|z, N_q⟩ + |−z, N_q⟩). In our work we have plotted Q function diagrams for the cavity field state using the coherent states. An analogous function exists for the qubits, defined in terms of spin coherent states using the completeness relation Eq. (A8). This spin Q function is given by

Q_q(z, t) = ⟨z|ρ_q(t)|z⟩ / (1 + |z|²)²    (A9)
We have used this in our analysis to make interesting plots which demonstrate pictorially that the states in the basin of attraction for multi-qubit systems are indeed Schrödinger cat states of the qubits.
ACKNOWLEDGMENTS
The work of C.E.A.J. was supported by UK HP/EPSRC case studentship, and D.A.R. was supported by a UK EPSRC fellowship. We thank W. J. Munro for helpful discussions.
FIG. 1: (color online) Time evolution for a system with one qubit when the qubit starts in the initial state |g⟩ and n̄ = 50. (a) the entropy of the qubit. (b) the probability of being in the qubit's initial state |g⟩. (c) the probability of being in the state |ψ⁺ₐₜₜ⟩. At t_r/2 the probability of being in the 'attractor' state goes to one while the entropy goes to zero.
FIG. 2: (color online) A contour plot of the Q function at six different times when the qubit is in the initial state |g⟩ and n̄ = 50. The atomic dipole states are represented by arrows. (a) the time t = 0; the system starts in a coherent state, shown by a circle of uncertainty in phase space. (b) a time a little after that of (a), t = t_r/8; the 'blobs' are well separated and move around a circle of fixed radius. (c) the time t = t_r/2, corresponding to the first dip in the entropy. (d) just before the revival time t = t_r, when the two 'blobs' are about to overlap. (e) the time 3t_r/2, the second dip in the entropy. (f) the time t = 2t_r, when both wavepackets have returned to their original position.
FIG. 3: (color online) The Q function in 3D at the time t = t_r/8, corresponding to diagram (b) in Fig. 2. We notice two distinct peaks of equal height that are very similar to coherent states.
the equatorial plane of the Bloch sphere. Fig. 2(c) shows the two dipole arrows in line and pointing in the same direction at the time t_r/2, because |D_{±1/2}(t_r/2)⟩ = |ψ⁺_{1 att}⟩. The arrows in Fig. 2(e) are again both the same, but point in a different direction, as |D_{±1/2}(3t_r/2)⟩ = |ψ⁻_{1 att}⟩. The field states in Fig. 2(e) are moving in a different direction from those of Fig. 2(c), and their identities have swapped. These arrows can be used to see whether the two qubit states are the same, and the direction of the dipole arrows highlights that we are in a different 'attractor' state. Inspired by the above behaviour, in this paper we extend the analysis and investigate new problems. In particular we address the question: "Are there similar 'attractor' states in multi-qubit systems and what are the implications of their existence?"

III. TWO QUBIT JAYNES-CUMMINGS MODEL
of the three wavepackets are in phase. At a later time, Fig. 5(f), all three wavepackets converge at the starting position and there is a main revival peak, t = t_r. If the initial state of the qubits were in the basin of attraction then β_0 ≈ 0 and there would be only two wavepackets; the Q function pictures would then look similar to those in Fig. 2 and the system would behave like the one-qubit case. Nevertheless we note that the radiation field returns to the start position after one revival time, t_r, compared to the one-qubit case where it returns after 2t_r.

FIG. 4: (color online) Time evolution for a system with two qubits when n̄ = 50 for two different initial states of the qubits. (Top) The initial state of the qubits is |gg⟩; the two-qubit 'attractor' state is not reached at t_r/4. (Bottom) The initial state of the qubits is (1/√2)(|ee⟩ + |gg⟩) (i.e. within the basin of attraction); the two-qubit 'attractor' state is reached at t_r/4. In both diagrams: (a) the entropy of the qubits. (b) the probability of the two-qubit state |gg⟩. (c) the probability of being in the two-qubit 'attractor' state |ψ⁺_{2 att}⟩ when the initial phase of the radiation field is θ = 0.

In summary, when two qubits interact with a single cavity mode the wavefunction consists of a part that behaves like the one-qubit case, |D_{±1}(t)⟩, and an extra part |D_0⟩. The 'attractor' state, |ψ^±_{2 att}⟩, occurs at the times when the two 'blobs' in phase space are macroscopically separated.

FIG. 5: (color online) The Q function at six different times when the qubits are initially in the state |gg⟩. Once again the atomic dipole states are represented as arrows and n̄ = 50. (a) the time t = 0; the system starts in a coherent state, shown by a circle of uncertainty in phase space. (b) a time a little after that of (a), t = t_r/8. (c) the time t = t_r/4. (d) the time t = 3t_r/8. (e) the time 3t_r/4. (f) the time t = t_r, when both wavepackets have returned to their original position.
FIG. 6: (color online) The value of the tangle for the states in the basin of attraction, for different values of a. The plot shows only two points where the tangle is zero.
FIG. 7: (color online) (Top) 'Collapse and Revival of Entanglement' when the qubits are started in the maximally entangled state (|ee⟩ + |gg⟩)/√2, where n̄ = 50. (a) the entropy of the qubits. (b) the probability of the two-qubit state |gg⟩. (c) the tangle for the system.
FIG. 8: (color online) (Top) Values of the concurrence over time for several different initial qubit states. As in previous figures, n̄ = 50. (a) The initial state of the qubits is (1/√10)|ee⟩ − √(9/10)|gg⟩; at times the concurrence is greater than its initial value. (b) The initial state of the qubits is (1/√2)(|eg⟩ + i|ge⟩), which, unlike initial conditions within the basin of attraction, leads to a concurrence of zero at t_r/2. (c) The initial state of the qubits is (1/√20)|ee⟩ + √(19/20)|gg⟩ and shows 'Sudden Death of Entanglement'. (d) The initial state of the qubits is (1/√2)(|eg⟩ + e^{iπ/4}|ge⟩), a maximally entangled state that exhibits 'Sudden Death of Entanglement'. (e) The initial state of the qubits is (1/√2)(|ee⟩ + |gg⟩), a maximally entangled state lying in the basin of attraction, used for comparison. (Bottom) 'Sudden Death of Entanglement' when the qubits are started in the state (1/√20)|ee⟩ + √(19/20)|gg⟩, case (c) in the top diagram. As before: (a) the entropy of the qubits. (b) the probability of the qubits being in the state |gg⟩. (c) the mixed-state concurrence of the qubit system.
FIG. 9: (color online) Phase-space sketches of the Q function at eight different times when the qubits are initially in the state |ggg⟩. The atomic dipole states are represented as arrows. (a) the time t = 0; the system starts in a coherent state, shown by a circle of uncertainty in phase space. (b) a time a little after t = 0. (c) the time t = t_r/6. (d) the time t = t_r/3. (e) the time t = t_r/2. (f) the time t = 2t_r/3. (g) the time t = 5t_r/6.
FIG. 10: (color online) Time evolution of the system when the initial qubit state is in the basin of attraction, for n̄ = 50 and θ = 0. (Top) Three qubits. (Bottom) Four qubits: the initial state of the qubits is (1/√2)(|N_q = 4, m = 1⟩ + |N_q = 4, m = −1⟩). As before: (a) the entropy of the qubits. (b) the probability of all the qubits being in the state |g⟩. (c) the probability of being in the qubit 'attractor' state |ψ⁺_{N_q att}⟩.
Eq. (53) and Eq. (54) are very similar and show how a Schrödinger cat state, and the information about the initial conditions of the qubits, moves from the qubits to the field. At the time t = 0 the qubits are in a spin 'Schrödinger cat' state, and the variable a, which fixes the initial state of the qubits, determines the ratio of |z = e^{−iθ}, N_q⟩ to |z = −e^{−iθ}, N_q⟩. At the time t = t_r/(2N_q) the field is in a 'Schrödinger cat' state and the information about the initial state of the qubits encoded by a is now stored in the field state. A 'Schrödinger cat' state, and the information about the initial qubit state, has moved from the qubit system to the field system [34].
FIG. 11: (color online) A plot of the spin Q_q function of the qubits at the time t = 0 for a state in the basin of attraction. Here a = 0 and we see a perfect Schrödinger cat state of the qubits. The number of qubits is N_q = 40 with θ = 0, but a similar picture can be produced for any N_q > 3 (the peaks merge together for N_q ≤ 3).

FIG. 12: (color online) A plot of the spin Q_q function of the qubits at the time t = t_r/80, the 'attractor' time when N_q = 40. The initial value of a = 0 and we see a spin coherent state centred around the value z = ie^{iθ} with θ = 0.
2^{N_q−1}. Excluding the case when a = 1/2^{N_q−1}, all other values of a give an entangled state of GHZ form. So, as with the previous case, there are only two values of a that give a product state when in the basin of attraction, meaning all other states contain some entanglement. We next consider the case of three qubits in more detail and verify by numerical calculations that the basin of attraction for three qubits is indeed only made up of GHZ-type entanglement.
FIG. 13: (color online) The value of the tangle for the states in the basin of attraction for different values of a. We notice that the plot shows only two points where the tangle is zero, just like the two qubit case.
Fig. 14. The measure stated in Eq. (59) cannot be used here as the qubits are mostly in a mixed state and Eq. (59) is for a pure state. For this reason we have chosen to look at the probability of being in the initial state.
FIG. 14: (color online) 'Collapse and Revival of Entanglement' for three qubits for the initial state |m = −3/2⟩, which is inside the basin of attraction, with n̄ = 50 and θ = 0. (a) the entropy of the qubits. (b) the probability of all the qubits being in the state |g⟩. (c) the probability of being in the initial state, P_init(t). (d) the value of the tangle when one qubit has been traced out. In fact this pink line remains zero for all time, so there is no W-state entanglement present.
spin of the state and z is a complex number that determines the center of the coherent state. A spin coherent state is a product state and therefore has no entanglement. For example if z = 1, then the average value of the spin coherent state is |N_q, m = 0⟩; if z = 0 then the average value of the spin coherent state is |N_q, m = N_q/2⟩; and if z = ∞ then the average value of the spin coherent state is |N_q, m = −N_q/2⟩. The states |z, N_q⟩ form a complete set and the completeness relation is:

\frac{N_q + 1}{\pi} \int \frac{|z\rangle\langle z|}{(1 + |z|^2)^2} \, d^2 z = \sum_m |N_q, m\rangle\langle N_q, m| = 1 \quad (A8)
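The completeness relation (A8) can be checked numerically. In the sketch below, the Dicke-basis amplitudes ⟨k|z, N_q⟩ = √C(N_q,k) z^k/(1+|z|²)^{N_q/2} are a standard assumption not spelled out in the text; the angular integration is done analytically, and the remaining radial integral of each diagonal element should evaluate to 1.

```python
# Numerical check of the spin-coherent-state completeness relation (A8):
# (Nq+1)/pi * Int |z><z| / (1+|z|^2)^2 d^2z = identity on the spin-Nq/2 space.
# In the Dicke basis, |<k|z,Nq>|^2 = C(Nq,k) |z|^(2k) / (1+|z|^2)^Nq, so after
# the angular integration each diagonal element reduces to a radial integral.
import math

def completeness_diagonal(nq, k, rmax=200.0, steps=100000):
    """Diagonal element <k| (Nq+1)/pi Int ... |k> by trapezoidal quadrature."""
    total = 0.0
    h = rmax / steps
    for i in range(steps + 1):
        r = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * r ** (2 * k + 1) / (1.0 + r * r) ** (nq + 2)
    total *= h
    # factor 2 comes from the angular integral 2*pi divided by pi
    return (nq + 1) * math.comb(nq, k) * 2.0 * total

nq = 6
vals = [completeness_diagonal(nq, k) for k in range(nq + 1)]
print(vals)  # each entry should be close to 1
```

Analytically each element equals (N_q+1) C(N_q,k) B(k+1, N_q+1−k) = 1, so the numbers printed should all be unity up to the quadrature cutoff.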
TABLE I: A summary of where the different revival peaks occur for differing numbers of qubits. Also the times are shown for when D^±_{N_q/2}
TABLE II: A table of the states of the field and qubits at different times. The behavior repeats in the large-n̄ approximation.
|
[] |
[
"Quantum reflection of antihydrogen from a liquid helium film",
"Quantum reflection of antihydrogen from a liquid helium film"
] |
[
"P.-P Crépin \nLaboratoire Kastler Brossel (LKB)\nUPMC-Sorbonne Universités\nCNRS\nENS-PSL Research University\nCollège de France75252ParisFrance\n",
"E A Kupriyanova \nP.N. Lebedev Physical Institute\n53 Leninsky prospect117924MoscowRussia\n\nRussian Quantum Center\n100 A, Novaya street143025Skolkovo, MoscowRussia\n",
"R Guérout \nLaboratoire Kastler Brossel (LKB)\nUPMC-Sorbonne Universités\nCNRS\nENS-PSL Research University\nCollège de France75252ParisFrance\n",
"† A Lambrecht \nLaboratoire Kastler Brossel (LKB)\nUPMC-Sorbonne Universités\nCNRS\nENS-PSL Research University\nCollège de France75252ParisFrance\n",
"V V Nesvizhevsky \nInstitut Laue-Langevin (ILL)\n71 avenue des Martyrs38042GrenobleFrance\n",
"S Reynaud \nLaboratoire Kastler Brossel (LKB)\nUPMC-Sorbonne Universités\nCNRS\nENS-PSL Research University\nCollège de France75252ParisFrance\n",
"‡ S Vasiliev \nDepartment of Physics and Astronomy\nUniversity of Turku\n20014TurkuFinland\n",
"A Yu Voronin \nP.N. Lebedev Physical Institute\n53 Leninsky prospect117924MoscowRussia\n\nRussian Quantum Center\n100 A, Novaya street143025Skolkovo, MoscowRussia\n"
] |
[
"Laboratoire Kastler Brossel (LKB)\nUPMC-Sorbonne Universités\nCNRS\nENS-PSL Research University\nCollège de France75252ParisFrance",
"P.N. Lebedev Physical Institute\n53 Leninsky prospect117924MoscowRussia",
"Russian Quantum Center\n100 A, Novaya street143025Skolkovo, MoscowRussia",
"Laboratoire Kastler Brossel (LKB)\nUPMC-Sorbonne Universités\nCNRS\nENS-PSL Research University\nCollège de France75252ParisFrance",
"Laboratoire Kastler Brossel (LKB)\nUPMC-Sorbonne Universités\nCNRS\nENS-PSL Research University\nCollège de France75252ParisFrance",
"Institut Laue-Langevin (ILL)\n71 avenue des Martyrs38042GrenobleFrance",
"Laboratoire Kastler Brossel (LKB)\nUPMC-Sorbonne Universités\nCNRS\nENS-PSL Research University\nCollège de France75252ParisFrance",
"Department of Physics and Astronomy\nUniversity of Turku\n20014TurkuFinland",
"P.N. Lebedev Physical Institute\n53 Leninsky prospect117924MoscowRussia",
"Russian Quantum Center\n100 A, Novaya street143025Skolkovo, MoscowRussia"
] |
[] |
We study the quantum reflection of ultracold antihydrogen atoms bouncing on the surface of a liquid helium film. The Casimir-Polder potential and quantum reflection are calculated for different thicknesses of the film supported by different substrates. Antihydrogen can be protected from annihilation for as long as 1.3s on a bulk of liquid 4 He, and 1.7s for liquid 3 He. These large lifetimes open interesting perspectives for spectroscopic measurements of the free fall acceleration of antihydrogen. Variation of the scattering length with the thickness of a film of helium shows interferences which we interpret through a Liouville transformation of the quantum reflection problem.
|
10.1209/0295-5075/119/33001
|
[
"https://arxiv.org/pdf/1709.07724v1.pdf"
] | 119,098,560 |
1709.07724
|
a1c7ebdaada02f96caa492733ef464eb8206cbb0
|
Quantum reflection of antihydrogen from a liquid helium film
P.-P Crépin
Laboratoire Kastler Brossel (LKB)
UPMC-Sorbonne Universités
CNRS
ENS-PSL Research University
Collège de France75252ParisFrance
E A Kupriyanova
P.N. Lebedev Physical Institute
53 Leninsky prospect117924MoscowRussia
Russian Quantum Center
100 A, Novaya street143025Skolkovo, MoscowRussia
R Guérout
Laboratoire Kastler Brossel (LKB)
UPMC-Sorbonne Universités
CNRS
ENS-PSL Research University
Collège de France75252ParisFrance
† A Lambrecht
Laboratoire Kastler Brossel (LKB)
UPMC-Sorbonne Universités
CNRS
ENS-PSL Research University
Collège de France75252ParisFrance
V V Nesvizhevsky
Institut Laue-Langevin (ILL)
71 avenue des Martyrs38042GrenobleFrance
S Reynaud
Laboratoire Kastler Brossel (LKB)
UPMC-Sorbonne Universités
CNRS
ENS-PSL Research University
Collège de France75252ParisFrance
‡ S Vasiliev
Department of Physics and Astronomy
University of Turku
20014TurkuFinland
A Yu Voronin
P.N. Lebedev Physical Institute
53 Leninsky prospect117924MoscowRussia
Russian Quantum Center
100 A, Novaya street143025Skolkovo, MoscowRussia
Quantum reflection of antihydrogen from a liquid helium film
(Dated: March 13, 2018)
We study the quantum reflection of ultracold antihydrogen atoms bouncing on the surface of a liquid helium film. The Casimir-Polder potential and quantum reflection are calculated for different thicknesses of the film supported by different substrates. Antihydrogen can be protected from annihilation for as long as 1.3s on a bulk of liquid 4 He, and 1.7s for liquid 3 He. These large lifetimes open interesting perspectives for spectroscopic measurements of the free fall acceleration of antihydrogen. Variation of the scattering length with the thickness of a film of helium shows interferences which we interpret through a Liouville transformation of the quantum reflection problem.
INTRODUCTION
Quantum reflection is a non classical phenomenon which appears when a quantum matter wave approaches a rapidly varying attractive potential. Instead of accelerating towards the surface, the quantum particle has a probability to be reflected. This process has been studied theoretically for the van der Waals potential since the early days of quantum mechanics [1][2][3][4]. It was first observed experimentally for H and He atoms [5][6][7] and then for ultracold atoms or molecules on solid surfaces [8][9][10][11].
In the last years quantum reflection has been studied also for antimatter [12][13][14], since it should play a key role in experiments with antihydrogen atoms [15][16][17]. It was shown that the free fall acceleration of antihydrogen can in principle be evaluated accurately [18] through spectroscopic studies of the quantum levitational states [19,20] of atoms trapped by quantum reflection and gravity [21,22]. Following the uncertainty principle of quantum mechanics, such spectroscopic measurements should reach a better accuracy for a larger lifetime of antihydrogen in the trap.
In this letter, we calculate quantum reflection of antihydrogen above liquid helium films and show that the lifetime of antihydrogen reaches values as high as 1.3s for a bulk (a thick film) of liquid 4 He and 1.7s for a bulk of liquid 3 He. We also study the effect of thickness for 4 He films supported by different substrates and bring out a surprising interference pattern for the scattering length as a function of thickness. By using a Liouville transformation of the quantum reflection problem, we propose an interpretation of this phenomenon in terms of shape resonances.
CASIMIR-POLDER POTENTIAL
We study quantum reflection for an antihydrogen atom of mass m falling onto a liquid helium film of thickness d supported by a substrate (see Fig. 1).
Figure 1. Representation of the quantum reflection process for an antihydrogen atom falling onto a helium film supported by a substrate. We study the limiting case of a bulk of helium (very large thickness of the film) as well as the general case of a film of finite thickness d supported by a substrate.
The atom is sensitive to the Casimir-Polder (CP) potential and to the free fall acceleration g. In the present letter, we focus our attention on the quantum reflection in the CP potential [23], as the latter is effective at distances z much smaller than the length scale \ell_g = (\hbar^2/2m^2 g)^{1/3} \simeq 5.87 µm associated with quantum effects in the gravity field. Note that we suppose that the free fall acceleration of antihydrogen equals g in all numerical evaluations.
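The quoted value of the length scale can be verified with a two-line computation. The sketch uses SI constants (the hydrogen atomic mass and g = 9.81 m/s² are standard values, taken equal for antihydrogen as stated in the text):

```python
# Check of the gravitational length scale l_g = (hbar^2 / (2 m^2 g))^(1/3),
# quoted as ~5.87 micrometers for (anti)hydrogen. SI units throughout.
hbar = 1.054571817e-34  # J s
m = 1.6735575e-27       # kg, mass of (anti)hydrogen
g = 9.81                # m/s^2
l_g = (hbar**2 / (2.0 * m**2 * g)) ** (1.0 / 3.0)
print(l_g * 1e6)  # in micrometers, ~5.87
```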
The CP potential is evaluated at zero temperature as the following function of the distance z of the atom to the liquid helium surface [24]:

V(z) = \frac{\hbar}{c^2} \int_0^\infty \frac{d\xi}{2\pi} \, \xi^2 \alpha(i\xi) \int \frac{d^2 q}{(2\pi)^2} \, \frac{e^{-2\kappa z}}{2\kappa} \sum_p s_p r_p , \quad (1)

\kappa = \sqrt{\xi^2/c^2 + q^2} , \quad s_{TE} = 1 , \quad s_{TM} = -\frac{\xi^2 + 2 c^2 q^2}{\xi^2} .
This expression is integrated over the complex frequency ω = iξ and the transverse wave-vector q, and summed over the two field polarizations p = TE, TM. The dynamic polarizability α(ω) of the antihydrogen atom is supposed to be the same as for the hydrogen atom, because possible differences between the two cases would be too small to have an influence at the level of precision aimed at in the present study. Note that there are differences between hydrogen and antihydrogen for the atomic physics in the proximity of the helium film. While the interaction potential for a hydrogen atom leads to the existence of a single bound state with an adsorption energy of the order of 1 K on a ⁴He bulk [25][26][27], the repulsive part of the potential may be absent in the much less studied case of antihydrogen. These differences are not studied in this letter, where we focus attention on antihydrogen, for which quantum reflection is the only reason for scattering of atoms back from the helium surface.
The reflection amplitudes r_p are calculated for polarizations p by combining the Fresnel amplitudes at the interfaces and the propagation in the helium film. The optical properties of ⁴He are described with a sufficient accuracy by a model dielectric constant with three resonances [28]:

\epsilon(i\xi) = 1 + \sum_{k=1,2,3} \frac{a_k}{1 + (\xi/\omega_k)^2} , \quad (2)

This model corresponds to a dielectric constant close to unity at the static limit (ε(0) − 1 ≈ 0.0567) as well as at all frequencies. We also use the optical model (2) for ³He, with the same resonance frequencies ω_k and the resonance amplitudes a_k multiplied by a common factor calculated to reproduce the static dielectric constant ε(0) − 1 ≈ 0.043 known from experiments [29].
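The model (2) is easy to tabulate. The sketch below uses the resonance parameters quoted elsewhere in the text, and checks the static limit ε(0) − 1 = Σ_k a_k = 0.0567 together with the common rescaling factor used for ³He:

```python
# Model dielectric function of liquid 4He on the imaginary frequency axis,
# Eq. (2), with the three-resonance parameters quoted in the text.
omega = (3.22e16, 3.74e16, 12e16)   # resonance frequencies, rad/s
a = (0.016, 0.036, 0.0047)          # resonance amplitudes for 4He

def eps_he4(xi):
    """epsilon(i*xi) = 1 + sum_k a_k / (1 + (xi/omega_k)^2)."""
    return 1.0 + sum(ak / (1.0 + (xi / wk) ** 2) for ak, wk in zip(a, omega))

# Static limit: eps(0) - 1 = sum of the a_k = 0.0567 for 4He.
print(eps_he4(0.0) - 1.0)
# For 3He the same omega_k are kept and all a_k are rescaled by a common
# factor reproducing the measured static value eps(0) - 1 = 0.043.
scale_he3 = 0.043 / (eps_he4(0.0) - 1.0)
print(scale_he3)
```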
These numbers lead to a poor reflectance of the film for electromagnetic waves and weak values for the CP potential with values even weaker for 3 He than for 4 He. It follows that quantum reflection occurs closer to the material surface where the CP potential is much steeper, which explains the large quantum reflection probability found below, with reflection even larger for 3 He than for 4 He. In both cases we use an effective dielectric constant and disregard the role played by excitations in the helium film like ripplons. The latter is well justified at temperatures below 100 mK [30], the temperature range where results obtained in the following are accurate.
We now discuss the results calculated for the CP potential in different situations. We begin with the limiting cases of d → 0 and d → ∞, where we obtain respectively the CP potentials of the naked substrate and of a liquid helium bulk (that is, a liquid helium film with a large thickness). These potentials, attractive at all distances, behave as non-retarded van der Waals potentials at short distances and as retarded potentials at large distances, with the two domains separated by the wavelength λ_A ≈ 121 nm of the first atomic transition 1S→2P of antihydrogen:

V(z) \simeq -\frac{C_3}{z^3} , \quad z \ll \lambda_A , \quad (3)

V(z) \simeq -\frac{C_4}{z^4} , \quad z \gg \lambda_A . \quad (4)
Constants C₃ and C₄ are given in Table I for liquid helium bulks and for substrates made of silica, silicon or gold. For the purpose of discussing the influence of the optical properties of the mirror on the CP potentials, we normalize the potential V(z) obtained from (1) by the potential V*(z) = −C₄*/z⁴ corresponding to the large-distance limit above a perfectly reflecting mirror. The constant C₄* = 3α(0)ℏc/(32π²ε₀) is determined by the static polarizability α(0) = (9/2) a₀³ of antihydrogen. The ratios V(z)/V*(z) obtained for liquid ³He and ⁴He bulks as well as silica, silicon and gold bulks are plotted as full lines in Fig. 2. The ratios obtained for liquid ⁴He films of finite thickness d on silica are plotted in Fig. 2 as dashed lines. They go smoothly from the curve obtained for a liquid helium bulk for z ≪ d to that for a silica bulk for z ≫ d.
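As a consistency check, C₄* can be evaluated in atomic units (ℏ = 1, 4πε₀ = 1, c ≈ 137.04), where the formula reduces to C₄* = 3α(0)c/(8π) in units of E_h a₀⁴. The result lands close to the gold entry of Table I, consistent with gold behaving almost as a perfect mirror at these distances:

```python
# Retarded CP constant for a perfect mirror, C4* = 3 alpha(0) hbar c/(32 pi^2 eps0),
# evaluated in atomic units (hbar = e = m_e = 1, eps0 = 1/(4 pi), c ~ 137.036),
# with the static polarizability alpha(0) = 9/2 a0^3 of (anti)hydrogen.
import math

c_au = 137.035999  # speed of light in atomic units (inverse fine-structure constant)
alpha0 = 4.5       # static polarizability in units of a0^3
c4_star = 3.0 * alpha0 * c_au / (8.0 * math.pi)
print(c4_star)  # ~73.6 E_h a0^4, close to the 73.38 listed for gold in Table I
```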
QUANTUM REFLECTION FROM A LIQUID HELIUM BULK
The previous calculations show a very low value of the CP potential for thick enough liquid helium films, as explained by the fact that liquid helium is almost transparent for the electromagnetic field. We now discuss the consequence of this fact in terms of large quantum reflection from liquid helium bulks.
To this aim, we solve the Schrödinger equation for the antihydrogen atom falling in the CP potential [15] above the liquid helium film. We then obtain the reflection amplitude r as the ratio of the outgoing wave to the incoming one far from the film. The quantum reflection probability is the squared modulus of this amplitude, R = |r|². The results are shown in Fig. 3, with larger and larger probabilities obtained for the weaker and weaker potentials of Fig. 2. In particular, quantum reflection for atoms falling from a height h, and thus having a given energy E = mgh, is much larger on a liquid helium bulk than on the other materials studied here.
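The numerical procedure can be illustrated on the exactly scalable model tail V(z) = −C₄/z⁴. The sketch below (reduced units ℏ = 2m = C₄ = 1, a WKB purely incoming boundary condition deep in the well modeling total absorption at the surface, complex RK4 integration) is a simplification of the paper's full CP calculation, not a reproduction of it; it recovers the hallmark of quantum reflection, R → 1 as the energy decreases.

```python
# Sketch of a quantum reflection computation for the model tail V(z) = -1/z^4
# in reduced units hbar = 2m = C4 = 1 (characteristic length l = 1).
# The Schrodinger equation psi'' = (V - E) psi is integrated outwards from a
# purely incoming WKB wave deep in the well; the reflection amplitude is then
# read off in the asymptotic region where V is negligible.
import cmath, math

def reflection_probability(E, z_min=0.05, z_max=60.0):
    V = lambda z: -1.0 / z ** 4
    k = lambda z: math.sqrt(E - V(z))
    k_out = math.sqrt(E)
    # WKB incoming wave moving towards the surface: psi'/psi = -i k - k'/(2k)
    kp = (k(z_min + 1e-6) - k(z_min - 1e-6)) / 2e-6
    psi = 1.0 + 0.0j
    dpsi = (-1j * k(z_min) - kp / (2.0 * k(z_min))) * psi
    z = z_min
    while z < z_max:
        h = min(0.02 / k(z), 0.05)  # ~50 integration points per WKB radian
        def f(zz, p, dp):
            return dp, (V(zz) - E) * p
        a1, b1 = f(z, psi, dpsi)
        a2, b2 = f(z + h / 2, psi + h / 2 * a1, dpsi + h / 2 * b1)
        a3, b3 = f(z + h / 2, psi + h / 2 * a2, dpsi + h / 2 * b2)
        a4, b4 = f(z + h, psi + h * a3, dpsi + h * b3)
        psi += h / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
        dpsi += h / 6 * (b1 + 2 * b2 + 2 * b3 + b4)
        z += h
    # Asymptotically psi = A exp(-i k z) + B exp(+i k z); r = B / A
    A = (1j * k_out * psi - dpsi) * cmath.exp(1j * k_out * z) / (2j * k_out)
    B = (dpsi + 1j * k_out * psi) * cmath.exp(-1j * k_out * z) / (2j * k_out)
    return abs(B / A) ** 2

R_fast = reflection_probability(1e-2)  # k*l = 0.1
R_slow = reflection_probability(1e-4)  # k*l = 0.01
print(R_slow, R_fast)  # quantum reflection grows as the energy decreases
```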
We then extract from the reflection amplitude the complex length A(k), depending on the wavevector k equivalent to the energy E = \hbar^2 k^2/(2m):

A(k) \equiv -\frac{i}{k} \, \frac{1 + r(k)}{1 - r(k)} . \quad (5)
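Definition (5) can be exercised on a toy amplitude. For a hard wall at z = z₀ with the incoming wave taken as e^{−ikz}, the reflection amplitude is r(k) = −e^{−2ikz₀}, so A(k) = tan(kz₀)/k tends to the real scattering length z₀ at low k. This small sanity check of the sign conventions is an illustration we add here, not a computation from the paper:

```python
# Toy check of definition (5): for a hard wall at z = z0 and incoming wave
# exp(-i k z), r(k) = -exp(-2 i k z0) and A(k) = -(i/k)(1+r)/(1-r)
# reduces to tan(k z0)/k, which tends to z0 as k -> 0.
import cmath

def A(k, r):
    return -1j / k * (1 + r) / (1 - r)

z0 = 2.5
for k in (1e-1, 1e-2, 1e-3):
    r = -cmath.exp(-2j * k * z0)
    print(k, A(k, r))  # approaches z0 = 2.5 as k -> 0
```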
The effective range theory modified to account for the −C 4 /z 4 tail of the potential leads to the expression
A(k) = −i α(k ) ,(6)α(K) = α 0 + i π 3 K + α 2 + 4 3 α 0 ln K K 2 ,
where = √ 2mC 4 / is the characteristic length associated with the constant C 4 while α 0 and α 2 are two dimensionless parameters. The values of α 0 and α 2 are known for the special case V = −C 4 /z 4 [31,32]. In the problem studied here, the CP potential above the liquid helium film does not reduce to this special form, and we proceed as in [18] by extracting the 2 parameters from a fit of the low−k dependence of r(k).
We now discuss the results thus obtained for the scattering length a = −iℓα₀, which is deduced from the characteristic length ℓ and the parameter α₀. For quantum reflection on a liquid ⁴He bulk, one finds for example a = −(34.983 + 44.837 i) a₀. The imaginary part b = −Im(a) of this scattering length determines the mean lifetime τ = ℏ/(2mgb) of atoms bouncing above the bulk [18].
In Table II, we compare the values obtained for τ from the quantum reflection probabilities drawn in Fig. 3, and also for porous silica studied in [16]. We also give the values for the number N₁ of bounces for an atom in the first quantum levitation state. The numbers show that liquid helium is a much better reflector for antihydrogen matter waves than the other materials which have been studied up to now. The much larger lifetime, that is also the much larger number of bounces before annihilation, implies that it should be possible to trap antimatter for long enough to improve significantly the spectroscopy measurements discussed in [18]. Table II. Lifetime τ of antihydrogen in seconds above various material surfaces and number N₁ of bounces for an atom in the first quantum gravitational state, for different bulk materials and for porous silica (see [16] for the latter case).
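Plugging the ⁴He-bulk value of b into τ = ℏ/(2mgb) (the relation quoted in the text, with the factor ℏ restored from Γ = 2mgb and τ = ℏ/Γ) reproduces the ~1.3 s lifetime announced in the abstract:

```python
# Check of the lifetime tau = hbar / (2 m g b) for the liquid 4He bulk value
# b = -Im(a) = 44.837 a0 quoted in the text; expected tau ~ 1.3 s.
hbar = 1.054571817e-34  # J s
m = 1.6735575e-27       # kg, (anti)hydrogen mass
g = 9.81                # m/s^2
a0 = 5.29177e-11        # Bohr radius, m
b = 44.837 * a0
tau = hbar / (2.0 * m * g * b)
print(tau)  # ~1.35 s
```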
QUANTUM REFLECTION FROM FINITE THICKNESS FILMS
We now investigate the effect on quantum reflection of the finite thickness of a liquid ⁴He film supported by a substrate. We present the results of the calculations in terms of the scattering length a, which now depends on the thickness d of the film as well as on the optical properties of the substrate. Results are presented in Fig. 4 for films supported by silica, silicon and gold substrates. The important effect of the thickness is clearly seen in this plot. For thicknesses larger than a few tens of nanometers, the scattering length reaches asymptotically the value found above for a liquid ⁴He bulk. The curves indicate the film thickness to be chosen for recovering the large lifetimes predicted at the limit of the bulk. This property is also illustrated in terms of the variation of the lifetime in Fig. 5. As could be expected, the substrate which leads to the largest lifetime for a given thickness of the liquid helium film is the one which would have the best reflectivity without the film (silica in our case).
For small thicknesses, of the order of a few nanometers, real and imaginary parts of the scattering length are found to oscillate in phase quadrature in Fig. 4-5. This property is confirmed by the variation of a in the complex plane, shown in Fig. 6 in the case of a gold substrate. It looks like a consequence of an interference phenomenon that we discuss in the next section. Figure 6. Scattering length a represented in the complex plane, depending on the thickness d of the liquid helium film above a gold substrate. The thickness ranges from 0.1nm (center of the spiral) to 50nm (outer part of the spiral). Green point corresponds to d = 1nm, red to d = 5nm and blue to d = 20nm.
DISCUSSION OF THE OSCILLATIONS
The counterintuitive property of a larger quantum reflection for a weaker CP potential [15,16] has been discussed in recent papers by using Liouville transformations of the Schrödinger equation. Such transformations allow one to map the problem of quantum reflection from an attractive well onto the more intuitive problem of ordinary reflection from a wall, with the latter becoming higher for weaker CP potentials [33,34].
The shape of the CP potential for the film of thickness 10λ_A in Fig. 2 suggests that the atom falling onto the film sees two zones of rapid variation of the potential, the first one at the transition from the potential which would be seen for the naked substrate to that of a helium bulk, and the second one at the approach to the liquid helium film. Using Liouville transformations, we now interpret the oscillations of the scattering length seen in Fig. 6 as an interference between reflections on these two walls.
Liouville transformations [35,36] are gauge transformations of the Schrödinger equation which preserve the reflection amplitude while changing the potential landscape. They correspond to a coordinate change z → z̃ and a rescaling ψ → ψ̃ of the wave-function, with ψ̃(z̃) = √(z̃′(z)) ψ(z). We choose here a specific Liouville gauge discussed in [34], with the new coordinate z̃ = φ_dB/κ proportional to the WKB phase φ_dB (κ an arbitrary constant). The latter is the integral φ_dB = ∫ k_dB dz of the WKB wavevector k_dB = √(2m(E − V))/ℏ. The Liouville transformation preserves the form of the Schrödinger equation with modified energy Ẽ and potential Ṽ [34]:

\tilde{V}(\tilde{z}) = \tilde{E} \, Q(\tilde{z}) , \quad \tilde{E} = \frac{\hbar^2 \kappa^2}{2m} , \quad (8)

Q = -\alpha_{dB}^3 \, \frac{d^2 \alpha_{dB}}{dz^2} , \quad \alpha_{dB} \equiv \frac{1}{\sqrt{k_{dB}}} .

Q(z) is the badlands function [3,4] which marks the zones where the WKB approximation breaks down, that is also where significant quantum reflection occurs [15].
We draw in Fig. 7 the badlands function Q(z) for four different thicknesses of the film supported by a gold substrate. Three thicknesses correspond to the colored points emphasized in Fig. 6 for d = 1 nm (green), 5 nm (red) and 20 nm (blue). For the purpose of comparison with the case of a naked substrate, a fourth plot is drawn for d = 0 (dashed yellow curve). The plots for non-null thicknesses show two peaks, while only one peak appears for the plot of the naked substrate as well as for the plot of a liquid helium bulk. The peak lying far from the surface is roughly the same for all curves, and it is the same as for the naked substrate. The other one corresponds to the approach to the liquid helium film, and its position depends on the thickness of the film.
For films with non-null thicknesses, the two peaks form a cavity where the matter wave can be stored. In the case studied here, the mirror closer to the material surface has a poorer reflectivity than the mirror farther from the material surface. The presence of the cavity leads to a faster annihilation when atoms are trapped, which degrades the lifetime of the antihydrogen atom, as observed in Figs. 4 and 5. The interferences taking place in the cavity explain the oscillation patterns highlighted in Figs. 4 and 6. The associated phase is related to the round-trip dephasing in the cavity, which is determined by the displacement to the left of the weaker peak and the change of the shape of the potential inside the cavity. This discussion can be considered as a qualitative interpretation of the full calculations presented in the preceding section.
Figure 7. Badlands functions Q(z) (z in nm) calculated for an antihydrogen atom falling from the energy of the first quantum gravitational state onto a liquid helium film above a gold substrate. The three full lines correspond to three thicknesses of the film, with the same color code as for the points emphasized in Fig. 6: from bottom to top, the thickness of the film is 1 nm (green), 5 nm (red) and 20 nm (blue). The dashed (yellow) curve corresponds to the naked substrate.
CONCLUSION
In this letter we have found theoretically a high reflection probability for antihydrogen atoms falling down onto thick enough liquid helium films. We also predicted the presence of oscillations of the scattering length as a function of thickness for liquid helium films supported by a substrate. We interpreted the associated interference pattern as a consequence of the existence of two separated zones where significant reflection occurs.
We have considered that the low-temperature reflection properties of antihydrogen atoms on liquid helium films are essentially determined by the long-range part of the potential, as is known theoretically [37] and proven experimentally [7] for the case of the hydrogen-liquid helium interaction. Of course, it would be interesting to go further in the analysis by considering explicitly the effect of the short-range part of the interaction potential. This idea has been studied for the case of the antihydrogen atom-helium atom interaction, for which a short-range repulsion is predicted [38][39][40][41][42]. It would be worth performing similar calculations for the case of the antihydrogen atom-liquid helium interaction studied in the present letter. These calculations may change the precise values of the lifetimes obtained above, but they should not affect qualitatively our main statement, namely that long lifetimes are obtained for antimatter above a liquid helium bulk, which has very interesting applications for new spectroscopic tests of the equivalence principle for antihydrogen atoms.
(ω₁, ω₂, ω₃) = (3.22, 3.74, 12) × 10¹⁶ rad·s⁻¹ , (a₁, a₂, a₃) = (0.016, 0.036, 0.0047) .
Figure 2. CP potentials V(z) normalized by the potential V*(z) calculated for a perfect mirror at large distances. Distances z are normalized by the wavelength λ_A ≈ 121 nm of the 1S→2P antihydrogen transition. The full lines correspond, from bottom to top, to bulks of ³He (light blue), ⁴He (dark blue), silica (red), silicon (green) and gold (yellow). The other lines correspond to liquid helium films of thickness d = 10λ_A (dashed line) and d = 0.1λ_A (dotted line) on a silica bulk.
Figure 3. Quantum reflection probability as a function of the free fall height h of the atom, that is also of its energy E = mgh. The full lines correspond, from top to bottom, to bulks of ³He (light blue), ⁴He (dark blue), silica (red), silicon (green) and gold (yellow).
Figure 4. Real (upper plot) and imaginary (lower plot) parts of the scattering length depending on the thickness d of the liquid 4He film, drawn from top to bottom for a silica substrate (red curve), a silicon substrate (green) and a gold substrate (yellow). For comparison, the dashed (blue) lines correspond to the real and imaginary parts of the scattering length for a liquid 4He bulk.
Figure 5. Lifetime τ depending on the thickness d of the liquid 4He film, drawn from top to bottom for a silica substrate (red curve), a silicon substrate (green) and a gold substrate (yellow). For comparison, the dashed (blue) line is the lifetime corresponding to the liquid 4He bulk.
Table I. Constants C3 and C4 for bulks of liquid helium and substrates made of silica, silicon or gold, expressed in atomic units (E_h and a_0 are the Hartree energy and Bohr radius).

medium        C3 [E_h a_0^3]   C4 [E_h a_0^4]
liquid 3He    0.0034           1.19
liquid 4He    0.0045           1.55
silica        0.053            28.1
silicon       0.101            50.28
gold          0.085            73.38
Acknowledgements -We are grateful to our colleagues from GBAR and GRANIT collaborations for useful discussions. We thank the referee for stimulating comments.
* [email protected]  † [email protected]  ‡ [email protected]
References

[1] J. E. Lennard-Jones and A. F. Devonshire. The interaction of atoms and molecules with solid surfaces. III. The condensation and evaporation of atoms and molecules. Proceedings of the Royal Society of London, Series A, 156:6-28, 1936.
[2] J. E. Lennard-Jones and A. F. Devonshire. The interaction of atoms and molecules with solid surfaces. IV. The condensation and evaporation of atoms and molecules. Proceedings of the Royal Society of London, Series A, 156:29-36, 1936.
[3] M. V. Berry and K. E. Mount. Semiclassical approximations in wave mechanics. Reports on Progress in Physics, 35:315-397, 1972.
[4] H. Friedrich and J. Trost. Working with WKB waves far from the semiclassical limit. Physics Reports, 397:359-449, 2004.
[5] V. U. Nayak, D. O. Edwards, and N. Masuhara. Scattering of 4He atoms grazing the liquid-4He surface. Physical Review Letters, 50:990-992, 1983.
[6] J. J. Berkhout, O. J. Luiten, I. D. Setija, T. W. Hijmans, T. Mizusaki, and J. T. M. Walraven. Quantum reflection: Focusing of hydrogen atoms with a concave mirror. Physical Review Letters, 63:1689-1692, 1989.
[7] I. A. Yu, J. M. Doyle, J. C. Sandberg, C. L. Cesar, D. Kleppner, and T. J. Greytak. Evidence for universal quantum reflection of hydrogen from liquid 4He. Physical Review Letters, 71:1589-1592, 1993.
[8] F. Shimizu. Specular reflection of very slow metastable neon atoms from a solid surface. Physical Review Letters, 86:987-990, 2001.
[9] V. Druzhinina and M. DeKieviet. Experimental observation of quantum reflection far from threshold. Physical Review Letters, 91:193202, 2003.
[10] T. A. Pasquini, Y. Shin, C. Sanner, M. Saba, A. Schirotzek, D. E. Pritchard, and W. Ketterle. Quantum reflection from a solid surface at normal incidence. Physical Review Letters, 93:223201, 2004.
[11] T. A. Pasquini, M. Saba, G.-B. Jo, Y. Shin, W. Ketterle, D. E. Pritchard, T. A. Savas, and N. Mulders. Low velocity quantum reflection of Bose-Einstein condensates. Physical Review Letters, 97:093201, 2006.
[12] A. Y. Voronin, P. Froelich, and B. Zygelman. Interaction of ultracold antihydrogen with a conducting wall. Physical Review A, 72:062903, 2005.
[13] A. Yu. Voronin, P. Froelich, and V. V. Nesvizhevsky. Gravitational quantum states of antihydrogen. Physical Review A, 83:032903, 2011.
[14] A. Yu. Voronin, V. V. Nesvizhevsky, and S. Reynaud. Whispering-gallery states of antihydrogen near a curved surface. Physical Review A, 85:014902, 2012.
[15] G. Dufour, A. Gérardin, R. Guérout, A. Lambrecht, V. V. Nesvizhevsky, S. Reynaud, and A. Yu. Voronin. Quantum reflection of antihydrogen from the Casimir potential above matter slabs. Physical Review A, 87:012901, 2013.
[16] G. Dufour, R. Guérout, A. Lambrecht, V. V. Nesvizhevsky, S. Reynaud, and A. Yu. Voronin. Quantum reflection of antihydrogen from nanoporous media. Physical Review A, 87:022506, 2013.
[17] G. Dufour, P. Debu, A. Lambrecht, V. V. Nesvizhevsky, S. Reynaud, and A. Yu. Voronin. Shaping the distribution of vertical velocities of antihydrogen in GBAR. European Physical Journal C, 74:2731, 2014.
[18] P.-P. Crépin, G. Dufour, R. Guérout, A. Lambrecht, and S. Reynaud. Casimir-Polder shifts on quantum levitation states. Physical Review A, 95:032501, 2017.
[19] G. Breit. The propagation of Schrödinger waves in a uniform field of force. Physical Review, 32:273-276, 1928.
[20] V. V. Nesvizhevsky, H. G. Börner, A. K. Petukhov, H. Abele, S. Baeßler, F. J. Rueß, T. Stöferle, A. Westphal, A. M. Gagarski, G. A. Petrov, and A. V. Strelkov. Quantum states of neutrons in the Earth's gravitational field. Nature, 415:297-299, 2002.
[21] A. Jurisch and H. Friedrich. Realistic model for a quantum reflection trap. Physics Letters A, 349:230-235, 2006.
[22] J. Madroñero and H. Friedrich. Influence of realistic atom wall potentials in quantum reflection traps. Physical Review A, 75:022902, 2007.
[23] The full problem of combined effects of the CP and gravity potentials is treated in [18].
[24] R. Messina, D. A. R. Dalvit, P. A. Maia Neto, A. Lambrecht, and S. Reynaud. Dispersive interactions between atoms and nonplanar surfaces. Physical Review A, 80:022119, 2009.
[25] B. Castaing and M. Papoular. Kapitza resistance at the H/liquid He interface. Journal de Physique Lettres, 44:537-540, 1983.
[26] Yu. Kagan, G. V. Shlyapnikov, and N. A. Glukhov. Kapitsa jump in a gas of spin-polarized atomic hydrogen. JETP Letters, 40:1052, 1984.
[27] J. J. Berkhout and J. T. M. Walraven. Scattering of hydrogen atoms from liquid-helium surfaces. Physical Review B, 47:8886-8904, 1993.
[28] E. S. Sabisky and C. H. Anderson. Verification of the Lifschitz theory of the van der Waals potential using liquid helium films. Physical Review A, 7:790, 1972.
[29] H. A. Kierstead. Dielectric constant and molar volume of saturated liquid 3He and 4He. Journal of Low Temperature Physics, 23:791, 1976.
[30] V. V. Goldman. Kapitza conductance between gaseous atomic hydrogen and liquid helium. Physical Review Letters, 56:612-615, 1986.
[31] T. F. O'Malley, L. Spruch, and L. Rosenberg. Modification of effective-range theory in the presence of a long-range r^-4 potential. Journal of Mathematical Physics, 2:491-498, 1961.
[32] F. Arnecke, H. Friedrich, and J. Madroñero. Effective-range theory for quantum reflection amplitudes. Physical Review A, 74:062702, 2006.
[33] G. Dufour, R. Guérout, A. Lambrecht, and S. Reynaud. Quantum reflection and Liouville transformations from wells to walls. EPL (Europhysics Letters), 110:30007, 2015.
[34] G. Dufour, R. Guérout, A. Lambrecht, and S. Reynaud. Liouville transformations and quantum reflection. Journal of Physics B, 48:155002, 2015.
[35] J. Liouville. Second mémoire sur le développement des fonctions ou parties de fonctions en séries dont les divers termes sont assujétis à satisfaire à une même équation différentielle du second ordre, contenant un paramètre variable. Journal de Mathématiques Pures et Appliquées, 2:16-35, 1837.
[36] F. Olver. Asymptotics and Special Functions. Taylor & Francis, 1997.
[37] C. Carraro and M. W. Cole. Role of long-range forces in H sticking to liquid He. Physical Review B, 45:12930-12935, 1992.
[38] K. Strasburger and H. Chojnacki. Helium-antihydrogen interaction: The Born-Oppenheimer potential energy curve. Physical Review Letters, 88:163201, 2002.
[39] S. Jonsell, P. Froelich, S. Eriksson, and K. Strasburger. Strong nuclear force in cold antihydrogen-helium collisions. Physical Review A, 70:062708, 2004.
[40] K. Strasburger, H. Chojnacki, and A. Sokolowska. Adiabatic potentials for the interaction of atomic antihydrogen with He and He+. Journal of Physics B, 38:3091, 2005.
[41] P. Froelich and A. Y. Voronin. Interaction of antihydrogen with ordinary atoms and solid surfaces. Hyperfine Interactions, 213:115-127, 2012.
[42] S. Jonsell, E. A. G. Armour, M. Plummer, Y. Liu, and A. C. Todd. Helium-antihydrogen scattering at low energies. New Journal of Physics, 14:035013, 2012.
arXiv:1909.12338
Hardware Design and Analysis of the ACE and WAGE Ciphers
Mark D Aagaard [email protected]
Department of Electrical and Computer Engineering
University of Waterloo
OntarioCanada
Marat Sattarov [email protected]
Department of Electrical and Computer Engineering
University of Waterloo
OntarioCanada
Nuša Zidarič [email protected]
Department of Electrical and Computer Engineering
University of Waterloo
OntarioCanada
Abstract
This paper presents the hardware design and analysis of ACE and WAGE, two candidate ciphers for the NIST Lightweight Cryptography standardization. Both ciphers use sLiSCP's unified sponge duplex mode. ACE has an internal state of 320 bits, uses three 64 bit Simeck boxes, and implements both authenticated encryption and hashing. WAGE is based on the Welch-Gong stream cipher and provides authenticated encryption. WAGE has 259 bits of state, two 7 bit Welch-Gong permutations, and four lightweight 7 bit S-boxes. ACE and WAGE have the same external interface and follow the same I/O protocol to transition between phases. The paper illustrates how a hardware perspective influenced key aspects of the ACE and WAGE algorithms. The paper reports area, power, and energy results for both serial and parallel (unrolled) implementations using four different ASIC libraries: two 65 nm libraries, a 90 nm library, and a 130 nm library. ACE implementations range from a throughput of 0.5 bits-per-clock cycle (bpc) and an area of 4210 GE (averaged across the four ASIC libraries) up to 4 bpc and 7260 GE. WAGE results range from 0.57 bpc with 2920 GE to 4.57 bpc with 11080 GE.
Introduction
In 2013, NIST started the Lightweight Cryptography (LWC) project [1], with the end goal of creating a portfolio of lightweight algorithms for authenticated encryption with associated data (AEAD), and optionally hashing, in constrained environments [2]. For hardware-oriented lightweight algorithms, hardware implementation results are an important criterion for assessment and comparison. In the first round of the LWC evaluation, more than half of the candidates [3] reported hardware implementation results or estimates of them, ranging from complete implementation and analysis to partial results and theoretical estimates based on gate counts. The amount of analysis varies, from area reported only for the cryptographic primitive used to a thorough area breakdown of all components, as do the design decisions (serial versus unrolled implementations) and the ASIC and/or FPGA implementation technologies. Furthermore, some authors report results without an interface, some with one, and in some cases, e.g. [6], the CAESAR Hardware Applications Programming Interface (API) for Authenticated Ciphers [7] was used. This paper explores different hardware design options for two of the LWC candidates, ACE [4] and WAGE [5]. The original and parallel implementations were synthesized using four different ASIC libraries, covering 65 nm, 90 nm and 130 nm technologies. ACE implementations range from a throughput of 0.5 bits-per-clock cycle (bpc) with an area of 4210 GE (averaged across the four ASIC libraries) up to 4 bpc with 7260 GE. WAGE results range from 0.57 bpc with 2920 GE to 4.57 bpc with 11080 GE.
The paper is organized as follows: Section 2 briefly introduces ACE and WAGE; Section 3 lists the design principles, presents the interface with the environment, and describes the implementations of both ciphers. Section 4 describes the parallel implementations of ACE and WAGE. Implementation technologies and results are summarized in Section 5.
Specifications of ACE and WAGE
Both ACE and WAGE permutations operate in a unified duplex sponge mode [9]. The 320 bit ACE permutation offers both AEAD and hashing functionalities, and the 259 bit WAGE permutation supports AEAD functionality. Because of the similarities between ACE and WAGE, this section begins with a short description of ACE and WAGE permutations, followed by a discussion on the unified duplex sponge mode for both schemes, highlighting some differences.
ACE Permutation
ACE has a 320 bit internal state S, divided into five 64 bit registers, denoted A, B, C, D, and E. The 320 bit ACE permutation uses the unkeyed reduced-round Simeck block cipher [10] with a block size of 64 bits and 8 rounds, denoted Simeck box SB-64, as the nonlinear operation. SB-64 is a lightweight permutation consisting of left cyclic shifts, AND gates, and XOR gates. Each round is parameterized by a single bit of the round constant rc = (q7, q6, ..., q0), qj ∈ {0, 1}, 0 ≤ j ≤ 7. The algorithmic description of SB-64 is shown at the end of Algorithm 1, with the 64 bit input and output split into halves, i.e., (x1||x0) and (x9||x8) respectively.
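The SB-64 round described above can be sketched in a few lines of software. The sketch below is our own illustrative model, not the reference implementation; in particular, the order in which the round-constant bits q_j are consumed is an assumption here.

```python
# Illustrative model of the unkeyed 8-round Simeck box SB-64: the 64 bit
# block is split into 32 bit halves, and each round computes
# (L^5(x) AND x) XOR L^1(x) XOR previous-half XOR (1^31 || q_j).
# Names and the rc bit ordering are assumptions of this sketch.

M32 = 0xFFFFFFFF

def rol32(x, r):
    """Left-rotate a 32 bit word by r positions."""
    return ((x << r) | (x >> (32 - r))) & M32

def sb64(x1, x0, rc):
    """Apply 8 Simeck rounds to the halves (x1, x0); rc is an 8 bit constant."""
    for j in range(8):
        q = (rc >> j) & 1                          # round-constant bit q_j
        fb = (rol32(x1, 5) & x1) ^ rol32(x1, 1) ^ x0 ^ (0xFFFFFFFE | q)
        x1, x0 = fb, x1                            # Feistel shift
    return x1, x0

def sb64_inv(x1, x0, rc):
    """Invert sb64, showing that SB-64 is a permutation of 64 bit blocks."""
    for j in reversed(range(8)):
        q = (rc >> j) & 1
        prev = (rol32(x0, 5) & x0) ^ rol32(x0, 1) ^ x1 ^ (0xFFFFFFFE | q)
        x1, x0 = x0, prev
    return x1, x0
```

Running sb64 followed by sb64_inv with the same constant returns the original halves, a quick check that each round is invertible regardless of the constant schedule.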
To construct ACE-step(S^i), 0 ≤ i ≤ 15, SB-64 is applied to the three registers A, C and E, each using its own round constant rc^i_0, rc^i_1, rc^i_2. The 8 bit constants rc^i_j are generated by an LFSR with feedback polynomial x^7 + x + 1, run in a 3-way parallel configuration to produce one bit of each rc^i_j per clock cycle. At each step, the outputs of SB-64 are added to the registers B, D and E, further parameterized by the step constants (sc^i_0, sc^i_1, sc^i_2). The computation of the step constants does not need any extra circuitry, but rather reuses the same LFSR as the round constants: the three feedback values together with all 7 state bits yield 10 consecutive sequence elements, which are then split into three 8 bit step constants. The step constants are used once every 8th clock cycle. The step function is concluded by a permutation of all five registers. For the properties of SB-64, the choice of the final permutation, and the number of rounds and steps, refer to [4].
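The shared constant-generator LFSR can be modelled directly from its feedback polynomial. The sketch below uses an arbitrary nonzero placeholder seed, not the actual initial state used by ACE.

```python
# Model of the 7 bit constants LFSR with feedback x^7 + x + 1, and its
# 3-way parallel configuration producing one bit of each of rc0, rc1, rc2
# per clock cycle. The seed below is a placeholder, not ACE's real seed.

def lfsr_step(state):
    """One step: a_{t+7} = a_{t+1} XOR a_t for the polynomial x^7 + x + 1."""
    return state[1:] + (state[1] ^ state[0],)

def lfsr_step3(state):
    """3-way parallel step: emit three consecutive sequence bits per cycle,
    one destined for each of the three round constants."""
    bits = []
    for _ in range(3):
        bits.append(state[0])
        state = lfsr_step(state)
    return bits, state
```

Because x^7 + x + 1 is primitive, the sequence has maximal period 127 for any nonzero seed, and one 3-way step advances the state exactly as three serial steps.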
WAGE Permutation
WAGE is a hardware oriented AE scheme, built on top of the initialization phase of the well-studied, LFSR based, Welch-Gong (WG) stream cipher [11,12]. The WAGE permutation is iterative and has a round function derived from the LFSR, the decimated Welch-Gong permutation WGP, and the small S-boxes SB. Details, such as differential uniformity and nonlinearity of the WGP and SB and selection of the LFSR polynomial can be found in [5]. The parameter selection for WAGE was aimed at balancing the security and hardware implementation area, using hardware implementation results for many design decisions, e.g., field size, representation of field elements, LFSR polynomial, etc.
Both the LFSR and WGP are defined over F_2^7, and the S-box is a 7 bit permutation. F_2^7 is defined with the primitive polynomial f(x) = x^7 + x^3 + x^2 + x + 1, and the field elements are represented using the polynomial basis PB = {1, ω, ..., ω^6}, where ω is a root of f(x) (Table 1). The LFSR is defined by the feedback polynomial ℓ(y) (Table 1), which is primitive over F_2^7. The 37 stages of the LFSR also constitute the internal state of WAGE, denoted S^i = (S^i_36, S^i_35, ..., S^i_1, S^i_0); the superscript i marks the i-th iteration of the permutation. For an element x ∈ F_2^7, the decimated WG permutation with decimation d = 13 is defined in Table 1. The 7 bit SB uses a nonlinear transformation Q and a permutation P, which together yield one round R = P ∘ Q. The SB itself iterates the function R 5 times, applies Q once, and then complements the 0th and 2nd output bits (Table 1).
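The only field arithmetic the LFSR feedback needs is multiplication by ω, which reduces to a shift and a conditional XOR with the low part of f(x). A small sketch, with our own helper name:

```python
# Multiplication by ω in F_{2^7} with f(x) = x^7 + x^3 + x^2 + x + 1,
# elements stored as 7 bit integers in the polynomial basis {1, ω, ..., ω^6}.
# Since ω^7 = ω^3 + ω^2 + ω + 1, an overflow out of bit 6 folds back as 0x0F.

def mul_omega(a):
    """Return ω · a in F_{2^7}."""
    a <<= 1
    if a & 0x80:                 # a degree-7 term appeared: reduce modulo f
        a = (a & 0x7F) ^ 0x0F    # 0x0F encodes x^3 + x^2 + x + 1
    return a
```

Since f(x) is primitive, ω generates the whole multiplicative group of F_2^7: repeated multiplication starting from 1 visits all 127 nonzero elements before returning to 1.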
Table 1. Components of WAGE over F_2^7:

LFSR   ℓ(y) = y^37 + y^31 + y^30 + y^26 + y^24 + y^19 + y^13 + y^12 + y^8 + y^6 + ω,  f(ω) = 0
WGP7   WGP7(x^d) = x^d + (x^d + 1)^33 + (x^d + 1)^39 + (x^d + 1)^41 + (x^d + 1)^104
SB     Q(x0, x1, x2, x3, x4, x5, x6) → (x0 ⊕ (x2 ∧ x3), x1, x2, x3 ⊕ (x5 ∧ x6), x4, x5 ⊕ (x2 ∧ x4), x6)

Algorithm 1 ACE step function and Simeck box

Function ACE-step(S^i):
7:  A^i ← SB-64(A^i_1 || A^i_0, rc^i_0)
8:  C^i ← SB-64(C^i_1 || C^i_0, rc^i_1)
9:  E^i ← SB-64(E^i_1 || E^i_0, rc^i_2)
10: B^i ← B^i ⊕ C^i ⊕ (1^56 || sc^i_0)
11: D^i ← D^i ⊕ E^i ⊕ (1^56 || sc^i_1)
12: E^i ← E^i ⊕ A^i ⊕ (1^56 || sc^i_2)
13: A^{i+1} ← D^i
14: B^{i+1} ← C^i
15: C^{i+1} ← A^i
16: D^{i+1} ← E^i
17: E^{i+1} ← B^i
18: return (A^{i+1} || B^{i+1} || C^{i+1} || D^{i+1} || E^{i+1})

Function SB-64(x_1 || x_0, rc):
21: for j = 2 to 9 do
22:   x_j ← (L^5(x_{j-1}) ∧ x_{j-1}) ⊕ L^1(x_{j-1}) ⊕ x_{j-2} ⊕ (1^31 || q_{j-2})
23: return (x_9 || x_8)      (L^t is left-rotation by t)
Algorithm 2 WAGE permutation

1: Input: S^0 = (S^0_36, S^0_35, ..., S^0_1, S^0_0)
2: Output: S^111 = (S^111_36, S^111_35, ..., S^111_1, S^111_0)
3: for i = 0 to 110 do
4:   S^{i+1} ← WAGE-StateUpdate(S^i, rc^i_0, rc^i_1)
5: return S^111
6: Function WAGE-StateUpdate(S^i):
7:  fb ← S^i_31 ⊕ S^i_30 ⊕ S^i_26 ⊕ S^i_24 ⊕ S^i_19 ⊕ S^i_13 ⊕ S^i_12 ⊕ S^i_8 ⊕ S^i_6 ⊕ (ω ⊗ S^i_0)
8:  S^{i+1}_4 ← S^i_5 ⊕ SB(S^i_8)
9:  S^{i+1}_10 ← S^i_11 ⊕ SB(S^i_15)
10: S^{i+1}_18 ← S^i_19 ⊕ WGP(S^i_18) ⊕ rc^i_0
11: S^{i+1}_23 ← S^i_24 ⊕ SB(S^i_27)
12: S^{i+1}_29 ← S^i_30 ⊕ SB(S^i_34)
13: S^{i+1}_36 ← fb ⊕ WGP(S^i_36) ⊕ rc^i_1
14: S^{i+1}_j ← S^i_{j+1} for j ∈ {0, ..., 36} \ {4, 10, 18, 23, 29, 36}
15: return S^{i+1}
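The structure of WAGE-StateUpdate can be checked independently of the WGP and SB tables by passing those maps in as parameters. The skeleton below is our own sketch; the zero stand-ins used in its test are placeholders, not the real permutations.

```python
# Skeleton of WAGE-StateUpdate over a 37-stage state of 7 bit words.
# sb, wgp and mul_omega are injected so the shifting structure can be
# exercised with placeholder maps (the real tables are not reproduced here).

SPECIAL = {4, 10, 18, 23, 29, 36}

def wage_state_update(S, rc0, rc1, sb, wgp, mul_omega):
    """One round of the WAGE LFSR: shift every stage, then override
    the six nonlinearly updated stages and the feedback stage."""
    fb = (S[31] ^ S[30] ^ S[26] ^ S[24] ^ S[19] ^ S[13] ^ S[12]
          ^ S[8] ^ S[6] ^ mul_omega(S[0]))
    T = [S[(j + 1) % 37] for j in range(37)]   # default: pure shift
    T[4]  = S[5]  ^ sb(S[8])
    T[10] = S[11] ^ sb(S[15])
    T[18] = S[19] ^ wgp(S[18]) ^ rc0
    T[23] = S[24] ^ sb(S[27])
    T[29] = S[30] ^ sb(S[34])
    T[36] = fb ^ wgp(S[36]) ^ rc1
    return T
```

With zero stand-ins for sb, wgp and the ω-multiplication, the non-special stages must behave as a pure shift and stage 36 must equal the linear feedback, which is easy to assert.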
As mentioned before, the WAGE permutation is iterative, and repeats its round function WAGE-StateUpdate(S i ) 111 times, as shown in Algorithm 2. In each round, 6 stages of the LFSR are updated nonlinearly, while all the remaining stages are just shifted. A pair of 7 bit round constants (rc i 0 , rc i 1 ) is xored with the pair of stages (18, 36). Round constants are produced by an LFSR of length 7 with feedback polynomial x 7 + x + 1, implemented in a 2-way parallel configuration, see [5] for details.
The Unified Duplex Sponge Mode
ACE-AE-128 and WAGE-AE-128 use the unified duplex sponge mode from sLiSCP [9] (Figure 1). The phases for encryption and decryption are: initialization, processing of associated data, encryption (Figure 1(a)) or decryption (Figure 1(b)), and finalization. Figure 1 also shows the domain separators for each phase. The internal state is divided into a capacity part S_c (256 bits for ACE-AE-128 and 195 bits for WAGE-AE-128) and a 64 bit rate S_r, which consists of:

• for ACE-AE-128: the bytes A[7], A[6], A[5], A[4], C[7], C[6], C[5], C[4]
• for WAGE-AE-128: the 0th bit of stage S_36, i.e., S_{36,0}, and all bits of stages S_35, S_34, S_28, S_27, S_18, S_16, S_15, S_9 and S_8
The input data (associated data AD, message M or ciphertext C) is absorbed (or replaced) into the rate part of the internal state. If the input data length is not a multiple of 64, padding with (10*) is needed. In Figure 1, d denotes the number of 64 bit blocks of AD and m the number of 64 bit blocks of M and C after padding. Refer to [4,5] for further padding rules. No padding is needed during initialization and finalization.

Figure 1. The unified duplex sponge mode: (a) authenticated encryption and (b) verified decryption, showing the phases initialization, processing of associated data, encryption/decryption and finalization, with their domain separators (0x00 for the key blocks in initialization and finalization, 0x01 for associated data, 0x02 for message/ciphertext blocks) and the final tagextract(S).

The ACE HASH functionality is shown in Figure 2, with only two phases, namely absorbing and squeezing. The only input is now the message M. Since the hash has a fixed length of 256 bits, the length of the squeezing phase is fixed. ACE-H-256 is unkeyed, and the state is loaded with a fixed initialization vector IV. More specifically, the function load-H(IV) loads the state bytes B[7], B[6] and B[5] with the bytes 0x80, 0x40, and 0x40 respectively, and sets all other state bits to zero.
3 Hardware Implementations
Hardware Design Principles and Interface with the Environment
The design principles and assumptions followed by the hardware implementations:
1. Multi-functionality module. The system should include all supported operations in a single module (Figure 3), because lightweight applications cannot afford the extra area for separate modules.
2. Single input/output ports. In small devices, ports can be expensive. To ensure that ACE and WAGE are not biased in favour of the system, at the expense of the environment, the ciphers have one input and one output port (Table 2). That being said, the authors agree with the proposed lightweight cryptography hardware API's [8] use of separate public and private data ports and will update the implementations accordingly.

3. Valid-bit protocol and stalling capability. The environment may take an arbitrarily long time to produce any piece of data. For example, a small microprocessor could require multiple clock cycles to read data from memory and write it to the system's input port. The receiving entity must capture the data in a single clock cycle (Figure 4). In reality, the environment can stall as well. In the future, the ACE and WAGE implementations will be updated to match the proposed lightweight cryptographic hardware API's use of a valid/ready protocol for both input and output ports.

4. Use a "pure register-transfer-level" implementation style. In particular, use only registers, not latches; multiplexers, not tri-state buffers; synchronous, not asynchronous reset; and no scan-cell flip-flops. Clock-gating is used for power and area optimization.
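The valid-bit handshake and busy period above can be simulated cycle by cycle. The model below is our own simplified abstraction of the top-level control, not generated from the RTL, and uses a short permutation latency to keep the trace small.

```python
# Cycle-accurate toy model of the valid-bit protocol: the core accepts an
# input block only while idle (o_ready = 1), then stays busy for
# perm_cycles clock cycles, mirroring the role of pcount in the ACE FSM.

def simulate(inputs, perm_cycles):
    """inputs: per-cycle (i_valid, data) pairs. Returns the blocks accepted."""
    accepted = []
    busy = 0
    for i_valid, data in inputs:
        o_ready = (busy == 0)               # idle between permutations
        if o_ready and i_valid:
            accepted.append(data)           # data captured in a single cycle
            busy = perm_cycles              # a new permutation starts
        elif busy > 0:
            busy -= 1                       # the permutation counter advances
    return accepted
```

With a back-to-back sender, a new block is accepted once every perm_cycles + 1 cycles; with a stalling sender, idle cycles simply pass with o_ready held high until valid data arrives.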
Since both ACE and WAGE use a unified sponge duplex mode, they share a common interface with the environment (Table 2). The environment separates the associated data and the message/ciphertext, and performs padding if necessary. The domain separators shown in Figure 1 are provided by the environment and serve as an indication of the phase change for the AEAD functionality. For ACE-H-256, the phase change is indicated by the change of the i_mode(0) signal, as shown in Table 3. The hardware is unaware of the lengths of the individual phases, hence no internal counters for the number of processed blocks are needed. The top-level module, shown in Figure 3, is also very similar for both ACE and WAGE. It depicts the interface signals from Table 2, with only slight differences in bitwidths.

Figure 4 shows the timing diagram for ACE during the encryption phase of message blocks M5 and M6, which clearly shows the valid-bit protocol. The first five lines show the top-level interface signals, and line six shows the value of the permutation counter pcount, which is part of the ACE finite state machine (FSM) and keeps track of the 128 clock cycles needed for one ACE permutation. After completing the previous permutation, the top-level module asserts o_ready to signal to the environment that an ACE permutation has just finished and new data can be accepted. The environment replies with a new message block M5 accompanied by an i_valid signal. The hardware immediately encrypts, returns C5 and asserts o_valid. This clock cycle is also the first round of a new ACE permutation, and o_ready is deasserted, indicating that the hardware is busy. Figure 4 shows the ACE hardware remaining busy (o_ready = 0) for the duration of one ACE permutation. When pcount wraps around from 127 to 0, the hardware is again idle and ready to receive new input, in this case M6. A few more details about the use of pcount follow in Subsection 3.2.
The interaction between the top-level module and the environment during the encryption phase of WAGE is very similar, with 111 clock cycles for the completion of one permutation. More significant differences for the interaction with the environment arise during loading, tag extract and of course ACE-H-256. Similarly, the output multiplexers are needed to accommodate encryption/decryption and tag generation for ACE-AE-128 and squeezing for ACE-H-256. Furthermore, the output is forced to 0 during normal operation. The registers A, C and E are split in half to accommodate inputs and outputs. The rest of Figure 5(a) shows one step of the ACE permutation (Algorithm 1). The rounds and steps always use the same hardware, but in different clock cycles, which forces the use of multiplexers inside the ACE permutation. The last row of multiplexers accommodates loading.
Table 3: Modes of operation

i mode(1)  i mode(0)  Mode       Operation or phase
0          0          ACE-E      Encryption
0          1          ACE-D      Decryption
1          0          ACE-H-256  Absorb
1          1          ACE-H-256  Squeeze
-          0          WAGE-E     Encryption
-          1          WAGE-D     Decryption
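Table 3's mode encoding can be restated as a tiny decoder. The function below is our own illustration and simply mirrors the table; note that WAGE ignores i mode(1), the don't-care "-" entry.

```python
# Mode decoding from the two i_mode bits (Table 3).
def decode_mode(cipher, i_mode1, i_mode0):
    if cipher == "ACE":
        return {(0, 0): "ACE-E (Encryption)",
                (0, 1): "ACE-D (Decryption)",
                (1, 0): "ACE-H-256 (Absorb)",
                (1, 1): "ACE-H-256 (Squeeze)"}[(i_mode1, i_mode0)]
    # WAGE uses only i_mode(0); bit 1 is a don't-care ("-" in Table 3)
    return "WAGE-E (Encryption)" if i_mode0 == 0 else "WAGE-D (Decryption)"

assert decode_mode("ACE", 1, 0) == "ACE-H-256 (Absorb)"
assert decode_mode("WAGE", 0, 1) == "WAGE-D (Decryption)"
```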
ACE Datapath
WAGE Datapath
Because of the shifting nature of the LFSR, which in turn affects loading, absorbing and squeezing, the WAGE datapath is slightly more complicated than the ACE datapath and hence is explained in two levels:

1. wage lfsr treated as a black box in Figure 6 with p = 1 (no parallelization)

Figure 7: The wage lfsr stages S0, . . . , S10 with multiplexers, xor and and gates for the sponge mode
Table 4 (fragment): wage lfsr loading through D0

shift count   S8,  S7,  S6,  S5,  S4,  S3,  S2,  S1,  S0
0             K0,  -,   -,   -,   -,   -,   -,   -,   -
1             K2,  K0,  -,   -,   -,   -,   -,   -,   -
2             K4,  K2,  K0,  -,   -,   -,   -,   -,   -
3             K6,  K4,  K2,  K0,  -,   -,   -,   -,   -
4             K8,  K6,  K4,  K2,  K0,  -,   -,   -,   -
5             K10, … (remaining rows truncated in the extraction)
• wage lfsr: The LFSR has 37 stages with 7 bits per stage, a feedback with 10 taps and a module for multiplication with ω (Table 1). The internal state of wage lfsr is also the internal state S of WAGE.
• WGP module implementing WGP: For smaller fields like F_2^7, the WGP area, when implemented as a constant array in VHDL/Verilog, i.e., as a look-up table, is smaller than when implemented using components such as multiplication and exponentiation to powers of two [13,14]. However, the WGP is not stored in hardware as a memory array, but rather as a net of and, or, xor and not gates, derived and optimized by the synthesis tools.
• SB module: The SB is implemented in unrolled fashion, i.e. as purely combinational logic, composed of 5 copies of R, followed by a Q and the final two not gates (Table 1).
• lfsr c: The lfsr c for generating the round constants was implemented in a 2-way parallel fashion. It has only seven 1 bit stages and two xor gates for the two feedback computations.
2. Extra hardware for the wage lfsr in sponge mode. Figure 7 shows details for stages S0, . . . , S10. The grey line represents the path for normal operation during the WAGE permutation. The additional hardware for the entire wage lfsr is listed below, with examples in brackets referring to Figure 7.
• The 64 bit i data is padded with zeros to 70 bits, then fragmented into 7 bit wage lfsr inputs Dk, k = 0, . . . , 9, corresponding to the rate stages Sr. For each data input Dk there is a corresponding 7 bit data output Ok (D1, O1 and D0, O0 in Figure 7).
• 10 xor gates must be added to the Sr stages to accommodate absorbing, encryption and decryption (xors at stages S9, S8).
• 10 multiplexers to switch between absorbing and normal operation (Amux1, Amux0 at S9, S8).
• An xor and a multiplexer are needed to add the domain separator i dom sep (Amux at S0).
• To replace the contents of the Sr stages, 10 multiplexers are added (Rmux1 at stage S9).
• Instead of additional multiplexers for loading, the existing Rmuxk, k = 9, 5, 4, 3, 0, multiplexers are now controlled by replace or load and labelled RLmuxk (see RLmux0 on S8). Since all non-input stages must keep their previous values, an enable signal lfsr en is needed.
• Three 7 bit and gates to turn off the inputs D6, D3 and D1 (and gate at D1).
• Four multiplexers are needed to turn off the SB during loading and tag extraction (SBmux at S4).

The total hardware cost to support the sponge mode is: 24 7 bit and one 2 bit multiplexers, 10 7 bit and one 2 bit xor gates, and three 7 bit and gates.
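The multiplication-with-ω module listed among the wage lfsr components acts on the 7-bit polynomial-basis vectors [a]_PB = (a0, . . . , a6) of Table 1. The software sketch below is our own; the reduction polynomial used is an assumption for illustration (this text does not list it), but the properties checked, F_2-linearity and the action on the element 1, hold for any choice.

```python
# Multiplication by omega in F_{2^7}, on vectors (a0,...,a6), a = sum a_i * w^i.
# ASSUMPTION: reduction w^7 = 1 + w^3 (illustrative only; the real WAGE
# field polynomial is defined in the specification, not in this text).
import random

RED = (1, 0, 0, 1, 0, 0, 0)  # coefficients of w^7 expressed in the basis

def mul_omega(a):
    """Shift a=(a0,...,a6) by one power of omega; reduce the overflow bit."""
    carry = a[6]
    shifted = (0,) + a[:6]                       # multiply by w
    return tuple(s ^ (carry & r) for s, r in zip(shifted, RED))

one   = (1, 0, 0, 0, 0, 0, 0)   # the element 1
omega = (0, 1, 0, 0, 0, 0, 0)   # the element w

# omega * 1 = omega, regardless of the reduction polynomial
assert mul_omega(one) == omega

# F_2-linearity: w*(a+b) = w*a + w*b (xor is the field addition)
random.seed(0)
for _ in range(100):
    a = tuple(random.randint(0, 1) for _ in range(7))
    b = tuple(random.randint(0, 1) for _ in range(7))
    ab = tuple(x ^ y for x, y in zip(a, b))
    wa_wb = tuple(x ^ y for x, y in zip(mul_omega(a), mul_omega(b)))
    assert mul_omega(ab) == wa_wb
```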
As mentioned in Section 2, special care was given to the design of loading and tag-extract. The existing data inputs Dk are reused for loading, and the outputs Ok for tag extraction. The wage lfsr is divided into five loading regions using the inputs Dk, k = 9, 5, 4, 3, 0. For example, the region S0, . . . , S8 in Figure 7 is loaded through input D0; however, instead of storing D0 ⊕ S8, the D0 data is fed directly into S8, i.e. the RLmux0 disconnects the Amux0 output. The remaining stages in this region are loaded by shifting, which requires the SBmux at S4. Note that there is no need to disconnect the two WGP, because they are automatically disabled by loading through D9 and D4, located at stages S36 and S18 respectively.

The loading process is illustrated in Table 4, where K̃i is the i-th 7 bit block of the 128 bit key K. Table 4 shows the key shifting through the LFSR stages in 9 clock cycles. The stages are shown in the second row of Table 4, and the values "-" in the table denote the old, unknown values that are overwritten by the new key. The state of stages S8, . . . , S0 after loading is finished is shown in the last row. The tag is extracted in a similar fashion as loading, but from the data output Ok at the end of a particular loading region; e.g., the region S9, . . . , S16, loaded through D3, is extracted through O1. The longest tag extraction region is of length 9, which is the same as the longest loading region.
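The shifting load of the region S8, . . . , S0 through D0 can be mimicked in a few lines. This is a behavioural illustration under our reading of Table 4, namely that the even-indexed key blocks K̃0, K̃2, . . . , K̃16 enter at S8, one per clock cycle, while the rest of the region fills by shifting.

```python
# Behavioural sketch of loading the wage_lfsr region S8..S0 through D0.
# Interpretation of Table 4 (not the RTL): each cycle, RLmux0 feeds D0
# directly into S8 and the region shifts S8 -> S7 -> ... -> S0.
region = [None] * 9                          # stages [S8, S7, ..., S0]

key_blocks = [f"K{2*t}" for t in range(9)]   # K0, K2, ..., K16 via D0

for d0 in key_blocks:
    region = [d0] + region[:-1]              # D0 into S8, shift the rest down

# After 9 cycles the region holds the key blocks in arrival order,
# matching the shift pattern of rows 0-5 of Table 4.
assert region == ["K16", "K14", "K12", "K10", "K8", "K6", "K4", "K2", "K0"]
```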
Hardware-Oriented Design Decisions
The design process for ACE and WAGE tightly integrated cryptanalysis and hardware optimizations. A few key hardware-oriented decisions are highlighted here; more can be found in the design rationale chapters of [4,5].
Functionally, it is equivalent for the boundary between phases to occur either before or after the permutation. For ACE and WAGE, the boundary was placed after the permutation updates the state register. This means that the two-bit domain separator is sufficient to determine the value of many of the multiplexer select lines and other control signals. All phases that have a domain separator of "00" have the same multiplexer select values. The same also holds true for "01". Unfortunately, this cannot be achieved for "10", because encryption and decryption require different control signal values, but the same domain separator. Using the domain separator to signal the transition between phases for encryption and decryption also simplifies the control circuit. For hashing, the change in phase is indicated by the i mode signal.
In applications where the delay through combinational circuitry is not a concern, such as with lightweight cryptography, where clock speed is limited by power consumption, not by the delay through combinational circuitry, it is beneficial to lump as much combinational circuitry as possible together into a single clock cycle. This provides more optimization opportunities for the synthesis tools than if the circuitry was separated by registers. For this reason, the ACE datapath was designed so that the input and output multiplexers, one round of the permutation, and state loading multiplexers together form a purely combinational circuit, followed by the state register.
4 Parallel Implementations
Parallelization in General
Both ciphers can be parallelized (unrolled) to execute multiple rounds per clock cycle, at the cost of increased area. In the top-level schematic in Figure 3, the dashed stacked boxes indicate parallelization. The FSM is parameterized with parameter p and used for un-parallelized (p=1) and parallelized (p>1) implementations. Other components are replicated to show p copies, with p=3 in Figure 3. Such a representation is symbolic; parallelization is applied only to the permutation, not the entire datapath. The interface with the environment remains the same.
ACE
The p=1 un-parallelized ACE permutation performs a single round per clock cycle, which implies 8 clock cycles per step. Parallel, i.e. unrolled, versions perform p rounds per clock cycle, and were implemented for divisors of 8, i.e. p = 2, 4, 8. The ACE permutation could be parallelized further, e.g. two or more steps in a single clock cycle. Figure 5(b) shows the example p=4 for registers A and B, with p=4 copies of SB-64 connected in series. Each SB-64 has its own round constant rc_0^k, k = 0, . . . , p−1. The round vs. step multiplexers are still needed, and can be removed only for values of p that are multiples of 8. Also note the step constant indicated as sc_0^{p−1}. For p=4 a step is concluded in 2 clock cycles. However, this requires a modification to the lfsr c, which must now generate p·3 round constant bits rc_j^k, j = 0, 1, 2, k = 0, . . . , p−1 per clock cycle. The last cycle within a step requires 7 additional bits, which together with rc_j^{p−1} yield 10 bits for the step constant generation sc_j^{p−1}. In the case p=4 the lfsr c must generate 12 constant bits in the first cycle and 19 constant bits in the second clock cycle of the step, which are then used for rc_j^k and sc_j^k. For the extra constant bits, the lfsr c feedback was replicated, i.e. (p − 1)·3 feedbacks in addition to the original 3.

WAGE

WAGE performs one clock cycle for the interaction with the environment, i.e. absorbing or replacing the input data into the state, followed by 111 clock cycles of the WAGE permutation. Because 111 is divisible only by 3 and 37, the opportunities to parallelize WAGE appear rather limited. However, by treating the absorption or replacement of the input data into the internal state as an additional clock cycle in the permutation, we increase the length of the permutation to 112 clock cycles. Because 112 has many divisors, this allows parallelism of p = 2, 3, 4, 6, 8. The cost is a less than 1% decrease in performance for the additional clock cycle and some additional multiplexers, because the clock cycle that loads data has different behaviour than the normal clock cycles.

Figure 8 shows the 3-way parallel wage lfsr, including all nonlinear components and their copies. Multiplexers are not replicated, and hence are not shown. For the components f, rc, WGP and SB in Figure 8, the superscript k indicates the original (k = 0) and the two copies (k = 1, 2). Computation of the three feedbacks f^k is not shown but is conducted as

f^k = S_{31+k} ⊕ S_{30+k} ⊕ S_{26+k} ⊕ S_{24+k} ⊕ S_{19+k} ⊕ S_{13+k} ⊕ S_{12+k} ⊕ S_{8+k} ⊕ S_{6+k} ⊕ (ω ⊗ S_{0+k}).

Similar to ACE, the generation of the WAGE round constants rc_1^k, rc_0^k must be parallelized as well. For readability, the two WGP were labelled WGP_1^k, WGP_0^k, with WGP_1^0, WGP_0^0 being the original WGPs positioned at S36, S18, just like rc_1^0, rc_0^0. Similarly, the SBs were also labelled SB_j^k, j = 3, 2, 1, 0, in decreasing order, i.e. SB_3^0 is the original SB with input S34.
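The unrolling argument, p feedbacks computed combinationally from the current state followed by a shift by p, can be checked on a toy model. The sketch below uses the tap set of the feedback formula above over a stand-in field (the multiplication by ω reuses an assumed reduction polynomial; the equivalence being tested does not depend on that choice) and verifies that one 3-parallel step equals three serial steps. The nonlinear WGP/SB contributions and round constants are omitted for brevity.

```python
# Toy check that a 3-way unrolled step of a 37-stage word LFSR equals
# three serial steps.  Taps follow the feedback formula f^k above; the
# omega-multiply is a stand-in (assumed reduction w^7 = 1 + w^3).
import random

TAPS = (31, 30, 26, 24, 19, 13, 12, 8, 6)     # xor taps; plus omega * S0

def mul_omega(a):
    carry = a >> 6
    return ((a << 1) & 0x7F) ^ (carry * 0b0001001)   # w^7 := 1 + w^3 (assumed)

def feedback(S, k=0):
    f = 0
    for t in TAPS:
        f ^= S[t + k]
    return f ^ mul_omega(S[0 + k])

def step(S):                                   # one serial clock
    return S[1:] + [feedback(S)]

def step3(S):                                  # one 3-way parallel clock
    fs = [feedback(S, k) for k in range(3)]    # f^0, f^1, f^2 from current S
    return S[3:] + fs

random.seed(1)
S = [random.randrange(128) for _ in range(37)]
assert step(step(step(S))) == step3(S)
```

The equivalence holds because the highest tap plus k (31 + 2 = 33) never reaches the newly inserted stages, so all three feedbacks depend only on the current state, exactly the property the combinational copies in Figure 8 exploit.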
Implementation Technologies and ASIC Implementation Results
Logic synthesis was performed with Synopsys Design Compiler version P-2019.03 using the compile ultra command and clock gating. Physical synthesis (place and route) and power analysis were done with Cadence Encounter v14.13 using a density of 95%. Simulations were done in Mentor Graphics ModelSim SE v10.5c. The ASIC cell libraries used were ST Microelectronics 65 nm CORE65LPLVT at 1.25V, TSMC 65 nm tpfn65gpgv2od3 200c and tcbn65gplus 200a at 1.0V, ST Microelectronics 90 nm CORE90GPLVT and CORX90GPLVT at 1.0V, and IBM 130 nm CMRF8SF LPVT with SAGE-X v2.0 standard cells at 1.2V.

Some past works have used scan-cell flip-flops to reduce area, because these cells include a 2:1 multiplexer in the flip-flop, which incurs less area than using a separate multiplexer. Scan-cell flip-flops were not used here because their use as part of the design would prevent their insertion for fault detection and hence prevent the circuit from being tested for manufacturing faults. Furthermore, chip enable signals were removed from all datapath registers, which are controlled by clock gating instead. This allows a further reduction of the implementation area.

Figure 9 shows area² vs. throughput for both ACE and WAGE with different degrees of parallelization, denoted by W-p and A-p (p = 1, 2, 3, 4, 8). The throughput axis is scaled as log(Tput) and the area axis is scaled as log(area²). The grey contour lines denote the relative optimality of the circuits using Tput/area². Throughput is increased by increasing the degree of parallelization (unrolling), which reduces the number of clock cycles per permutation round. For p=1, the area of WAGE (W-1) is less than that of ACE (A-1), because WAGE has 259 registers, compared to 320 for ACE. As parallelization is increased, WAGE's area grows faster than ACE's, because of the larger size of WAGE's permutation.
Going from p=1 to p=8 results in a 1.72× area increase for ACE and 3.80× for WAGE on average. Optimality for WAGE reaches a maximum at p=3; for ACE, optimality continues to increase beyond p=8. As can be seen by the relatively constant size of the shaded rectangles enclosing the data points, the relative area increase with parallelization is largely independent of implementation technology.

Table 5 represents the same data points as Figure 9 with the addition of maximum frequency (f, MHz) and energy per bit (E, nJ). Energy is measured as the average value while performing all cryptographic operations over 8192 bits of data at 10 MHz. As the ACE throughput increases, energy per bit decreases consistently, despite higher circuit area and, therefore, power consumption. However, this is not the case with WAGE. This phenomenon can be explained by the higher relative area increase for WAGE, which comes from the higher complexity of WGP with respect to SB-64. Connecting more WGPs in a combinational chain results in an exponential increase of the number of glitches, which drastically increases power consumption.

Table 6 reports the area results obtained using the ST Micro 65 nm process and tool flow from this paper and the results reported in the submission documents. The various ciphers use different protocols and interfaces, sometimes provide different functionality (e.g., with or without hashing), and use different key sizes. As such, this analysis is very imprecise, but gives a rough comparison to the ACE and WAGE results. As the LWC competition progresses and the hardware API matures, more precise comparisons will become possible. This preliminary analysis indicates that ACE and WAGE are among the smaller cipher candidates.
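The contour metric of Figure 9, Tput/area², can be recomputed from the ST Micro 65 nm columns of Table 5. The snippet below is our own and simply reproduces the ACE trend noted above: optimality keeps improving with unrolling through p=8.

```python
# Tput/area^2 "optimality" from Table 5, ST Micro 65 nm columns (ACE rows).
# (Tput in bits per clock cycle, area in GE, as reported.)
ace = {"A-1": (0.5, 4250), "A-2": (1, 4780), "A-4": (2, 5760), "A-8": (4, 7240)}

opt = {label: tput / area**2 for label, (tput, area) in ace.items()}

# Optimality increases monotonically with parallelization for ACE.
vals = [opt[k] for k in ("A-1", "A-2", "A-4", "A-8")]
assert all(a < b for a, b in zip(vals, vals[1:]))
```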
Conclusion
The goal of the ACE and WAGE design process was to build on the well-studied Simeck S-Box and Welch-Gong permutation. The overall algorithms were designed to lend themselves to efficient implementations in hardware and to scale well with increased parallelism. ACE has a larger internal state, 320 bits vs. 259 for WAGE, but the ACE permutation is smaller than that of WAGE. This means the non-parallel version of WAGE is smaller than that of ACE, but as parallelism increases, WAGE eventually becomes larger than ACE. At 1 and 2 bits-per-cycle, the designs are relatively similar in area. A number of the NIST LWC candidate ciphers provided synthesizable source code. A preliminary comparison with these ciphers on ST Micro 65 nm indicates that ACE and WAGE are likely to be among the smaller candidates.
Algorithm fragment (SB-64): Function SB-64(x1||x0, rc), with rc = (q7, q6, . . . , q0).

Figure 1: Schematic diagram of the AEAD algorithm, where Perm is the ACE or WAGE permutation, respectively.

… finalization, because both schemes use a 128 bit key. With the exception of tag extraction, both schemes generate an output only during the encryption and decryption phases: the 64 bit output block is obtained by the xor of the current input and rate. Figure 1 also shows the functions load-AE(N, K) and tagextract(S), which are straightforward for ACE. The ACE load-AE(N, K) performs the loading of the 128 bit key K and nonce N, where the key is loaded into registers A and C, the nonce into B and E, and the register D is loaded with zeros. The ACE tagextract(S) extracts the 128 bit tag from registers A and C. Special care was taken in the specification of load-AE(N, K) and tagextract(S) of WAGE to take advantage of the shifting nature of the LFSR, which will be discussed in more detail in Section 3.3.

Figure 2: Schematic diagram of the ACE-H-256 algorithm, where Perm is the ACE permutation.

Figure 3: Top-level module.

Figure 4: Timing diagram for ACE during encryption.

Figure 5(a) shows the ACE datapath. The top and bottom of the figure depict the five 64 bit registers A, B, C, D and E, followed by the hardware components required for normal operation during permutation, absorbing, and replacing, which imposes input multiplexers controlled by the mode and the counter pcount.

Figure 5: The ACE module datapath and parallelization.

Figure 6: The WAGE cipher datapath.

Figure 8: The WAGE permutation with p=3.

Figure 9: Area² vs. throughput. Throughput is measured in bits per clock cycle (bpc) and plotted on a log scale axis; the area axis is scaled as log(area²).
Table 1: Specification parameters of WAGE

LFSR feedback polynomial: ℓ(y) = y^37 + y^31 + y^30 + y^26 + y^24 + y^19 + y^13 + y^12 + y^8 + y^6 + ω (last two terms restored from the feedback formula in Section 4)
Field element: a ∈ F_2^7, a = Σ_{i=0}^{6} a_i ω^i, a_i ∈ F_2
Vector representation: [a]_PB = (a0, a1, a2, a3, a4, a5, a6)
Table 2: Interface signals

Input signal   Meaning
reset          resets the state machine
i mode         mode of operation
i dom sep      domain separator
i padding      the last block is padded
i data         input data
i valid        valid data on i data

Output signal  Meaning
o ready        hardware is ready
o data         output data
o valid        valid data on o data
Table 3: Modes of operation
Table 4: wage lfsr loading through D0 (columns: shift count, D0 input, and stage contents S8, S7, S6, S5, S4, S3, S2, S1, S0)
Table 5: Post-PAR implementation results
(Energy results obtained with timing simulation at 10 MHz.)

                    ST Micro 65 nm        TSMC 65 nm            ST Micro 90 nm        IBM 130 nm
Label     Tput      A      f      E       A      f      E       A      f      E       A      f      E
[A/W-p]   [bpc]     [GE]   [MHz]  [nJ]    [GE]   [MHz]  [nJ]    [GE]   [MHz]  [nJ]    [GE]   [MHz]  [nJ]
ACE
A-1       0.5       4250   720    27.9    4600   705    20.1    3660   657    62.2    4350   128    46.8
A-2       1         4780   618    18.4    5290   645    12.4    4130   628    35.8    4980   88.9   29.4
A-4       2         5760   394    15.1    6260   588    8.51    4940   484    25.4    5910   90.5   21.1
A-8       4         7240   246    11.4    8090   493    6.40    6170   336    19.4    7550   63.2   18.4
WAGE
W-1       0.57      2900   907    20.0    3290   1120   13.0    2540   940    39.2    2960   153    30.4
W-2       1.14      4960   590    19.1    5310   693    10.6    4280   493    34.4    4850   98.5   N/A
W-3       1.68      5480   397    20.4    5930   527    10.7    4770   414    31.2    5460   79.6   26.5
W-4       2.29      6780   307    24.0    7460   387    12.1    5790   277    32.9    6700   51.9   33.4
W-8       4.57      12150  192    38.5    11870  204    19.9    9330   137    49.9    10960  34.5   59.9
Table 6 summarizes the area on ST Micro 65 nm of the LWC submissions [3] that included synthesizable VHDL or Verilog code.

Table 6: Area of LWC candidates on ST Micro 65 nm (post-PAR), comparing this work with the areas reported in the submission documents [3].
Acknowledgements. This work benefited from the collaborative environment of the Communications Security (ComSec) Lab at the University of Waterloo, and in particular discussions with Kalikinkar Mandal, Raghvendra Rohit, and Guang Gong.
Submission Requirements and Evaluation Criteria for the Lightweight Cryptography Standardization Process, https://csrc.nist.gov/CSRC/media/Projects/Lightweight-Cryptography/documents/final-lwc-submission-requirements-august2018.pdf
NIST Lightweight Cryptography round 1 candidates, https://csrc.nist.gov/Projects/Lightweight-Cryptography/Round-1-Candidates
M.D. Aagaard, R. AlTawy, G. Gong, K. Mandal, R. Rohit, "ACE: An Authenticated Encryption and Hash Algorithm - Submission to the NIST LWC Competition", March 2019, https://csrc.nist.gov/CSRC/media/Projects/Lightweight-Cryptography/documents/round-1/spec-doc/ace-spec.pdf
M.D. Aagaard, R. AlTawy, G. Gong, K. Mandal, R. Rohit, "WAGE: An Authenticated Cipher - Submission to the NIST LWC Competition", March 2019, https://csrc.nist.gov/CSRC/media/Projects/Lightweight-Cryptography/documents/round-1/spec-doc/wage-spec.pdf
B. Rezvani, W. Diehl, "Hardware Implementations of NIST Lightweight Cryptographic Candidates: A First Look", Cryptology ePrint Archive, Report 2019/824, 2019.
E. Homsirikamol, W. Diehl, A. Ferozpuri, F. Farahmand, P. Yalla, J.P. Kaps, K. Gaj, "CAESAR Hardware API", Cryptology ePrint Archive, Report 2015/669, 2016.
J.P. Kaps, W. Diehl, M. Tempelmeier, E. Homsirikamol, K. Gaj, "Hardware API for Lightweight Cryptography", 2019.
R. AlTawy, R. Rohit, M. He, K. Mandal, G. Yang, G. Gong, "sLiSCP: Simeck-based Permutations for Lightweight Sponge Cryptographic Primitives", in SAC 2017, C. Adams and J. Camenisch, Eds., Springer, pp. 129-150.
G. Yang, B. Zhu, V. Suder, M.D. Aagaard, G. Gong, "The Simeck family of lightweight block ciphers", in CHES 2015, T. Güneysu and H. Handschuh, Eds., Springer, pp. 307-329.
Y. Nawaz, G. Gong, "The WG stream cipher", ECRYPT Stream Cipher Project Report 2005/33, 2005.
Y. Nawaz, G. Gong, "WG: A family of stream ciphers with designed randomness properties", Inf. Sci. 178(7), Apr. 2008, pp. 1903-1916.
M.D. Aagaard, G. Gong, R.K. Mota, "Hardware implementations of the WG-5 cipher for passive RFID tags", in Hardware-Oriented Security and Trust (HOST) 2013, IEEE, pp. 29-34.
Y. Luo, Q. Chai, G. Gong, X. Lai, "A lightweight stream cipher WG-7 for RFID encryption and authentication", in 2010 IEEE Global Telecommunications Conference GLOBECOM 2010, Dec. 2010, pp. 1-6.
Adiabaticity and color mixing in tetraquark spectroscopy

J. Vijande (Departamento de Física Atómica, Molecular y Nuclear, Universidad de Valencia (UV) and IFIC (UV-CSIC), Valencia, Spain)
A. Valcarce (Departamento de Física Fundamental, Universidad de Salamanca, 37008 Salamanca, Spain)
J.-M. Richard (Institut de Physique Nucléaire de Lyon, Université de Lyon, IN2P3-CNRS-UCBL, 4 rue Enrico Fermi, 69622 Villeurbanne, France)

(Dated: December 12, 2013)
arXiv:1301.6212; doi:10.1103/PhysRevD.87.034040
PACS numbers: 12.39.Jh, 12.40.Yx, 31.15.Ar

We revisit the role of color mixing in the quark-model calculation of tetraquark states, and compare simple pairwise potentials to more elaborate string models with three- and four-body forces. We attempt to disentangle the improved dynamics of confinement from the approximations made in the treatment of the internal color degrees of freedom.
I. INTRODUCTION
There is a persisting interest in the quark dynamics applied to multiquark spectroscopy. The question is whether there exist compact hadron states beyond ordinary mesons (quark-antiquark) and baryons (three quarks).
In simple constituent models, several mechanisms have been proposed for binding multiquarks: chiral dynamics for light quarks, chromomagnetism, clustering of heavy quarks in a chromoelectric potential, etc. In this article, we concentrate on this latter effect, i.e., multiquark binding in a spin-and flavor-independent potential, and discuss the role of the internal color degrees of freedom. The validity of the existing models is beyond the scope of this note. In particular, we shall not discuss the transition from color as a local gauge invariance to color as a global property of the wave-function in constituent models. Still, even in its simplified version, color is a delicate ingredient of multiquark dynamics.
The first studies on multiquarks within constituent models were based on simple color-additive potentials, extrapolated from meson and baryon spectroscopy. Already for baryons, one hardly justifies the choice of a pairwise interaction, half as strong as the quark-antiquark potential, but a more elaborate modeling, based on a connected Y -shape flux tube linking the three quarks, does not change the results significantly.
We shall follow in this paper this picture of a minimal string linking the quarks. There are alternative non-trivial pictures of confinement, in particular the ones based on diquarks, which have been extended from the baryon sector to the multiquarks. See, for instance, [1][2][3].
The Y-shape potential of baryons has been extended to tetraquarks and higher multiquark configurations [4]. There are multi-Y connected diagrams in which the string interaction links all quarks and antiquarks as a Steiner tree whose cumulated length is minimized. But the dynamics is dominated by the so-called "flip-flop" diagrams, with disconnected flux tubes for each of the quark-antiquark, three-quark or three-antiquark subclusters: the attraction comes from the minimum taken over all possible permutations of the quarks and of the antiquarks.
This flip-flop interaction contains 3-body, 4-body, and higher-order terms, and is thus more delicate to handle in variational calculations. Moreover, when two quarks are exchanged, the color wave function is modified. In the latest studies [5][6][7], this effect is not treated rigorously. Instead, a type of adiabatic approximation is used: for any set of coordinates for the quarks, the potential is taken as the minimum of all permutations of the quarks and antiquarks, irrespective of the color wave function, and this minimum, as a function of the coordinates, is interpreted as an effective potential leading to a few-body spectral problem in which color has disappeared. Interestingly, this strategy leads to stable multiquarks for a large variety of constituent masses. This is at variance with the color-additive model, which binds only tetraquarks for large values of the quark-to-antiquark mass ratio.
The question is thus whether the multiquark binding obtained in string models survives a non-adiabatic treatment of the internal color degrees of freedom. The problem is analyzed in the present paper by reformulating the string-based interaction as an operator in color space.
The outline is the following. In Sec. II, we set the notation for the color components of tetraquarks. In Sec. III, we present various models of tetraquark confinement with coupled channels in color space or an adiabatic approximation in which color disappears from the final waveequation. The results are presented in Sec. IV, and the conclusions in Sec.V.
II. COLOR STATES
The internal color structure of tetraquark states is described in several papers, see, e.g. [8][9][10]. We shall borrow the notation of [8], in particular the names "true baryonium" (T ) and "mock baryonium" (M ), though the physics context is rather different in the present heavyquark spectroscopy as compared to the color chemistry of the late 70s.
The wave function for (1, 2, 3, 4) = (q q q̄ q̄) is written as
Ψ = ψ_T |T⟩ + ψ_M |M⟩ ,    (1)
where |T denotes a color state with the two quarks in a color3 state, and the antiquarks in a color 3, while |M corresponds to a color sextet in the quark sector and antisextet in the antiquark one. There is also the possibility of building the global color state out of quarkantiquark clusters either in a color singlet or octet. More precisely, we introduce
|T⟩ = |(12)_3̄ (34)_3⟩ ,  |M⟩ = |(12)_6 (34)_6̄⟩ ,
|1⟩ = |(13)_1 (24)_1⟩ ,  |8⟩ = |(13)_8 (24)_8⟩ ,
|1'⟩ = |(14)_1 (23)_1⟩ , |8'⟩ = |(14)_8 (23)_8⟩ .    (2)
The relations between the different sets can be deduced from
|1⟩ = √(1/3) |T⟩ + √(2/3) |M⟩ ,  |8⟩ = −√(2/3) |T⟩ + √(1/3) |M⟩ ,
|1'⟩ = −√(1/3) |T⟩ + √(2/3) |M⟩ , |8'⟩ = √(2/3) |T⟩ + √(1/3) |M⟩ .    (3)
Accordingly, the matrix elements of the potential in any basis are related to the ones in another basis, for instance,
V_11 = ⟨1|V|1⟩ = (1/3) V_TT + (2/3) V_MM + (2√2/3) V_TM ,    (4)
and many similar relations.
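Relation (4) is easy to check numerically: express |1⟩ in the {|T⟩, |M⟩} basis and conjugate an arbitrary symmetric potential matrix. The snippet below is illustrative and also verifies the combinations that will appear in Eq. (11).

```python
# Numerical check of Eq. (4): V_11 = <1|V|1> with |1> = sqrt(1/3)|T> + sqrt(2/3)|M>.
import math, random

s13, s23 = math.sqrt(1/3), math.sqrt(2/3)
one  = (s13, s23)                      # |1>  in the {|T>, |M>} basis
onep = (-s13, s23)                     # |1'> in the same basis

random.seed(2)
VTT, VMM, VTM = (random.uniform(-1, 1) for _ in range(3))
V = ((VTT, VTM), (VTM, VMM))           # symmetric potential in {T, M}

def sandwich(u, V, w):
    return sum(u[i] * V[i][j] * w[j] for i in range(2) for j in range(2))

V11 = sandwich(one, V, one)
V1p = sandwich(onep, V, onep)

assert abs(V11 - (VTT/3 + 2*VMM/3 + 2*math.sqrt(2)*VTM/3)) < 1e-12
# The inverse combinations of Eq. (11) follow from the same conjugation:
assert abs((3/(4*math.sqrt(2)))*(V11 - V1p) - VTM) < 1e-12
assert abs(0.25*(3*V11 + 3*V1p - 2*VTT) - VMM) < 1e-12
```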
As stressed by Lipkin [11], in the limit of a tetraquark with two units of heavy flavor, (QQq̄q̄) with a large quark-to-antiquark mass ratio M/m, the ground state is an almost pure |T⟩ state, with the two flavored quarks in an antitriplet state, as in ordinary (QQq) baryons, and the two antiquarks neutralizing that color, as in antibaryons. In other words, the tetraquark state in the large M/m limit just uses well-probed color structures, such as the 3 ⊗ 3 → 3̄ coupling of two quarks in baryons.
On the other hand, for smaller values of the mass ratio M/m, the simple models give at best a very shallow binding. Then the mixing of |T⟩ and |M⟩ is crucial to establish the stability. See, e.g., Brink and Stancu [12].
III. MODELS OF TETRAQUARK CONFINEMENT
In the early days of multiquark calculations, the potential was assumed to be pairwise, with the color dependence associated with the exchange of a color octet, namely
V = −(3/16) Σ_{i<j} λ̃_i · λ̃_j v(r_ij) .    (5)
Here, λ̃_i is the color operator for the i-th quark, and is suitably modified for an antiquark belonging to the 3̄ representation of SU(3). The normalization is such that v(r_ij) is the central part of the quarkonium potential. This model, however crude, gives at least the possibility of studying the role of the internal color degrees of freedom inside a multiquark state.
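The normalization −3/16 in (5) can be checked with SU(3) Casimir algebra: for two color fundamentals coupled to a pair representation R, ⟨λ̃_i·λ̃_j⟩ = 2[C₂(R) − C₂(3) − C₂(3)], and similarly with 3̄. The snippet below is our own; it recovers the meson factor 1 and the "half as strong" quark-quark factor mentioned in the Introduction.

```python
# <lambda_i . lambda_j> = 2*(C2(pair) - C2(f1) - C2(f2)) for two color
# fundamentals coupled to a pair representation; C2 = quadratic Casimir.
C2 = {"1": 0.0, "3": 4/3, "3bar": 4/3, "6": 10/3, "8": 3.0}

def lam_lam(pair, f1="3", f2="3"):
    return 2 * (C2[pair] - C2[f1] - C2[f2])

# quark-antiquark singlet (a meson): -3/16 * <ll> = 1 -> full-strength v(r)
assert abs(-3/16 * lam_lam("1", "3", "3bar") - 1.0) < 1e-12
# quark-quark antitriplet (inside a baryon): factor 1/2 -> "half as strong"
assert abs(-3/16 * lam_lam("3bar", "3", "3") - 0.5) < 1e-12
```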
The additive model being subject to heavy criticism, an alternative was sought, inspired by the strong-coupling regime of QCD. It is referred to as the flip-flop model. For mesons, the potential is V = σ r, where r is the quark-antiquark distance, and σ the string tension, which can be set to σ = 1 by rescaling, without loss of generality. For baryons, this is the Y-shape interaction V_Y = min_a (r_1a + r_2a + r_3a). For a tetraquark, the potential is the minimum of the flip-flop term and a connected string, namely [4,5]

V_4 = min(V_ff, V_s) ,
V_ff = min(r_13 + r_24, r_14 + r_23) ,
V_s = min_{a,b} (r_1a + r_2a + r_ab + r_3b + r_4b) ,    (6)
as pictured in Figs. 1. For each set of quark coordinates r i , the minimization in (6) implies a rotation in color space. This means that V 4 is an adiabatic approximation, which tends to overbind the system. More importantly, the minimization, at least as it was carried out in [5], does not account for any antisymmetrization. It just holds for distinguishable quarks or antiquarks.
As the model (6) gives an interesting spectrum of stable tetraquarks [5], and, if extended to higher configurations, a spectrum of bound pentaquarks and hexaquarks [6,7], it is crucial to estimate the amount of overbinding due to the adiabatic approximation, and the changes occurring when a proper antisymmetrization is implemented.
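The flip-flop piece of (6) is straightforward to evaluate. The sketch below (our own) computes V_ff for a configuration with two well-separated quark-antiquark clusters, where the minimum clearly selects the (13)(24) pairing. The connected Steiner term V_s, which requires minimizing over the junction positions a and b, is omitted, consistent with model A below.

```python
# Flip-flop potential V_ff = min(r13 + r24, r14 + r23) of Eq. (6), sigma = 1.
import math

def v_flipflop(r1, r2, r3, r4):
    """Quarks 1,2 and antiquarks 3,4 at the given 3D positions."""
    return min(math.dist(r1, r3) + math.dist(r2, r4),
               math.dist(r1, r4) + math.dist(r2, r3))

# Two tight clusters: (1,3) near the origin, (2,4) far away.
r1, r3 = (0.0, 0.0, 0.0), (0.1, 0.0, 0.0)
r2, r4 = (10.0, 0.0, 0.0), (10.1, 0.0, 0.0)

v = v_flipflop(r1, r2, r3, r4)
assert abs(v - 0.2) < 1e-12                                  # (13)(24) wins
assert v < math.dist(r1, r4) + math.dist(r2, r3)             # vs. (14)(23)
```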
We aim at constructing an operator in color space that tends to (6) in the adiabatic limit, at least when one of the three terms is clearly the minimum. For instance, if the (1,3) pair is clustered and lies far from the (2,4) pair, which is also clustered, the potential is more easily described in the $\{|1\rangle, |8\rangle\}$ basis. However, working solely in this latter basis would require the choice of a value for the string tension between color-octet objects (there are studies within QCD; see, e.g., [13]) and an ansatz for the transition potential $V_{18}$. Instead, we shall combine the pieces of information coming from the singlet-singlet and triplet-antitriplet states to deduce the full $2 \times 2$ matrix of the potential in the $\{|T\rangle, |M\rangle\}$ basis, in which the four-body problem will be solved.
More precisely, we will consider four different models:
• Model A is the adiabatic limit given by (6), already used in [5]. However, the Steiner tree with two junctions, which plays a marginal role, is neglected. Hence, this is the pure flip-flop model.
• Model B is a smooth version of the adiabatic approximation. We use $g(x) = 1/(1 + x^n)$ and its complement $\bar{g}(x) = 1 - g(x)$, with some large but finite exponent $n = 5$, to soften the transition between the different limiting regimes of the string model. In practice, we replace the above $V_{TT}$ by
$$\hat{V}_{TT} = g_3\, g_{3'}\, V_{TT} + (1 - g_3\, g_{3'}) \left[ \frac{3 V_{1'1'} + V_{11}}{4}\, g_1 + \frac{3 V_{11} + V_{1'1'}}{4}\, g_{1'} \right] , \qquad (7)$$
where
$$V_{11} = r_{13} + r_{24}\,, \qquad V_{1'1'} = r_{14} + r_{23}\,, \qquad (8)$$
and
$$g_3 = g\!\left(\frac{V_{\bar{3}3}}{V_{11}}\right), \qquad g_{3'} = g\!\left(\frac{V_{\bar{3}3}}{V_{1'1'}}\right), \qquad g_1 = 1 - g_{1'} = g\!\left(\frac{V_{11}}{V_{1'1'}}\right). \qquad (9)$$
Then the potential is known in any basis from $V_{11}$, $V_{1'1'}$ and $\hat{V}_{TT}$. For instance, in the $\{|1\rangle, |8\rangle\}$ basis, one uses $V_{11}$ and
$$V_{18} = \frac{1}{4\sqrt{2}} \left\{ V_{11} + 3 V_{1'1'} - 4 \hat{V}_{TT} \right\}, \qquad V_{88} = \frac{1}{4} \left\{ -V_{11} + 3 V_{1'1'} + 2 \hat{V}_{TT} \right\}, \qquad (10)$$
and it is readily checked that if $g_1 \to 1$, the potential becomes diagonal. The four-body problem is more conveniently solved in the $\{|T\rangle, |M\rangle\}$ basis, and besides $\hat{V}_{TT}$, the relevant matrix elements are
$$V_{TM} = \frac{3}{4\sqrt{2}} \left\{ V_{11} - V_{1'1'} \right\}, \qquad V_{MM} = \frac{1}{4} \left\{ 3 V_{11} + 3 V_{1'1'} - 2 \hat{V}_{TT} \right\}. \qquad (11)$$
• Model C is the color-additive model of (5), normalized to a unit string tension for quarkonium.
• Model D is the crude adiabatic limit of the previous one. This means that for any given set of positions, the $2 \times 2$ matrix consisting of $V_{TT}$, $V_{TM}$ and $V_{MM}$ of model C is diagonalized, and the lowest eigenvalue is taken as the effective four-body potential, irrespective of any symmetrization or antisymmetrization. Of course, model D is more attractive than model C.
• Several other variants have been envisaged, but abandoned as leading to collapses or inconsistencies. For instance, it is tempting to use relations similar to (11) to express $V_{TM}$ and $V_{MM}$ from $V_{11}$, $V_{1'1'}$ and $V_{TT}$, as given by the string model. But then the Hamiltonian is not bounded from below.
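The smooth interpolation of model B can be checked numerically. The sketch below is our own (the string input written as `v33` stands for the $V_{\bar{3}3}$-type quantity appearing in (9)); it verifies the claim below Eq. (10) that $V_{18}$ vanishes when $g_1 \to 1$ and $g_3 g_{3'} \to 0$, and that $\hat{V}_{TT}$ reduces to the string value $V_{TT}$ when $g_3 g_{3'} \to 1$:

```python
import math

def g(x, n=5):
    # smooth switch g(x) = 1 / (1 + x^n), with n = 5 as in the text
    return 1.0 / (1.0 + x ** n)

def v_tt_hat(v11, v1p1p, v33, v_tt, n=5):
    # Eq. (7) with the switches of Eq. (9); v33 is the string-model input
    g3, g3p = g(v33 / v11, n), g(v33 / v1p1p, n)
    g1 = g(v11 / v1p1p, n)
    g1p = 1.0 - g1
    mix = 0.25 * (3 * v1p1p + v11) * g1 + 0.25 * (3 * v11 + v1p1p) * g1p
    return g3 * g3p * v_tt + (1.0 - g3 * g3p) * mix

def v18(v11, v1p1p, vtt_hat):
    # off-diagonal element of Eq. (10)
    return (v11 + 3 * v1p1p - 4 * vtt_hat) / (4 * math.sqrt(2))
```

In the first limit the bracket of (7) collapses to $(3V_{1'1'} + V_{11})/4$, which is exactly the value that makes $V_{18}$ of (10) vanish.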
IV. RESULTS
The results are shown in Table I. The aim is not to provide a benchmark of four-body calculations. What really matters is how the ground-state energy evolves when going from one model to another. The variational estimate has been carried out using just a few Gaussians and, in the case of models B and C, imposing the proper symmetry.
For this purpose, we make use of a wave function with the relevant symmetry for each color component. As the color vector $|T\rangle$ ($|M\rangle$) is antisymmetric (symmetric) under both the exchange of the identical quarks and of the identical antiquarks, it has to be combined with a radial wave function with the proper symmetries. The way of constructing these wave functions has been explicitly detailed in Ref. [10]; we just sketch its main characteristics here. The radial wave function is taken as a linear combination of generalized Gaussians depending on six variational parameters $a_{ij}$, of the form
$$\Psi(\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3) = \sum_{k=1}^{4} \alpha_k \exp\left[ -\sum_{i \ge j = 1}^{3} a_{ij}\, s^k_{ij}\, \mathbf{x}_i \cdot \mathbf{x}_j \right], \qquad (12)$$
where the $s^k$ are six-component $(ij)$ vectors made of arrangements of positive and negative signs. Once combined with the proper choice of the signs $\alpha_k$, they give rise to radial wave functions with the following symmetries in the radial space under the exchange of quarks and antiquarks: SS, SA, AS, or AA, where S stands for symmetric and A for antisymmetric. The results from models A and D are very similar for the variational energies. This means that the extra attraction noticed in [5] is mainly due to the color mixing in the adiabatic approximation. Restoring the interaction as an operator in color space is less favorable for multiquark binding, and it is readily seen that models B and C lead to comparable predictions. The color structure of the wave functions is also rather similar in models B and C, with mostly a singlet-singlet configuration.
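A toy version of this construction (our own, deliberately simplified: a single Gaussian instead of four, and the exchange of the two identical quarks modeled as $\mathbf{x}_1 \to -\mathbf{x}_1$ in a suitable Jacobi set) shows how sign flips generate the S/A combinations:

```python
# Toy sketch (ours): one generalized Gaussian of Eq. (12) and its
# (anti)symmetrized combinations under x1 -> -x1. The paper's full
# construction uses four terms with sign vectors s^k; this is a cut-down
# illustration only.
import math

def gauss(a, x1, x2, x3):
    # exp(- sum_{i>=j} a_ij x_i . x_j) for one parameter set a[(i, j)]
    xs = (x1, x2, x3)
    expo = sum(aij * sum(u * v for u, v in zip(xs[i], xs[j]))
               for (i, j), aij in a.items())
    return math.exp(-expo)

def flip(x):
    # exchange of the two identical quarks modeled as x1 -> -x1
    return tuple(-c for c in x)

def psi(a, x1, x2, x3, parity=+1):
    # symmetric (parity = +1) or antisymmetric (parity = -1) combination
    return gauss(a, x1, x2, x3) + parity * gauss(a, flip(x1), x2, x3)
```

A single Gaussian with cross terms $a_{21}, a_{31} \neq 0$ is not an eigenstate of the exchange; the symmetrized sum is, which is precisely why the $s^k$ sign vectors are needed.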
However, if one looks at the details, it can be realized that the flip-flop and the additive models differ. In particular, the average $QQ$ separation $r_{12}$ is smaller in the additive model than in the flip-flop model. If the mass ratio $M/m \to \infty$, this effect becomes more pronounced, with $r_{12} \to 0$ in the former case and $r_{12}$ evolving to a finite value (for $m$ fixed) in the latter one. This influences the $W$-exchange contribution to the weak decay of $(bc\bar{q}\bar{q})$, which also plays a role in the decay of $(bcu)$ baryons [14].

This large $M/m$ limit is rather interesting. A detailed numerical study would involve the computation of an effective $QQ$ interaction, $V_{\rm eff}(r_{12})$, sum of the direct $QQ$ interaction and of the $\bar{q}\bar{q}$ binding energy around them, very similar to the Heitler-London potential of two protons in the adiabatic treatment of the hydrogen molecule. Already for baryons with two heavy quarks, $(QQq)$, it was stressed that the Born-Oppenheimer approximation works very well to reproduce the properties of the lowest states [15]. For $(QQ\bar{q}\bar{q})$, it is striking that the effective potentials $V_{\rm eff}(r_{12})$ have qualitative differences in the additive model and in the flip-flop one. In the additive model, the minimum of $V_{\rm eff}(r_{12})$ is reached at $r_{12} = 0$. The $\bar{q}\bar{q}$ contribution is stationary there, so, up to a constant, the potential is dominated by the direct $QQ$ term, which is $r_{12}/2$ here. Thus the average separation decreases as $M^{-1/3}$, according to the well-known scaling laws in a linear potential [16]. In the flip-flop model, $V_{\rm eff}(r_{12})$ is minimum at some finite distance. This can be seen directly. It is also compulsory if one wishes to understand why $(QQ\bar{q}\bar{q})$ is stable in the large $M/m$ limit, as seen by the following reductio ad absurdum: suppose that in the Born-Oppenheimer limit of the flip-flop model $r_{12} \to 0$; then the flip-flop energy of $(QQ\bar{q}\bar{q})$ and the energy of its threshold, $(Q\bar{q}) + (Q\bar{q})$, which are pictured in Fig. 2 and given by

$$V_{\rm eff}(r_{12}) = \min(r_{13} + r_{24},\; r_{14} + r_{23})\,, \qquad V_{\rm th} = r_{13} + r_{24}\,,$$

would coincide at $r_{12} = 0$, and the inequality among energies $(QQ\bar{q}\bar{q}) < (Q\bar{q}) + (Q\bar{q})$ would be impossible.
V. CONCLUSIONS
Our results illustrate the role of antisymmetrization in preventing a proliferation of multiquarks. In particular, the main difference between the naive color-additive model and the crude flip-flop model comes from the treatment of color. It remains that in such a flavor-independent confinement, the binding of $(QQ\bar{q}\bar{q})$ is obtained for heavy enough quarks.
More ambitious tools are obviously needed to handle the four-quark dynamics. In particular, in the double-charm sector, the binding effect of the confining interaction is probably not sufficient. In addition, the spin part of the wave function can be made favorable: as stressed, e.g., in [17] and references therein, there is a light-light interaction in $(cc\bar{u}\bar{d})$ which is absent in the two mesons constituting the threshold.
The production of such tetraquarks can be accessible at $B$ factories [17] and at proton-proton colliders. Note also that $(cc\bar{q}\bar{q})$ can be a decay product of hadrons containing charm and beauty. From a $(bcu)$ baryon, for instance, the Cabibbo-allowed $b \to c + W^- \to c + d + \bar{u}$, combined with a $d\bar{d}$ pair creation, leads to $(bcu) \to (cc\bar{u}\bar{d}) + (ddu)$. See Fig. 3. From $(b\bar{c})$, one could first envisage the Cabibbo-suppressed $b \to u + W^- \to u + \bar{c} + s$; after the creation of a light quark-antiquark pair, this would monitor a decay $(b\bar{c}) \to (\bar{c}\bar{c}ud) + (s\bar{d})$. Of course, the CKM suppression factor is rather effective here. Perhaps more promising is the chain $b \to c + W^- \to c + \bar{c} + d$, giving altogether $(c\bar{c}u\bar{u}d\bar{c})$ after a $u\bar{u}$ pair creation. This could lead to $\bar{D} + X$, where $\bar{D}$ is an anticharmed meson and $X$ one of the new hidden-charm resonances reviewed, e.g., in [18,19]. Another combination is $(\bar{c}\bar{c}ud) + (c\bar{u})$, with, however, a different topology of the quark diagram and thus different color and OZI suppression factors, as discussed by Lipkin in a different context [20]. Anyhow, any heavy-quark factory should lead to the discovery of heavy tetraquarks with suitable triggers.
FIG. 1. Top: schematic picture of the quark-antiquark and three-quark confinement. Bottom: three contributions to the tetraquark potential; the simple string model takes the minimum of these three contributions.
FIG. 2. Flip-flop potential for $r_{12} \to 0$ and cumulated potential of the two mesons.
FIG. 3. Some weak decays of the $(bcu)$ baryon and $(b\bar{c})$ meson leading to a tetraquark in the final state.
TABLE I. Results for the various models A, B, C and D, as a function of the quark-to-antiquark mass ratio M/m.

 M/m      A        B        C        D     Threshold
  1.    4.644    4.803    4.702    4.596    4.676
  2.    4.211    4.306    4.275    4.160    4.248
  3.    4.037    4.131    4.112    3.984    4.086
  4.    3.941    4.041    4.010    3.891    3.998
  5.    3.880    3.985    3.954    3.828    3.942
 10.    3.742    3.860    3.834    3.685    3.831
ACKNOWLEDGMENTS

We are very grateful to several colleagues for very useful discussions, in particular Makoto Oka, Paolo Gambino and Qiang Zhao. This work has been partially funded by the Spanish Ministerio de Educación y Ciencia and EU FEDER under Contracts No.
[1] L. Maiani, F. Piccinini, A. D. Polosa, and V. Riquer, Phys. Rev. D71, 014028 (2005), hep-ph/0412098.
[2] D. Ebert, R. Faustov, and V. Galkin, Phys. Lett. B634, 214 (2006), hep-ph/0512230.
[3] S. Dubnicka, A. Z. Dubnickova, M. A. Ivanov, and J. G. Korner, Phys. Rev. D81, 114007 (2010), arXiv:1004.1291.
[4] J. Carlson and V. R. Pandharipande, Phys. Rev. D43, 1652 (1991).
[5] J. Vijande, A. Valcarce, and J. M. Richard, Phys. Rev. D76, 114013 (2007), arXiv:0707.3996.
[6] J.-M. Richard, Phys. Rev. C81, 015205 (2010), arXiv:0908.2944.
[7] J. Vijande, A. Valcarce, and J.-M. Richard, Phys. Rev. D85, 014019 (2012), arXiv:1111.5921.
[8] H.-M. Chan and H. Høgåsen, Phys. Lett. B72, 121 (1977).
[9] H.-M. Chan et al., Phys. Lett. B76, 634 (1978).
[10] J. Vijande and A. Valcarce, Symmetry 1, 155 (2009), arXiv:0912.3605.
[11] H. J. Lipkin, Phys. Lett. B172, 242 (1986).
[12] D. Brink and F. Stancu, Phys. Rev. D49, 4665 (1994).
[13] G. S. Bali, Phys. Rev. D62, 114503 (2000), hep-lat/0006022.
[14] J. Körner, M. Kramer, and D. Pirjol, Prog. Part. Nucl. Phys. 33, 787 (1994), hep-ph/9406359.
[15] S. Fleck and J. M. Richard, Prog. Theor. Phys. 82, 760 (1989).
[16] C. Quigg and J. L. Rosner, Phys. Rept. 56, 167 (1979).
[17] T. Hyodo, Y.-R. Liu, M. Oka, K. Sudoh, and S. Yasui (2012), arXiv:1209.6207.
[18] M. Nielsen, F. S. Navarra, and S. H. Lee, Phys. Rept. 497, 41 (2010), arXiv:0911.1958.
[19] A. Valcarce, T. Caramés, and J. Vijande, Few-Body Systems (2012), doi:10.1007/s00601-012-0518-8.
[20] H. Lipkin, Phys. Lett. B433, 117 (1998).
arXiv:1701.07597
https://arxiv.org/pdf/1701.07597v1.pdf
Memory effect and pseudomode amplitude in non-Markovian dynamics of a two level system
26 Jan 2017 (Dated: July 17, 2018)
Yuta Ohyama
Graduate School of Pure and Applied Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8571, Japan

Yasuhiro Tokura
Graduate School of Pure and Applied Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8571, Japan
NTT Basic Research Laboratories, NTT Corporation, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa 243-0198, Japan
We study the non-Markovian dynamics of a two-level atom using the pseudomode method. Because of the memory effect of non-Markovian dynamics, the atom receives back information and excitation energy from the reservoir at later times, which causes more complicated behaviors than Markovian dynamics. With the pseudomode method, the non-Markovian dynamics of the atom can be mapped into Markovian dynamics of the atom and pseudomode. We show that by using the pseudomode method and the quantum jump approach for Markovian dynamics, we get a physically intuitive insight into the memory effect of non-Markovian dynamics. It suggests a simple physical meaning of the memory time of a non-Markovian reservoir.
I. INTRODUCTION
All realistic quantum systems are open quantum systems: the system interacts with reservoirs, which cause decoherence and relaxation [1]. According to the character of the interaction and the structure of the reservoirs, the dynamics of open quantum systems can be classified into Markovian dynamics, with no memory effect, and non-Markovian dynamics, with a memory effect. In a Markovian open system, the reservoir acts as a sink for the system information; the information that the system of interest loses into the reservoir does not play any further role in the system dynamics. In the non-Markovian case, however, this lost information is temporarily stored in the reservoir and comes back at a later time to influence the system [2]. This is the memory effect of non-Markovian dynamics, and it causes more complicated behaviors than Markovian dynamics.
There are stochastic approaches for Markovian dynamics [3-8]: quantum jump, Monte Carlo wave function, and quantum trajectory methods. In these methods, the dissipation caused by the interaction with the reservoir is interpreted as an incoherent jump between two states, and the state of the system is described by a sum over ensembles identified by their jump times. Therefore, we can get an intuitive understanding of the dynamics.
Recently, non-Markovian dynamics has been investigated extensively [9-11]. In these papers, the non-Markovianity of quantum processes is discussed. The measures for the degree of non-Markovianity are based on the distinguishability of quantum states, which focuses on the dynamics of the system of interest. In this paper, we use the pseudomode method [12-17]. With the pseudomode method, non-Markovian dynamics of the system of interest can be mapped into Markovian dynamics of a combined system of the system of interest and pseudomodes, so the dynamics of the extended system can be discussed. Non-Markovian quantum jumps have also been investigated [18-20]. Because of the memory effect and the back flow from the reservoir into the system, the description is more complicated than in the Markovian case, and pure-state quantum trajectories for general non-Markovian systems do not exist [21]. By connecting this method with the pseudomode method, we get a simple intuitive physical picture of the memory of a non-Markovian reservoir and of how such memory allows one to partly restore some of the coherence lost to the environment [15]. This result also suggests that the pseudomode could be seen as an effective description of the reservoir memory.
The purpose of this paper is to get a more physically intuitive insight into the memory effect of non-Markovian dynamics. For this purpose, we use the pseudomode method and the quantum jump approach. With the pseudomode method, the non-Markovian dynamics of the system is described by the Markovian dynamics of a combined system, so that we can apply the quantum jump approach for Markovian dynamics to the combined system. The result gives us a simple physical meaning of the memory time of a non-Markovian reservoir.
The paper is organized as follows. In Sec. II, we present the model discussed in the paper. In Sec. III, the properties of the model presented in Sec. II are evaluated using the quantum jump approach for Markovian dynamics, and, in Sec. IV, we study the dynamics of the damped Jaynes-Cummings model, which is a typical example of a non-Markovian system. Finally, we conclude the paper in Sec. V.
II. MODEL
Non-Markovian systems appear in many branches of physics. Here we consider a two-level atom interacting with a structured electromagnetic reservoir, described by the Jaynes-Cummings model with the rotating-wave approximation [1]. The Hamiltonian for the total system is
$$H = \frac{\omega_0}{2}\sigma_z + \sum_k \omega_k\, b_k^\dagger b_k + \sum_k g_k \left( \sigma_+ b_k + \sigma_- b_k^\dagger \right), \qquad (1)$$
where $\sigma_z = |e\rangle\langle e| - |g\rangle\langle g|$, $\sigma_+ = (\sigma_-)^\dagger = |e\rangle\langle g|$, and $b_k^\dagger$ and $b_k$ are the bosonic creation and annihilation operators for the reservoir mode $k$ with frequency $\omega_k \ge 0$; $g_k$ is the coupling between the two-level atom and the reservoir mode $k$. $|e\rangle$ and $|g\rangle$ are the excited and ground states of the two-level atom, respectively. The total excitation number is a conserved quantity in this model.
Let the atom be in an arbitrary superposition and the reservoir be in the vacuum state at $t = 0$; the initial state is therefore given by
$$|\Psi(0)\rangle = (\alpha|e\rangle + \beta|g\rangle) \otimes |0\rangle, \qquad (2)$$
where $|0\rangle$ denotes the vacuum state of the reservoir, and $\alpha$ and $\beta$ satisfy the normalization condition $|\alpha|^2 + |\beta|^2 = 1$.
In the interaction picture, the total state at t ≥ 0 can be expanded as
$$|\Psi(t)\rangle_I = \alpha \left( a_0(t)|e, 0\rangle + \sum_k a_k(t)|g, 1_k\rangle \right) + \beta|g, 0\rangle, \qquad (3)$$
where $|1_k\rangle = b_k^\dagger|0\rangle$, and the coefficient of $|g, 0\rangle$ is independent of time. Inserting this state into the Schrödinger equation, we get the integro-differential equation for the atomic amplitude $a_0(t)$,
$$\frac{d}{dt} a_0(t) = -\int_0^t dt'\, f(t - t')\, a_0(t'), \qquad (4)$$
where $f(t) \equiv \sum_k g_k^2\, e^{-i(\omega_k - \omega_0)t}$ is a correlation function. Here we assume that the coupling $g_k$ depends only on the frequency $\omega_k$. In the continuous-distribution limit, the sum over the reservoir modes $k$ is replaced by an integral over $\omega$ as follows,
$$\sum_k g_k^2 \simeq \int_{-\infty}^{\infty} d\omega\, \rho(\omega)\, g^2(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\omega\, D(\omega), \qquad (5)$$
where $\rho(\omega)$ is the density of states of the reservoir. The structure of the reservoir is characterized by the positive-definite function $D(\omega)$. Because we have extended the integral to $-\infty$, this function should vanish in the negative-$\omega$ region. With these equations, the correlation function becomes
$$f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\omega\, D(\omega)\, e^{-i(\omega - \omega_0)t}. \qquad (6)$$
FIG. 1. Diagrammatic representation of the combined-system dynamics. The system interacts with pseudomodes, which leak into the Markovian reservoir. The interaction between the system and the pseudomodes is effectively described by $H_{SP}$, and the leak rate from pseudomode $l$ into the Markovian reservoir is given by $2\lambda_l$.

If the reservoir has no structure, $D(\omega)$ does not depend on $\omega$. In this case, the correlation function is proportional to a delta function, so that the dynamics of the system is Markovian. In the following, we restrict ourselves to the case where $D(\omega)$ can be approximated by a sum of Lorentzian functions. This is not a necessary condition for using the pseudomode method, but an assumption made for simplicity. We set the explicit form of $D(\omega)$ as
$$D(\omega) \simeq \sum_{l=1}^{L} \frac{\gamma_l \lambda_l^2}{(\omega - \omega_l)^2 + \lambda_l^2}, \qquad (7)$$
where $\gamma_l$ is the coupling strength and $\lambda_l^{-1}$ is the reservoir correlation time. Since $D(\omega)$ must vanish in the negative-$\omega$ region, the resonant frequency $\omega_l$ should be much larger than the width $\lambda_l$.
From the residue theorem and $t - t' \ge 0$, only the poles in the lower half plane contribute to the dynamics of $a_0(t)$, so we define $\lambda_l$ to be positive for every $l$. Since $D(\omega)$ should be non-negative for any $\omega$, we take $\gamma_l > 0$ for every $l$, which is also not a necessary condition but an assumption made for simplicity. The integral of a Lorentzian is
$$\int_{-\infty}^{\infty} d\omega\, \frac{\lambda_l^2}{(\omega - \omega_l)^2 + \lambda_l^2} = \pi \lambda_l. \qquad (8)$$
Using this assumption, we get the integro-differential equation
$$\frac{d}{dt} a_0(t) = -\sum_{l=1}^{L} \frac{\gamma_l \lambda_l}{2} \int_0^t dt'\, e^{-i\Delta_l(t - t')}\, e^{-\lambda_l(t - t')}\, a_0(t'), \qquad (9)$$
where $\Delta_l = \omega_l - \omega_0$. From this equation, we can see that the parameter $\lambda_l$ determines how long the past state affects the present dynamics. If some $\lambda_l$ is finite, the present dynamics depends on the past dynamics. Therefore, the system interacts with a non-Markovian reservoir whose non-Markovianity is characterized by $\lambda_l$.
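The exponential form of the memory kernel can be checked directly: inserting a single Lorentzian ($L = 1$) from (7) into (6) should give $f(t) = (\gamma\lambda/2)\, e^{-i\Delta t - \lambda t}$, the kernel of (9). The following numerical sketch is our own (the finite integration window and step size are arbitrary choices):

```python
# Check (ours): the correlation function (6) of a single Lorentzian (7)
# equals the exponential kernel of Eq. (9) for t > 0.
import cmath, math

def f_numeric(t, gamma, lam, w_c, w0, half_range=400.0, n=80000):
    # trapezoidal estimate of (1/2 pi) \int D(w) e^{-i (w - w0) t} dw
    dw = 2 * half_range / n
    s = 0.0 + 0.0j
    for k in range(n + 1):
        w = w_c - half_range + k * dw
        d_w = gamma * lam ** 2 / ((w - w_c) ** 2 + lam ** 2)
        weight = 0.5 if k in (0, n) else 1.0
        s += weight * d_w * cmath.exp(-1j * (w - w0) * t)
    return s * dw / (2 * math.pi)

def f_exact(t, gamma, lam, w_c, w0):
    # (gamma lam / 2) exp(-i Delta t - lam t), with Delta = w_c - w0
    return (gamma * lam / 2) * cmath.exp(-(1j * (w_c - w0) + lam) * t)
```

The residual difference is the truncation error of the finite frequency window, since the Lorentzian tails fall off only as $1/\omega^2$.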
With the pseudomode method [12,13], the dynamics of this system can be mapped into Markovian dynamics of a combined system of the two-level system and $L$ pseudomodes. For the present model, the pseudomode method leads to the following Markovian master equation:
$$\frac{d}{dt} \rho^I_{SP}(t) = \frac{1}{i} \left[ H_{SP}, \rho^I_{SP}(t) \right] + \sum_{l=1}^{L} 2\lambda_l\, \mathcal{D}[c_l]\, \rho^I_{SP}(t), \qquad (10)$$
where $\rho^I_{SP}(t)$ is the density operator of the combined system, $\mathcal{D}[\cdot]$ is the superoperator
$$\mathcal{D}[A]\rho = A \rho A^\dagger - \frac{1}{2} \left( A^\dagger A\, \rho + \rho\, A^\dagger A \right), \qquad (11)$$
which describes the dissipation, and $H_{SP}$ is the combined-system Hamiltonian
$$H_{SP} = \sum_{l=1}^{L} \Delta_l\, c_l^\dagger c_l + \sum_{l=1}^{L} \sqrt{\frac{\gamma_l \lambda_l}{2}} \left( \sigma_+ c_l + \sigma_- c_l^\dagger \right), \qquad (12)$$
where $c_l^\dagger$ and $c_l$ are the creation and annihilation operators for the pseudomode labeled by $l$. From the definition of the pseudomodes, their initial state is the vacuum (see Refs. [12,13] for details). From Eq. (10), we can see that the system coherently interacts with the pseudomodes, and each pseudomode dissipatively interacts with a Markovian reservoir (FIG. 1). The information contained in the atom first flows to the pseudomodes and then from each pseudomode to its reservoir. The flow from each pseudomode to its reservoir is one-way, but the flow between the atom and the pseudomodes is two-way. In non-Markovian dynamics, the atom receives back information and excitation energy from the reservoir due to the memory effect. Therefore, the pseudomodes can be seen as an effective description of the reservoir memory [15].
To get the state of the system, we should trace out the pseudomodes,
$$\rho^I_S(t) = \mathrm{Tr}_P\, \rho^I_{SP}(t). \qquad (13)$$
III. STOCHASTIC APPROACH
With the pseudomode method, the dynamics of this system is effectively described by the Markovian master equation, so we can use a stochastic approach for Markovian dynamics. Here we define a non-Hermitian Hamiltonian,
$$H^I_{\rm eff} = H_{SP} - i \sum_{l=1}^{L} \lambda_l\, c_l^\dagger c_l. \qquad (14)$$
In this Hamiltonian, the non-Hermitian term represents dissipation into the Markovian reservoir. Using Eq. (14), we can rewrite Eq. (10) as
$$\frac{d}{dt} \rho^I_{SP}(t) = \frac{1}{i} \left( H^I_{\rm eff}\, \rho^I_{SP}(t) - \rho^I_{SP}(t)\, (H^I_{\rm eff})^\dagger \right) + \sum_{l=1}^{L} 2\lambda_l\, c_l\, \rho^I_{SP}(t)\, c_l^\dagger. \qquad (15)$$
The first term on the right-hand side represents the continuous dynamics governed by the non-Hermitian Hamiltonian $H^I_{\rm eff}$. The second term represents the jump process, which is the loss of excitation energy from the pseudomodes.
Using an unnormalized state vector $|\Psi(t)\rangle$ which satisfies the Schrödinger equation
$$i \frac{d}{dt} |\Psi(t)\rangle = H^I_{\rm eff} |\Psi(t)\rangle, \qquad (16)$$
we can divide $\rho^I_{SP}(t)$ into two terms as follows:
$$\rho^I_{SP}(t) = |\Psi(t)\rangle\langle\Psi(t)| + \Pi_p(t)\, |g, 0_P\rangle\langle g, 0_P|, \qquad (17)$$
where $|0_P\rangle$ is the vacuum state of the pseudomodes. The trace of $\rho^I_{SP}(t)$ is conserved and equal to 1, so the coefficient of the second term is defined by $\Pi_p(t) = 1 - \langle\Psi(t)|\Psi(t)\rangle$. Because the $\lambda_l$ are positive, the inner product of $|\Psi(t)\rangle$ is a monotonically decreasing function of time and $\Pi_p(t)$ is a monotonically increasing function.
From the quantum trajectory approach [5], the unnormalized state vector $|\Psi(t)\rangle$ is a trajectory under no jumps. Since the system is a two-level atom and the initial state of the reservoir is the vacuum, the jumped part (the second term of Eq. (17)) is the ground state. The state of the two-level atom is given by
$$\rho^I_S(t) = \mathrm{Tr}_P\, |\Psi(t)\rangle\langle\Psi(t)| + \Pi_p(t)\, |g\rangle\langle g|. \qquad (18)$$
The probability that there is no jump until time $t$ (the survival probability) is given by the inner product of $|\Psi(t)\rangle$,
$$P_0(t) = \langle\Psi(t)|\Psi(t)\rangle. \qquad (19)$$
Because we can regard the pseudomodes as the memory part of the reservoir [15], the survival probability $P_0(t)$ can be regarded as the probability that the system interacts with its reservoir coherently until time $t$. The jump rate to the ground state of the combined system, which is given by the decay rate of $P_0(t)$, represents a memory-loss rate. So the probability density of a jump is given by
$$p(t) = -\frac{d}{dt} P_0(t). \qquad (20)$$
This probability density represents the information flux from the pseudomodes into the Markovian reservoir. For a particular model, the relationship between the oscillation of $p(t)$ and a measure of non-Markovianity has been discussed [16].
Since the initial state of the atom is $|\psi(0)\rangle = \alpha|e\rangle + \beta|g\rangle$, the state of the combined system can be expanded as
$$|\Psi(t)\rangle = \alpha \left( a_0(t)|e, 0_P\rangle + \sum_{l=1}^{L} q_l(t)|g, 1_l\rangle \right) + \beta|g, 0_P\rangle, \qquad (21)$$
where $|1_l\rangle = c_l^\dagger|0_P\rangle$, $a_0(0) = 1$ and $q_l(0) = 0$. This $a_0(t)$ is the same as the amplitude $a_0(t)$ in Eq. (9). Using Eq. (21), the survival probability $P_0(t)$ and the probability density $p(t)$ are given by
$$P_0(t) = |\alpha|^2 \left( |a_0(t)|^2 + \sum_{l=1}^{L} |q_l(t)|^2 \right) + |\beta|^2, \qquad (22)$$
$$p(t) = \sum_{l=1}^{L} 2\lambda_l\, |\alpha\, q_l(t)|^2. \qquad (23)$$
The term $2\lambda_l |\alpha\, q_l(t)|^2$ in the probability density $p(t)$ represents the energy flow from pseudomode $l$ to its reservoir.
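The bookkeeping between the survival probability (22) and the jump density (23) can be verified numerically. This sketch is our own (L = 2 pseudomodes, $\alpha = 1$, and arbitrary illustrative parameters): it integrates the non-Hermitian dynamics (16) with a Runge-Kutta step and checks that the norm lost equals the accumulated jump probability:

```python
# Check (ours): for the non-Hermitian evolution (16) with L = 2 pseudomodes,
# P0(0) - P0(T) = \int_0^T p(t) dt, with p(t) from Eq. (23) and alpha = 1.
# State vector y = (a0, q1, q2); pars = ((gamma_l, lambda_l, Delta_l), ...).

def deriv(y, pars):
    a0, qs = y[0], y[1:]
    da0 = 0j
    dqs = []
    for (gamma, lam, delta), q in zip(pars, qs):
        g = (gamma * lam / 2) ** 0.5
        da0 += -1j * g * q
        dqs.append(-1j * ((delta - 1j * lam) * q + g * a0))
    return (da0, *dqs)

def run(T=6.0, n=6000, pars=((1.0, 0.7, 0.4), (0.5, 1.2, -0.3))):
    h = T / n
    y = (1.0 + 0j, 0j, 0j)
    p_of = lambda y: sum(2 * lam * abs(q) ** 2
                         for (_, lam, _), q in zip(pars, y[1:]))
    jump_integral = 0.0
    for _ in range(n):
        p_prev = p_of(y)
        k1 = deriv(y, pars)
        k2 = deriv(tuple(a + h / 2 * b for a, b in zip(y, k1)), pars)
        k3 = deriv(tuple(a + h / 2 * b for a, b in zip(y, k2)), pars)
        k4 = deriv(tuple(a + h * b for a, b in zip(y, k3)), pars)
        y = tuple(a + h / 6 * (b1 + 2 * b2 + 2 * b3 + b4)
                  for a, b1, b2, b3, b4 in zip(y, k1, k2, k3, k4))
        jump_integral += h * (p_prev + p_of(y)) / 2  # trapezoid in t
    survival = sum(abs(c) ** 2 for c in y)  # P0(T) of Eq. (22), alpha = 1
    return survival, jump_integral
```

This is just probability conservation for the unravelled master equation (15): whatever leaves the no-jump branch accumulates in $\Pi_p(t)$.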
If the jump to the ground state occurs during the measurement time T , the expectation value of the jump time is given by
$$\langle t \rangle_T = \frac{\int_0^T t\, p(t)\, dt}{\int_0^T p(t)\, dt}, \qquad (24)$$
and we define the expectation value $\langle t \rangle$ as the long-measurement-time limit of $\langle t \rangle_T$,
$$\langle t \rangle \equiv \lim_{T \to \infty} \langle t \rangle_T = \int_0^\infty \left( |a_0(t)|^2 + \sum_{l=1}^{L} |q_l(t)|^2 \right) dt. \qquad (25)$$
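The second equality in (25) deserves one line of justification. Assuming $a_0(t), q_l(t) \to 0$ as $t \to \infty$ (so that $P_0(\infty) = |\beta|^2$), integration by parts with $p(t) = -dP_0/dt$ gives

```latex
\int_0^T p(t)\,dt \;\xrightarrow[T\to\infty]{}\; P_0(0) - P_0(\infty) = |\alpha|^2 ,
\qquad
\int_0^\infty t\,p(t)\,dt
 = \int_0^\infty \bigl(P_0(t) - |\beta|^2\bigr)\,dt
 = |\alpha|^2 \int_0^\infty \Bigl(|a_0(t)|^2 + \sum_{l=1}^{L} |q_l(t)|^2\Bigr)\,dt ,
```

and the ratio in (24) then reproduces (25), with the $|\alpha|^2$ factors cancelling.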
Moreover, we define
$$t_S = \int_0^\infty |a_0(t)|^2\, dt, \qquad (26)$$
$$t_l = \int_0^\infty |q_l(t)|^2\, dt, \qquad (27)$$
and then we get $\langle t \rangle = t_S + \sum_l t_l$, where $t_S$ is the expected length of time that the two-level system spends in the excited state $|e\rangle$ and $t_l$ is that for pseudomode $l$ in the excited state $|1_l\rangle$. Since we can regard the pseudomodes as the degrees of freedom of the reservoir that interact with the system of interest coherently [15], $\sum_l t_l$ can be regarded as the expectation value of the memory time of the non-Markovian reservoir, and it reflects the non-Markovianity of the system dynamics.
We now consider the Markovian limit ($\lambda_l \to \infty$). When $\lambda_l \gg \Delta_l, \gamma_l$, the unnormalized state vector is approximated as
$$|\Psi(t)\rangle \simeq \left( \alpha\, e^{-\frac{1}{2}\sum_l \gamma_l t}\, |e\rangle + \beta|g\rangle \right) |0_P\rangle. \qquad (28)$$
Therefore, $q_l(t) = 0$ for any $t > 0$, and $t_l = 0$ in the Markovian limit. Because $|q_l(t)|^2$ is non-negative, $t_l = 0$ means that the pseudomodes are never in their excited states. The Markovian limit is the limit in which the reservoir has no memory. This is consistent with the result obtained here: the pseudomodes vanish and $\sum_l t_l$ converges to 0 in the Markovian limit, so the pseudomodes constitute the memory part of the reservoir and $\sum_l t_l$ is an expectation value of the memory time of the reservoir. This result also suggests the following criterion:
• When $\sum_l t_l = 0$, the dynamics is Markovian.
• When $\sum_l t_l \neq 0$, the dynamics is non-Markovian.
IV. DAMPED JAYNES-CUMMINGS MODEL
In this section, we discuss the dynamics of a two-level atom in a lossy cavity [1]. The reservoir is the electromagnetic field inside and outside the cavity, and its density of states has a peak at the cavity resonance frequency. Therefore, we can assume that the structure is a single Lorentzian,
$$D(\omega) = \frac{\gamma \lambda^2}{(\omega - \omega_c)^2 + \lambda^2}, \qquad (29)$$
where $\omega_c$ is the resonance frequency of the cavity. This is called the damped Jaynes-Cummings model, which is a typical example of a non-Markovian system and the $L = 1$ case of the model discussed above. Therefore, we can use the pseudomode method and the results obtained there. The effective Hamiltonian $H^I_{\rm eff}$ is
$$H^I_{\rm eff} = H_{SP} - i\lambda\, c_p^\dagger c_p = (\Delta - i\lambda)\, c_p^\dagger c_p + \sqrt{\frac{\gamma\lambda}{2}} \left( \sigma_+ c_p + \sigma_- c_p^\dagger \right), \qquad (30)$$
and the unnormalized state $|\Psi(t)\rangle$ is
$$|\Psi(t)\rangle = \alpha \left( a_0(t)|e, 0_P\rangle + q(t)|g, 1_P\rangle \right) + \beta|g, 0_P\rangle, \qquad (31)$$
where $\Delta = \omega_c - \omega_0$ is the detuning between the two-level system and the pseudomode, $c_p^\dagger$ and $c_p$ are the creation and annihilation operators for the pseudomode, and $|1_P\rangle = c_p^\dagger|0_P\rangle$. Inserting the effective Hamiltonian $H^I_{\rm eff}$ and the unnormalized state $|\Psi(t)\rangle$ into the Schrödinger equation, we get two simultaneous differential equations,

$$i \frac{d}{dt} a_0(t) = \sqrt{\frac{\gamma\lambda}{2}}\, q(t), \qquad i \frac{d}{dt} q(t) = (-i\lambda + \Delta)\, q(t) + \sqrt{\frac{\gamma\lambda}{2}}\, a_0(t). \qquad (32)$$

The eigenvalues of these equations are

$$\frac{-(\lambda + i\Delta) \pm \sqrt{(\lambda + i\Delta)^2 - 2\gamma\lambda}}{2}. \qquad (33)$$
Under the initial conditions $a_0(0) = 1$ and $q(0) = 0$, the solution is
$$a_0(t) = e^{-\frac{\lambda}{2}t}\, e^{-i\frac{\Delta}{2}t} \left[ \cosh\!\left(\frac{dt}{2}\right) + \frac{\lambda + i\Delta}{d}\, \sinh\!\left(\frac{dt}{2}\right) \right], \qquad q(t) = -i\, \frac{\sqrt{2\gamma\lambda}}{d}\, e^{-\frac{\lambda}{2}t}\, e^{-i\frac{\Delta}{2}t}\, \sinh\!\left(\frac{dt}{2}\right), \qquad (34)$$
where we define $d = \sqrt{(\lambda + i\Delta)^2 - 2\gamma\lambda}$. As a result, we get the probability density
$$p(t) = 2\lambda\, |\alpha\, q(t)|^2 = \frac{2|\alpha|^2 \gamma \lambda^2}{|d|^2}\, e^{-\lambda t} \left( \cosh(\mathrm{Re}[d]\, t) - \cos(\mathrm{Im}[d]\, t) \right). \qquad (35)$$
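The closed-form solution (34) can be cross-checked against a direct numerical integration of (32); the following sketch is our own (a standard fourth-order Runge-Kutta step with arbitrary test parameters):

```python
# Cross-check (ours): integrate Eq. (32) numerically and compare with the
# closed-form amplitudes of Eq. (34).
import cmath

def rhs(y, gamma, lam, delta):
    # Eq. (32): i a0' = g q,  i q' = (-i lam + delta) q + g a0, g = sqrt(gamma lam / 2)
    a0, q = y
    g = (gamma * lam / 2) ** 0.5
    return (-1j * g * q, -1j * ((-1j * lam + delta) * q + g * a0))

def integrate(t, gamma, lam, delta, n=4000):
    # fourth-order Runge-Kutta for the two coupled amplitudes
    h = t / n
    y = (1.0 + 0j, 0.0 + 0j)
    for _ in range(n):
        k1 = rhs(y, gamma, lam, delta)
        k2 = rhs(tuple(a + h / 2 * b for a, b in zip(y, k1)), gamma, lam, delta)
        k3 = rhs(tuple(a + h / 2 * b for a, b in zip(y, k2)), gamma, lam, delta)
        k4 = rhs(tuple(a + h * b for a, b in zip(y, k3)), gamma, lam, delta)
        y = tuple(a + h / 6 * (b1 + 2 * b2 + 2 * b3 + b4)
                  for a, b1, b2, b3, b4 in zip(y, k1, k2, k3, k4))
    return y

def closed_form(t, gamma, lam, delta):
    # Eq. (34) with d = sqrt((lam + i delta)^2 - 2 gamma lam)
    d = cmath.sqrt((lam + 1j * delta) ** 2 - 2 * gamma * lam)
    pre = cmath.exp(-(lam + 1j * delta) * t / 2)
    a0 = pre * (cmath.cosh(d * t / 2) + (lam + 1j * delta) / d * cmath.sinh(d * t / 2))
    q = -1j * cmath.sqrt(2 * gamma * lam) / d * pre * cmath.sinh(d * t / 2)
    return a0, q
```

Note that both $a_0$ and $q$ in (34) are even under $d \to -d$, so the branch of the complex square root is irrelevant.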
FIG. 2(a) shows the dynamics of the populations as a function of time: $|a_0(t)|^2$ and $|q(t)|^2$ oscillate, while $\Pi_p(t)$ monotonically increases. The correlated oscillations of $|a_0(t)|^2$ and $|q(t)|^2$ are caused by non-Markovianity. There is no energy flow from the Markovian reservoir back into the combined system, so $\Pi_p(t)$ monotonically increases. FIG. 2(b) shows the dynamics of the probability density of a jump as a function of time. From FIG. 2(b), we can see that $p(t)$ is positive except at $t = 0$ and $t \to \infty$, because $\cosh x > 1$ for all $x > 0$. When there is no detuning, $\Delta = 0$, $d = \sqrt{\lambda^2 - 2\gamma\lambda}$, so $d$ is real or pure imaginary. When $2\gamma < \lambda$, $d$ is real, so that $p(t) \neq 0$ for $t > 0$. On the other hand, when $\lambda < 2\gamma$, $d$ is pure imaginary, so that
$$p(t) = \frac{2|\alpha|^2 \gamma \lambda}{2\gamma - \lambda}\, e^{-\lambda t} \left( 1 - \cos\!\left( \sqrt{2\gamma\lambda - \lambda^2}\; t \right) \right). \qquad (36)$$
Therefore, when the time satisfies $t = 2\pi n\,(2\gamma\lambda - \lambda^2)^{-1/2}$ for a non-negative integer n, the probability density p(t) vanishes.
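As a quick, self-contained numerical check of the solution above (our own sketch, not part of the original analysis), the closed forms of Eq. (34) can be differentiated numerically and compared against the coupled equations (32), and the zeros of p(t) predicted in the resonant case can be confirmed. The parameter values are those of Fig. 2; the function names are illustrative.

```python
import cmath
import math

def make_solution(gamma, lam, Delta):
    """Closed-form amplitudes of Eq. (34) and jump density of Eq. (35), alpha = 1."""
    d = cmath.sqrt((lam + 1j * Delta) ** 2 - 2 * gamma * lam)
    mu = (lam + 1j * Delta) / 2

    def a0(t):  # system excited-state amplitude
        return cmath.exp(-mu * t) * (cmath.cosh(d * t / 2)
                                     + (2 * mu / d) * cmath.sinh(d * t / 2))

    def q(t):   # pseudomode excited-state amplitude
        return (-1j * cmath.sqrt(2 * gamma * lam) / d
                * cmath.exp(-mu * t) * cmath.sinh(d * t / 2))

    def p(t):   # probability density of the jump, Eq. (35)
        return (2 * gamma * lam ** 2 / abs(d) ** 2 * math.exp(-lam * t)
                * (math.cosh(d.real * t) - math.cos(d.imag * t)))

    return a0, q, p

def ode_residuals(gamma, lam, Delta, t, h=1e-6):
    """Central-difference residuals of the coupled equations (32)."""
    a0, q, _ = make_solution(gamma, lam, Delta)
    g = math.sqrt(gamma * lam / 2)  # pseudomode coupling in Eq. (30)
    da0 = (a0(t + h) - a0(t - h)) / (2 * h)
    dq = (q(t + h) - q(t - h)) / (2 * h)
    return (abs(1j * da0 - g * q(t)),
            abs(1j * dq - (Delta - 1j * lam) * q(t) - g * a0(t)))
```

Both residuals stay at the finite-difference noise level, p(t) coincides with 2λ|q(t)|², and for ∆ = 0 with λ < 2γ the density vanishes at t = 2πn(2γλ − λ²)^{−1/2}, as stated above.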
Here the structure of the reservoir is a single Lorentz function, so that there is only a single pseudomode and the expectation value of the jump time is given by

$$\langle t\rangle = t_S + t_P, \tag{37}$$
$$t_S = \int_0^\infty |a_0(t)|^2\, dt, \tag{38}$$
$$t_P = \int_0^\infty |q(t)|^2\, dt. \tag{39}$$
Using the result we calculated above, we get
$$t_S = \frac{1}{\gamma}\left[1 + \left(\frac{\Delta}{\lambda}\right)^2\right] + \frac{1}{2\lambda}, \qquad t_P = \frac{1}{2\lambda}. \tag{40}$$
As noted at Eq. (9), non-Markovianity is characterized by λ. A decrease of λ means an increase of the reservoir correlation time, hence the non-Markovianity becomes stronger. In Eq. (40), the expectation values t_S and t_P are monotonically decreasing functions of λ. Therefore, the result shows that the non-Markovianity of the system dynamics is reflected in a delay of the expectation value of the jump time ⟨t⟩. From Eq. (40), we see that t_S depends on the detuning ∆ while t_P does not. This can be understood as follows. In the detuned Rabi oscillation, the oscillation amplitude is smaller than unity and depends on the value of the detuning. The decay rate of the system excited-state population and the maximum value of the pseudomode excited-state population are suppressed by increasing the detuning, as shown in FIG. 3. Therefore, the decay of P(t) is slower than in the resonance case, and the expected time length that the atom is in the excited state increases as the detuning increases. However, the leak rate from the pseudomode into the Markovian reservoir is 2λ, which does not depend on the detuning. This is why the expected time length that the pseudomode is in the excited state is invariant under the detuning. As shown in FIG. 3(b), instead of the suppression of the maximum value, the population at later times increases.
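The closed forms in Eq. (40) can be cross-checked by integrating |a_0(t)|² and |q(t)|² numerically. The sketch below (our own illustration; pure Python, trapezoid rule) does this for the Fig. 2 parameters:

```python
import cmath

def occupation_times(gamma, lam, Delta, T=300.0, n=60000):
    """Trapezoid-rule estimates of t_S and t_P of Eqs. (38)-(39)."""
    d = cmath.sqrt((lam + 1j * Delta) ** 2 - 2 * gamma * lam)
    mu = (lam + 1j * Delta) / 2

    def a0(t):  # Eq. (34), system amplitude
        return cmath.exp(-mu * t) * (cmath.cosh(d * t / 2)
                                     + (2 * mu / d) * cmath.sinh(d * t / 2))

    def q(t):   # Eq. (34), pseudomode amplitude
        return (-1j * cmath.sqrt(2 * gamma * lam) / d
                * cmath.exp(-mu * t) * cmath.sinh(d * t / 2))

    h = T / n
    tS = tP = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0  # trapezoid end-point weights
        t = i * h
        tS += w * abs(a0(t)) ** 2 * h
        tP += w * abs(q(t)) ** 2 * h
    return tS, tP

def occupation_times_formula(gamma, lam, Delta):
    """Closed forms of Eq. (40)."""
    return (1 + (Delta / lam) ** 2) / gamma + 1 / (2 * lam), 1 / (2 * lam)
```

For γ = 1, λ = 0.2, ∆ = 0.5 the quadrature reproduces t_S = 9.75 and t_P = 2.5, and both quantities decrease when λ is increased, consistent with the discussion above.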
We also calculate the generating function
$$\chi(\omega) \equiv \int_0^\infty p(t)\, e^{i\omega t}\, dt. \tag{41}$$
Using Eq. (34), we get the explicit form of the generating function
$$\chi(\omega) = \frac{2\gamma\lambda^2(\lambda - i\omega)}{(\lambda - i\omega)^4 - (\lambda^2 - \Delta^2 - 2\gamma\lambda)(\lambda - i\omega)^2 - (\Delta\lambda)^2}. \tag{42}$$
From this function, we can get the expectation value
$$\langle t\rangle = \left.\frac{d\ln\chi(\omega)}{d(i\omega)}\right|_{\omega=0} = \frac{1}{\gamma}\left[1 + \left(\frac{\Delta}{\lambda}\right)^2\right] + \frac{1}{\lambda}, \tag{43}$$
and the variance
$$(\delta t)^2 = \left.\frac{d^2\ln\chi(\omega)}{d(i\omega)^2}\right|_{\omega=0} = \frac{1}{\gamma^2}\left[1 + \left(\frac{\Delta}{\lambda}\right)^2\right]^2 - \frac{1}{\gamma\lambda}\left[1 - 3\left(\frac{\Delta}{\lambda}\right)^2\right] + \frac{1}{\lambda^2}. \tag{44}$$
In order to understand the relationship between these values, we define the function
$$\Lambda_t \equiv \frac{(\delta t)^2 - \langle t\rangle^2}{\langle t\rangle^2} = -\frac{(3\lambda^2 - \Delta^2)\gamma\lambda}{(\lambda^2 + \gamma\lambda + \Delta^2)^2}. \tag{45}$$
This function can be divided into three cases as follows:

$$\Lambda_t \begin{cases} > 0 & (\sqrt{3}\,\lambda < |\Delta|) \\ = 0 & (\sqrt{3}\,\lambda = |\Delta|) \\ < 0 & (\sqrt{3}\,\lambda > |\Delta|) \end{cases} \tag{46}$$
The sign of Λ_t changes at λ_0 = |∆|/√3. In the Markovian limit (λ → ∞), the expectation value and the variance converge to ⟨t⟩ → γ⁻¹ and (δt)² → γ⁻², respectively; thus Λ_t converges to 0 in the Markovian limit. Because the correlation time λ_0⁻¹ = √3/|∆| is small for large detuning |∆|, the function Λ_t is positive for relatively large λ when the detuning is large. When Λ_t is negative, the variance is relatively smaller than that of Markovian dynamics. ⟨t⟩ is the expected time length that the system and the reservoir can interact with each other coherently. Therefore, negative Λ_t means that the memory is lost at a more definite time compared with Markovian dynamics.
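The relations (43)-(46) can also be verified directly from moments of p(t). The following self-contained sketch (our own illustration) computes the mean and variance of the jump time numerically and compares them with the closed form of Λ_t:

```python
import cmath
import math

def jump_density(gamma, lam, Delta):
    """p(t) of Eq. (35) with alpha = 1 (normalized: its integral is 1)."""
    d = cmath.sqrt((lam + 1j * Delta) ** 2 - 2 * gamma * lam)
    return lambda t: (2 * gamma * lam ** 2 / abs(d) ** 2 * math.exp(-lam * t)
                      * (math.cosh(d.real * t) - math.cos(d.imag * t)))

def moments(gamma, lam, Delta, T=500.0, n=100000):
    """Trapezoid-rule norm, mean and variance of the jump time."""
    p = jump_density(gamma, lam, Delta)
    h = T / n
    m0 = m1 = m2 = 0.0
    for i in range(n + 1):
        t = i * h
        w = h if 0 < i < n else h / 2
        pt = p(t)
        m0 += w * pt
        m1 += w * t * pt
        m2 += w * t * t * pt
    return m0, m1, m2 - m1 ** 2

def lambda_t(gamma, lam, Delta):
    """Closed form of Eq. (45)."""
    return (-(3 * lam ** 2 - Delta ** 2) * gamma * lam
            / (lam ** 2 + gamma * lam + Delta ** 2) ** 2)
```

For the Fig. 2 parameters the numerical mean and variance reproduce Eqs. (43)-(44), the identity Λ_t = (variance − mean²)/mean² holds, and the sign of Λ_t flips at λ_0 = |∆|/√3 as in Eq. (46).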
V. CONCLUSIONS
We have studied the non-Markovian dynamics of a two-level atom, using the pseudomode method and the stochastic approach for Markovian dynamics. In this paper, we have assumed that the structure of the reservoir is given by a sum of Lorentz functions. With the pseudomode method, the non-Markovian dynamics of a two-level atom can be mapped to the Markovian dynamics of a combined system of the system and pseudomodes, whose number is the same as that of the Lorentz functions.
The expectation value of the jump time to the ground state of the combined system is given by the sum of the expected time length that the two-level system is in the excited state and that each pseudomode is in its excited state. The latter represents the memory time of a non-Markovian reservoir. In the Markovian limit, the probability that the pseudomodes are in their excited states is 0, and the expected time length that the pseudomodes are in their excited states also converges to 0.
In particular, we have discussed the damped Jaynes-Cummings model, which is a model of a two-level atom in a lossy cavity. It is analytically solvable, so we can obtain an exact solution of the dynamics and the expectation values explicitly. As a result, we have found that the expected time length that the system and the pseudomode are in their excited states takes a value reflecting the non-Markovianity.
Since the Markovian approximation assumes that the reservoir has no memory, our result suggests that the pseudomode is the degree of freedom characterizing the memory of the reservoir.
FIG. 2. (Color online) Population and probability density. The initial condition is the excited state (α = 1, β = 0). The parameters are γ = 1, λ = 0.2 and ∆ = 0.5. With these parameters, t_S = 9.8 and t_P = 2.5. (a) Populations |a_0(t)|² (blue solid), |q(t)|² (yellow dashed) and Π_p(t) (green large dashed). By definition, these populations satisfy |a_0(t)|² + |q(t)|² + Π_p(t) = 1. (b) Probability density p(t) (blue solid). Yellow and green dashed lines are the plots in which the oscillation term is replaced with ±1.
FIG. 3. (Color online) (a) The system excited-state population |a_0(t)|² and (b) the pseudomode excited-state population |q(t)|² for several detuning values; blue solid: ∆ = 0.5, yellow dashed: ∆ = 0, green dotted: ∆ = 1. The other parameters are the same (γ = 1, λ = 0.2). (a) A larger detuning makes the damping slower. (b) When the detuning ∆ increases, the maximum value of |q(t)|² decreases and the excited-state population at later times increases (see the inset).
ACKNOWLEDGEMENTS

The authors are grateful to C. Uchiyama for thoughtful comments and suggestions. This work was supported by CREST, JST.
arXiv:1806.04135
Non-intrusive Subdomain POD-TPWL Algorithm for Reservoir History Matching
9 Jun 2018
Cong Xiao
Delft Institute of Applied Mathematics, Delft University of Technology, Mekelweg 4, 2628 CD Delft, the Netherlands

Olwijn Leeuwenburgh
Civil Engineering and Geosciences, Delft University of Technology, Mekelweg 4, 2628 CD Delft, the Netherlands
TNO, Princetonlaan 6, PO Box 80015, 3508 TA Utrecht, the Netherlands

Hai Xiang Lin
Delft Institute of Applied Mathematics, Delft University of Technology, Mekelweg 4, 2628 CD Delft, the Netherlands

Arnold Heemink
Delft Institute of Applied Mathematics, Delft University of Technology, Mekelweg 4, 2628 CD Delft, the Netherlands
Keywords: data assimilation, reduced-order modeling, model linearization, domain decomposition
Abbreviations: POD, proper orthogonal decomposition; RBF, radial basis function; TPWL, trajectory piecewise linearization; DD, domain decomposition; FOM, full-order model
This paper presents a non-intrusive subdomain POD-TPWL (SD POD-TPWL) algorithm for reservoir data assimilation that integrates domain decomposition (DD), radial basis function (RBF) interpolation and trajectory piecewise linearization (TPWL). It is an efficient approach for model reduction and linearization of general non-linear time-dependent dynamical systems that does not intrude on the legacy source code. In the subdomain POD-TPWL algorithm, a sequence of snapshots over the entire computational domain is first saved and then partitioned into subdomains. From the local sequence of snapshots over each subdomain, a set of local basis vectors is formed using POD, and RBF interpolation is used to estimate the derivative matrices for each subdomain. Finally, those derivative matrices are substituted into the POD-TPWL algorithm to form a reduced-order linear model in each subdomain. This reduced-order linear model makes the implementation of the adjoint easy, resulting in an efficient adjoint-based parameter estimation procedure. The performance of the new adjoint-based parameter estimation algorithm has been assessed through several synthetic cases. Comparisons with classic finite-difference based history matching show that our proposed subdomain POD-TPWL approach obtains comparable results. The number of full-order model simulations required is roughly 2-3 times the number of uncertain parameters. Using different background parameter realizations, our approach efficiently generates an ensemble of calibrated models without additional full-order model simulations.
Introduction
History matching is the process of calibrating uncertain reservoir model parameters, such as gridblock permeabilities, porosities, fault multipliers and facies distributions, through minimization of a cost function that quantifies the misfit between simulated and observed data (typically well data such as oil or water rates or bottomhole pressure, but possibly also 4D seismic data). If the gradient of the cost function with respect to the parameters can be computed using the adjoint of the reservoir model, history matching problems can be efficiently solved using a gradient-based minimization algorithm [1]. In general, significant effort is required to obtain and maintain a correct implementation of the adjoint model for complex nonlinear simulation models. Such implementations are generally intrusive, that is, they require access to the model code, which may not always be possible.
Many efforts have been made to make the implementation of the adjoint model more feasible. One way is to replace the original complex model with a surrogate so that the construction of the adjoint model becomes easier. Courtier et al. (1994) [2] proposed an incremental approach, replacing a high-resolution nonlinear model with an approximated linear model so that the adjoint model can be more easily obtained. Liu et al. [3], [4] developed an ensemble-based four-dimensional variational (En4DVar) data assimilation scheme where the approximated linear model is constructed using an ensemble of model forecasts. Recently, to extend the ensemble-based tangent linear model (TLM) to more realistic applications, Bishop et al. (2016, 2017) [5], [6] incorporated a local ensemble tangent linear model (LETLM) into a 4D-Var scheme. The LETLM has the ability to capture localized physical features of dynamic models with relatively small ensemble size. However, the construction of a tangent linear model becomes intractable for high-dimensional systems. Proper Orthogonal Decomposition (POD), a model-order reduction method, is a possible approach to decrease the dimensionality of the original model. The POD approach has been applied in various disciplines, including reservoir simulation [7], [8], and has in some cases shown significant speedup [9]. The combination of model linearization and model reduction techniques has the potential to further ease the implementation of adjoint models for high-dimensional complex dynamic systems. Vermeulen and Heemink (2006) [10] combined POD and a non-intrusive perturbation-based linearization method to build a reduced-order linear approximation of the original high-dimensional non-linear model. The adjoint of this reduced-order linear model can be easily constructed, and therefore the minimization of the objective function can be handled efficiently.
Altaf et al. (2009) [11] and Kaleta et al. (2011) [12] applied this method to a coastal engineering problem and a reservoir history matching problem, respectively.
Alternatively, Trajectory Piecewise Linearization (TPWL) can be classified as a model-intrusive linearization method. In TPWL, a number of full-order 'training' runs is first simulated, and a linear model is then generated through first-order expansion around the 'closest' training trajectories. In reservoir engineering, Cardoso et al. (2010) [13] were the first to integrate the POD and TPWL methods, applying this strategy to oil production optimization. He et al. (2013, 2014) applied the POD-TPWL method to both reservoir history matching and production optimization [14], [15]. These studies suggested that POD-TPWL has the potential to significantly reduce the runtime for subsurface flow problems [16]. A drawback, however, is that the POD-TPWL method requires access to derivative matrices used internally by the numerical solver, and therefore cannot be used with most commercial simulators [14], [17]. And although the traditional construction of a reduced-order linear model [10], [11], [12] is non-intrusive, the required derivative information is estimated using a global perturbation-based finite-difference method, which needs a large number of full-order simulations and is therefore computationally demanding. Furthermore, the global perturbation also hinders the extension of this method to large-scale reservoir history matching, which requires retaining many POD patterns. In order to avoid model intrusion and numerous full-order simulations, we propose to incorporate domain decomposition (DD) and radial basis function (RBF) interpolation into POD-TPWL to develop a new non-intrusive subdomain POD-TPWL algorithm.
RBF interpolation is mainly used to construct surrogate models, and has been applied, e.g., to reservoir engineering and fluid dynamics [18], [19], [20]. Recently, Bruyelle et al. (2014) [21] applied neural-network-based RBF to obtain the first-order and second-order derivative information of a reservoir model and estimate the gradients and Hessian matrix for reservoir production optimization. The accuracy of RBF-based gradient approximation is determined by the sampling strategy of the interpolation data [21]. For high-dimensional problems, the classical global RBF interpolation algorithm requires a large number of interpolation data to capture the flow dynamics as much as possible [22]. Moreover, the global RBF algorithm can introduce spurious long-distance correlations, which implies that some interpolation data are redundant and can be avoided. This motivates us to develop a subdomain RBF interpolation technique for reservoir models, where the domain decomposition (DD) technique potentially allows us to apply the methodology to large-scale problems. Different local RBF interpolation schemes are considered based on the details of the local flow dynamics in each subdomain. The domain decomposition technique, first introduced in the work of Przemieniecki [23], has been applied in various fields [24]. Lucia et al. (2003) [25] first introduced the DD method into model-order reduction for accurately tracking a moving strong shock wave. Subsequently, the DD method has also been applied to non-linear model reduction problems [26], [27], [28].
This paper presents a new non-intrusive subdomain POD-TPWL algorithm for subsurface flow problems. The key idea behind this subdomain POD-TPWL is to integrate the DD method and the RBF algorithm into a model linearization technique based on POD-TPWL. After constructing the reduced-order linear model using the subdomain POD-TPWL algorithm, because of the linearity in the reduced-order subspace, the implementation of the adjoint model is easy, and thus it is convenient to incorporate this reduced-order linear model into a gradient-based reservoir history matching procedure. The runtime speedup and the robustness of the new history matching algorithm have been assessed through several synthetic cases. This paper is arranged as follows: the history matching problem and the classical adjoint-based solution approach are described in Section 2. Section 3 contains the mathematical background of the traditional POD-TPWL. Section 4 gives the mathematical description of domain decomposition (DD) and radial basis function (RBF) interpolation, which are used to develop the non-intrusive subdomain POD-TPWL algorithm. In addition, a workflow for combining subdomain POD-TPWL with an adjoint-based history matching algorithm is described. Section 5 discusses and evaluates an application of the new history matching workflow to numerical 'twin' experiments involving synthetic reservoir models. Finally, Section 6 summarizes our contribution and discusses future work.
Problem Description
A single simulation step of a discretized two-phase oil-water reservoir system is described as follows,
x n+1 = f n+1 (x n , β), n = 1, · · ·, N(1)
where the dynamic operator f n+1 : R 2N g →R 2N g represents the nonlinear time-dependent model evolution, x n+1 ∈R 2N g represents the state vector (pressure and saturation in every gridblock), N g is the total number of gridblocks, n and n + 1 indicate the timesteps, N denotes the total number of simulation steps, and β denotes the vector of uncertain parameters, which is the spatial permeability field in our case. For more details about the discretization of the governing equations, see e.g., [29].
The relationship between simulated data y m+1 and state vector x m+1 can be described by a nonlinear operator h m+1 : R 2N g →R N d , which, in our case, represents the well model (for seismic data another model would be needed). N d is the number of measurements at each timestep. The simulated measurements are therefore described by
y m+1 = h m+1 (x m+1 , β), m = 1, · · ·, N 0 (2)
where N 0 is the number of timesteps at which measurements are taken.
The history matching process calibrates the uncertain parameters by minimizing a cost function defined as a sum of weighted squared differences between observed and modeled measurements (data). Additional incorporation of prior information into the cost function as a regularization term can further constrain the minimization procedure and make the history matching problem well-posed [30]. Accordingly, the cost function is described by the sum of two terms.
J(x^1, · · ·, x^n, · · ·, x^N, β) = (1/2)(β − β_p)^T R_p^{−1}(β − β_p) + (1/2) Σ_{m=1}^{N_0} (d_o^m − h^m(x^m, β))^T R_m^{−1}(d_o^m − h^m(x^m, β))   (3)
where d m o represents the vector of observed data at timestep m.
In twin experiments d_o^m is generated by adding some noise, e.g., r^m, to the data y_t^m simulated with a 'truth' model. We will assume here that r^m is a time-dependent vector of observation errors at time level m, which is uncorrelated over time and satisfies the Gaussian distribution G(0, R_m), where R_m is the observation error covariance matrix at timestep m. β_p represents the prior parameter vector, and R_p represents the error covariance matrix of the prior parameters, which characterizes the uncertainty in the prior model. A gradient-based optimization algorithm can be used to determine a parameter set that is not too far away from the prior information, while minimizing the misfit between the observed and simulated data.
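For concreteness, the cost function (3) is a plain weighted least-squares sum and can be sketched as below (a minimal illustration; the argument names, and the assumption that the simulated data have already been computed, are ours):

```python
import numpy as np

def history_match_cost(beta, beta_p, Rp_inv, sim_data, obs_data, Rm_inv):
    """Cost function of Eq. (3): prior term plus data-mismatch term.

    sim_data / obs_data are lists of h^m(x^m, beta) and d_o^m over the N_0
    measurement times; Rm_inv is the corresponding list of inverse
    observation-error covariance matrices.
    """
    db = beta - beta_p
    J = 0.5 * db @ Rp_inv @ db                  # prior (regularization) term
    for y, d, Rinv in zip(sim_data, obs_data, Rm_inv):
        r = d - y
        J += 0.5 * r @ Rinv @ r                 # weighted data mismatch
    return float(J)
```

With identity covariances, a prior deviation of (1, 2) and one residual of (1, −1), the two terms contribute 2.5 and 1.0, so J = 3.5.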
The key step of a gradient-based minimization algorithm is to determine the gradient of the cost function with respect to the parameters. The gradient of the cost function can be formulated by introducing the adjoint model as follows (more details about the mathematical derivation can be found in [31]),
[dJ/dβ]^T = R_p^{−1}(β − β_p) − Σ_{n=1}^{N} [λ^n]^T ∂f^n/∂β − Σ_{m=1}^{N_0} [∂h^m(x^m, β)/∂β]^T R_m^{−1}(d_o^m − h^m(x^m, β))   (4)
where the adjoint model in terms of the Lagrange multipliers λ n is given by
λ^n = [∂f^{n+1}/∂x^n]^T λ^{n+1} + [∂h^n(x^n, β)/∂x^n]^T R_n^{−1}(d_o^n − h^n(x^n, β))   (5)
for n = N, · · ·, 1 with the end condition λ^{N+1} = 0. This adjoint approach has high computational efficiency because just one forward simulation and one backward simulation are required to compute the gradient, independent of the size of the variable vector. It should be pointed out that four derivative terms, i.e., ∂h^m(x^m, β)/∂β, ∂h^n(x^n, β)/∂x^n, ∂f^n/∂β and ∂f^{n+1}/∂x^n, are required in the adjoint method. We will give detailed descriptions of how to efficiently obtain these four terms using our proposed subdomain POD-TPWL algorithm in the following sections.
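To make the backward recursion concrete, the sketch below implements Eqs. (4)-(5) for a toy linear model x^{n+1} = A x^n + B β with observations h(x) = C x, identity error covariances and no prior term. The toy model is our own stand-in for the reservoir simulator, chosen because the adjoint gradient can be validated against finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)
Nx, Nb, Nd, N = 3, 2, 2, 4                        # states, parameters, data, steps
A = 0.9 * np.eye(Nx) + 0.05 * rng.standard_normal((Nx, Nx))
B = rng.standard_normal((Nx, Nb))
C = rng.standard_normal((Nd, Nx))
x0 = rng.standard_normal(Nx)
d_obs = [rng.standard_normal(Nd) for _ in range(N)]

def forward(beta):
    xs = [x0]
    for _ in range(N):
        xs.append(A @ xs[-1] + B @ beta)          # f^{n+1}(x^n, beta)
    return xs

def cost(beta):
    xs = forward(beta)
    return 0.5 * sum((d_obs[m] - C @ xs[m + 1]) @ (d_obs[m] - C @ xs[m + 1])
                     for m in range(N))

def adjoint_gradient(beta):
    xs = forward(beta)
    lam = np.zeros(Nx)                            # lambda^{N+1} = 0
    grad = np.zeros(Nb)
    for n in range(N, 0, -1):                     # backward sweep, Eq. (5)
        lam = A.T @ lam + C.T @ (d_obs[n - 1] - C @ xs[n])
        grad -= B.T @ lam                         # -sum_n (df^n/dbeta)^T lam^n, Eq. (4)
    return grad
```

One forward and one backward sweep yield the full gradient, regardless of the number of parameters; for this quadratic toy problem it agrees with a central finite difference to machine precision.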
POD-TPWL algorithm
In the TPWL scheme, one or more full order "training" runs using a set of perturbed parameters are simulated first. The states and the derivative information at each time step from these runs are used to construct the TPWL surrogate. Given the state x n and parameters β, the state x n+1 is approximated as a first-order expansion around the training solution (x n+1 tr , x n tr , β tr ) as follows,
x n+1 ≈x n+1 tr + E n+1 (x n − x n tr ) + G n+1 (β − β tr ) (6) E n+1 = ∂f n+1 ∂x n tr , G n+1 = ∂f n+1 ∂β tr(7)
The training solution (x n+1 tr , x n tr , β tr ) is chosen to be as 'close' as possible to the state x n . A detailed description of the criterion for closeness can be found in [32]. The matrices E n+1 ∈ R 2N g ×2N g and G n+1 ∈ R 2N g ×N g represent the derivative of the dynamic model (Eq.1) at timestep n+1 with respect to states x n tr and parameters β tr respectively. Eq.6 is, however, still in a high-dimensional space, e.g, x n+1 ∈ R 2N g , and β ∈ R N g , which motivates the development of the POD-TPWL algorithm [32].
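A single TPWL step of Eq. (6) amounts to two matrix-vector products once the training states and the derivative matrices are stored. A minimal sketch (names are illustrative); if the underlying model happens to be linear, the expansion is exact, which provides a convenient sanity check:

```python
import numpy as np

def tpwl_step(x_tr_next, x_tr, beta_tr, E, G, x, beta):
    """One TPWL step, Eq. (6): first-order expansion around the closest
    training point (x_tr_next, x_tr, beta_tr) with derivative matrices E, G."""
    return x_tr_next + E @ (x - x_tr) + G @ (beta - beta_tr)
```

For a linear model f(x, β) = A x + B β, choosing E = A and G = B reproduces the model exactly for any state and parameter vector.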
Proper Orthogonal Decomposition (POD) provides a means to project the high-dimensional states into an optimal lower-dimensional subspace. The basis of this subspace is obtained by performing a Singular Value Decomposition (SVD) of a snapshot matrix containing the solution states at selected time steps (snapshots) computed from training simulations. The state vector x can then be represented in terms of the product of a coefficient vector ψ and a matrix of basis vectors φ
x = φψ(8)
Let φ_p and φ_s represent separate matrices of basis vectors for pressure and saturation, respectively. In general there is no need to retain all columns of the left singular matrix in φ_p and φ_s (see e.g. [32]), and a reduced state-vector representation can be obtained by selecting only the first columns according to, e.g., an energy criterion. To normalize the reduced state vector, the columns of φ_p are determined by multiplying the left singular matrix U_p with the singular value matrix Σ_p (and similarly for saturation), i.e.
φ p = U p Σ p , φ s = U s Σ s .(9)
In this paper, we use the Karhunen-Loeve expansion (KLE) to parameterize the parameter space. KLE reduces the dimension of the parameter vector by projecting the high-dimensional parameter into an optimal lower-dimensional subspace [33]. The basis of this subspace is obtained by performing an eigenvalue decomposition of the prior parameter covariance matrix R_p. If this covariance matrix is not accessible, the basis can alternatively be obtained from an SVD of a matrix holding an ensemble of prior parameter realizations with ensemble mean β_b = β_p. Including normalization of the reduced parameter vector, a random parameter vector sample β can be generated as follows,
β = β b + φ β ξ, with φ β = U β Σ β(10)
where φ β denotes the matrix of parameter basis vectors, U β and Σ β are the left singular matrix and singular value matrix of the parameter matrix respectively, and ξ denotes a vector with independent Gaussian random variables with zeros mean and unit variance. A reduced parameter space representation can again be obtained by selecting only the first several columns of φ β according to e.g. an energy criterion.
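Eq. (10) can be sketched as follows, under the assumption (stated above) that the basis is built from an SVD of an ensemble of prior realizations; the function names are our own:

```python
import numpy as np

def kle_from_ensemble(prior_realizations, l_beta):
    """Build beta_b and phi_beta = U_beta * Sigma_beta (Eq. 10) from an
    ensemble of prior parameter realizations (one realization per column)."""
    beta_b = prior_realizations.mean(axis=1)
    U, s, _ = np.linalg.svd(prior_realizations - beta_b[:, None],
                            full_matrices=False)
    return beta_b, U[:, :l_beta] * s[:l_beta]

def sample_parameters(beta_b, phi_beta, rng):
    """Draw beta = beta_b + phi_beta * xi with xi ~ N(0, I), Eq. (10)."""
    xi = rng.standard_normal(phi_beta.shape[1])
    return beta_b + phi_beta @ xi
```

Each sample is a full-dimensional parameter field controlled by only l_β Gaussian coefficients, which is what makes the later gradient-based search over ξ tractable.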
The number of retained columns of each basis matrix (denoted l_p and l_s for pressure and saturation, and l_β for the parameters) is determined through an energy criterion [32]. We take φ_p as an example. We first compute the total energy E_t, defined as E_t = Σ_{i=1}^{L} ν_i², where ν_i denotes the i-th singular value of the pressure snapshot matrix. The energy associated with the first l_p singular vectors is given by E_{l_p} = Σ_{i=1}^{l_p} ν_i². Then l_p is determined such that E_{l_p} exceeds a specified fraction of E_t. The same procedure is used to determine l_s and l_β.
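The truncation rule above translates directly into code; a minimal sketch (our own, with an assumed default energy fraction) that returns φ = UΣ truncated by the energy criterion:

```python
import numpy as np

def pod_basis(snapshots, energy_fraction=0.99):
    """POD basis phi = U * Sigma (Eq. 9), truncated by the energy criterion:
    keep the smallest l such that sum_{i<=l} nu_i^2 >= energy_fraction * E_t."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    l = int(np.searchsorted(cum, energy_fraction) + 1)
    return U[:, :l] * s[:l], l
```

Because the columns of φ are U_i ν_i, the reduced coefficients are ψ = Σ⁻¹Uᵀx, and any snapshot lying in the span of the retained modes is reconstructed exactly by x = φψ.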
Substituting Eq.8 and Eq.10 into Eq.6, we obtain the following POD-TPWL formula,
ψ n+1 ≈ψ n+1 tr + E n+1 ψ (ψ n − ψ n tr ) + G n+1 ξ (ξ − ξ tr )(11)E n+1 ψ = φ T ∂f n+1 ∂x n tr φ, G n+1 ξ = φ T ∂f n+1 ∂β tr φ β(12)
Similarly, the well model as Eq.2 is also linearized around a close training solution (ψ n+1 tr , ξ tr ) in the reduced space as follows,
y m+1 ≈y m+1 tr + A m+1 ψ (ψ m+1 − ψ m+1 tr ) + B m+1 ξ (ξ − ξ tr ) (13) A m+1 ψ = ∂h m+1 ∂x m+1 tr φ, B m+1 ξ = ∂h m+1 ∂β tr φ β(14)
Eq.11 and Eq.13 represent the POD-TPWL system for reservoir model and well model in the reduced-order space, respectively. In general the traditional POD-TPWL method modifies the source code to output all derivative matrices [32]. In this paper, we integrate domain decomposition technique and radial basis function interpolation to approximately estimate these derivative matrices without accessing to the code. These derivative matrices then are substituted into POD-TPWL algorithm to form a subdomain reduced-order linear model.
Adjoint-based history matching using subdomain reduced-order linear model
This section describes the mathematical background of domain decomposition (DD), and radial basis function (RBF) interpolation, which are used to construct the subdomain non-intrusive reduced order linear model. In addition, how to incorporate this reduced-order linear model into the adjoint-based history matching is described in the last subsection.
Domain Decomposition Method
A 2D or 3D computational domain is denoted as Ω. The entire domain Ω is assumed to be decomposed into S non-overlapping subdomains Ω_d, d ∈ {1, 2, · · ·, S} (such that Ω = ∪_{d=1}^{S} Ω_d and Ω_i ∩ Ω_j = ∅ for i ≠ j), and each subdomain has local unknowns, e.g., local pressure and saturation variables. In each subdomain Ω_d, the global snapshots restricted to that subdomain are used to construct a set of local POD basis functions φ_d and the corresponding POD coefficients ψ^{d,n+1} at timestep n+1, as described in the previous section. For each subdomain Ω_d, the reservoir dynamic model of Eq.1 is modified to represent the underlying dynamic system associated with this subdomain Ω_d and its surrounding subdomains Ω_sd in the reduced subspace, and can be reformulated as
ψ d,n+1 = £ d,n+1 (ψ d,n , ψ sd,n+1 , ξ)(15)
The well model represents the underlying dynamic system just associated with this subdomain Ω d , and can be given by
$$y^{d,m+1} = ¥^{d,m+1}(\psi^{d,m+1}, \xi) \quad (16)$$
where vector $\psi^{d,n}$ denotes the set of POD coefficients at time level n for subdomain $\Omega_d$, and $\psi^{sd,n+1}$ denotes the set of POD coefficients at time level n+1 for the surrounding subdomains $\Omega_{sd}$. In a 2-D case, the number of surrounding subdomains associated with a subdomain $\Omega_d$ is between 2 and 4; see the simple example in Fig.1, which shows a maximum of four surrounding subdomains connected with subdomain $\Omega_5$, three surrounding subdomains connected with subdomains $\Omega_2$, $\Omega_4$, $\Omega_6$, $\Omega_8$, and two surrounding subdomains connected with subdomains $\Omega_1$, $\Omega_3$, $\Omega_7$, $\Omega_9$. The POD-TPWL algorithm forms a reduced-order tangent linear model (TLM) of the original nonlinear model. Recently, a local ensemble tangent linear model (LETLM) has been developed to capture localized physical features of dynamic models with relatively small ensemble size [5], [6]. The key point of LETLM is to identify the influence area, which is very similar to the purpose of the domain decomposition described here. However, LETLM needs to sequentially construct the TLM piecewise for each state variable, which results in a large number of full-order model simulations and overwhelming programming effort. In this study, we construct the reduced-order tangent linear model (TLM) piecewise for each subdomain instead of each state variable. We propose to use RBF interpolation to obtain the derivative matrices that are required by POD-TPWL. In addition, domain decomposition has the ability to efficiently capture localized physical features [22], and therefore has the potential to improve the derivative estimates through local low-dimensional RBF interpolation, which will be described in the next subsections.
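The grid partition underlying the example of Fig. 1 can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name `decompose_grid`, the row-major cell numbering, and the 3×3 split are assumptions made for the sketch.

```python
import numpy as np

def decompose_grid(nx, ny, sx, sy):
    """Partition an nx-by-ny grid into sx*sy nonoverlapping rectangular
    subdomains; returns a list of flat cell-index arrays, one per subdomain."""
    xb = np.array_split(np.arange(nx), sx)
    yb = np.array_split(np.arange(ny), sy)
    subdomains = []
    for ys in yb:
        for xs in xb:
            # flat indices of the cells in this rectangle (row-major ordering)
            ii, jj = np.meshgrid(xs, ys, indexing="ij")
            subdomains.append((jj * nx + ii).ravel())
    return subdomains

# a 50x50 grid split into 3x3 subdomains, as in the 9-subdomain example
subs = decompose_grid(50, 50, 3, 3)
assert len(subs) == 9
assert sum(len(s) for s in subs) == 2500             # covers the whole domain
assert len(np.unique(np.concatenate(subs))) == 2500  # nonoverlapping
```

Local snapshot matrices are then obtained simply by restricting the global snapshot rows to each index set, after which POD is applied per subdomain.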
Radial Basis Function Interpolation
RBF interpolation can be classified as a data-driven interpolation method, which is mainly used to construct surrogate models [18], [19]. High-dimensional interpolation needs a large number of data points to obtain satisfactory accuracy, a phenomenon often referred to as the "curse of dimensionality". To remedy this difficulty, the domain decomposition technique approximates the global domain by the union of the local subdomains, and can therefore be used to form locally low-dimensional RBF interpolations.
For subdomain Ω d , let £ d,n+1 (ψ d,n , ψ sd,n+1 , ξ) denote a RBF interpolation function for the POD coefficient ψ d,n+1 at the time level n+1. The RBF interpolation function is a linear combination of M radial basis functions in the form of,
$$\pounds^{d,n+1}(\psi^{d,n}, \psi^{sd,n+1}, \xi) = \sum_{j=1}^{M} \omega^{d,n+1}_{j}\, \theta(\|(\psi^{d,n}, \psi^{sd,n+1}, \xi) - (\psi^{d,n}_{j}, \psi^{sd,n+1}_{j}, \xi_{j})\|) \quad (17)$$
where $\omega^{d,n+1}$ is a weighting coefficient vector of size M (the number of training runs), $\|(\psi^{d,n}, \psi^{sd,n+1}, \xi) - (\psi^{d,n}_{j}, \psi^{sd,n+1}_{j}, \xi_{j})\|$ is a scalar distance in the $L_2$ norm, and $\theta$ is a set of specific radial basis functions.
The specific coefficients $\omega^{d,n+1}_{j}$ are determined so as to ensure that the value of the interpolation function $\pounds^{d,n+1}$ at each training data point $(\psi^{d,n}_{j}, \psi^{sd,n+1}_{j}, \xi_{j})$ matches the given data $\psi^{d,n+1}_{j}$ exactly. This can be expressed by,
D d,n+1 ω d,n+1 = Z d,n+1(18)
where
$$D^{d,n+1} = \begin{bmatrix} \theta(l^{n+1}(1,1)) & \cdots & \theta(l^{n+1}(1,M)) \\ \vdots & \theta(l^{n+1}(i,j)) & \vdots \\ \theta(l^{n+1}(M,1)) & \cdots & \theta(l^{n+1}(M,M)) \end{bmatrix}$$

$$l^{n+1}(i,j) = \|(\psi^{d,n}_{i}, \psi^{sd,n+1}_{i}, \xi_{i}) - (\psi^{d,n}_{j}, \psi^{sd,n+1}_{j}, \xi_{j})\|, \quad i = 1,\cdots,M;\ j = 1,\cdots,M \quad (19)$$

$$\omega^{d,n+1} = [\omega^{d,n+1}_{1}, \omega^{d,n+1}_{2}, \cdots, \omega^{d,n+1}_{M}]^{T} \quad (20)$$

$$Z^{d,n+1} = [\psi^{d,n+1}_{1}, \psi^{d,n+1}_{2}, \cdots, \psi^{d,n+1}_{M}]^{T} \quad (21)$$
The weighting coefficients are determined by solving the linear system of equations, Eq. 18. A list of well-known radial basis functions is provided in Table 1. In general, the type of radial basis function $\theta$ can be chosen depending on the specific problem; in our case, we chose the Multi-Quadratic radial basis function. $l$ represents the Euclidean distance $\|(\psi^{d,n}, \psi^{sd,n+1}, \xi) - (\psi^{d,n}_{j}, \psi^{sd,n+1}_{j}, \xi_{j})\|$, and ǫ denotes the shape parameter, which can be optimized using a greedy algorithm [20].
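Concretely, fitting the weights of Eq. 18 for the Multi-Quadratic kernel of Table 1 amounts to one dense linear solve. The sketch below is illustrative (the function names, test data, and shape parameter value are our own choices, not the paper's):

```python
import numpy as np

def multiquadric(r, eps=1.0):
    # theta(l) = sqrt(l^2 + eps^2), cf. Table 1
    return np.sqrt(r**2 + eps**2)

def fit_rbf(points, values, eps=1.0):
    """Solve D w = Z (Eq. 18) for the weights of an RBF interpolant.
    points: (M, p) training inputs; values: (M,) training outputs."""
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    D = multiquadric(r, eps)
    return np.linalg.solve(D, values)

def eval_rbf(w, points, x, eps=1.0):
    r = np.linalg.norm(x - points, axis=-1)
    return w @ multiquadric(r, eps)

# the interpolant reproduces the training data exactly, by construction
rng = np.random.default_rng(0)
pts = rng.normal(size=(12, 3))     # M=12 training points in a 3-D input space
vals = np.sin(pts).sum(axis=1)
w = fit_rbf(pts, vals)
assert np.allclose([eval_rbf(w, pts, p) for p in pts], vals)
```

The exact-reproduction property checked at the end is precisely the interpolation condition that defines Eq. 18.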
After the construction of the RBF interpolation, we can analytically estimate the gradient at the 'closest' training data points, e.g. at the i-th training point $(\psi^{d,n}_{i}, \psi^{sd,n+1}_{i}, \xi_{i})$, by differentiating the RBF as follows,
$$\left.\frac{\partial \pounds^{d,n+1}}{\partial \xi}\right|_{\xi=\xi_{i}} = \sum_{j=1}^{M} \omega^{d,n+1}_{j} \left.\frac{\partial\, \theta(\|(\psi^{d,n}, \psi^{sd,n+1}, \xi) - (\psi^{d,n}_{j}, \psi^{sd,n+1}_{j}, \xi_{j})\|)}{\partial \xi}\right|_{\xi=\xi_{i}} \quad (22)$$

$$\left.\frac{\partial \pounds^{d,n+1}}{\partial \psi^{sd,n+1}}\right|_{\psi^{sd,n+1}=\psi^{sd,n+1}_{i}} = \sum_{j=1}^{M} \omega^{d,n+1}_{j} \left.\frac{\partial\, \theta(\|(\psi^{d,n}, \psi^{sd,n+1}, \xi) - (\psi^{d,n}_{j}, \psi^{sd,n+1}_{j}, \xi_{j})\|)}{\partial \psi^{sd,n+1}}\right|_{\psi^{sd,n+1}=\psi^{sd,n+1}_{i}} \quad (23)$$

$$\left.\frac{\partial \pounds^{d,n+1}}{\partial \psi^{d,n}}\right|_{\psi^{d,n}=\psi^{d,n}_{i}} = \sum_{j=1}^{M} \omega^{d,n+1}_{j} \left.\frac{\partial\, \theta(\|(\psi^{d,n}, \psi^{sd,n+1}, \xi) - (\psi^{d,n}_{j}, \psi^{sd,n+1}_{j}, \xi_{j})\|)}{\partial \psi^{d,n}}\right|_{\psi^{d,n}=\psi^{d,n}_{i}} \quad (24)$$

Table 1: Well-known radial basis functions
- Gaussian: $\theta(l) = e^{-(l/ǫ)^2}$
- Linear Spline: $\theta(l) = l$
- Multi-Quadratic: $\theta(l) = \sqrt{l^2 + ǫ^2}$
- Inverse Quadric: $\theta(l) = 1/(l^2 + ǫ^2)$
- Cubic Spline: $\theta(l) = l^3$
- Thin Plate Spline: $\theta(l) = l^2 \log l$
- Inverse Multi-Quadratic: $\theta(l) = 1/\sqrt{l^2 + ǫ^2}$
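For the Multi-Quadratic kernel the derivative appearing in Eqs. 22-24 has a closed form, $\nabla_x \theta(\|x - x_j\|) = (x - x_j)/\sqrt{\|x - x_j\|^2 + ǫ^2}$. A minimal sketch, verified against a finite-difference gradient (the names and test data are illustrative):

```python
import numpy as np

def rbf_gradient(w, points, x, eps=1.0):
    """Analytic gradient of a multiquadric RBF interpolant at x (cf. Eqs. 22-24):
    grad theta(||x - x_j||) = (x - x_j) / sqrt(||x - x_j||^2 + eps^2)."""
    diff = x - points                                    # (M, p)
    denom = np.sqrt((diff**2).sum(axis=1) + eps**2)      # (M,)
    return (w / denom) @ diff                            # (p,)

# check against a central finite-difference gradient
rng = np.random.default_rng(1)
pts = rng.normal(size=(10, 2))
w = rng.normal(size=10)
x = np.array([0.3, -0.2])

def f(x):
    r = np.linalg.norm(x - pts, axis=-1)
    return w @ np.sqrt(r**2 + 1.0)

h = 1e-6
fd = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(2)])
assert np.allclose(rbf_gradient(w, pts, x), fd, atol=1e-5)
```

Evaluating this gradient at a training point directly yields one block of the derivative matrices needed by POD-TPWL, without perturbing the full-order model.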
Similarly, the well model Eq. 16 can also be approximately constructed using the RBF interpolation method as follows,

$$y^{d,m+1} \approx ¥^{d,m+1}(\psi^{d,m+1}, \xi) = \sum_{j=1}^{M} \varepsilon^{d,m+1}_{j}\, \theta(\|(\psi^{d,m+1}, \xi) - (\psi^{d,m+1}_{j}, \xi_{j})\|) \quad (25)$$

The gradient at the training data points, obtained by differentiating the RBF function Eq. 25 with respect to $(\psi^{d,m+1}_{i}, \xi_{i})$, is given by

$$\left.\frac{\partial ¥^{d,m+1}}{\partial \xi}\right|_{\xi=\xi_{i}} = \sum_{j=1}^{M} \varepsilon^{d,m+1}_{j} \left.\frac{\partial\, \theta(\|(\psi^{d,m+1}, \xi) - (\psi^{d,m+1}_{j}, \xi_{j})\|)}{\partial \xi}\right|_{\xi=\xi_{i}} \quad (26)$$

$$\left.\frac{\partial ¥^{d,m+1}}{\partial \psi^{d,m+1}}\right|_{\psi^{d,m+1}=\psi^{d,m+1}_{i}} = \sum_{j=1}^{M} \varepsilon^{d,m+1}_{j} \left.\frac{\partial\, \theta(\|(\psi^{d,m+1}, \xi) - (\psi^{d,m+1}_{j}, \xi_{j})\|)}{\partial \psi^{d,m+1}}\right|_{\psi^{d,m+1}=\psi^{d,m+1}_{i}} \quad (27)$$
where $¥^{d,m+1}(\psi^{d,m+1}, \xi)$ denotes the RBF interpolation function for the simulated measurements $y^{d,m+1}$ at time level m+1 for subdomain $\Omega_d$, evaluated at the set $(\psi^{d,m+1}, \xi)$. $\varepsilon^{d,m+1}$ is a weighting coefficient vector of size M (the number of training data sets), $\|(\psi^{d,m+1}, \xi) - (\psi^{d,m+1}_{j}, \xi_{j})\|$ is a scalar distance in the $L_2$ norm, and $\theta$ is a set of specific radial basis functions, each weighted by the corresponding coefficient $\varepsilon^{d,m+1}_{j}$.
Subdomain POD-TPWL algorithm
By considering the dynamic interaction between neighboring subdomains as in Eq.15, the coefficients ψ d,n+1 for the subdomain Ω d can be obtained by the modification of Eq.11 as follows,
$$\psi^{d,n+1} \approx \psi^{d,n+1}_{tr} + E^{d,n+1}_{\psi_{tr}}(\psi^{d,n} - \psi^{d,n}_{tr}) + E^{sd,n+1}_{\psi_{tr}}(\psi^{sd,n+1} - \psi^{sd,n+1}_{tr}) + G^{n+1}_{\xi_{tr}}(\xi - \xi_{tr}) \quad (28)$$
Coupling domain decomposition and radial basis function interpolation, the derivative matrices required by POD-TPWL for the subdomain Ω d are estimated as follows
$$E^{d,n+1}_{\psi_{tr}} \approx \left.\frac{\partial \pounds^{d,n+1}}{\partial \psi^{d,n}}\right|_{\psi^{d,n}=\psi^{d,n}_{tr}}, \quad E^{sd,n+1}_{\psi_{tr}} \approx \left.\frac{\partial \pounds^{d,n+1}}{\partial \psi^{sd,n+1}}\right|_{\psi^{sd,n+1}=\psi^{sd,n+1}_{tr}}, \quad G^{n+1}_{\xi_{tr}} \approx \left.\frac{\partial \pounds^{d,n+1}}{\partial \xi}\right|_{\xi=\xi_{tr}} \quad (29)$$
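One linearized update of Eq. 28 is just a few matrix-vector products around the training trajectory. A minimal sketch (the function name, the dictionary holding the training quantities, and the random test dimensions are our own illustrative choices):

```python
import numpy as np

def tpwl_step(psi_prev, psi_sd, xi, tr, E_d, E_sd, G):
    """One subdomain POD-TPWL update (Eq. 28) around a training point.
    tr holds the training quantities: psi_next (level n+1), psi_prev (level n),
    psi_sd (surrounding subdomains, level n+1) and xi (parameters)."""
    return (tr["psi_next"]
            + E_d  @ (psi_prev - tr["psi_prev"])
            + E_sd @ (psi_sd   - tr["psi_sd"])
            + G    @ (xi       - tr["xi"]))

rng = np.random.default_rng(2)
ld, lsd, lb = 5, 8, 3   # local, surrounding and parameter dimensions (illustrative)
tr = {"psi_next": rng.normal(size=ld), "psi_prev": rng.normal(size=ld),
      "psi_sd": rng.normal(size=lsd), "xi": rng.normal(size=lb)}
E_d = rng.normal(size=(ld, ld))
E_sd = rng.normal(size=(ld, lsd))
G = rng.normal(size=(ld, lb))

# at the training point itself the model reproduces the training trajectory
out = tpwl_step(tr["psi_prev"], tr["psi_sd"], tr["xi"], tr, E_d, E_sd, G)
assert np.allclose(out, tr["psi_next"])
```

The sanity check at the end reflects a defining property of TPWL: the linearized model is exact on the training trajectory itself.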
Similarly, substituting Eq.26-Eq.27 into Eq.13, the simulated measurements y d,m+1 of the subdomain Ω d can be reformulated as
$$y^{d,m+1} \approx y^{d,m+1}_{tr} + A^{d,m+1}_{\psi_{tr}}(\psi^{d,m+1} - \psi^{d,m+1}_{tr}) + B^{m+1}_{\xi_{tr}}(\xi - \xi_{tr}) \quad (30)$$

$$A^{d,m+1}_{\psi_{tr}} \approx \left.\frac{\partial ¥^{d,m+1}}{\partial \psi^{d,m+1}}\right|_{\psi^{d,m+1}=\psi^{d,m+1}_{tr}}, \qquad B^{m+1}_{\xi_{tr}} \approx \left.\frac{\partial ¥^{d,m+1}}{\partial \xi}\right|_{\xi=\xi_{tr}} \quad (31)$$
Our reformulated subdomain POD-TPWL algorithm has three underlying advantages over the traditional POD-TPWL algorithm: (1) the approximation of the derivative matrices is non-intrusive, i.e. it does not require the modification of legacy code; (2) the implementation of POD-TPWL is local in each subdomain, which has the potential to capture features dominated by local dynamics better than global approximations (we could therefore also refer to the subdomain POD-TPWL algorithm as local POD-TPWL); (3) non-adjacent subdomains have almost no direct dynamic interactions, so this kind of subdomain POD-TPWL algorithm can be easily parallelized. Referring to Fig.1, subdomains $\Omega_1$, $\Omega_3$, $\Omega_5$, $\Omega_7$, $\Omega_9$ have no direct interactions, and therefore the subdomain POD-TPWL algorithm can be implemented simultaneously in these five subdomains. The same holds for subdomains $\Omega_2$, $\Omega_4$, $\Omega_6$, $\Omega_8$.
The subdomain POD-TPWL algorithm consists of an offline stage and an online stage. The offline stage is a computational procedure for constructing a set of local RBF interpolations and estimating the derivative information for each subdomain. Firstly, the solutions of the full-order model are saved as a sequence of snapshots over the whole computational domain and then partitioned into subdomains. From the local sequence of snapshots over each subdomain, a number of local basis vectors is formed using POD. Then, unlike the traditional practice of using RBF to construct a set of surrogates for each subdomain, we use RBF to estimate the derivative matrices for each subdomain. Finally, these estimated derivative matrices are substituted into the POD-TPWL algorithm to form a reduced-order linear model in each subdomain. The online stage describes how to iteratively run the subdomain POD-TPWL while accounting for the dynamic interactions between a subdomain and its surrounding subdomains. Referring to Eq. 28, the variables of one subdomain at the current time level are linearized around the variables of this subdomain at the previous timestep and the variables of the neighboring subdomains at the current timestep, which have not yet been determined. Thus, some additional iterative steps are needed.
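The additional iterative steps of the online stage can be sketched as a Gauss-Seidel-type sweep over the subdomains; this is our own illustrative reading of the coupling in Eq. 28 (function names, the callable `subdomain_models` interface, and the toy fixed-point test are assumptions of the sketch, not the paper's implementation):

```python
import numpy as np

def online_step(psi_prev, xi, subdomain_models, neighbors, n_iter=100, tol=1e-8):
    """One online timestep of subdomain POD-TPWL. Because Eq. 28 couples each
    subdomain to its neighbors at the *current* time level, the update is
    iterated (Gauss-Seidel fashion) until the local coefficients stop changing.
    subdomain_models[d](psi_prev_d, psi_sd, xi) returns psi_d at the new level."""
    psi = [p.copy() for p in psi_prev]      # initial guess: previous timestep
    for _ in range(n_iter):
        delta = 0.0
        for d, model in enumerate(subdomain_models):
            psi_sd = np.concatenate([psi[s] for s in neighbors[d]])
            new = model(psi_prev[d], psi_sd, xi)
            delta = max(delta, np.max(np.abs(new - psi[d])))
            psi[d] = new
        if delta < tol:
            break
    return psi

# toy 2-subdomain contraction with fixed point psi_1 = psi_2 = 2
models = [lambda pp, ps, xi: 0.5 * ps + 1.0 for _ in range(2)]
neighbors = [[1], [0]]
out = online_step([np.zeros(1), np.zeros(1)], None, models, neighbors)
assert np.allclose(out[0], 2.0) and np.allclose(out[1], 2.0)
```

For the linear subdomain models of Eq. 28 each inner update is cheap, so a handful of sweeps per timestep is typically all the iteration costs.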
Sampling Strategy
In our proposed subdomain POD-TPWL algorithm, training points are required both for the RBF interpolation and for constructing the POD basis. For POD, the snapshot matrix generated from the training simulations is expected to sufficiently preserve the dynamic behavior of the system. For RBF interpolation, the training points are selected to compute the derivative matrices. The procedure for choosing these training points is described here.
Sampling strategy for POD. A set of parameters is initially sampled and used as input for full-order model (FOM) simulations from which a snapshot matrix is constructed. The singular value spectrum is computed for this initial set of samples. The number of samples is then increased one at a time, i.e. adding one FOM simulation, and the SVD is recomputed, until no significant changes are observed in the singular value spectrum.
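The incremental POD sampling loop can be sketched as follows. The toy "FOM" (snapshots drawn from a fixed low-dimensional subspace), the tolerance value, and the function names are illustrative assumptions, not the paper's code:

```python
import numpy as np

def pod_basis(snapshots, energy=0.95):
    """POD basis retaining the given fraction of snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
    return U[:, :k], s

def grow_snapshots(run_fom, max_runs=50, tol=0.02):
    """Add one FOM run at a time until the normalized singular value
    spectrum stops changing; run_fom(i) returns that run's snapshot columns."""
    X = run_fom(0)
    _, s_prev = pod_basis(X)
    for i in range(1, max_runs):
        X = np.hstack([X, run_fom(i)])
        _, s = pod_basis(X)
        n = min(len(s_prev), len(s))
        if np.linalg.norm(s[:n] / s[0] - s_prev[:n] / s_prev[0]) < tol:
            return X
        s_prev = s
    return X

# toy "FOM": snapshots drawn from a fixed 4-dimensional subspace
rng = np.random.default_rng(3)
B = rng.normal(size=(100, 4))
X = grow_snapshots(lambda i: B @ rng.normal(size=(4, 16)))
phi, _ = pod_basis(X, energy=0.999)
assert phi.shape[1] <= 4   # POD recovers (at most) the true subspace dimension
```

In practice each `run_fom(i)` call is one full-order reservoir simulation, so the loop directly controls the offline simulation budget.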
Sampling strategy for RBF. The accuracy of the RBF interpolation will be reduced if too few data points are chosen, while the computational cost increases with the number of data points and becomes prohibitive if too many points are chosen. To limit the number of FOM simulations used to construct the interpolation model for the POD coefficients, we use a 2-sided perturbation of each coefficient $\xi_j$, resulting in $2 l_\beta + 1$ points. In some experiments we add additional points by simultaneous random sampling of perturbations $\Delta\xi$. An alternative could be to use Smolyak sparse grid sampling [34].
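The 2-sided perturbation design can be sketched in a few lines; `perturbation_samples` and the uniform random-perturbation option are illustrative names and choices for this sketch:

```python
import numpy as np

def perturbation_samples(xi0, dxi, n_random=0, rng=None):
    """Training inputs for the RBF model: the base point plus a 2-sided
    perturbation of each of the l_beta coefficients (2*l_beta + 1 points),
    optionally followed by a few joint random perturbations."""
    l_beta = len(xi0)
    samples = [xi0]
    for j in range(l_beta):
        for sign in (+1.0, -1.0):
            p = xi0.copy()
            p[j] += sign * dxi[j]
            samples.append(p)
    if n_random:
        rng = rng or np.random.default_rng()
        samples += [xi0 + rng.uniform(-1, 1, l_beta) * dxi for _ in range(n_random)]
    return np.array(samples)

xi0 = np.zeros(18)                   # l_beta = 18 PCA coefficients, as in Case 1
X_train = perturbation_samples(xi0, 0.1 * np.ones(18))
assert X_train.shape == (2 * 18 + 1, 18)   # the 37 initial sampling points
```

With $l_\beta = 18$ this reproduces the 37 initial sampling points used later in the numerical experiment.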
Adjoint-based history matching algorithm
After linearizing the original full-order model to a reduced-order linear model, because of the linearity in the reduced-order space, the implementation of the adjoint model is easily realized. It is convenient to incorporate this reduced-order linear model established using the subdomain POD-TPWL into the adjoint-based reservoir history matching.
The cost function in the reduced-order space can be given by reformulating the Eq.3 as follows,
$$\bar{J}(\xi) = \frac{1}{2}(\beta_b + \phi_\beta \xi - \beta_p)^T R_p^{-1}(\beta_b + \phi_\beta \xi - \beta_p) + \frac{1}{2}\sum_{d=1}^{S}\sum_{m=1}^{N_0}\left[d^{d,m}_{o} - y^{d,m}_{tr} - A^{d,m}_{\psi_{tr}}(\psi^{d,m} - \psi^{d,m}_{tr}) - B^{m}_{\xi_{tr}}(\xi - \xi_{tr})\right]^T R_m^{-1}\left[d^{d,m}_{o} - y^{d,m}_{tr} - A^{d,m}_{\psi_{tr}}(\psi^{d,m} - \psi^{d,m}_{tr}) - B^{m}_{\xi_{tr}}(\xi - \xi_{tr})\right] \quad (32)$$
and its gradient is
$$\left[\frac{d\bar{J}}{d\xi}\right]^T = \phi_\beta^T R_p^{-1}(\beta_b + \phi_\beta \xi - \beta_p) - \sum_{d=1}^{S}\sum_{m=1}^{N_0} [B^{m}_{\xi_{tr}}]^T R_m^{-1}\left[d^{d,m}_{o} - y^{d,m}_{tr} - A^{d,m}_{\psi_{tr}}(\psi^{d,m} - \psi^{d,m}_{tr}) - B^{m}_{\xi_{tr}}(\xi - \xi_{tr})\right] - \sum_{d=1}^{S}\sum_{n=1}^{N} [G^{n}_{\xi_{tr}}]^T \lambda^{d,n} \quad (33)$$
where $\lambda^{d,n}$ is obtained as the solution of the adjoint model for subdomain $\Omega_d$, given by
$$\left[I - (E^{d,n}_{\psi_{tr}})^T\right]\lambda^{d,n} = \sum_{d=1}^{S} [A^{d,n}_{\psi_{tr}}]^T R_n^{-1}\left[d^{d,n}_{o} - y^{d,n}_{tr} - A^{d,n}_{\psi_{tr}}(\psi^{d,n} - \psi^{d,n}_{tr}) - B^{n}_{\xi_{tr}}(\xi - \xi_{tr})\right] + [E^{sd,n}_{\psi_{tr}}]^T \lambda^{d,n+1} \quad (34)$$
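Because the reduced model is linear, the adjoint system of Eq. 34 is solved by a backward-in-time sweep of small linear solves. The sketch below simplifies the cross-subdomain bookkeeping by taking the neighbor-coupling matrices square and folding all data-misfit terms into a precomputed right-hand side; these simplifications, and the function name, are assumptions of the sketch:

```python
import numpy as np

def adjoint_sweep(E_d, E_sd, rhs):
    """Backward-in-time solve of a subdomain adjoint system (cf. Eq. 34):
    (I - E_d[n]^T) lam[n] = rhs[n] + E_sd[n]^T lam[n+1], with lam beyond the
    final timestep set to zero. E_d, E_sd, rhs are lists over n = 0..N-1."""
    N = len(rhs)
    ld = rhs[0].shape[0]
    lam = np.zeros((N + 1, ld))
    for n in range(N - 1, -1, -1):
        A = np.eye(ld) - E_d[n].T
        lam[n] = np.linalg.solve(A, rhs[n] + E_sd[n].T @ lam[n + 1])
    return lam[:N]

rng = np.random.default_rng(4)
N, ld = 6, 4
E_d = [0.1 * rng.normal(size=(ld, ld)) for _ in range(N)]
E_sd = [0.1 * rng.normal(size=(ld, ld)) for _ in range(N)]
rhs = [rng.normal(size=ld) for _ in range(N)]
lam = adjoint_sweep(E_d, E_sd, rhs)

# final-time adjoint has no future contribution: (I - E_d^T) lam[N-1] = rhs[N-1]
assert np.allclose((np.eye(ld) - E_d[N - 1].T) @ lam[N - 1], rhs[N - 1])
```

Each solve involves only the small local dimension $l_d$, which is where the computational advantage of the subdomain formulation shows up.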
The minimization of the cost function (Eq. 32) can be performed using a steepest descent algorithm [35] and is stopped when one of the following stopping criteria is satisfied:
• No more change in the cost function,
$$\frac{|\bar{J}(\xi^{k+1}) - \bar{J}(\xi^{k})|}{\max\{|\bar{J}(\xi^{k+1})|, 1\}} < \eta \quad (35)$$
• No more change in the estimate of parameters,
$$\frac{|\xi^{k+1} - \xi^{k}|}{\max\{|\xi^{k+1}|, 1\}} < \eta_{\xi} \quad (36)$$
• The maximum number of iterations has been reached, i.e.

$$k \le N_{max} \quad (37)$$

where $\eta$ and $\eta_{\xi}$ are predefined error tolerances and $N_{max}$ is the maximum number of iterations. As mentioned in [12], the solution of the reduced and linearized minimization problem based on Eq. 32 is not necessarily the solution of the original problem based on Eq. 3. Therefore an additional stopping criterion is introduced for the original model as follows [36],
$$N_d N_0 - 2\sqrt{2 N_d N_0} \;\le\; 2 J(\beta^k) \;\le\; N_d N_0 + 2\sqrt{2 N_d N_0} \quad (38)$$
where $N_0$ is the number of timesteps at which the measurements are taken, $N_d$ is the number of measurements at each timestep, and $\beta^k$ represents the updated parameter vector at the k-th outer-loop. J is the cost function computed as in Eq. 3. If the objective function does not obey the stopping criterion Eq. 38, additional outer-loops are required to reconstruct new reduced-order linear models using the updated parameters. Since the dynamic patterns are mainly dominated by the model parameterization, whenever the parameters are changed, a new background state and a new set of patterns need to be identified. Thereafter, a new reduced-order linear model is built and the aforementioned iterative inner-loop is performed again.
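The stopping logic of Eqs. 35-38 can be collected into two small tests; function names and the example numbers (beyond the 425 measurements quoted later for Case 1) are illustrative:

```python
import math
import numpy as np

def inner_stop(J_new, J_old, xi_new, xi_old, k, eta=1e-4, eta_xi=1e-3, k_max=30):
    """Inner-loop stopping test (cf. Eqs. 35-37): relative change of the cost,
    relative change of the parameter estimate, or the iteration cap."""
    cost_flat = abs(J_new - J_old) / max(abs(J_new), 1.0) < eta
    par_flat = np.max(np.abs(xi_new - xi_old)) / max(np.max(np.abs(xi_new)), 1.0) < eta_xi
    return cost_flat or par_flat or k >= k_max

def outer_loop_accepted(J_full, N_d, N_0):
    """Acceptance test on the full-order cost (cf. Eq. 38): 2*J should lie
    within N_d*N_0 +/- 2*sqrt(2*N_d*N_0)."""
    mean = N_d * N_0
    half = 2.0 * math.sqrt(2.0 * mean)
    return mean - half <= 2.0 * J_full <= mean + half

assert not inner_stop(100.0, 50.0, np.ones(3), np.zeros(3), k=5)   # still improving
assert inner_stop(100.0, 100.0001, np.ones(3), np.ones(3), k=5)    # cost flat
assert outer_loop_accepted(J_full=425 / 2, N_d=17, N_0=25)         # 2J ~ #data
assert not outer_loop_accepted(J_full=10 * 425, N_d=17, N_0=25)
```

The outer-loop test encodes the usual statistical expectation that, for consistent noise statistics, twice the minimized least-squares cost should be close to the total number of data points.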
Our proposed non-intrusive subdomain POD-TPWL has computational advantages over the traditional construction of reduced-order linear models using perturbation-based finite-difference method proposed in [12], especially when the reduced-order linear model is required to be reconstructed for each outer-loop. Instead of re-perturbing the parameter and state variables one by one to approximate the derivative matrices as proposed in [12], which would require an additional (l p +l s +l β +1) full order model (FOM) simulations, our algorithm runs only one additional FOM simulation using updated parameters. The updated parameters and simulated snapshots are added into the previous group of sampling interpolation points and corresponding snapshots. The derivative matrices for the updated parameters are approximated based on the updated group of interpolation points and snapshots. The overall workflow has been summarized conceptually in Fig.2. The individual steps of the history matching algorithm described in this section are summarized in the flow chart presented in Fig.3.
Numerical experiments and Discussion
In this section, some numerical experiments are presented that aim to demonstrate and evaluate our proposed adjoint-based history matching algorithm. In our numerical experiments, MRST, a free open-source software for reservoir modeling and simulation [37], is used to run the full-order model simulations.
Description of model settings
A 2D heterogeneous oil-water reservoir is considered with two-phase incompressible flow dynamics. The reservoir contains 8 producers and 1 injector, labeled $P_1$ to $P_8$ and $I_1$ respectively; see Fig.4. Detailed information about the reservoir geometry, rock properties, fluid properties, and well controls is summarized in Table 2.
Reduced model construction
We generate an ensemble of 1000 Gaussian-distributed realizations of the log-permeability. We also assume that the generated log-permeability fields are not conditioned to the permeability values at the well locations. The log-permeability fields and the corresponding porosity fields are described by the following statistics:
$$\sigma_\beta = 5 \quad (39)$$

$$C_\beta(x_{i1,j1}; x_{i2,j2}) = \sigma_\beta^2\, e^{-\left[\left(\frac{|x_{i1}-x_{i2}|}{\chi_x}\right)^2 + \left(\frac{|y_{j1}-y_{j2}|}{\chi_y}\right)^2\right]} \quad (40)$$

$$\frac{\chi_x}{L_x} = 0.2, \qquad \frac{\chi_y}{L_y} = 0.2 \quad (41)$$

$$\phi = 0.25\left(\frac{e^{\beta}}{200}\right)^{0.1} \quad (42)$$
Here, $\sigma_\beta$ is the standard deviation of the log-permeability $\beta$; $C_\beta$ is the covariance of $\beta$; $x_{i1,j1} = (x_{i1}, y_{j1})$ denotes the coordinates of a grid block; $\chi_x$ (or $\chi_y$) is the correlation length in the x (or y) direction; and $L_x$ (or $L_y$) is the domain length in the x (or y) direction. The background log-permeability $\beta_b$ is taken as the average of the 1000 realizations. One of the realizations was considered to be the truth and is illustrated in Fig.5(a). The permeability field was parameterized using a KL-expansion retaining about 95% of the energy, resulting in 18 permeability patterns with $l_\beta = 18$ corresponding independent PCA coefficients, which are used in the workflow as a low-dimensional representation of the 2500 grid-block permeability values. Fig.5(b) shows the projection of the 'true' permeability field onto this low-dimensional subspace, which shows that the truth can be almost perfectly reconstructed in this subspace. Four additional realizations of the log-permeability field are shown in Fig.6. After having reduced the parameter space, the next step is to reduce the reservoir model. The first step is to generate a set of training runs from which snapshots will be taken. Since the required number of training runs is not known a priori, we follow this procedure: (1) generate a sample PCA coefficient vector by sampling from the interval [−1, 1], (2) run a full-order model simulation with these parameters, (3) extract snapshots and add them to a snapshot matrix, (4) compute the singular value decomposition of the snapshot matrix, (5) repeat steps (1) to (4) until the changes in the singular values are insignificant. For this case this produced a set of 15 training runs and 240 snapshots each for pressure and saturation.
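The covariance of Eq. 40 and its truncated eigen-decomposition (the KL expansion) can be sketched as follows. The 20×20 grid is a downscaled illustration (the paper's case uses a 50×50 grid and retains 18 patterns); the function name and grid size are assumptions of the sketch:

```python
import numpy as np

def kl_expansion(nx, ny, dx, dy, sigma=5.0, corr_frac=0.2, energy=0.95):
    """Build the Gaussian covariance of Eq. 40 on a regular grid and truncate
    its eigen-decomposition (KL expansion) at the given energy fraction."""
    x = (np.arange(nx) + 0.5) * dx
    y = (np.arange(ny) + 0.5) * dy
    X, Y = np.meshgrid(x, y, indexing="ij")
    xs, ys = X.ravel(), Y.ravel()
    chi_x, chi_y = corr_frac * nx * dx, corr_frac * ny * dy   # Eq. 41
    C = sigma**2 * np.exp(-((np.subtract.outer(xs, xs) / chi_x) ** 2
                            + (np.subtract.outer(ys, ys) / chi_y) ** 2))
    lam, V = np.linalg.eigh(C)
    lam, V = lam[::-1], V[:, ::-1]                  # descending eigenvalues
    k = int(np.searchsorted(np.cumsum(lam) / lam.sum(), energy)) + 1
    return V[:, :k] * np.sqrt(np.clip(lam[:k], 0, None))      # phi_beta columns

phi_beta = kl_expansion(20, 20, 1.0, 1.0)           # small grid for illustration
xi = np.random.default_rng(5).normal(size=phi_beta.shape[1])
beta = phi_beta @ xi                                # one log-permeability field
assert beta.shape == (400,)
```

A realization is then $\beta = \beta_b + \phi_\beta\,\xi$ with $\xi$ standard normal, which is exactly the parameterization used throughout the history matching.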
The next step is to build a local RBF interpolation model. We divide the entire domain into 9 rectangular subdomains, as illustrated in Fig.7. The choice of subdomains is fairly arbitrary at this point, since we have no formal algorithm to determine the best number and layout of the subdomains. The previously collected global snapshots for pressures and saturations are divided into local snapshots. For each subdomain, two separate eigenvalue problems, for pressure and saturation, are solved using POD. The numbers of reduced parameters and state patterns for each subdomain and for the global domain are listed in Table 3, where 95% and 90% of the energy are preserved for pressure and saturation, respectively, in each subdomain.
After implementing the KL-expansion, the original parameters are represented by a set of independent Gaussian random variables with zero mean and unit variance. In our case, the initial 37 sampling points are selected within the interval [-1,1] as described in the subsection on sampling strategy, where each element $\xi_i$ of the PCA coefficient vector $\xi$ is perturbed sequentially in two opposite directions (positive and negative) by a specific perturbation amplitude $\Delta\xi_i$.
The history period is 5 years, during which observations are taken from the 8 producers and 1 injector every two model timesteps (nearly 73 days), resulting in 25 time instances. Noisy observations are generated from the model with the 'true' permeability field and include bottom-hole pressures (BHP) in the injector and fluid rates and water-cut (WCT) in the producers. As a result we have 200 fluid rates and 200 WCT values measured in the producers and 25 bottom-hole pressures measured in the injector, which gives in total 425 measurements. Normally distributed independent measurement noise, with a standard deviation equal to 5% of the 'true' data value, was added to all observations. The generated measurements are shown in Fig.8.
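The synthetic-observation step can be sketched in a few lines; the function name and the constant "true" signal are illustrative, while the 5% relative noise level and the 425-measurement count follow the setup above:

```python
import numpy as np

def add_noise(d_true, rel_std=0.05, rng=None):
    """Synthetic observations: independent Gaussian noise with standard
    deviation equal to 5% of the 'true' data value."""
    rng = rng or np.random.default_rng()
    return d_true + rng.normal(size=d_true.shape) * rel_std * np.abs(d_true)

truth = np.full(425, 100.0)          # 425 measurements, as in Case 1
obs = add_noise(truth, rng=np.random.default_rng(6))
assert obs.shape == (425,)
```

The same noise standard deviations populate the diagonal of the measurement error covariance $R_m$ used in the cost function.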
To analyze the results, we define two error measures based on data misfits e obs and parameter misfits e β as follows,
$$e_{obs} = \sqrt{\frac{\sum_{i=1}^{N_0}\sum_{j=1}^{N_d}\left(d^{i,j}_{obs} - d^{i,j}_{upt}\right)^2}{N_0 N_d}} \quad (43)$$

$$e_{\beta} = \sqrt{\frac{\sum_{i=1}^{N_g}\left(\beta^{i}_{true} - \beta^{i}_{upt}\right)^2}{N_g}} \quad (44)$$
where $d^{i,j}_{obs}$ and $d^{i,j}_{upt}$ represent the measurements and the data simulated with the updated model, respectively; $\beta^{i}_{true}$ and $\beta^{i}_{upt}$ denote the grid-block log-permeability of the 'true' model and the updated model, respectively.

History matching results

Figures 9, 10 and 11 and Table 4 show the results of the first numerical experiment, including the updated log-permeability field, the value of the cost function at each iteration, and the mismatch between observed data and predictions. To demonstrate the performance of our proposed methodology, we compare the results with those of a finite-difference (FD) based history matching algorithm without domain decomposition and model order reduction. The total computational cost of any minimization problem strongly depends on the number of parameters. In our work, for a fair comparison, we use the same reparameterization to reduce the number of parameters and implement the finite-difference based history matching in this reduced parameter subspace. The cost function for finite-difference based history matching can be defined as follows,
$$J(\xi) = \frac{1}{2}(\beta_b + \phi_\beta \xi - \beta_p)^T R_p^{-1}(\beta_b + \phi_\beta \xi - \beta_p) + \frac{1}{2}\sum_{m=1}^{N_0}\left(d^m_o - h^m(x^m, \xi)\right)^T R_m^{-1}\left(d^m_o - h^m(x^m, \xi)\right) \quad (45)$$
The finite-difference method is used to compute the numerical gradient of the cost function Eq. 45 with respect to the 18 PCA coefficients. An FD gradient is determined by one-sided perturbation of each of the 18 PCA coefficients; thus, 19 full-order model (FOM) simulations are required for each iteration step. The stopping criteria are set to $\eta = 10^{-4}$, $\eta_\xi = 10^{-3}$, and $N_{max} = 30$. As can be seen from Fig.10 and Table 4, the model-reduced approach needs 55 full-order model (FOM) simulations, consisting of 15 FOM simulations to collect the snapshots and 37 FOM simulations to construct the initial reduced-order linear model in the first outer-loop. The remaining 3 FOM simulations are used to reconstruct the reduced-order linear models in the next three outer-loops and to calculate the cost function of Eq. 3 in the original space. Fig.9 shows the true, initial and final estimates of the log-permeability field. In this case, the main geological structures of the 'true' model can be reconstructed with both methods. However, the parameter estimates obtained with the proposed methodology reproduce the true amplitudes more accurately than those obtained with the classic finite-difference based history matching. From Fig.11 and Table 4, we can both qualitatively and quantitatively observe that the history matching process results in an improved prediction in all of the eight production wells. Fig.11 illustrates the data match of fluid rate and water-cut up to year 5 and an additional 5-year prediction until year 10 for all 8 producers. The prediction based on the initial model is far from that of the true model. After history matching, the predictions from the updated model match the observations very well. The prediction of the water breakthrough time is also improved for all of the production wells, including wells that show water breakthrough only after the history period. One of the key issues for the subdomain POD-TPWL is the implementation of the domain decomposition technique.
Our proposed subdomain POD-TPWL (SD POD-TPWL) can be easily generalized to a global-domain POD-TPWL (GD POD-TPWL). The differences between SD POD-TPWL and GD POD-TPWL are: a) model order reduction in the global domain versus reduction in each subdomain separately; b) derivative estimation using RBF interpolation in the global domain versus interpolation for each subdomain. As shown in Table 3, the total dimension of the reduced-order linear model is 18+122+51=191 with domain decomposition and 18+72+36=126 for the global domain. Table 5 shows the total number of reduced variables in each subdomain and in the global domain. While the total sum of the reduced variables over the subdomains is larger than for the global domain, the number of reduced variables in each individual subdomain is relatively small. Furthermore, these local reduced variables have the surprising ability to accurately capture the flow dynamics, as suggested by Fig.12, which shows the distribution of pressure and saturation at the final time.

Figure 9: True, initial and updated log-permeability fields using SD POD-TPWL, GD POD-TPWL, and the FD method for case 1.
In this case, the reconstructions of the saturation and pressure field using a small number of patterns in each subdomain are comparable with those of the global domain.
In addition, as shown in Table 6, both GD POD-TPWL and SD POD-TPWL converge to satisfactory results. The SD POD-TPWL needs 55 FOM simulations, while the GD POD-TPWL algorithm requires 73 FOM simulations (15 FOM simulations to collect the snapshots, 55 to construct the initial reduced-order linear model in the first outer-loop, and the remaining 3 to reconstruct the reduced-order linear models in the following three outer-loops). Therefore, compared to the global RBF interpolation, the proposed local RBF interpolation technique requires only a small number of reduced variables per subdomain and is much more computationally efficient. If the dimension of the underlying model were much larger, the GD POD-TPWL would result in a reduced-order linear model of higher dimension, and therefore more interpolation points would be required in the RBF scheme. In the SD POD-TPWL algorithm this problem is avoided since for large-scale problems the dimension of the reduced-order linear model for each subdomain does not increase significantly; we only need to activate more subdomains.

Table 5 (partial): number of reduced variables per subdomain
- 5: 118 = 19(Ω2) + 17(Ω4) + 23(Ω5) + 20(Ω6) + 21(Ω8) + 18
- 6: 95 = 17(Ω3) + 23(Ω5) + 20(Ω6) + 17(Ω9) + 18
- 7: 74 = 17(Ω4) + 18(Ω7) + 21(Ω8) + 18
- 8: 97 = 23(Ω5) + 18(Ω7) + 21(Ω8) + 17(Ω9) + 18
- 9: 76 = 20(Ω6) + 21(Ω8) + 17(Ω9) + 18
For Case 1, the history matching results using GD POD-TPWL are slightly better than those from the subdomain POD-TPWL, especially for the high-permeable zone, e.g. the red area in Fig.9. The water-front of the waterflooding process propagates forward quickly (the blue area in Fig.12) and therefore there are strong dynamic interactions within this area. Our chosen domain decomposition may artificially cut off this inherent dynamic interaction between the south-east corner and the north-west corner. A flow-informed domain decomposition technique may therefore be required to identify the relevant dynamic interactions, especially for strongly heterogeneous reservoir models such as those based on strongly contrasting facies distributions or channels. The solutions in our previous numerical experiments do not enable us to quantify the uncertainty of the permeability field and the predictions. In general, the randomized maximum likelihood (RML) procedure [30] enables the assessment of uncertainty by generating multiple 'samples' from the posterior distribution. Each of these samples is a history matched realization, which also honors the data. Traditional 4D-Var or gradient-based history matching methods obtain only one specific solution; additional solutions can be obtained by repeatedly implementing the minimization process, but at a very high computational cost. The RML procedure can be efficiently implemented using our proposed reduced-order history matching algorithm. When different background parameters are chosen to construct the reduced-order linear models, several valid solutions are obtained based on acceptable data misfits. In this case, we choose 20 different background parameter sets to repeatedly implement our proposed adjoint-based reservoir history matching process. Once the reduced model has been constructed, the history matching can be efficiently repeated for different initial (background) models.
Fig.13 shows an ensemble of posterior realizations (updated log-permeability fields) obtained using 20 different background parameter sets. The main geological features, e.g. the high-permeable area, are partly reconstructed in all of these 20 cases. Fig.14 and Fig.15 show an ensemble of forecasts and the corresponding data misfits, respectively, using these 20 different initial and posterior models. Almost all of the 20 calibrated models produce improved predictions of the fluid rate and WCT for all eight producers that are generally consistent with the data. Thus, all of these 20 updated log-permeability fields can be regarded as acceptable solutions of the history matching problem. The spread of the predictions from the posterior realizations is significantly decreased relative to the predictions from the prior realizations. For the uncertainty quantification, we randomly chose these 20 background parameter sets from the 52 pre-run FOM simulations that were used to construct the reduced-order linear model in the first outer-loop. In addition, the analysis of the ensemble spread allowed us to skip the additional outer-loops for updating the reduced-order linear model. This means there was no need to run additional FOM simulations for the outer-loops, which makes our method very efficient. Finally, in order to obtain these 20 solutions, the RML procedure requires 52 FOM simulations in total, including 15 FOM simulations for collecting snapshots and 37 FOM simulations for the initial construction of the reduced-order linear model, with no additional outer-loops.
Computational aspects
This section discusses the computational aspects of our proposed adjoint-based reservoir history matching algorithm. The offline computational costs of the subdomain POD-TPWL algorithm comprise (1) executing the reparameterization using an eigenvalue decomposition of the covariance matrix, (2) implementing model order reduction using POD in each subdomain, and (3) conducting the RBF interpolation and computing the derivative matrices. The cost of the eigenvalue decomposition and POD is negligible for small models, but will become significant for large-scale models. In our cases, the required number of FOM simulations is roughly 2-3 times the number of PCA coefficients, e.g. 54 simulations for the synthetic model, and 113 (sampling within a small interval [-0.1, 0.1]) or 199 (sampling within a relatively large interval [-1,1]) FOM simulations for the SAIGUP model, respectively. This process is non-intrusive, requires no large programming effort, and is easily parallelized. Once available, the costs of running the reduced model are negligible. We should note that gradient-based reservoir history matching generally requires O(10^2 - 10^4) FOM simulations; thus, an offline cost of O(10 - 10^2) FOM simulations in these settings is attractive. For large-scale reservoir history matching, the main computational cost is dominated by the required number of FOM simulations. In our proposed method, most of the FOM simulations are performed in the offline stage, which makes our method easy to apply.
Conclusions
We have introduced a variational data assimilation method where the adjoint model of the original highdimensional non-linear model is replaced by a subdomain reduced-order linear model. Reparameterization and proper orthogonal decomposition techniques are used to simultaneously reduce the parameter space and reservoir model. In order to avoid the need for simulator code access and modification and numerous full-order model simulations, we integrated domain decomposition and radial basis function interpolation with trajectory piecewise linearization to form a new subdomain POD-TPWL algorithm. The reduced-order linear model is easily incorporated into an adjoint-based parameter estimation procedure. The use of domain decomposition allows for largescale applications since the number of interpolation points required depends primarily on the number of the parameters and not on the dimension of the underlying full-order model.
We used the subdomain model-reduced adjoint-based history matching approach to calibrate the unknown permeability field of a 2D synthetic model with noisy synthetic measurements. The permeability field is parameterized using a KL-expansion, resulting in a small number of permeability patterns that are used to represent the original grid-block permeability. The reservoir domain is divided into 9 subdomains. In the numerical experiment, our methodology accurately reconstructs the 'true' permeability field and shows results similar to the more classic finite-difference based history matching. Our method also significantly improves the prediction of the fluid rate and the water breakthrough time of the production wells. Without any additional full-order model simulations, our approach efficiently generates an ensemble of models that all approximately match the observations. For the cases studied in this paper, the number of full-order model simulations required for history matching is roughly 2-3 times the number of global parameter patterns.
There are a number of aspects of the proposed methodology that could be improved. It was observed that the sampling strategy has to be chosen with care to obtain an efficient implementation. Diagnostics could possibly be devised to determine if, and how many, additional sampling points need to be generated. We have chosen somewhat arbitrary decompositions of the global domain into subdomains. It may be beneficial to choose the subdomains based on information about the main dynamical patterns. Since in reservoir applications these patterns are strongly affected by the placement of producers and injectors, the subdomain decomposition could possibly be informed by the well lay-out. In this paper we considered a global parameterization of the log-permeability field where the PCA patterns are defined over the entire domain. From a computational point of view, a local parameterization where the parameters are defined in each subdomain separately is very attractive, since in that case the parameters can be perturbed independently of each other and the effects of all these perturbations can be computed with very few full-order model simulations. The local parameterization technique is the focus of our ongoing research. More complex history matching problems should also be tested to show whether similarly promising results can be obtained.
Figure 1: Illustration of domain decomposition in a 2-D case.

Figure 2: Illustration of the reconstruction method of the subdomain POD-TPWL algorithm.

Figure 3: Illustration of adjoint-based history matching using the subdomain POD-TPWL algorithm.

Figure 4: Well placement in the 2-D reservoir model for case 1.

Figure 5: Comparison of the 'true' reservoir model in full-order space and in reduced-order space for case 1.

Figure 6: Examples of realizations of the log-permeability field generated from PCA coefficients sampled randomly from the set {-1, 1} for case 1.

Figure 7: Illustration of the applied domain decomposition for case 1.

Figure 8: Measured quantities for case 1. Blue solid line: reference model (truth). Black dashed line: noisy data.
Both GD POD-TPWL and SD POD-TPWL can converge to satisfactory results. The SD POD-TPWL needs 55 FOM simulations, while the GD POD-TPWL algorithm requires 73 FOM simulations (15 FOM simulations are run to collect the snapshots, 55 are used to construct the initial reduced-order linear model in the first outer loop, and the remaining 3 are used to reconstruct the reduced-order linear models in the following three outer loops). Therefore, compared to global RBF interpolation, the proposed local RBF interpolation technique requires only a small number of reduced variables per subdomain and is much more computationally efficient. If the dimension of the underlying model were much larger, GD POD-TPWL would result in a reduced-order linear model of higher dimension, and therefore more interpolation points would be required in the RBF interpolation.

Figure 10: Cost function value decrease using (a) the finite-difference method and (b) subdomain POD-TPWL for case 1. OL-i means the i-th outer loop.
Figure 11: Forecasts of the liquid rate and water-cut for case 1 from the initial model (green line), the reference model (blue line), and the model updated using SD POD-TPWL (red line). Measured data are indicated by open circles.

Figure 12: Water saturation and pressure fields from the full-order model and from SD POD-TPWL and GD POD-TPWL based models for case 1.

Figure 13: Ensemble of 20 updated log-permeability fields for case 1.

Figure 14: Ensemble of fluid rate and water-cut predictions for case 1. The gray lines represent the predictions from the 20 prior permeability realizations, while the red lines represent the predictions from the corresponding 20 posterior permeability realizations calibrated using our method. The circles represent the noisy data.

Figure 15: Error analysis in terms of measurement and permeability misfits for the 20 solutions for case 1.
Table 1: Some well-known radial basis functions.
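As a concrete illustration of how radial basis functions such as those listed in Table 1 are used for interpolation, here is a minimal one-dimensional sketch with a Gaussian kernel. The function names and the shape parameter `eps` are illustrative choices, not the paper's implementation:

```python
import numpy as np

def gaussian_rbf(r, eps=5.0):
    # One standard choice of radial basis function: phi(r) = exp(-(eps*r)^2).
    return np.exp(-(eps * r) ** 2)

def rbf_fit(centers, values, eps=5.0):
    # Solve the symmetric collocation system so the interpolant matches `values`.
    r = np.abs(centers[:, None] - centers[None, :])
    return np.linalg.solve(gaussian_rbf(r, eps), values)

def rbf_eval(x, centers, weights, eps=5.0):
    r = np.abs(np.asarray(x)[:, None] - centers[None, :])
    return gaussian_rbf(r, eps) @ weights

centers = np.linspace(0.0, 1.0, 9)
values = np.sin(2 * np.pi * centers)
weights = rbf_fit(centers, values)
at_nodes = rbf_eval(centers, centers, weights)
```

By construction the interpolant reproduces the data at the interpolation points; in the method above, the interpolation points are sampled states of the reduced variables.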
Table 2: Experiment settings using MRST for case 1.

  Description                Value
  Dimensions                 50 × 50 × 1
  Grid cell size             20 × 20 × 10
  Number of wells            8 producers, 1 injector
  Fluid density              1014 kg/m^3, 859 kg/m^3
  Fluid viscosity            0.4 mPa·s, 2 mPa·s
  Initial pressure           30 MPa
  Initial saturation         S_o = 0.80, S_w = 0.20
  Connate water saturation   S_wc = 0.20
  Residual oil saturation    S_or = 0.20
  Corey exponent, oil        4.0
  Corey exponent, water      4.0
  Injection rate             200 m^3/d
  BHP                        25 MPa
  History production time    5 years
  Prediction time            10 years
  Timestep                   0.1 year
  Measurement timestep       0.2 year
Table 3: Summary of the number of reduced variables for the global domain and after domain decomposition for case 1 (Note: s refers to saturation, p refers to pressure).

          Domain decomposition         Global domain
  SD      β      s      p              β      s      p
  1       18     14     7              18     72     36
  2              13     6
  3              12     5
  4              13     4
  5              16     7
  6              14     6
  7              13     5
  8              15     6
  9              12     5
  Total   18     122    51             18     72     36
Table 4: Comparison between SD POD-TPWL and the FD method for case 1.

                  Iterations   FOM   J(ξ)×10^4   e_obs   e_β
  Initial model   -            -     1.69        28.38   2.28
  SD POD-TPWL     103          55    0.0160      3.35    0.68
  FD              52           988   0.0153      3.28    0.72
  'True' model    -            -     0.0068      2.12    0
Table 5: The number of interpolation variables in each subdomain and in the global domain for case 1. Ω_d is the d-th subdomain.

  Subdomain   Domain decomposition                               Global domain
  1           75 = 21(Ω_1) + 19(Ω_2) + 17(Ω_4) + 18              126 = 72 + 36 + 18
  2           98 = 21(Ω_1) + 19(Ω_2) + 17(Ω_3) + 23(Ω_5) + 18
  3           74 = 19(Ω_2) + 17(Ω_3) + 20(Ω_6) + 18
  4           97 = 21(Ω_1) + 17(Ω_4) + 23(Ω_5) + 18(Ω_7) + 18
  5
Table 6: Comparison between SD POD-TPWL and GD POD-TPWL for case 1.

                  FOM   J(ξ)×10^4   e_obs   e_β
  Initial model   -     1.69        28.38   2.28
  SD POD-TPWL     55    0.0160      3.35    0.68
  GD POD-TPWL     73    0.0140      3.21    0.61
  FD              988   0.0153      3.28    0.72
  'True' model    -     0.0068      2.12    0
Acknowledgment

We thank the China Scholarship Council (CSC) and Delft University of Technology for funding this research.
SELF-SIMILAR SOLUTIONS FOR THE MUSKAT EQUATION

Eduardo García-Juárez, Javier Gómez-Serrano, Huy Q. Nguyen, and Benoît Pausader

6 Sep 2021 · arXiv:2109.02565 · doi:10.1016/j.aim.2022.108294

Abstract. We show the existence of self-similar solutions for the Muskat equation. These solutions are parameterized by $0 < s \ll 1$; they are exact corners of slope $s$ at $t = 0$ and become smooth in $x$ for $t > 0$.
1. Introduction

1.1. The Muskat problem. The Muskat problem describes the evolution of the free boundary between immiscible and incompressible fluids permeating a porous medium in a gravity field. Each fluid is assumed to have constant physical properties and their velocities are governed by Darcy's law,
$$\frac{\mu}{\kappa}\,u(z,t) = -\nabla p(z,t) - \rho g\,(0,1), \qquad z\in\mathbb{R}^2,\ t\in\mathbb{R}_+,$$
where $p,u$ denote the pressure and velocity of the fluids, $\rho,\mu$ are their density and viscosity, $\kappa$ is the permeability constant of the medium, and $g$ the gravitational constant. With or without surface tension effects, it has long been known that the problem can be reduced to an evolution equation for the free interface [23,32,45]. The case of a two-fluid graph interface $\Gamma(t)=\{(x,f(x,t)) : x\in\mathbb{R}\}$ with only gravity effects admits a particularly compact form [23], now called the Muskat equation:
$$\partial_t f = \frac{1}{\pi}\int_{\mathbb{R}}\frac{\partial_x\Delta_\alpha f}{1+(\Delta_\alpha f)^2}\,d\alpha, \qquad \Delta_\alpha f(x)=\frac{f(x)-f(x-\alpha)}{\alpha},\quad x\in\mathbb{R}. \tag{1.1}$$
Above, all physical constants have been normalized for notational simplicity. The Muskat equation is well-posed locally in time for sufficiently smooth initial data, and globally in time if the initial interface is sufficiently flat [8,20,22,23,24,45,46,47]. Most notably, an initially smooth interface can turn [13] and later lose regularity in finite time [14]. Furthermore, many other behaviors are possible, with interfaces that turn and then go back to the graph scenario [25,26]. Thus, finding criteria for global existence became one of the main questions for the Muskat equation. Since equation (1.1) has a natural scaling given by $f(x,t)\mapsto\lambda^{-1}f(\lambda x,\lambda t)$, $\lambda>0$, these criteria are stated in terms of critical regularity, i.e., spaces that scale like $\dot W^{1,\infty}$. In this sense, having the product of the maximal and minimal slopes strictly less than 1 is sufficient for global existence [11]. See also [28]. Medium-size initial data in critical spaces but with uniformly continuous slope guarantee global well-posedness [20]. If the initial data is sufficiently small in $\dot H^{3/2}$, then the slope can be arbitrarily large [27] and even unbounded [5,7]. The result [5] also shows local existence and uniqueness in $H^{3/2}$. This is currently the best (lowest) regularity result in terms of the space of the initial data, a problem that has garnered a lot of attention recently (e.g. [3,6,17,18,21,42,43,44]).
1.2. Main result. In this paper, we show the existence of self-similar solutions for the Muskat problem. These solutions correspond to the global-in-time evolution of initially exact corners, and thus they do not fit into the aforementioned results.¹

¹ The results in [12] allow for merely medium-size bounded slopes but require sublinear growth of the profile, while our solutions grow linearly in space.

We can rewrite (1.1) in terms of a closed system for the slope $h=\partial_x f$:
$$0=\partial_t h-\frac{1}{\pi}\int_{\mathbb{R}}\frac{\Delta_\alpha\partial_x h}{1+(\fint_\alpha h)^2}\,d\alpha+\frac{2}{\pi}\int_{\mathbb{R}}\frac{(\Delta_\alpha h)^2\,\fint_\alpha h}{\big(1+(\fint_\alpha h)^2\big)^2}\,d\alpha,\qquad \fint_\alpha h(x)=\frac{1}{\alpha}\int_{x-\alpha}^{x}h(z)\,dz. \tag{1.2}$$
Plugging the ansatz $h(x,t)=k(x/t)$ into (1.2), we arrive at the equation
$$0=Sk+\frac{1}{\pi}\int_{\mathbb{R}}\frac{\Delta_\alpha\partial_y k}{1+(\fint_\alpha k)^2}\,d\alpha-\frac{2}{\pi}\int_{\mathbb{R}}\frac{(\Delta_\alpha k)^2\,\fint_\alpha k}{\big(1+(\fint_\alpha k)^2\big)^2}\,d\alpha,\qquad S:=y\partial_y, \tag{1.3}$$
for which we construct a local curve of solutions:

Theorem 1.1. There exists $s_*>0$ such that for all $|s|<s_*$, there exists a self-similar solution of (1.2), $h_s(x,t)=k_s(x/t)$, satisfying $\lim_{y\to+\infty}k_s(y)=s$. In addition, we have that
$$\Big\|k_s(y)-s\frac{2}{\pi}\arctan(y)\Big\|_{H^1}+|s|\,\Big\|\partial_s k_s(y)-\frac{2}{\pi}\arctan(y)\Big\|_{H^1}\lesssim|s|^3.$$
Remark 1.2. (i) In particular, we see that $k_s\in L^\infty\setminus\dot H^{1/2}(\mathbb{R})$. (ii) In fact, we will compare $k_s$ and $(2s/\pi)\arctan\big((1+s^2/3)y\big)$, but one can readily see that the difference between the two ansätze is $O(s^3)$. (iii) Due to the symmetry we can find solutions with negative $s$ by setting $k_{-s}(y)=-k_s(y)$. From now on, we assume $s\ge0$.
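The claim in Remark 1.2(ii) can be checked numerically: the maximal gap between $(2s/\pi)\arctan((1+s^2/3)y)$ and $(2s/\pi)\arctan(y)$ scales like $s^3$ uniformly in $y$. A quick sanity check (illustrative only, not part of the proof):

```python
import numpy as np

def ansatz_gap(s, y):
    # Difference between the corrected and the uncorrected self-similar ansatz.
    return (2 * s / np.pi) * (np.arctan((1 + s**2 / 3) * y) - np.arctan(y))

y = np.linspace(-50.0, 50.0, 10001)
# If the gap is O(s^3), these ratios should stay bounded and nearly constant.
ratios = [np.max(np.abs(ansatz_gap(s, y))) / s**3 for s in (0.1, 0.05, 0.025)]
```

To leading order the maximum of $|{\arctan((1+\delta)y)-\arctan(y)}|$ with $\delta=s^2/3$ is $\delta/2$, attained near $y=1$, so the ratios should cluster near $1/(3\pi)\approx0.106$.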
Despite the numerous works on the Muskat equation and the mathematically equivalent vertical Hele-Shaw problem, self-similar solutions were only known for the simplified thin-film Muskat equation [30,33,38]. See also [31], where the authors find traveling solutions for the Muskat problem with surface tension effects included.

From our numerical results, it might look surprising that, no matter how big the slope is, the initial corner instantly smooths out, as opposed to the known "waiting time" phenomenon in the Hele-Shaw problem [1,9,10,16,19,32,35,36,37]. We must note, however, that those works correspond to a horizontal Hele-Shaw cell (hence without gravity) and some include fluid injection. Moreover, some of these works are in a one-phase setting, which even for Muskat significantly changes the possible behaviors [2,4,15,29,34].
1.3. Outline of the paper. The rest of the paper is structured as follows. In Section 2 we summarize the notation that will be used throughout the paper. Next, in Section 3, we first extract the quasilinear structure of (1.3) and rewrite it as a fixed-point equation. This section contains the proof of the main Theorem 1.1 via Proposition 3.3. Section 4 contains the analysis of all the terms involved in the equation. We will use key cancellations provided by some "elementary bricks" that we will be able to extract through the symmetrization of the nonlinear terms. Finally, in Section 5 we illustrate our main theorem by numerically computing part of the branch of self-similar solutions.
2. Notations

2.1. General notations. In the following, we fix $\varphi\in C_c^\infty(-4/3,4/3)$, a nonnegative even function such that $\varphi=1$ on $[-3/4,3/4]$. For simplicity of notation, we let $\fint_{\pm\alpha,0}f$ denote an arbitrary function among
$$\fint_{\pm\alpha,0}f\in\Big\{f,\ \fint_\alpha f,\ \fint_{-\alpha}f\Big\}.$$
We will work mostly on the Fourier side. We define the Fourier transform as
$$\mathcal{F}(f)(\xi)=\widehat f(\xi):=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}f(y)e^{-iy\xi}\,dy.$$
We note that the Fourier transform of a real odd function is an odd function taking purely imaginary values. We define the Fourier multiplier $|\nabla|$ by $\mathcal{F}\{|\nabla|f\}(\xi)=|\xi|\widehat f(\xi)$.

Given an operator $T$, we let
$$\widehat T[f]=\mathcal{F}^{-1}T[\widehat f\,] \tag{2.1}$$
be its conjugation by the Fourier transform. To avoid functional analytic considerations, we say that a functional $g\mapsto N(g)$ is analytic around a function $L_s$ if for any choice of $g_1,g_2$ in the appropriate space (here $H^1$) the restricted function $N_{g_1,g_2}(t_1,t_2):=N(L_s+t_1g_1+t_2g_2)$ is analytic. In this case, we denote
$$N_1[g]:=\frac{d}{dt_1}N_{g,0}(0,0),\qquad N_{\ge2}[g]:=N(L_s+g)-N(L_s)-N_1[g],\qquad N_{\ge1}[g]:=N_1[g]+N_{\ge2}[g], \tag{2.2}$$
and we observe that
$$N_{\ge2}(L_s+g_2)-N_{\ge2}(L_s+g_1)=\int_0^1\frac{d}{dt_1}\frac{d}{dt_2}N_{g_1,g_2-g_1}(t_1,0)\,dt_1+\int_0^1(1-\theta)\frac{d^2}{d\theta^2}N_{g_1,g_2-g_1}(1,\theta)\,d\theta. \tag{2.3}$$
In these notations, the "center" $L_s$ is implicit, but since we will always consider functionals around the $L_s$ defined in (3.11), there should be no ambiguity.
2.2. Algebra of operators. We will use operators of the form
$$N(f)(y):=\int_0^\infty m(f;\alpha,y)\cdot G\Big(f,\fint_\alpha f,\fint_{-\alpha}f\Big)\,\frac{d\alpha}{\alpha}$$
associated to some multilinear function $f\mapsto m(f;\alpha,y)$ (i.e. multilinear in $f$ for each fixed $\alpha,y$) and some numerical analytic function $G$. For such operators, we compute that
$$\begin{aligned}\frac{d}{dt_1}\frac{d}{dt_2}N_{g_1,g_2}(t_1,t_2)&=\int_0^\infty m_2(g_1,g_2,L_{t_1,t_2};\alpha,y)\cdot G\Big(L_{t_1,t_2},\fint_\alpha L_{t_1,t_2},\fint_{-\alpha}L_{t_1,t_2}\Big)\,\frac{d\alpha}{\alpha}\\
&\quad+\int_0^\infty m_1(g_1,L_{t_1,t_2};\alpha,y)\cdot v_{g_2}\cdot\nabla G\Big(L_{t_1,t_2},\fint_\alpha L_{t_1,t_2},\fint_{-\alpha}L_{t_1,t_2}\Big)\,\frac{d\alpha}{\alpha}\\
&\quad+\int_0^\infty m_1(g_2,L_{t_1,t_2};\alpha,y)\cdot v_{g_1}\cdot\nabla G\Big(L_{t_1,t_2},\fint_\alpha L_{t_1,t_2},\fint_{-\alpha}L_{t_1,t_2}\Big)\,\frac{d\alpha}{\alpha}\\
&\quad+\int_0^\infty m(L_{t_1,t_2};\alpha,y)\cdot\nabla^2 G\Big(L_{t_1,t_2},\fint_\alpha L_{t_1,t_2},\fint_{-\alpha}L_{t_1,t_2}\Big)[v_{g_1},v_{g_2}]\,\frac{d\alpha}{\alpha},\end{aligned} \tag{2.5}$$
where $m_1(f,g,\dots,g)=d_gm\cdot f$ and similarly for $m_j$, and $L_{t_1,t_2}:=L_s+t_1g_1+t_2g_2$. Similarly,
$$\begin{aligned}\frac{d^2}{d\theta^2}N_{g_1,g_2-g_1}(1,\theta)&=\int_0^\infty m_2(g_2-g_1,g_2-g_1,L_\theta;\alpha,y)\cdot G\Big(L_\theta,\fint_\alpha L_\theta,\fint_{-\alpha}L_\theta\Big)\,\frac{d\alpha}{\alpha}\\
&\quad+2\int_0^\infty m_1(g_2-g_1,L_\theta;\alpha,y)\cdot v_{g_2-g_1}\cdot\nabla G\Big(L_\theta,\fint_\alpha L_\theta,\fint_{-\alpha}L_\theta\Big)\,\frac{d\alpha}{\alpha}\\
&\quad+\int_0^\infty m(L_\theta;\alpha,y)\cdot\nabla^2 G\Big(L_\theta,\fint_\alpha L_\theta,\fint_{-\alpha}L_\theta\Big)[v_{g_2-g_1},v_{g_2-g_1}]\,\frac{d\alpha}{\alpha}.\end{aligned} \tag{2.6}$$
3. Reduction to a fixed point estimate

3.1. Analysis of the quasilinear structure. We can extract the quasilinear part from (1.3). It will be expressed in terms of two main objects: the function $F$ and the operator $W$ defined by
$$F(t):=\big[1+t^2\big]^{-1},\qquad W[g](y):=\frac{1}{\pi}\int_{\mathbb{R}}\frac{\big(\fint_\alpha g(y)\big)^2-g^2(y)}{1+\big(\fint_\alpha g(y)\big)^2}\,\frac{d\alpha}{\alpha}, \tag{3.1}$$
and we obtain the following expression:

Lemma 3.1. The self-similar profile $k$ satisfies
$$|\nabla|k-S\Big[k+\frac{k^3}{3}\Big]+W[k]\partial_y k=R[k]+T[k], \tag{3.2}$$
where the semilinear terms are defined as
$$R[h]:=\frac{1}{\pi}\int_{\mathbb{R}}\partial_\alpha\{h(y-\alpha)\}\cdot\frac{(\fint_\alpha h)^2-h^2}{1+(\fint_\alpha h)^2}\,\frac{d\alpha}{\alpha},\qquad T[h]:=(1+h^2)\widetilde T[h],\quad \widetilde T[h]:=\frac{1}{\pi}\int_{\mathbb{R}}(\Delta_\alpha h)^2\cdot F'(\fint_\alpha h)\,d\alpha. \tag{3.3}$$

Proof of Lemma 3.1. Dividing by $1+k^2$, it suffices to show that (1.3) can be rewritten as
$$\big[F(k)|\nabla|-S\big]k+V[k]\partial_x k=F(k)R[k]+\widetilde T[k],\qquad V[g](y):=\frac{1}{\pi}\int_{\mathbb{R}}\Big\{F(g(y))-F\big(\fint_\alpha g(y)\big)\Big\}\,\frac{d\alpha}{\alpha}. \tag{3.4}$$
The only nontrivial part is the decomposition of the second term in (1.3). We can expand
$$\begin{aligned}\frac{1}{\pi}\int_{\mathbb{R}}\partial_y(\Delta_\alpha g)\cdot F(\fint_\alpha g)\,d\alpha&=-F(g)\cdot|\nabla|g+\frac{1}{\pi}\int_{\mathbb{R}}\partial_y(\Delta_\alpha g)\cdot\Big\{F(\fint_\alpha g)-F(g)\Big\}\,d\alpha\\
&=-F(g)\cdot|\nabla|g+\partial_y g(y)\,\frac{1}{\pi}\int_{\mathbb{R}}\Big\{F(\fint_\alpha g)-F(g)\Big\}\,\frac{d\alpha}{\alpha}-\frac{1}{\pi}\int_{\mathbb{R}}\partial_y g(y-\alpha)\cdot\Big\{F(\fint_\alpha g)-F(g)\Big\}\,\frac{d\alpha}{\alpha},\end{aligned}$$
and rearranging the terms, we arrive at (3.4). □
3.2. Study of the linear equation: Duhamel formula. Given a constant $\kappa>0$, we now consider the linear adjusted equation from (3.4),
$$(\kappa|\nabla|-S)k=p_1+\partial_y p_2, \tag{3.5}$$
for an odd function $k$. Taking the Fourier transform and using Duhamel's formula, we obtain the ODE
$$\partial_\xi\Big\{\xi e^{\kappa|\xi|}\widehat k\Big\}=e^{\kappa|\xi|}\widehat p_1+i\xi e^{\kappa|\xi|}\widehat p_2.$$
If we assume that $\xi\widehat k(\xi)$ is continuous at the origin and integrate from $0$, we find that
$$\widehat k(\xi)=C\,\frac{e^{-\kappa|\xi|}}{\xi}+T_{-1}[\widehat p_1]+T_0[\widehat p_2] \tag{3.6}$$
for some constant $C$, where the linear operators are given by
$$T_{-1}:g\mapsto\frac{1}{\xi}\int_0^\xi e^{\kappa(|\eta|-|\xi|)}g(\eta)\,d\eta,\qquad T_0:g\mapsto\frac{i}{\xi}\int_0^\xi\eta\, e^{\kappa(|\eta|-|\xi|)}g(\eta)\,d\eta. \tag{3.7}$$
In particular, under our assumptions, the odd solutions to the free equation $(\kappa|\nabla|-S)k=0$ with the condition $\lim_{y\to+\infty}k(y)=s$ are given by
$$L_{s,\kappa}(y):=\frac{2s}{\pi}\arctan(y/\kappa). \tag{3.8}$$
From now on, we shall restrict ourselves to the study of odd solutions.
3.2.1. Operator estimates. It remains to estimate the solution operators from (3.7).

Lemma 3.2. The operators
$$T_{-1}:g\mapsto\frac{1}{\xi}\int_0^\xi e^{\kappa(\eta-\xi)}g(\eta)\,d\eta,\qquad T_0:g\mapsto\frac{i}{\xi}\int_0^\xi\eta\, e^{\kappa(\eta-\xi)}g(\eta)\,d\eta,$$
defined for functions on $L^2_{loc}(0,\infty)$, satisfy the boundedness properties
$$\|\xi T_{-1}\|_{L^2\to L^2}+\|T_{-1}\|_{L^3\to L^2}\lesssim1,\qquad \|T_0\|_{L^2\to L^2}+\|T_0\|_{L^2(\xi^2d\xi)\to L^2(\xi^2d\xi)}\lesssim1.$$

Proof of Lemma 3.2. To control $T_{-1}$, we first use Hölder's inequality to bound
$$|T_{-1}[g](\xi)|\le\xi^{-\frac1p}\Big(\frac{1-e^{-\kappa\xi}}{\kappa\xi}\Big)^{\frac{1}{p'}}\Big(\int_0^\xi e^{\kappa(\eta-\xi)}|g(\eta)|^p\,d\eta\Big)^{\frac1p}\lesssim\|g\|_{L^p}\cdot\min\{\xi^{-\frac1p},\xi^{-1}\},$$
which shows the second bound. For the first, we compute by duality that
$$\langle\xi T_{-1}[g],h\rangle=\iint K(\xi,\eta)g(\eta)h(\xi)\,d\xi\,d\eta,\qquad K(\xi,\eta):=\mathbf{1}_{\{0\le\eta\le\xi\}}e^{\kappa(\eta-\xi)},$$
and Schur's test allows us to conclude. This also gives the second estimate on $T_0$, and the first one follows from a similar proof with kernel $K_1(\xi,\eta)=(\eta/\xi)K(\xi,\eta)$. □
3.3. Fixed point formulation.

3.3.1. Linearization at the self-similar profile. The key observation we will use is that for the solutions (3.8) of the linearized problem, the quasilinear part simplifies significantly, since $L_{s,\kappa}^2$ is almost constant away from a neighborhood of the origin:
$$L_{s,\kappa}^2(y)=s^2+p(y),\qquad p(y)=O\big(\langle y/\kappa\rangle^{-1}\big),\qquad L_{s,\kappa}^3=s^2L_{s,\kappa}+\big(L_{s,\kappa}^2-s^2\big)L_{s,\kappa}.$$
We can now linearize (3.2) at the constant $s$ to get an equation of the form (3.5) with
$$\kappa:=\big[1+s^2/3\big]^{-1}. \tag{3.9}$$
More precisely, we obtain
$$|\nabla|k-\big[1+s^2/3\big]Sk=\frac13S\big[(k^2-s^2)k\big]-W[k]\partial_yk+R[k]+T[k],$$
which we prefer to rewrite via a normal form as
$$\big[\kappa|\nabla|-S\big]h=\frac{\kappa^2}{3}|\nabla|\big[(k^2-s^2)k\big]-\kappa W[k]\partial_yk+\kappa R[k]+\kappa T[k],\qquad h:=k+\frac{\kappa}{3}(k^2-s^2)k. \tag{3.10}$$
3.3.2. Fixed point formulation. We can now seek solutions of (3.10) as perturbations of (3.8) when $s,\kappa$ are related as in (3.9). We seek solutions of the form
$$k(y)=L_s(y)+g(y),\qquad L_s(y):=\frac{2s}{\pi}\arctan(y/\kappa)=s\,\frac{2}{\pi}\arctan\big((1+s^2/3)y\big). \tag{3.11}$$
We note for later use that $L_s$ is a smooth function of $s$ and
$$\partial_sL_s=\frac{2}{\pi}\arctan(y/\kappa)+\frac{4}{3\pi}\,\frac{s^2y}{1+(y/\kappa)^2},\qquad \partial_s\big(L_s^2-s^2\big)=2s\Big(s^{-2}\big(L_s^2-s^2\big)+\frac{2}{\pi}\arctan(y/\kappa)\,\frac{4}{3\pi}\,\frac{s^2y}{1+(y/\kappa)^2}\Big). \tag{3.12}$$
We define $\pi_s$ by $\pi_s:=(L_s^2-s^2)L_s$. Then, plugging (3.11) into (3.10), we obtain
$$\begin{aligned}h&=L_s+(\kappa/3)\pi_s+\big(1+\kappa(L_s^2-s^2/3)\big)g+\kappa L_sg^2+(\kappa/3)g^3,\\
\big[\kappa|\nabla|-S\big]h&=\kappa^2|\nabla|\big[(1/3)\pi_s+(L_s^2-s^2/3)g+L_sg^2+g^3/3\big]\\
&\quad-\kappa W[L_s]\partial_yL_s-\kappa\big(W[L_s+g]-W[L_s]\big)\partial_y(L_s+g)-\kappa\partial_y\big(W[L_s]g\big)+\kappa g\,\partial_yW[L_s]\\
&\quad+\kappa\big(R[L_s]+R_1[g]+R_{\ge2}[g]+T[L_s]+T_1[g]+T_{\ge2}[g]\big).\end{aligned}$$
Thus, using (3.6) and the notation (2.1), we obtain two equations for $h$ (after identifying the element in the kernel for the second equation from the limit at $\infty$):
$$\begin{aligned}h&=L_s+(\kappa/3)\pi_s+\big\{3(1+s^2)/(3+s^2)+\kappa(L_s^2-s^2)\big\}g+\kappa L_sg^2+(\kappa/3)g^3\\
&=L_s+\kappa\widehat T_{-1}\big\{(\kappa/3)|\nabla|\pi_s-W[L_s]\partial_yL_s+R[L_s]+T[L_s]\big\}\\
&\quad+\kappa\widehat T_{-1}\big\{\kappa|\nabla|\big[(L_s^2-s^2/3)g\big]-\partial_y(W[L_s]g)+g\partial_yW[L_s]-W_1[g]\partial_yL_s+R_1[g]+T_1[g]\big\}\\
&\quad+\kappa\widehat T_{-1}\big\{\kappa|\nabla|(L_sg^2+g^3/3)-W_{\ge2}[g]\partial_yL_s-W^{hi}_{\ge1}[g]\partial_yg-\partial_y\big(gW^{lo}_{\ge1}[g]\big)+g\partial_yW^{lo}_{\ge1}[g]+R_{\ge2}[g]+T_{\ge2}[g]\big\},\end{aligned}$$
where $R_1,T_1,R_{\ge2},T_{\ge2}$ follow the convention in (2.2) and $W=W^{lo}+W^{hi}$ is a decomposition introduced later in (4.18). Combining the two equations for $h$, we arrive at the fixed-point formulation
$$\Pi=Ag+N(g), \tag{3.13}$$
with forcing term
$$\Pi:=(\kappa/3)\pi_s-(\kappa^2/3)\widehat T_{-1}\big[|\nabla|\pi_s\big]+\kappa\widehat T_{-1}\big[W[L_s]\partial_yL_s-R[L_s]-T[L_s]\big], \tag{3.14}$$
linear operator
$$\begin{aligned}Ag&:=-\Big(1+\frac{2s^2}{3+s^2}+\kappa(L_s^2-s^2)\Big)g+\kappa^2\widehat T_{-1}\big[|\nabla|(L_s^2-s^2/3)g\big]\\
&\quad+\kappa\widehat T_{-1}\big[g\partial_yW[L_s]-W_1[g]\partial_yL_s+R_1[g]\big]-\kappa\widehat T_0\big[W[L_s]g\big]+\kappa\widehat T_{-1}\big[T_1[g]\big],\end{aligned} \tag{3.15}$$
and nonlinearity
$$\begin{aligned}N(g)&:=-\kappa(L_s+g/3)g^2+\kappa^2\widehat T_{-1}\big[|\nabla|(L_sg^2+g^3/3)\big]\\
&\quad+\kappa\widehat T_{-1}\big[-W_{\ge2}[g]\partial_yL_s-W^{hi}_{\ge1}[g]\partial_yg+g\partial_yW^{lo}_{\ge1}[g]\big]-\kappa\widehat T_0\big[gW^{lo}_{\ge1}[g]\big]\\
&\quad+\kappa\widehat T_{-1}\big[R_{\ge2}[g]\big]+\kappa\widehat T_{-1}\big[T_{\ge2}[g]\big],\end{aligned} \tag{3.16}$$
where we used that $\widehat T_{-1}[\partial_yf]=\widehat T_0[f]$. In view of this, Theorem 1.1 follows from the following existence result.

Proposition 3.3. There exists $s_*$ such that for all $0<s<s_*$, there exists exactly one solution $g_s$ of (3.13) with (3.14)-(3.16) in a ball $B_{H^1}(0,s_*)$ in $H^1$. In addition, the mapping $s\mapsto g_s$ is $C^1$ in $H^1$ and
$$\|\partial_sg_s\|_{H^1}\lesssim s^2. \tag{3.17}$$
This proposition will be an easy consequence of the following quantitative estimates, proved in the next section.
Lemma 3.4. There holds that
}Π} H 1 À s 3 , }B s Π} H 1 À s 2 .
Lemma 3.5. There holds that
}A`Id} H 1 ÑH 1 À s 2 , }B s A} H 1 ÑH 1 À s.
In particular, there exists s 1 ą 0 such that A is invertible in H 1 for all 0 ď s ď s 1 .
Lemma 3.6. There holds that N p0q " 0 and, whenever }g 1 } H 1`}g 2 } H 1 ď 1,
}N pg 1 q´N pg 2 q} H 1 À }g 1´g 2 } H 1¨"s 2`}g 1 } 2 H 1`}g 2 } 2 H 1 ‰ .
Proof of Proposition 3.3. We are now ready to prove Proposition 3.3 via a fixed point formulation. For ε ą 0, we consider X :" tg P H 1 : }g} H 1 ď εu and we want to show that (3.13) has a unique solution in X, provided that 0 ă s ď s˚is small enough. We define the mapping
Φ : g Þ Ñ A´1 rΠ´N pgqs ,
which is well defined on H 1 since A is invertible for 0 ă s ! 1 by Lemma 3.5. Using Lemma 3.5 and Lemma 3.6, we see that Φ : X Ñ X provided 0 ă ε ď ε˚is small enough. Finally, decreasing ε˚and using Lemma 3.6 again, we see that Φ is a contraction on X. By the Banach fixed point theorem it has a unique fixed point in X.
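The contraction scheme underlying this proof can be illustrated on a toy discretization; everything below (the matrix A, the forcing Π, and the cubic nonlinearity) is an illustrative stand-in for the operators of (3.13), not the actual ones.

```python
import numpy as np

def solve_fixed_point(A, Pi, N, g0, tol=1e-12, max_iter=200):
    """Iterate Phi(g) = A^{-1}[Pi - N(g)] until the update is negligible."""
    g = g0.copy()
    for _ in range(max_iter):
        g_new = np.linalg.solve(A, Pi - N(g))
        if np.linalg.norm(g_new - g) < tol:
            return g_new
        g = g_new
    return g

# Toy instance mimicking the structure of Lemmas 3.4-3.6: A close to -Id,
# a small forcing Pi, and a nonlinearity N with N(0) = 0 and a small
# Lipschitz constant on a small ball, so that Phi is a contraction there.
A = -np.eye(2)
Pi = np.array([1e-3, 2e-3])
N = lambda g: 0.1 * g**3
g_star = solve_fixed_point(A, Pi, N, np.zeros(2))
```

The output satisfies g_star = A⁻¹[Π − N(g_star)] up to the iteration tolerance, mirroring the unique fixed point of Φ in X produced by the Banach fixed point theorem.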
In addition, we can study the smoothness of s Þ Ñ g s . Differentiating (3.13) with respect to s, we find that pA`d gs N qrB s g s s " B s Π´pB s Aqg s , and using Lemma 3.4, Lemma 3.5, and Lemma 3.6 again, we deduce that s Þ Ñ g s is C 1 and that (3.17) holds.
It remains to prove Lemma 3.4, Lemma 3.5 and Lemma 3.6. This will be done in the next section.
Quantitative analysis
4.1. Analysis of the elementary bricks. We need to control many terms. Fortunately, in many of them, the key cancellation is provided by simple "elementary bricks" which can be analyzed separately. We define the quadratic expressions for α ě 0:
δ α rf spyq :" p α f pyqq 2´p ´α f pyqq 2 , J α rhspyq :"ˆ α hpyq˙2´2h 2 pyq`ˆ ´α hpyq˙2 , (4.1)
and let δ α rf, gs and J α rf, gs denote the bilinear expression obtained by polarization. We see that δ α rf s is odd if f is odd, while J α rf s is even if f is either even or odd. We start by estimating the key cancellation.
Lemma 4.1. Given δ α rf, gs defined in (4.1), y ě 0 and α ě 0, there holds that
s´2|δ α rL s spyq| À 1 t0ďαďyu α lnxαy¨minty, y´2u`1 t0ďyďαu y mintα, α´1u,
s´2|B y δ α rL s spyq| À 1 t0ďαďyu α lnxαy¨xy´αy´1xyy´2`1 t0ďyďαu mintα, α´1u, (4.2)
and |δ α rL s , gs| À s}g} H 1¨mintα 1 2 , α´1 2 u, |δ α rg, hs| À }g} H 1 }h} H 1 mintα 1 2 , α´1u.
(4.3)
In addition, |δ α rL s , gs| À s ż t|t|ďαu " |g 1 py`tq|`xαy´1|gpy`tq| ‰ dt, |δ α rg 1 , g 2 s| À }g 2 } H 1 ż t|t|ďαu mintα´1 2 , α´3 2 u|g 1 py`tq|dt,
}g 1 } H 1 ż t|t|ďαu mintα´1 2 , α´3 2 u|g 2 py`tq|dt (4.4)
and a derivative brings powers of α´1, |αB y δ α rL s , gs|pyq À s´|gpy`αq|`|gpyq|`|gpy´αq|`1 α ż α t"´α |gpy`tq|dt¯. (4.5)
We see that the first term in δ vanishes to first order for odd f if α " 1, while the second term contains a difference and vanishes to first order for α ! 1 when g is smooth. The bounds in (4.2) follow directly from the formula (4.6) and the bounds in (4.7). For general odd functions, we obtain thaťˇˇˇ1
α ż ά α gpy`tqdtˇˇˇˇÀ mint ? y`α, xαy´1 2 u}g} H 1 ,ˇˇˇ1 α ż α t"0 tgpy`tq´gpy´tqu dtˇˇˇˇÀ ż t|t|ďαu |g 1 py`tq|dt,ˇˇˇ1 α ż α t"0 tgpy`tq´gpy´tqu dtˇˇˇˇÀ mintα 1 2 , α´1 2 u}g} H 1 ,(4.8)
from which we deduce (4.3) and (4.4). In addition, we observe that δrB x g 1 , g 2 s " g 1 py´αq´g 1 py`αq α¨1 α ż α t"0 tg 2 py`tq´g 2 py´tqu dt, δrg 1 , B x g 2 s "´1 α ż ά α g 1 py`tqdt¨g 2 py`αq´2g 2 pyq`g 2 py´αq α , and using (4.7), we easily arrive at (4.5).
Lemma 4.2.
Let J α be defined as in (4.1). For 1 ď p ď 8, there holds that
} α h} L 8 α L p y`} αB αˆ ˘α h˙} L 8 α L p y À }h} L p , }J α rhs} L 8`}αB α J α rhs} L 8 À }h} 2 L 8 , (4.9)
and for L s we have the more precise bound, s´2|J α rL s spyq| À α lnxαy¨xyy´21 t0ďαďyu`m intα 2 , 1u1 t0ďyďαu À mintα, 1u, (4.10)
while s´1|J α rh, L s spyq| À ż |t|ďα " |hpy`tq|`|h 1 py`tq| ‰ dt,
|J α rh 1 , h 2 spyq| À ÿ ta,bu"t1,2u }h a } L 8 ż |t|ďα " |h b py`tq|`|h 1 b py`tq| ‰ dt.
(4.11)
Proof of Lemma 4.2. For (4.9), the estimates in ffl α h follow from direct computations. They directly imply the estimates on J α . To analyze J α , we can rewrite J α rhs " Jrh, hs where 2 Jrg, hs " 1 2α
ż α t"0 pgpy`tq´2gpyq`gpy´tqqdt¨1 α ż α t"0 phpy`tq`2hpyq`hpy´tqqdt 1 2ˆ1 α ż α t"0 pgpy`tq´gpy´tqqdt˙¨ˆ1 α ż α t"0
phpy`tq´hpy´tqqdt˙. (4.12)
For (4.10), the second term can be estimated using (4.7), while for the first term, we rewrite
1 α ż α t"0 phpy`tq´2hpyq`hpy´tqqdt " 1 α ż α u"´α pα´|u|q 2 h 2 py`uqdu. (4.13)
It remains to check (4.11). For the linear term, this follows from (4.12) and (4.7) for the second term, while for the first term, we use (4.13) in case g " L s andˇˇˇˇ1 α ż α t"0 phpy`tq´2hpyq`hpy´tqqdtˇˇˇˇď ż t|t|ďαu |h 1 py`tq|dt else. For the quadratic estimate, we proceed similarly for the first term in (4.12) and we use (4.8) for the second term.
4.2. Control on the nonlinear operators. In this section, we will upgrade the bounds on the basic bricks in Section 4.1 to bounds on more complicated operators. Since they will not be multilinear, we will only operate under the assumption that
}g 1 } H 1`}g 2 } H 1`}g 3 } H 1 ď 1. (4.14)
Using the polarization identities
2pa`b`´a´b´q " pa`´a´qpb``b´q`pa``a´qpb`´b´q, 2pa`b``a´b´q " pa``a´qpb``b´q`pa`´a´qpb`´b´q, (4.17)
we can rewrite W rgs " W rg, g; gs, where
W rg 1 , g 2 ; g 3 s " 1 π ż R`δ α rg 1 , g 2 s¨F p α g 3 q¨F p ´α g 3 q¨p1`g 2 3 q dα α " ż R`δ α rg 1 , g 2 s¨Gpg 3 , α g 3 , ´α g 3 q dα α
for some function G analytic in a neighborhood of p0, 0, 0q. In fact, to properly control the linear part, it will be convenient to decompose further W " W lo`W hi , where
W hi rg 1 , g 2 ; g 3 s " ż R`δ α rg 1 , g 2 s¨Gpg 3 , α g 3 , ´α g 3 qϕpαq dα α , W lo rg 1 , g 2 ; g 3 s " ż R`δ α rg 1 , g 2 s¨Gpg 3 , α g 3 , ´α g 3 qp1´ϕpαqq dα α . (4.18)
In addition, we have the linear estimates
}W 1 rgs} L 8`}W hi 1 rgs} L 2`}B y pW lo 1 rgsq} L 2 À s}g} H 1 . (4.20)
Finally, we have the nonlinear estimates assuming (4.14)
}Wě 2 rgs} L 2 XL 8 À }g} 2 H 1 , }Wě 2 rg 1 s´Wě 2 rg 2 s} L 8 À ps`}g 1 } H 1`}g 2 } H 1 q}g 1´g2 } H 1 .
(4.21)
Proof of Lemma 4.3. We start from the formula (4.18). For (4.19), it suffices to show that
} ż R`| δ α rL s s| dα α } L 8`} ż R`| xyyB y δ α rL s s| dα α } L 8 À s 2 ,(4.22)
which follows from (4.2) and the bounds
}L s } L 8`}xyyB y L s } L 8 ď s.
For (4.20), inspecting (2.4), and using (4.9) and (4.22), we see that it suffices to show that
} ż R`| δ α rL s , gs| dα α } L 8`} ż R`| δ α rL s , gs|ϕpαq dα α } L 2`} ż R`| B y δ α rL s , gs|p1´ϕpαqq dα α } L 2 À s}g} H 1 ,(4.23)
and
} ż R`| δ α rL s s|¨| ˘α g|ϕpαq dα α } L 2`} ż R`| δ α rL s s|¨|B y ˘α g|p1´ϕpαqq dα α } L 2 À s}g} L 2 .
We start with the above bound. First, we see from (4.2) that s´1|δ α rL s s| À s mint1, a |α|u and we compute that
} ż R`| δ α rL s s|¨| ˘α g|ϕpαq dα α } L 2 À s ij t|t|ď|α|ď2u }gpy`tq} L 2 ϕpαq dα α 3 2 dt À s}g} L 2 ,
and similarly,
} ż R`| δ α rL s s|¨|B y ˘α g|p1´ϕpαqq dα α } L 2 À s ż tαě1{2u " }gpy¯αq} L 2 y`} gpyq} L 2 y ı dα α 2 À s}g} L 2 .
The first term in (4.23) can be controlled through (4.3), the second term can be controlled through (4.4) and the last term through (4.5). Finally, we consider (4.21). Inspecting (2.3), (2.5) and (2.6), we see that we need to show
} ż R`| δ α rg 1 , g 2 s| dα α } L 8 XL 2 À }g 1 } H 1 }g 2 } H 1 , } ż R`| δ α rL s , g 1 s¨ ˘α,0 g 2 | dα α } L 8 XL 2 À s}g 1 } H 1 }g 2 } H 1 , } ż R`| δ α rL s s¨ ˘α,0 g 1¨ ˘α,0 g 2 | dα α } L 8 XL 2 À s 2 }g 1 } H 1 }g 2 } H 1 .
(4.24)
The L 8 bounds follow from (4.3) and (4.22) together with the simple bound }g} L 8 À }g} H 1 . The L 2 bound follows from (4.4) and (4.3) for the first two estimates while for the last, we see that ż
R`| δ α rL s s¨ ˘α,0 g 1¨ ˘α,0 g 2 | dα α À I 1`I2`I3 , I j :"
¡ Sj |δ α rL s spyq|¨|g 1 py`tqg 2 py`uq| dα α 3 dtdu, S 1 :" t|u| ď |t| ď α ď yu, S 2 :" t|u|, y ď |t| ď αu, S 3 :" t|u| ď |t| ď y ď αu, and using (4.2), we can compute that
}I 1 } L 2 À s 2 ij t|u|ď|t|u lnxty txty 2¨} g 1 py`tqg 2 py`uq} L 2 y dtdu À s 2 }g 1 } L 2 }g 2 } L 8 ,
while |I 2 | À s 2 ij xty´2|g 1 py`tqg 2 py`uq|dtdu À s 2 }g 2 } L 2 ij xty´3 2 |g 1 py`tq|dt, |I 3 | À s 2 xyy´2 ij t|u|ď|t|ďyu |g 1 py`tqg 2 py`uq|dudt À s 2 xyy´1}g 1 } L 2 }g 2 } L 2 , and we see that I 2 and I 3 are in L 2 . This finishes the proof.
We now turn to the first semilinear term.
Lemma 4.4. Assume (4.14) and consider R defined in (3.3). There holds that
}RrL s s} L 4 3 XL 2 À s 3 , }R 1 rgs} L 4 3 XL 2 À s 2 }g} H 1 , }R ě2 rg 1 s´R ě2 rg 2 s} L 1 XL 2 À " s 2`} g 1 } 2 H 1`}g2} 2 H 1 ‰ }g 2´g1 } H 1 . (4.25)
Proof of Lemma 4.4. From (4.16) and (4.17) we deduce that 4pa`b`c`´a´b´c´q " pa`´a´qpb``b´qpc``c´q`pa``a´qpb`´b´qpc``c´q pa``a´qpb``b´qpc`´c´q`pa`´a´qpb`´b´qpc`´c´q. We can use (4.15), (4.26) and (4.16) to decompose
Rrhs "´1 π ż R h 1 py´αq¨p ffl α hq 2´h2 1`p ffl α hq 2 dα α " 1 4π pR 1´2 R 2´R3 q,
where R j " R j rh; hs is given by
R 1 rh 1 ; h 2 s " ż R`B α th 1 py`αq`h 1 py´αqu¨J α rh 2 s¨pF α`F´α q dα α , R 2 rh 1 ; h 2 s " ż R`B α th 1 py`αq´h 1 py´αqu¨δ α rh 2 spyq¨p1`h 2 2 q pF α F´αq dα α , R 3 rh 1 ; h 2 s " ż R`B α th 1 py`αq`h 1 py´αqu¨rδ α rh 2 spyqs 2¨p F α F´αq dα α ,
with the notation
F α :" F p α h 2 q.
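The trilinear polarization identity used at the start of this proof can be spot-checked numerically; this is a sanity check only, with randomly drawn real values standing in for a˘, b˘, c˘.

```python
import random

# Check: 4(a+b+c+ - a-b-c-) = (a+-a-)(b++b-)(c++c-) + (a++a-)(b+-b-)(c++c-)
#        + (a++a-)(b++b-)(c+-c-) + (a+-a-)(b+-b-)(c+-c-)
random.seed(0)

def identity_holds(ap, am, bp, bm, cp, cm):
    lhs = 4 * (ap * bp * cp - am * bm * cm)
    rhs = ((ap - am) * (bp + bm) * (cp + cm)
           + (ap + am) * (bp - bm) * (cp + cm)
           + (ap + am) * (bp + bm) * (cp - cm)
           + (ap - am) * (bp - bm) * (cp - cm))
    return abs(lhs - rhs) < 1e-9

ok = all(identity_holds(*[random.uniform(-1, 1) for _ in range(6)])
         for _ in range(100))
```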
From now on, we will denote F α " F p ffl˘α hq without distinction on h. We start with the first estimate in (4.25). First, using (4.2) we observe that
|R 2 rL s ; L s s|`|R 3 rL s ; L s s| À s ż R`x y´αy´2|δ α rL s s| dα α À s 3 xyy´1. (4.27)
In addition, using that
L 1 s py´αq´L 1 s py`αq "
2sκ π 4yα pκ 2`p y´αq 2 qpκ 2`p y`αq 2 q , and (4.10), we see that
|R 1 rL s ; hspyq| À s}h} 2 L 8 y ż dα pκ 2`p y´αq 2 qpκ 2`p y`αq 2 q À s}h} 2 L 8¨yxyy´2.
(4.28)
We now turn to the other estimates in (4.25) and we start with r R j rL s`g s :" R j rL s ; L s`g s. Inspecting (2.4), we see that the linear component follows from (4.27), (4.28) and from the bound
s ż R`x y´αy´2|δ α rL s , gs| dα α À s 2 }g} H 1¨xyy´3 {2 ,
which follows from (4.3). Similarly, inspecting (2.3) and (2.5)-(2.6), we see that the higher order terms can be controlled similarly since we can easily estimate
s ż R`x y´αy´2|δ α rg 1 , g 2 s| dα α À s}g 1 } H 1 }g 2 } H 1¨xyy´3 {2
using (4.3) again.
We now consider R j rg 1 ; L s s. Using Cauchy-Schwarz and (4.2), (4.10), we see that
R 1,a rhs :" ż R`B y thpy`αq´hpy´αqu¨J α rL s s¨pF α`F´α q¨ϕp4xyy´1αq dα α , R 2,a rhs :" ż R`B y thpy`αq`hpy´αqu¨δ α rL s s¨p1`L 2 s qpF α F´αq¨ϕp4xyy´1αq dα α ,
satisfy |R 1,a rhs|`|R 2,a rhs| À s 2 }B y h} L 2¨xyy´3 2 lnxyy, (4.29) which gives an acceptable contribution. On the other hand, integrating by parts and using (4.9), we see that
R 1,b " ż R`B α thpy`αq`hpy´αqu¨J α rL s s¨pF α`F´α q¨p1´ϕp4xyy´1αqq dα α "´ż R`t hpy`αq`hpy´αqu¨αB α " J α rL s s¨pF α`F´α q¨1´ϕ p4xyy´1αq α * dα α satisfies |R 1,b pyq| À s 2 }h} L 2 xyy´3 2
and similarly for R 2,b . The term R 3 can be treated in the same way as R 2 . To finish the proof, it only remains to show }R j rg 1 ; L s`g1 s´R j rg 1 ; L s s´pR j rg 2 ; L s`g2 s´R j rg 2 ; L s sq} L 1 XL 2 À "
s 2`} g 1 } 2 H 1`}g2} 2 H 1 ‰ }g 1´g2 } H 1 .
We start with the case j " 2. The case j " 3 is similar and will not be detailed. We decompose R 2 rg 1 ; L s`g2 s " R 2,lo rg 1 ; L s`g2 s`R 2,hi rg 1 ; L s`g2 s,
R 2,hi rg 1 ; hs :" ż R`B α tg 1 py`αq´g 1 py´αqu¨δrh, hspyq¨p1`h 2 q pF α F´αq¨ϕpαq dα α , R 2,lo rg 1 ; hs :" ż R`t g 1 py`αq´g 1 py´αqu¨αB α " δrh, hspyq¨p1`h 2 q pF α F´αq¨1´ϕ pαq α * dα α .
Inspecting (2.3) and (2.5)-(2.6), we see that to control R 2,hi , it suffices to show that
} ż R`| g 1 1 py˘αq|¨|δ α rL s spyq|ϕpαq dα α } L 1 XL 2 À s 2 }g 1 1 } L 2 , } ż R`| g 1 1 py˘αq|¨|δrg 2 , L s spyq|ϕpαq dα α } L 1 XL 2 À s}g 1 1 } L 2 }g 2 } H 1 , } ż R`| g 1 1 py˘αq|¨|δrg 2 , g 3 spyq|ϕpαq dα α } L 1 XL 2 À }g 1 1 } L 2 }g 2 } H 1 }g 3 } H 1 .
The first estimate follows by Cauchy-Schwarz as in (4.29); the last two follow using (4.3) and (4.4). Independently,
|R 2,lo ě1 rg 1 ; L s`g2 s| À ÿ rs`}g 2 } L 8 s ż R`| g 1 py˘αq¨"| ˘α,0 g 2 pyq|`|αB α ˘α g 2 pyq| ¨1´ϕ pαq α 2 dα,
so that this term can easily be handled using (4.9). It remains to consider R 1 rg 1 ; L s`g2 s. The broad ideas are the same as for R 2 , but we need to be slightly more careful, because J α has weaker properties than δ α and we need to take advantage of the difference of derivatives in g 1 to compensate for this.
We claim that it suffices to show that for any dyadic number A ą 0, there exists a decomposition R 1 rg 1 ; L s`g2 s " R 1,hi A rg 1 ; L s`g2 s`R 1,lo A rg 1 ; L s`g2 s such that }R 1,˚r g; L s`h s} L 1 XL 2 À Cp˚, A, gq "
s 2`} h} 2 H 1 ‰ , }R 1,˚r g; L s`h2 s´R 1,˚r g; L s`h1 s} L 1 XL 2 À Cp˚, A, gq}h 2´h1 } H 1 rs`}h 1 } H 1`}h 2 } H 1 s ,(4.30)
where Cphi, A, gq :" A´1xAy´1 2 }g 2 } L 2 , Cplo, A, gq :" A}g} L 2 .
(4.31)
Indeed, to obtain (4.25), we only need to replace Cp˚, A, gq by }g} H 1 . To do this, we decompose g " ř B g B using a Littlewood-Paley projection such that
}g B } L 2 À mint1, B´1u}g} H 1 , }g 2 B } L 2 À mintB 2 , Bu}g} H 1 ,
and for each g B , we choose A :" B δ`B1´δ for 0 ă δ ă 1{5 and we compute that ÿ B tCplo, A, g B q`Cphi, A, g B qu À ÿ B mintB δ , B´δu}g} H 1 À }g} H 1 .
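For completeness, the dyadic summation in the last display can be carried out explicitly: writing B = 2^j with j ranging over the integers,

```latex
\sum_{B \in 2^{\mathbb{Z}}} \min\{B^{\delta}, B^{-\delta}\}
  = \sum_{j \le 0} 2^{j\delta} + \sum_{j \ge 1} 2^{-j\delta}
  = \frac{1}{1 - 2^{-\delta}} + \frac{2^{-\delta}}{1 - 2^{-\delta}}
  = \frac{1 + 2^{-\delta}}{1 - 2^{-\delta}} ,
```

which is finite for every fixed 0 ă δ ă 1{5, so the sum over blocks contributes a constant depending only on δ.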
It now remains to prove (4.30)-(4.31). We decompose, for A ą 0 dyadic, R 1 rg 1 ; L s`g2 s " R 1,hi A rg 1 ; L s`g2 s`R 1,lo A rg 1 ; L s`g2 s, where
R 1,hi A rg 1 , hs :" ż R` g 1 1 py`αq´g 1 1 py´αq (¨J α rhs¨pF α`F´α q¨ϕpAαq dα α , R 1,lo A rg 1 , hs :" ż R`t g 1 py`αq`g 1 py´αqu¨αB α " J α rhs¨pF α`F´α q¨1´ϕ pAαq α * dα α . (4.32)
To control the contribution of R 1,hi A , it suffices to show that
} ż R`żt|t|ďαu |g 2 1 py`tq|¨|J α rL s s|¨| ˘α,0 g 2 pyq|¨ϕpAαq dα α } L 1 XL 2 À s 2 A´1p1`Aq´1}g 2 1 } L 2 }g 2 } H 1 , } ż R`żt|t|ďαu |g 2 1 py`tq|¨|J α rL s , g 2 s|¨ϕpAαq dα α } L 1 XL 2 À sA´1p1`Aq´1 2 }g 2 1 } L 2 }g 2 } H 1 , } ż R`żt|t|ďαu |g 2 1 py`tq|¨|J α rg 2 , g 3 s|¨ϕpAαq dα α } L 1 XL 2 À A´1p1`Aq´1 2 }g 2 1 } L 2 }g 2 } H 1 }g 3 } H 1 .
The first estimate follows from (4.9) and the bound (4.10). The second and third follow from (4.9) and (4.11). We now consider the contribution of R 1,lo A . Inspecting (4.32), it suffices to show that ż
R`} g 1 py˘αq ˘α g 2 pyq r|J α rL s s|`|αB α pJ α rL s sq|s } L 2 XL 1¨p1´ϕpAαqq dα α 2 À s 2 A}g 1 } L 2 }g 2 } H 1 , ż R`} g 1 py˘αq¨ˆαB α ˘α g 2 pyq˙|J α rL s s|} L 2 XL 1¨p1´ϕpAαqq dα α 2 À s 2 A}g 1 } L 2 }g 2 } H 1 , ż R`} g 1 py˘αq r|J α rg 2 , g 3 s|`|αB α pJ α rg 2 , g 3 sq|s } L 2 XL 1¨p1´ϕpAαqq dα α 2 À A}g 1 } L 2 }g 2 } H 1 }g 3 } L 8 ,
where in the last estimate we consider g 3 " L s or g 3 P H 1 . These estimates all follow from (4.9) and direct integration. In addition, there holds that
}T ě2 rg 2 s´T ě2 rg 1 s}
L 4 3 XL 2 À " s 2`} g 1 } 2 H 1`}g2} 2 H 1 ‰ }g 2´g1 } H 1 . (4.34)
Besides, the same bounds hold if we replace T rhs by r T rhs " p1`h 2 qT rhs.
Proof of Lemma 4.5. We have that
T rhs " T rh, h; hs "´4 π ż R p∆ α hq¨p∆ α hq¨ α h¨F 2 p α hqdα,
and using that |∆ α L s | À spxαy`xyyq´1, |∆ α g| À mintα´1 2 , α´1u}g} H 1 , we see that a 1`y 2 |T rL s , L s ; gspyq| À s 2 ż a 1`y 2 1`α 2`y2 dα¨}g} L 8 À s 2 }g} L 8 , a 1`y 2 |T rL s , g; hspyq| À s ż a 1`y 2 a 1`α 2`y2 mintα´1 2 , α´1udα¨}g} H 1 }h} L 8 À s}g} H 1 }h} L 8 lnxyy.
and we deduce (4.33). For (4.34) it suffices to also show that } ż R |p∆ α g 1 q¨p∆ α g 2 q|dα} L 4 3 XL 2 À }g 1 } H 1 }g 2 } H 1 , but this follows from
|∆ α gpyq| ď 1 |α| ż t|t|ď|α|u |g 1 py`tq|dt, }∆ α g} L 2 y À |α|´1}g} L 2 ,
which gives }p∆ α g 1 qp∆ α g 2 q} L 1 y XL 2 y À mintα´1 2 , α´3 2 u}g 1 } H 1 }g 2 } H 1 , and the proof is complete.
p 2 π arctanpzqq 2´1 " p1´2 π arctanp 1 z qq 2´1 "´4 π arctanp 1 z q`p 2 π arctanp 1 z qq 2 ,
we see that, for z " p1`s 2 {3qy ě 1,
π s "´s 3 4 π arctanpzq¨2 π arctanp 1 z q`s 3 p 2 π arctanp 1 z qq 2¨2 π arctanpzq,
and in particular, π s is C 8 with π s pyq " O yÑ0 pyq and |B a y π s pyq| " O yÑ8 pxyy´1´aq. We then observe that
}π s } H 1 À s 3 .
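The arctangent identity behind this computation, arctanpzq`arctanp1{zq " π{2 for z ą 0, can be verified numerically (a sanity check only):

```python
import math
import random

# With a = (2/pi) arctan(z) and b = (2/pi) arctan(1/z) we have a + b = 1, so
# a^2 - 1 = (1 - b)^2 - 1 = -2b + b^2, which is the identity used for pi_s.
random.seed(1)

def identity_holds(z):
    a = (2 / math.pi) * math.atan(z)
    b = (2 / math.pi) * math.atan(1 / z)
    return abs((a * a - 1) - (-2 * b + b * b)) < 1e-12

ok = all(identity_holds(random.uniform(0.1, 10.0)) for _ in range(100))
```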
In addition, using (4.19), (4.25), (4.33) and direct computations, we see that }W rL s sB y L s } L 1 XH 1`}RrL s s} L 4 3 XL 2`} T rL s s} L 4 3 XL 2 À s 3 , and using Lemma 3.2, we see that } p T´1pW rL s sB y L s q} H 1`} p T´1RrL s s} H 1`} p T´1T rL s s} H 1 À s 3 .
4.3.2.
Proof of Lemma 3.5. The proof follows by Neumann series. Using Lemma 3.2, we see that
}pL 2 s´s 2 qg} H 1`} p T´1p|∇|pL 2 s´s 2 {3qgq} H 1 À s 2 }g} H 1 .
In addition, using Lemma 3.2, and Lemma 4.3, we see that
} p T 0 rW rL s sg} H 1 À }W rL s s} L 8 }g} H 1`}B y W rL s s} L 2 }g} L 8 , } p T´1rgB y W rL s ss} H 1 À }g} L 2 XL 8 }B y W rL s s} L 2 , } p T´1rW 1 rgsB y L s s} H 1 À }W 1 rgs} L 8 }B y L s } L 1 XL 2 .
Finally, using Lemma 4.4 and Lemma 4.5, we see that
} p T´1rR 1 rgss} H 1`} p T´1rT 1 rgss} H 1 À s 2 }g} H 1 .
This gives the estimate of A`Id.
Next, we compute that B s Args "´"12s{p3`s 2 q 2`κ B s pL 2
s´s 2 q ‰ g`κ 2 pT´1r|∇|p4s{3}L s g 2 } H 1`}g 3 } H 1`} p T´1 " |∇|pL s g 2`g3 {3q ‰ } H 1 À }L s } L 8 }g} 2 H 1`}g} 3 H 1 ,
and since these expressions are multilinear they extend to differences. In addition, using Lemma 4.3, we see that
}pW ě2 rg 2 s´W ě2 rg 1 sqB y L s } L 1 XL 2 À }W ě2 rg 2 s´W ě2 rg 1 s} L 8 }B y L s } L 1 XL 2 À s " s 2`} g 1 } 2 H 1`}g2} 2 H 1 ‰ }g 2´g1 } H 1 , }W ě1 rg 1 sB y pg 2´g1 q} L 1 XL 2 À }W ě1 rg 1 s} L 2 XL 8 }B y pg 2´g1 q} L 2 , }pW ě1 rg 2 s´W ě1 rg 1 sqB y g 2 } L 1 XL 2 À }W ě1 rg 2 s´W ě1 rg 1 s} L 2 XL 8 }B y g 2 } L 2 .
Similarly, using Lemma 4.4 and Lemma 4.5,
}R ě2 rg 2 s´R ě2 rg 1 s} L 1 XL 2`}T ě2 rg 2 s´T ě2 rg 1 s} L 1 XL 2 À " s 2`} g 1 } 2 H 1`}g2} 2 H 1 ‰ }g 2´g1 } H 1 ,
and the proof is complete.
Numerical Results
In this section, we describe how to numerically compute the branch of solutions k s starting from the zero solution. Their existence for small s was proved in Theorem 1.1. See Figures 1 and 2 below for different depictions of the solutions.
The main advantage of working with the formulation of equation (1.3), as opposed to a self-similar equation for the function f , is that all the quantities involved are bounded. Nonetheless, there are a few technicalities, which we outline below.
The first step consists in changing variables so as to transform the infinite domain into a finite one. We do so by setting y " tanpzq andkpzq :" kptanpzqq " kpyq, so that the domain of definition is mapped onto "´π 2 , π 2 ‰ . This change of variables has been used successfully for other problems in fluid mechanics (see for example [40] and references therein).
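A minimal sketch of this compactified grid (illustrative values only; N " 129 matches the discretization used below):

```python
import numpy as np

# Uniform grid in z on [0, pi/2]; y = tan(z) then covers the half-line
# [0, infinity), with the endpoint z = pi/2 representing y = infinity.
N = 129
z = np.linspace(0.0, np.pi / 2, N)
y = np.tan(z[:-1])  # drop the endpoint, which maps to y = infinity

# In the compactified variable, the linear approximation (2/pi) s arctan(y)
# becomes the straight line (2/pi) s z, since arctan(tan z) = z on [0, pi/2).
s = 1.0
k_tilde = (2 / np.pi) * s * np.arctan(y)
```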
Moreover, we exploit the symmetry to gain an extra cancellation at π 2 . After performing the change of variables and a lengthy calculation, (1.3) is transformed into
0 " sinpzq cospzqk 1 pzq`1 π ż π 2 0k
1 pzq cospzq 2´k1 pyq cospyq 2 tanpzq´tanpyq secpyq 2 1`p∆ yk pzqq 2 dý 2 π ż π 2 0˜k pzq´kpyq tanpzq´tanpyq¸2∆ yk pzq p1`p∆ yk pzqq 2 q 2 secpyq 2 dy`2 π ż π 2 0˜k pzq`kpyq tanpzq`tanpyq¸2∆´yk pzq p1`p∆´ykpzqq 2 q 2 secpyq 2 dy. (5.1)

We performed continuation in s in increments of ∆s " 0.1 and did 70 iterations, using as initial guessk " ∆s 2 π z (the linear approximation) at the first iteration, and for the subsequent ones the result of the previous iteration plus ∆s 2 π z, to ensure that the updated boundary condition at z " π 2 is satisfied. In the range we computed, we did not see any impediment to advancing in s, other than computation time, although the errors become larger as s grows and the algorithm may take one or two more iterations to converge for s " 5 than for s " 0.1.
To compute a solution for a fixed s, we used the Levenberg-Marquardt algorithm [39, 41]. Our discretization variables consist of the values ofk at the gridpoints z i " π 2pN´1q i, where we are using thatk is odd to solve for positive z only. Other strategies, such as non-uniform meshes (concentrating points towards 0) or s-dependent compactifications, would perhaps improve the performance for large s, sincek 1 p0q grows with s (see Figure 1), but we did not explore them here. We took N " 129, and the discrete system we solved was equation (5.1) evaluated at z i , i " 1, . . . , N´1, plus the boundary conditionskp0q " 0 andk`π 2˘" s. In order to compute the derivatives, we calculated a spline of degree 4 interpolating through the discrete grid and approximated the derivatives ofk by the derivatives of the spline. To perform the integration in (5.1), we integrated in the variable y using trapezoidal integration on a grid of 10N " 1290 points. We also tried finer grids and saw virtually no difference in the results. In order to get stable results, we took care to split the domain into three regions (z « π 2 , z « y and the rest) and to compute the integrand separately in each of them, carefully taking the limits. For example, note that despite being bounded at z " π 2 , there is a strong instability coming from the fact that the integrand is of the form 8´8 if not dealt with properly. The inner integrals in (5.1) (i.e. the ∆ integrals) were computed using an adaptive Gauss-Kronrod quadrature with 15 points, also taking care of the limits at π 2 and at y.

Figure 1. Plot of the difference between the numerically computed solutionskpzq and the linear approximation 2 π sz " 2 π s arctanpyq for s " 1, 2, 4, 7. The branch continues beyond what is calculated.
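The continuation strategy can be sketched as follows; the residual, Jacobian, and parameters are toy stand-ins for the discretized system (5.1), and the hand-rolled damped Gauss-Newton loop only mimics the Levenberg-Marquardt solver of [39, 41]:

```python
import numpy as np

def lm_solve(F, J, x0, lam=1e-3, tol=1e-12, max_iter=100):
    """Damped Gauss-Newton iteration on F(x) = 0 (Levenberg-Marquardt style)."""
    x = x0.copy()
    for _ in range(max_iter):
        r, Jx = F(x), J(x)
        step = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -Jx.T @ r)
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x

# Continuation in s with increments 0.1, warm-starting each solve from the
# previous branch point plus the linear increment, as in the text.  The toy
# residual F_s(x) = x + 0.1 x^3 - s stands in for the discretized (5.1).
branch, x = [], np.zeros(3)
for s in np.arange(0.1, 1.05, 0.1):
    F = lambda x, s=s: x + 0.1 * x**3 - s
    J = lambda x: np.diag(1.0 + 0.3 * x**2)
    x = lm_solve(F, J, x + 0.1)  # previous solution plus increment as guess
    branch.append(x.copy())
```

Each warm start lands close to the next point on the branch, so only a few damped Newton steps are needed per value of s.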
pg, L s , . . . , L s ; α, yq¨GpL s ,
Proof of Lemma 4.1. The key observation is that 2δ α rf, gs " δrf, gs`δrg, f s where δrf, gs :
s py`tq´L s py´tqu dtˇˇˇˇÀ 1 t0ďαďyu α lnxαy¨xyy´2`1 t0ďyďαu mintα, 1u,
(
dtˇˇˇˇÀ α minty, xy´αy´1xyy´2u1 t0ďαďyu`mintα 2 , α´1u1 t0ďyďαu .
4.2.2. Control on symmetrized operators.
Lemma 4.3. There holds that, for˚P thi, lou,
}W˚rL s s} L 8`}xyyB y W˚rL s s} L 8 À s 2 , (4.19)
Lemma 4.5. Assume (4.14) and recall T and r T defined in (3.3). There holds that
}T rL s s} L 4 3 XL 2 À s 3 , }T 1 rgs} L 4 3 XL 2 À s 2 }g} H 1 , (4.33)
4.3. Proof of the main estimates for the fixed-point formulation.
4.3.1. Proof of Lemma 3.4. Starting from
Figure 2. Comparison of the numerically computed solutions kpyq, y P r0, 10s, for s " 0.1, 0.2, 0.4, 1, 2, 4, 7. All functions are normalized to value 1 at infinity. The branch continues beyond what is calculated. The curve corresponding to s " 0.1 is the lowest one, and the curves monotonically increase with s.
Figure 3. Comparison of the numerically computed solutions ş y 0 kptqdt for s " 0.1, 0.2, 0.4, 1, 2, 4, 7. Panel (a): y P r´50, 50s; panel (b): close-up, y P r´0.5, 0.5s. The curves increase with s.
`B s pL 2 s´s 2 qqgs´κ p T 0 rgB s W rL s ss κ p T´1rgB s B y W rL s s´W 1 rgsB y B s L s´By L s B s W 1 rgs`B s R 1 rgss`κ p T´1rB s T 1 rgss and using that B s L s " s´1L s`r , }r} H 1 À s 2 , direct calculations and adaptations of Lemma 4.3, Lemma 4.4 and Lemma 4.5 give the bound on B s A. 4.3.3. Proof of Lemma 3.6. Using the fact that H 1 is an algebra, we see that
Note that since J is not symmetric, Jαrg, hs " Jrg, hs`Jrh, gs.
Acknowledgments. EGJ and JGS were partially supported by the ERC Starting Grant ERC-StG-CAPA-852741. EGJ was partially supported by the ERC Starting Grant project H2020-EU.1.1.-639227. HQN was partially supported by NSF grant DMS-1907776. BP was supported by NSF grant DMS-1700282 and the CY-Advanced Studies fellow program. We thank Princeton University for computing facilities (Polar Cluster). This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement CAMINFLOW No 101031111.
References

[1] Farhan Abedin and Russell W. Schwab. Regularity for a special case of two-phase Hele-Shaw flow via parabolic integro-differential equations. ArXiv preprint, arXiv:2008.01272, 2020.
[2] Thomas Alazard. Convexity and the Hele-Shaw equation. Water Waves, 3(1):5-23, 2021.
[3] Thomas Alazard and Omar Lazar. Paralinearization of the Muskat equation and application to the Cauchy problem. Arch. Ration. Mech. Anal., 237(2):545-583, 2020.
[4] Thomas Alazard, Nicolas Meunier, and Didier Smets. Lyapunov functions, identities and the Cauchy problem for the Hele-Shaw equation. Comm. Math. Phys., 377(2):1421-1459, 2020.
[5] Thomas Alazard and Quoc-Hung Nguyen. Endpoint Sobolev theory for the Muskat equation. ArXiv preprint, arXiv:2010.06915, 2020.
[6] Thomas Alazard and Quoc-Hung Nguyen. On the Cauchy problem for the Muskat equation. II: Critical initial data. Ann. PDE, 7(1):Paper No. 7, 25, 2021.
[7] Thomas Alazard and Quoc-Hung Nguyen. On the Cauchy problem for the Muskat equation with non-Lipschitz initial data. Commun. Partial Differ. Equ., doi.org/10.1080/03605302.2021.1928700, 2021.
[8] David M. Ambrose. Well-posedness of two-phase Hele-Shaw flow without surface tension. European J. Appl. Math., 15(5):597-607, 2004.
[9] B. V. Bazaliy and N. Vasylyeva. The two-phase Hele-Shaw problem with a nonregular initial interface and without surface tension. Zh. Mat. Fiz. Anal. Geom., 10(1):3-43, 152, 155, 2014.
[10] Borys V. Bazaliy and Nataliya Vasylyeva. The Muskat problem with surface tension and a nonregular initial interface. Nonlinear Anal., 74(17):6074-6096, 2011.
[11] Stephen Cameron. Global well-posedness for the two-dimensional Muskat problem with slope less than 1. Anal. PDE, 12(4):997-1022, 2019.
[12] Stephen Cameron. Global well-posedness for the 3D Muskat problem with medium size slope. ArXiv preprint, arXiv:2002.00508, 2020.
[13] Á. Castro, D. Córdoba, C. Fefferman, F. Gancedo, and M. López-Fernández. Rayleigh-Taylor breakdown for the Muskat problem with applications to water waves. Ann. of Math. (2), 175:909-948, 2012.
[14] Ángel Castro, Diego Córdoba, Charles Fefferman, and Francisco Gancedo. Breakdown of smoothness for the Muskat problem. Arch. Ration. Mech. Anal., 208(3):805-909, 2013.
[15] Ángel Castro, Diego Córdoba, Charles Fefferman, and Francisco Gancedo. Splash singularities for the one-phase Muskat problem in stable regimes. Arch. Ration. Mech. Anal., 222(1):213-243, 2016.
[16] Héctor A. Chang-Lara, Nestor Guillen, and Russell W. Schwab. Some free boundary problems recast as nonlocal parabolic equations. Nonlinear Anal., 189:11538, 60, 2019.
[17] Ke Chen, Quoc-Hung Nguyen, and Yiran Xu. The Muskat problem with C 1 data. ArXiv preprint, arXiv:2010.06915, 2021.
[18] C. H. Arthur Cheng, Rafael Granero-Belinchón, and Steve Shkoller. Well-posedness of the Muskat problem with H 2 initial data. Adv. Math., 286:32-104, 2016.
[19] Sunhi Choi, David Jerison, and Inwon Kim. Regularity for the one-phase Hele-Shaw problem from a Lipschitz initial surface. Amer. J. Math., 129(2):527-582, 2007.
[20] Peter Constantin, Diego Córdoba, Francisco Gancedo, and Robert M. Strain. On the global existence for the Muskat problem. J. Eur. Math. Soc. (JEMS), 15(1):201-227, 2013.
[21] Peter Constantin, Francisco Gancedo, Roman Shvydkoy, and Vlad Vicol. Global regularity for 2D Muskat equations with finite slope. Ann. Inst. H. Poincaré Anal. Non Linéaire, 34(4):1041-1074, 2017.
[22] Antonio Córdoba, Diego Córdoba, and Francisco Gancedo. Interface evolution: the Hele-Shaw and Muskat problems. Ann. of Math. (2), 173(1):477-542, 2011.
[23] Diego Córdoba and Francisco Gancedo. Contour dynamics of incompressible 3-D fluids in a porous medium with different densities. Comm. Math. Phys., 273(2):445-471, 2007.
[24] Diego Córdoba and Francisco Gancedo. A maximum principle for the Muskat problem for fluids with different densities. Comm. Math. Phys., 286(2):681-696, 2009.
[25] Diego Córdoba, Javier Gómez-Serrano, and Andrej Zlatoš. A note on stability shifting for the Muskat problem. Philos. Trans. Roy. Soc. A, 373(2050):20140278, 10, 2015.
[26] Diego Córdoba, Javier Gómez-Serrano, and Andrej Zlatoš. A note on stability shifting for the Muskat problem, II: From stable to unstable and back to stable. Anal. PDE, 10(2):367-378, 2017.
[27] Diego Córdoba and Omar Lazar. Global well-posedness for the 2D stable Muskat problem in H 3{2 . Ann. Sci. Éc. Norm. Supér., arXiv:1803.07528, 2018. To appear.
[28] Fan Deng, Zhen Lei, and Fanghua Lin. On the two-dimensional Muskat problem with monotone large initial data. Comm. Pure Appl. Math., 70(6):1115-1145, 2017.
[29] Hongjie Dong, Francisco Gancedo, and Huy Q. Nguyen. Global well-posedness for the one-phase Muskat problem. ArXiv preprint, arXiv:2010.06915, 2021.
[30] Joachim Escher, Anca-Voichita Matioc, and Bogdan-Vasile Matioc. Modelling and analysis of the Muskat problem for thin fluid layers. J. Math. Fluid Mech., 14(2):267-277, 2012.
[31] Joachim Escher and Bogdan-Vasile Matioc. On the parabolicity of the Muskat problem: well-posedness, fingering, and stability results. Z. Anal. Anwend., 30(2):193-218, 2011.
[32] Joachim Escher and Gieri Simonett. Classical solutions for Hele-Shaw models with surface tension. Adv. Differential Equations, 2(4):619-642, 1997.
[33] Francisco Gancedo, Rafael Granero-Belinchón, and Stefano Scrobogna. Surface tension stabilization of the Rayleigh-Taylor instability for a fluid layer in a porous medium. Ann. Inst. H. Poincaré Anal. Non Linéaire, 37(6):1299-1343, 2020.
[34] Francisco Gancedo and Robert M. Strain. Absence of splash singularities for surface quasi-geostrophic sharp fronts and the Muskat problem. Proc. Natl. Acad. Sci. USA, 111(2):635-639, 2014.
[35] Inwon Kim. Uniqueness and existence results on the Hele-Shaw and the Stefan problems. Arch. Ration. Mech. Anal., 168(4):299-328, 2003.
[36] Inwon Kim. Long time regularity of solutions of the Hele-Shaw problem. Nonlinear Anal., 64(12):2817-2831, 2006.
[37] Inwon Kim. Regularity of the free boundary for the one phase Hele-Shaw problem. J. Differential Equations, 223(1):161-184, 2006.
[38] Philippe Laurençot and Bogdan-Vasile Matioc. Self-similarity in a thin film Muskat problem. SIAM J. Math. Anal., 49(4):2790-2842, 2017.
A method for the solution of certain non-linear problems in least squares. Kenneth Levenberg, Quart. Appl. Math. 2Kenneth Levenberg. A method for the solution of certain non-linear problems in least squares. Quart. Appl. Math., 2:164-168, 1944.
Collapse vs. blow up and global existence in the generalized Constantin-Lax-Majda equation. Pavel Lushnikov, Denis A Silantyev, Michael Siegel, arXiv:2010.01201arXiv preprintPavel Lushnikov, Denis A. Silantyev, and Michael Siegel. Collapse vs. blow up and global existence in the generalized Constantin-Lax-Majda equation. arXiv preprint arXiv:2010.01201, 2020.
An algorithm for least-squares estimation of nonlinear parameters. Donald W Marquardt, J. Soc. Indust. Appl. Math. 11Donald W. Marquardt. An algorithm for least-squares estimation of nonlinear parameters. J. Soc. Indust. Appl. Math., 11:431-441, 1963.
The Muskat problem in two dimensions: equivalence of formulations, well-posedness, and regularity results. Bogdan-Vasile Matioc, Anal. PDE. 122Bogdan-Vasile Matioc. The Muskat problem in two dimensions: equivalence of formulations, well-posedness, and regularity results. Anal. PDE, 12(2):281-332, 2019.
Global solutions for the Muskat problem in the scaling invariant besov space 9 B 1. Q Huy, Nguyen, arXiv:2103.1453581 . Arxiv preprintHuy Q. Nguyen. Global solutions for the Muskat problem in the scaling invariant besov space 9 B 1 8,1 . Arxiv preprint arXiv:2103.14535, 2021.
A paradifferential approach for well-posedness of the Muskat problem. Q Huy, Benoît Nguyen, Pausader, Arch. Ration. Mech. Anal. 2371Huy Q. Nguyen and Benoît Pausader. A paradifferential approach for well-posedness of the Muskat problem. Arch. Ration. Mech. Anal., 237(1):35-100, 2020.
Global existence, singular solutions, and ill-posedness for the Muskat problem. Michael Siegel, Russel E Caflisch, Sam Howison, Comm. Pure Appl. Math. 5710Michael Siegel, Russel E. Caflisch, and Sam Howison. Global existence, singular solutions, and ill-posedness for the Muskat problem. Comm. Pure Appl. Math., 57(10):1374-1411, 2004.
Local classical solution of Muskat free boundary problem. Fahuai Yi, J. Partial Differ. Equ. 91Fahuai Yi. Local classical solution of Muskat free boundary problem. J. Partial Differ. Equ., 9(1):84-96, 1996.
Global classical solution of Muskat free boundary problem. Fahuai Yi, J. Math. Anal. Appl. 2882Fahuai Yi. Global classical solution of Muskat free boundary problem. J. Math. Anal. Appl., 288(2):442-461, 2003. :,;
Gran Via de les Corts Catalanes 585. 8007Barcelona, Spain; Kassar House, 151 Thayer Street, Providence, RI 02912, USADepartament de Matemàtiques i Informàtica, Universitat de Barcelona ; Department of Mathematics, Brown UniversityEmail address: : [email protected] Email address: ; [email protected]. Email address: ; [email protected] Email address:˚[email protected] Email address: § [email protected] de Matemàtiques i Informàtica, Universitat de Barcelona, Gran Via de les Corts Catalanes 585, 08007 Barcelona, Spain. Email address: : [email protected] Email address: ; [email protected] ;,˚, § Department of Mathematics, Brown University, Kassar House, 151 Thayer Street, Providence, RI 02912, USA. Email address: ; [email protected] Email address:˚[email protected] Email address: § [email protected]
|
[] |
[
"Frequency driven inversion of tunnel magnetoimpedance in magnetic tunnel junctions",
"Frequency driven inversion of tunnel magnetoimpedance in magnetic tunnel junctions"
] |
[
"Subir Parui [email protected] \nCIC nanoGUNE\n20018Donostia-San SebastianSpain\n",
"Mário Ribeiro \nCIC nanoGUNE\n20018Donostia-San SebastianSpain\n",
"Ainhoa Atxabal \nCIC nanoGUNE\n20018Donostia-San SebastianSpain\n",
"Amilcar Bedoya-Pinto \nCIC nanoGUNE\n20018Donostia-San SebastianSpain\n\nMax Planck Institute of Microstructure Physics\nD-06120HalleGermany\n",
"Xiangnan Sun \nCIC nanoGUNE\n20018Donostia-San SebastianSpain\n\nNational Center for Nanoscience and Technology\n100190BeijingP. R. China\n",
"Roger Llopis \nCIC nanoGUNE\n20018Donostia-San SebastianSpain\n",
"Fèlix Casanova \nCIC nanoGUNE\n20018Donostia-San SebastianSpain\n\nBasque Foundation for Science\nIKERBASQUE\n48011BilbaoSpain\n",
"Luis E Hueso [email protected] \nCIC nanoGUNE\n20018Donostia-San SebastianSpain\n\nBasque Foundation for Science\nIKERBASQUE\n48011BilbaoSpain\n"
] |
[
"CIC nanoGUNE\n20018Donostia-San SebastianSpain",
"CIC nanoGUNE\n20018Donostia-San SebastianSpain",
"CIC nanoGUNE\n20018Donostia-San SebastianSpain",
"CIC nanoGUNE\n20018Donostia-San SebastianSpain",
"Max Planck Institute of Microstructure Physics\nD-06120HalleGermany",
"CIC nanoGUNE\n20018Donostia-San SebastianSpain",
"National Center for Nanoscience and Technology\n100190BeijingP. R. China",
"CIC nanoGUNE\n20018Donostia-San SebastianSpain",
"CIC nanoGUNE\n20018Donostia-San SebastianSpain",
"Basque Foundation for Science\nIKERBASQUE\n48011BilbaoSpain",
"CIC nanoGUNE\n20018Donostia-San SebastianSpain",
"Basque Foundation for Science\nIKERBASQUE\n48011BilbaoSpain"
] |
[] |
Magnetic tunnel junctions (MTJs) are basic building blocks for devices such as magnetic random access memories (MRAMs). The relevance for modern computation of non-volatile high-frequency memories makes ac-transport measurements of MTJs crucial for exploring this regime. Here we demonstrate a frequency-mediated effect in which the tunnel magnetoimpedance reverses its sign in a classical Co/Al 2 O 3 /NiFe MTJ, whereas we only observe a gradual decrease of tunnel magnetophase. Such effects are explained by the capacitive coupling of a parallel resistor and capacitor in the equivalent circuit model of the MTJ. Furthermore, we report a positive tunnel magnetocapacitance effect, suggesting the presence of a spin-capacitance at the two ferromagnet/tunnel-barrier interfaces. Our results are important for understanding spin transport phenomena at the high frequency regime, in which the spinpolarized charge accumulation at the two interfaces plays a crucial role.
|
10.1063/1.4960202
|
[
"https://arxiv.org/pdf/1603.00933v1.pdf"
] | 118,547,904 |
1603.00933
|
bd6e0fe347cd7cafc13acd61269ef1c17ff5b1e6
|
Frequency driven inversion of tunnel magnetoimpedance in magnetic tunnel junctions
Subir Parui [email protected]
CIC nanoGUNE
20018Donostia-San SebastianSpain
Mário Ribeiro
CIC nanoGUNE
20018Donostia-San SebastianSpain
Ainhoa Atxabal
CIC nanoGUNE
20018Donostia-San SebastianSpain
Amilcar Bedoya-Pinto
CIC nanoGUNE
20018Donostia-San SebastianSpain
Max Planck Institute of Microstructure Physics
D-06120HalleGermany
Xiangnan Sun
CIC nanoGUNE
20018Donostia-San SebastianSpain
National Center for Nanoscience and Technology
100190BeijingP. R. China
Roger Llopis
CIC nanoGUNE
20018Donostia-San SebastianSpain
Fèlix Casanova
CIC nanoGUNE
20018Donostia-San SebastianSpain
Basque Foundation for Science
IKERBASQUE
48011BilbaoSpain
Luis E Hueso [email protected]
CIC nanoGUNE
20018Donostia-San SebastianSpain
Basque Foundation for Science
IKERBASQUE
48011BilbaoSpain
Frequency driven inversion of tunnel magnetoimpedance in magnetic tunnel junctions
Tunnel barrier; spintronics; magnetic tunnel junction
Magnetic tunnel junctions (MTJs) are basic building blocks for devices such as magnetic random access memories (MRAMs). The relevance for modern computation of non-volatile high-frequency memories makes ac-transport measurements of MTJs crucial for exploring this regime. Here we demonstrate a frequency-mediated effect in which the tunnel magnetoimpedance reverses its sign in a classical Co/Al 2 O 3 /NiFe MTJ, whereas we only observe a gradual decrease of tunnel magnetophase. Such effects are explained by the capacitive coupling of a parallel resistor and capacitor in the equivalent circuit model of the MTJ. Furthermore, we report a positive tunnel magnetocapacitance effect, suggesting the presence of a spin-capacitance at the two ferromagnet/tunnel-barrier interfaces. Our results are important for understanding spin transport phenomena at the high frequency regime, in which the spinpolarized charge accumulation at the two interfaces plays a crucial role.
I. Introduction
Magnetic random access memory (MRAM) devices based on magnetic tunnel junctions (MTJs) are very attractive technologically, mostly due to their low energy consumption and fast data processing [1-5]. The two-terminal geometry of the MTJs is based on a thin dielectric layer (I) acting as the tunnel barrier, sandwiched between two ferromagnetic electrodes (FM), in a FM/I/FM structure [1-5]. MTJs typically show low (/high) resistance states when the magnetic orientation of the two ferromagnetic electrodes is parallel (/antiparallel), providing a positive tunnel magnetoresistance (TMR) [1-7]. Beyond this conventional positive TMR effect there are several nontrivial effects that give rise to negative TMR, where the low (/high) resistance states now correspond to the antiparallel (/parallel) orientation of the electrodes' magnetization, namely spin-polarized resonance [8], quantum size effects [9], modulation of the density of states (DOS) of the ferromagnetic electrodes [10], inversion of the tunneling spin polarization at the FM/I interface [11], and the formation of a tunneling standing wave [12]. These mechanisms are usually addressed by either dc (direct current) or ac (alternating current) transport measurements at low frequencies [12,13]. For ac measurements, the impedance (Z), i.e., the total dynamic resistance of the circuit, converges to the resistance measured with dc voltages when the phase shift (θ) is negligible. Phenomena such as spin-polarized resonant tunneling, in which the TMR changes from positive to negative, are crucial for the development of resonant-tunneling spin transistors and quantum information devices [8]. However, so far this effect has been shown to occur only when the thickness of a metallic insertion layer (Cu) between the FM and I layers is varied within a certain range [8].
Accordingly, it would be important to have alternative ways for controlling the magnetoimpedance (or magnetoresistance) externally that do not depend on the structural characteristics of the device.
Here, we demonstrate a controlled positive-to-negative inversion of the tunnel magnetoimpedance with increasing frequency in a classical Co/Al 2 O 3 /NiFe MTJ by using impedance-spectroscopy measurements. The magnitude of the inversion depends on the individual resistance change between the parallel and antiparallel magnetic states and on the change in the spin capacitance [14] of the FM/I interfacial layer. Simultaneously, the impedance-spectroscopy measurements not only help us to determine the presence of spin capacitance but also to characterize the speed of the device operation, and to estimate the tunnel magnetocapacitance effect [15-19]. We report a positive tunnel magnetocapacitance for the first time, using a simple parallel equivalent circuit of a resistor and a capacitor for the MTJ and measuring the impedance and phase at set frequencies. In the frequency regime where the equivalent parallel circuit model is reliable, the tunnel magnetoresistance measured by the ac method agrees with the standard dc measurements. Our results open the possibility to exploit impedance-spectroscopy measurements for the optimization of functional spintronic devices.
II. Experimental details
A vertical cross-bar geometry of Co (16 nm)/Al 2 O 3 (x nm)/Ni 80 Fe 20 (16 nm) (from bottom to top, see Figure 1) was deposited in situ through different shadow masks. The junction stack is fabricated on a Si/SiO 2 (150 nm) substrate. Initially, several 16-nm-thick Co bars are deposited by e-beam evaporation in an ultrahigh vacuum (UHV) chamber at a rate of 1 Å/s. Afterwards, the Al 2 O 3 tunnel barrier is obtained by evaporating Al (2.5 or 3 nm) all over the device, followed by plasma oxidation. Two steps of plasma oxidation at an oxygen pressure of 10^-1 mbar are employed in order to obtain a robust Al 2 O 3 tunnel barrier [20]. Finally, a 16-nm-thick NiFe top electrode was deposited at a rate of 1 Å/s through a different shadow mask. The active area for tunneling is considered to be confined to the overlapping area between the Co bar and the NiFe bar (here we report results from two different device areas of 0.5 and 0.2 mm^2). The Al 2 O 3 dielectric sandwiched between the two ferromagnetic electrodes also provides a two-terminal capacitor. Once fabricated, the device is transferred into a variable-temperature probe station (Lakeshore) for electrical measurements with a Keithley-4200 CVU semiconductor analyzer, which is capable of carrying out both impedance-spectroscopy measurements in the frequency range of 10^3-10^7 Hz and pure dc measurements. Using a two-probe configuration, we measure both types of transport characteristics with the same two probes to avoid any circuit complication in the high-frequency operation [19]. In Figure 1, the dotted line (red) shows the standard dc measurement schematics and the solid line (black) represents the schematics of the impedance-spectroscopy (ac + dc) measurement.
III. Results and discussions
A. DC-transport characteristics
We first focus on the dc-transport measurements of the MTJs. In the case of a thin tunnel barrier, the s- and d-orbital electrons that tunnel from one ferromagnet to the other conserve the spin information via the single-step tunnel process [21]. The transport can be described as having two independent spin channels with two different resistances. In general, a low resistance state is observed for parallel alignment (R_P) of the two ferromagnets, and a high resistance state (R_AP) is observed when they are antiparallel. The associated tunneling magnetoresistance (TMR_DC) ratio is defined as
TMR_DC = (R_AP - R_P) / R_P × 100%.    (1)
The electric resistance of the parallel and antiparallel magnetic states of the device is measured by the two-probe method with applied dc voltages up to ±600 mV. Figure 2a shows the TMR_DC data for a representative Co/Al 2 O 3 (oxidized from 3 nm Al)/Ni 80 Fe 20 MTJ. When a large enough magnetic field is applied in-plane, the magnetizations of the NiFe and Co electrodes are aligned in parallel. Sweeping the magnetic field causes the magnetization of NiFe to switch at a much lower field than that of Co, and an antiparallel state is achieved. The magnetizations of Co and NiFe can thus be aligned either parallel (P) or antiparallel (AP), and two distinct resistance states are observed in the measurement. The TMR_DC in the MTJ reaches 20% at the lowest temperatures, at +10 mV of dc bias (Figure 2a). This value is comparable to previous literature [1,7,22], confirming the good quality of our MTJs.
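As a quick numeric illustration of Eq. (1), here is a minimal sketch; the resistance values are hypothetical, chosen only to reproduce a 20% ratio like the one quoted above:

```python
def tmr_percent(r_p, r_ap):
    """Tunnel magnetoresistance ratio of Eq. (1): (R_AP - R_P)/R_P * 100."""
    return (r_ap - r_p) / r_p * 100.0

# Hypothetical resistance states giving the ~20% TMR quoted at 10 K
r_p, r_ap = 1400.0, 1680.0  # ohms
ratio = tmr_percent(r_p, r_ap)  # -> 20.0
```

A negative ratio from the same formula would indicate the inverted case, where the AP state is the low-resistance one.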
FIG. 2 (a) Spin-valve effect at several temperatures for the Co/Al 2 O 3 /NiFe MTJ at a dc bias of +10 mV. The bias is applied at the Co electrode with respect to the grounded NiFe electrode in two-probe geometry. (b) TMR DC as a function of the applied bias at several temperatures. (c) Zero bias resistance (ZBR) vs temperature for P and AP magnetization orientations of the two ferromagnets of the MTJ. The ZBR exhibits modest temperature dependence, confirming direct tunneling.
Furthermore, we have obtained the TMR_DC by sweeping the bias voltage and measuring individually the resistance in either the P state or the AP state. Figure 2(b) shows that the magnitude of the TMR_DC decreases with increasing bias in a monotonic, nonlinear fashion at all temperatures from 300 K to 10 K. This is a standard behavior for MTJs that relies on the basic physics of the spin-polarized tunneling mechanism. Increasing the bias voltage increases the spin relaxation rate, which decreases the TMR_DC as the electrons tunnel into empty states of the receiving electrode with an excess energy, generating phonons and magnons [22,23]. The asymmetric bias dependence of the TMR_DC reflects the different DOS of the Co and NiFe metals used. At the same time, decreasing the temperature reduces the phonon and magnon scattering, leading to an increase in the TMR_DC. Figure 2(c) presents the temperature dependence of the zero bias resistance (ZBR) of the tunnel barrier, defined as R(T)/R(300 K). It exhibits a modest temperature dependence, characteristic of the direct tunneling process between the two ferromagnetic materials.
B. AC-transport characteristics
In order to investigate the tunnel magnetoimpedance effect, we focus on the impedance-spectroscopy measurement combining an ac bias with an applied dc bias. From the ac-transport measurement, we obtain the impedance and the phase (Z, θ) as the fundamental parameters. From Z and θ, we can obtain Re(Z) = |Z| cos θ and Im(Z) = |Z| sin θ, where Z = Re(Z) + i Im(Z) and |Z| = √[Re(Z)² + Im(Z)²]. We must note that both Z and Re(Z) converge to the dc value of the resistance when θ → 0 in the low-frequency regime, so that |Z| ≈ Re(Z) ≈ R_DC. Figure 3 represents (Z, θ) and [Re(Z), Im(Z)] as a function of the magnetic field for two different frequencies, 2 kHz and 30 kHz, respectively, for the same MTJ as discussed in the dc-transport measurement. The black (left axis) and the red (right axis) arrows are used to indicate Z, Re(Z) and θ, Im(Z), respectively. Comparing these two frequencies, we see that the switching of both Z and Re(Z) (black curves) becomes inverted, whereas θ and Im(Z) (red curves) remain consistent with their usual switching behavior. It is worth noting that the overall Z and Re(Z) values decrease at 30 kHz compared to 2 kHz. However, an increase of θ and Im(Z) suggests the presence of capacitive coupling in the device. Nevertheless, the switching behavior of Z and Re(Z) with respect to the magnetic field clearly confirms the frequency-driven inversion of tunnel magnetoimpedance in the MTJ. In order to identify the different frequency regimes for positive and negative tunnel magnetoimpedance, we performed frequency scans of Z and θ for parallel and antiparallel magnetic orientations, respectively. Figure 4 shows the frequency dependence of Z, θ, TMZ, and TMθ, respectively. We calculate TMZ and TMθ by using the same formula as in equation (1) but with the respective parameters (Z and θ). Figure 4(c) shows that the TMZ converges to the TMR_DC values in the low-frequency regime of ~1 kHz.
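The (Z, θ) to [Re(Z), Im(Z)] conversion used here is just the polar-to-rectangular identity; a minimal round-trip sketch with an illustrative impedance value:

```python
import math

def z_from_polar(mag, theta_rad):
    """Re(Z) = |Z| cos(theta), Im(Z) = |Z| sin(theta)."""
    return mag * math.cos(theta_rad), mag * math.sin(theta_rad)

def polar_from_z(re, im):
    """|Z| = sqrt(Re^2 + Im^2); theta from the quadrant-aware arctangent."""
    return math.hypot(re, im), math.atan2(im, re)

# Round trip: an illustrative 1 kOhm impedance with a -30 degree phase shift
re, im = z_from_polar(1000.0, math.radians(-30.0))
mag, theta = polar_from_z(re, im)  # recovers 1000.0 and -30 degrees
```

The negative phase corresponds to a negative Im(Z), i.e., the capacitive response discussed in the text.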
Due to the capacitive coupling that originates from the charge accumulation at the two interfaces between the ferromagnets and the Al 2 O 3 dielectric, both the TMZ and TMθ values decrease with increasing frequency. The damping of the frequency response depends on the frequency-independent resistance and capacitance of the MTJ, which can also be further controlled by engineering the MTJs, as will be discussed below. Nevertheless, it is worth noting that the TMZ values [see Figure 4(c)] flip their sign in the frequency range of ~2×10^4 to ~2×10^5 Hz before reaching zero. We propose that the inversion of TMZ is related to the quicker damping of Z in the AP state relative to the P state as a function of frequency. This can be understood on the basis of different resistances and capacitances in the AP and P states resulting in different time constants, which eventually control the damping of Z and its inversion. It is possible to interpret the impedance-spectroscopy measurements via an equivalent circuit model. In MTJs, the resistance and capacitance are coupled in a parallel equivalent circuit [15-19]. Figure 5(a) represents the Cole-Cole diagrams. The plots between Re(Z) and Im(Z) show semicircular dependences, which validate the parallel circuit model [18,19]. In order to simplify the circuit analysis and to determine the respective frequency-independent resistance (R_f) and capacitance (C_f), the magnetic-field-independent part of the ac measurement can be eliminated by taking the difference of the impedances in the P and AP states [18]. This difference can be written as ΔZ = ΔRe(Z) + iΔIm(Z), where ΔRe(Z) and ΔIm(Z) are given by equations (2) and (3). The extracted fitting parameters are listed in Table I. From these parameters we observe that the positive tunnel magnetoresistance (~23% at 10 K) is consistent with the dc measurements, whereas almost no tunnel magnetocapacitance effect is extracted.
This result coincides with a previous report [17], although it is fair to admit that there are conflicting studies in this regard [15,16,18,19].
ΔRe(Z) = R_f(AP)/[1 + (2πf R_f(AP) C_f(AP))²] - R_f(P)/[1 + (2πf R_f(P) C_f(P))²],    (2)

ΔIm(Z) = -2πf R_f(AP)² C_f(AP)/[1 + (2πf R_f(AP) C_f(AP))²] + 2πf R_f(P)² C_f(P)/[1 + (2πf R_f(P) C_f(P))²].    (3)
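A numerical sketch of the parallel-RC expressions behind Eqs. (2) and (3), and of the TMZ sign inversion discussed above. The 10 K resistances are taken from Table I; the small split between the P- and AP-state capacitances is hypothetical (the fit returns equal values), chosen only to illustrate how a slightly larger AP capacitance drives the TMZ negative at high frequency:

```python
import math

def parallel_rc(f, r, c):
    """Z of R parallel to C: Z = R/(1 + i*w*R*C) with w = 2*pi*f, so
    Re(Z) = R/(1 + (wRC)^2) and Im(Z) = -w*R^2*C/(1 + (wRC)^2)."""
    w = 2.0 * math.pi * f
    d = 1.0 + (w * r * c) ** 2
    return r / d, -w * r * r * c / d

def tmz_percent(f, r_p, c_p, r_ap, c_ap):
    """Tunnel magnetoimpedance in percent, same form as Eq. (1)."""
    z_p = math.hypot(*parallel_rc(f, r_p, c_p))
    z_ap = math.hypot(*parallel_rc(f, r_ap, c_ap))
    return (z_ap - z_p) / z_p * 100.0

r_ap, r_p = 1725.0, 1400.0    # Table I, 10 K
c_ap, c_p = 12.8e-9, 12.6e-9  # hypothetical split around the fitted 12.7 nF

# Eq. (2) at low frequency: Delta Re(Z) -> R_f(AP) - R_f(P) = 325 Ohm
d_re_low = parallel_rc(10.0, r_ap, c_ap)[0] - parallel_rc(10.0, r_p, c_p)[0]

tmz_low = tmz_percent(1e3, r_p, c_p, r_ap, c_ap)   # positive, ~ +23%
tmz_high = tmz_percent(2e5, r_p, c_p, r_ap, c_ap)  # negative: inverted TMZ
```

With these illustrative values the sign change falls near 3×10^4 Hz, inside the experimentally observed inversion window.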
The temperature-independent capacitance values extracted from these fittings amount to 12.7 nF. By considering the parallel-plate capacitor structure of our MTJs, we calculate the geometrical capacitance C_geo = εA/d to be 10.2 nF, where A is the area of the junction (0.5 mm^2), d is the thickness (3-nm-thick Al yields an Al 2 O 3 tunnel barrier of 3.9 nm [24]), and ε = ε_r ε_0 is the dielectric constant of Al 2 O 3 (ε_r ≈ 9, ε_0 = 8.85 × 10^-12 F/m). The capacitance extracted from the fit is slightly larger than the geometrical capacitance, which suggests the presence of an additional leaky capacitance in parallel to the geometrical capacitance [18], making the model unreliable for the detection of the interface spin capacitance. Below we discuss a straightforward approach to detect the influence of the two interface spin capacitances, which are in principle in series with the geometrical capacitance [16,18] and give rise to the positive tunnel magnetocapacitance. In a parallel resistor-capacitor circuit model, the time constant is the product of the corresponding resistance and capacitance. Such a time constant plays an important role for high-speed applications of MTJs. From our large-area tunnel junction, we determine the time constants to be of the order of microseconds. However, the time constant of the circuit changes between the two magnetic configurations (P and AP) as well as with temperature, due to the change in resistance in the circuit. A cut-off frequency can thus be obtained. For example, the cut-off frequencies at 10 K are 7.3 kHz (AP state) and 8.9 kHz (P state), respectively. Interestingly, these cut-off frequencies are quite close to the frequency regime above which we observe the inversion of TMZ, as can be seen in Figure 4(c). Further, we perform a more straightforward analysis to determine the resistance and the capacitance [15,16].
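Before that, a quick numerical check of the geometric capacitance and cut-off frequencies just quoted; ε_r ≈ 9 for Al 2 O 3 is an assumption of this sketch, consistent with the quoted 10.2 nF:

```python
import math

EPS0 = 8.85e-12       # vacuum permittivity, F/m
EPS_R_ALUMINA = 9.0   # assumed relative permittivity of Al2O3

def c_geo(area_m2, thickness_m, eps_r=EPS_R_ALUMINA):
    """Parallel-plate capacitance C = eps_r * eps0 * A / d."""
    return eps_r * EPS0 * area_m2 / thickness_m

def f_cutoff(r_ohm, c_farad):
    """RC cut-off frequency f_c = 1/(2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

c = c_geo(0.5e-6, 3.9e-9)          # 0.5 mm^2 junction, 3.9 nm barrier -> ~10.2 nF
fc_ap = f_cutoff(1725.0, 12.7e-9)  # AP state at 10 K (Table I) -> ~7.3 kHz
fc_p = f_cutoff(1400.0, 12.7e-9)   # P state -> ~8.9 kHz
```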
Considering the MTJ as a parallel network of r and c, we calculate their values directly at each set frequency from the (Z, θ) measurements. The direct dependence can be expressed as r = Z√(1 + tan²θ) and c = tanθ/(2πf r), where f is the frequency of the ac signal. Figure 6(a) shows the typical c and r dependence on the magnetic field. Our measurement shows a positive TMc value, smaller than the positive TMr value. The positive value of TMc indicates that the capacitance is higher in the AP state than in the P state. To the best of our knowledge, this is the first time that a positive TMc has been observed in an MTJ [15,16,18,19]. We suggest that one possible mechanism behind the positive TMc is the different spin-dependent screening lengths of the spin-polarized charge accumulation at the two different FM/I interfaces (Co/Al 2 O 3 and NiFe/Al 2 O 3 ). Figure 6(b) reveals the dc bias dependence of the normalized TMr and TMc during the impedance-spectroscopy measurements. We observe a decrease with increasing bias for both magnitudes, which is consistent with standard dc measurements. We also observe that the TMc is relatively less sensitive to the applied dc bias than the TMr, indicating a low leakage current in the MTJ [18]. Figures 6(c) and 6(d) show the frequency dependence of the TMr and TMc at different temperatures. In an ideal case, TMr and TMc are expected to be frequency independent. However, TMr and TMc decrease very sharply after a certain frequency, and we also notice a weak inversion in TMr. Nevertheless, in the low-frequency regime we observe an almost temperature-independent TMc, whereas the TMr is very consistent with the standard dc measurements at different temperatures.
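The direct extraction of r and c from a measured (Z, θ) pair can be verified with a round trip against a synthetic parallel RC network; the r0 and c0 values below are taken from Table I but serve only as test inputs:

```python
import cmath
import math

def rc_from_z_theta(z_mag, theta_rad, f):
    """Invert a parallel RC network from (|Z|, theta) at frequency f:
    r = |Z| * sqrt(1 + tan^2(theta)), c = |tan(theta)| / (2*pi*f*r)."""
    t = math.tan(theta_rad)
    r = z_mag * math.sqrt(1.0 + t * t)
    c = abs(t) / (2.0 * math.pi * f * r)
    return r, c

# Synthesize (|Z|, theta) from a known parallel RC, then recover r and c
r0, c0, f = 1400.0, 12.7e-9, 5e3
z = r0 / complex(1.0, 2.0 * math.pi * f * r0 * c0)  # Z = R/(1 + i*w*R*C)
r, c = rc_from_z_theta(abs(z), cmath.phase(z), f)   # r ~ r0, c ~ c0
```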
An in-plane magnetic field dependence of the capacitance of a parallel-plate capacitor can be induced via a combination of the Lorentz force and quantum confinement effects [25,26] or by spin-dependent Zeeman splitting [27,28]. Such phenomena are termed magnetocapacitance effects in systems without any ferromagnetic electrode. However, the origin of the tunnel magnetocapacitance in MTJs can only be attributed to the presence of spin capacitance at the two FM/I interfaces, which appears due to the capacitive accumulation of spin-polarized carriers at the interface [14]. In order to gain further control over the inversion of the tunnel magnetoimpedance, it is thus also important to have a different frequency response of the MTJs. Since the tunnel resistance is bias dependent, a change in the frequency response, and also in the amplitude of the inversion of the tunnel magnetoimpedance, is expected. Figures 7(a) and 7(b) show the frequency response of TMZ and TMθ at several dc biases, with the ac voltage kept constant. Plots of the TMZ show a sharply damped oscillation and an increase in the damping frequency with increasing dc bias. However, none of the TMθ plots show inversion. Furthermore, a modification of the thickness of the tunnel barrier and/or the area of the overlapping electrodes in the MTJ can be used to tune the damping frequency. Figures 7(c) and 7(d) show the frequency response of TMZ and TMθ for two different tunnel barrier thicknesses, oxidized from 3-nm (T1A1) and 2.5-nm (T2A1) thick Al, but with the same overlapping area between the Co and NiFe electrodes. For the thinner tunnel barrier, the resistance of the junction decreases exponentially with decreasing thickness, whereas the capacitance increases, since it is inversely proportional to the distance between the two electrodes.
Thus, for the tunnel barrier obtained from 2.5-nm-thick Al, we observe that the cut-off frequency of the inversion increases compared to the junction with the tunnel barrier from 3-nm-thick Al; this frequency shift is in agreement with previous literature [16]. The figures also include the frequency response of MTJs with different areas of the two overlapping electrodes (T1A1: 0.5 mm^2 and T1A2: 0.2 mm^2) for the same thickness of the tunnel barrier (oxidized from 3-nm-thick Al). Both the tunnel resistance and the capacitance change with the effective device area, and we observe a larger frequency shift of the inversion with decreasing device area. Consequently, we confirm that the inversion of the tunnel magnetoimpedance is very robust and easily tunable by the applied dc bias and/or by engineering the junction's geometrical parameters.
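The thickness trend can be sketched with a toy model: the tunnel resistance grows exponentially with barrier thickness while the capacitance only shrinks as 1/d, so the RC time constant drops, and the cut-off frequency rises, for thinner barriers. All prefactors below (κ, ρ0) are illustrative, not fitted to these devices; note also that in this ideal model the area cancels in the RC product, so the measured area dependence points to non-idealities such as the leaky capacitance discussed earlier.

```python
import math

def tau_rc(d_nm, area_mm2, kappa_per_nm=2.0, rho0=1e-4, eps_r=9.0):
    """Toy RC time constant of a tunnel junction (illustrative prefactors):
    R ~ rho0*exp(kappa*d)/A and C = eps_r*eps0*A/d, so tau = R*C ~ exp(kappa*d)/d."""
    eps0 = 8.85e-12
    a = area_mm2 * 1e-6                           # junction area in m^2
    r = rho0 * math.exp(kappa_per_nm * d_nm) / a  # exponential thickness dependence
    c = eps_r * eps0 * a / (d_nm * 1e-9)          # parallel-plate capacitance
    return r * c

# Thinner barrier -> shorter time constant -> higher cut-off frequency
fc_thin = 1.0 / (2.0 * math.pi * tau_rc(3.25, 0.5))  # barrier from 2.5 nm Al
fc_thick = 1.0 / (2.0 * math.pi * tau_rc(3.9, 0.5))  # barrier from 3 nm Al
```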
IV. Conclusions
In conclusion, we demonstrate an alternative mechanism to achieve inversion of the tunnel magnetoimpedance of MTJs, based on the frequency of the applied ac signal. Ac-transport measurements of the impedance and corresponding phase allow the extraction of other fundamental parameters, such as resistance and capacitance. From the straightforward determination of the equivalent resistance and capacitance, we determine not only a positive tunnel magnetoresistance but also a positive tunnel magnetocapacitance effect. Moreover, in order to establish the robustness of the inversion phenomenon, we report the dependence of the frequency response of the magnetoimpedance on the dc bias voltage, the thickness of the tunnel barrier, and the area of the two overlapping electrodes of the MTJs. Our results are encouraging for understanding spin-polarized charge accumulation as well as spin transport phenomena in spintronic devices at high frequencies involving any thin dielectric tunnel barrier, including ferromagnetic insulators. In particular, the inversion phenomenon in MTJs will help extend the current use of Al 2 O 3 tunnel barriers in hot electron transistors [29,30] as well as encourage the development of highly functional spin transport devices.
FIG. 1
Optical microscopy image of the device and the schematic diagram of the MTJ of Co/Al 2 O 3 /NiFe stacked on a Si/SiO 2 substrate. The dotted red line represents the standard two-probe dc measurement schematics and the solid black line represents the schematics of the impedance-spectroscopy measurement. The sweeping direction of the applied magnetic field B is also shown.
FIG. 3
(a), (b) Magnetic-field-dependent (Z, θ) and [Re(Z), Im(Z)], respectively, at 2 kHz. Measurements are performed individually at +10 mV of dc bias, 30 mV of RMS ac bias, and at 10 K. (c), (d) Similar measurements as in (a) and (b), now performed at 30 kHz. At this frequency, the switching of Z and Re(Z) is inverted.
FIG. 4
(a) Impedance and (b) phase as a function of frequency at 10 K for the P and AP orientations of the magnetization of the two FM electrodes. All two-probe impedance-spectroscopy measurements are carried out at a dc bias of +10 mV and an RMS ac bias of 30 mV. (c) Tunneling magnetoimpedance (TMZ) and (d) tunneling magnetophase (TMθ) as a function of frequency at different temperatures.
FIG. 5
(a) Cole-Cole plots of the MTJ for the frequency range of 1 kHz-2 MHz in the P and AP states. The semicircular fits (solid black lines) match the experimental data very well. (b) ΔRe(Z) as a function of frequency. The solid black line is the fit using equation (2); the corresponding fitting parameters are listed in Table I. (c) ΔIm(Z) as a function of frequency. The solid black line is the fit using equation (3) with the same fitting parameters as before.
Figures 5(b) and 5(c) show the frequency dependence of ΔRe(Z) and ΔIm(Z) of the impedance. The frequency-independent fitting parameters R_f(AP), R_f(P), C_f(AP), and C_f(P) are extracted by using both equations (2) and (3).
FIG. 6
(a) Magnetic field dependence of the resistance (r) and capacitance (c) of the MTJ, considering the parallel equivalent circuit. The measurements are done at a dc bias of +10 mV and an RMS ac bias of 30 mV. (b) Normalized bias dependence of both TMr and TMc of the device. Frequency dependence of the TMr (c) and the TMc (d) at different temperatures.
FIG. 7
(a), (b) TMZ and TMθ at various dc bias voltages as a function of frequency, at an RMS ac bias of 30 mV. (c), (d) TMZ and TMθ as a function of frequency for two different thicknesses of Al2O3 (T1A1: 3 nm and T2A1: 2.5 nm, with the same junction area) as well as for two different device areas (T1A1: 0.5 mm² and T1A2: 0.2 mm²) with the same Al2O3 thickness. All measurements are performed at room temperature.
TABLE I. Extracted fitting parameters from Figures 5(b) and 5(c), and the corresponding time constants.

T     | R_f(AP)   | R_f(P)   | C_f(AP)     | C_f(P)      | τ(AP)       | τ(P)
10 K  | 1725±10 Ω | 1400±9 Ω | 12.7±0.5 nF | 12.7±0.5 nF | 21.9±0.9 µs | 17.8±0.7 µs
200 K | 1604±8 Ω  | 1365±8 Ω | 12.7±0.4 nF | 12.7±0.4 nF | 20.3±0.6 µs | 17.3±0.5 µs
300 K | 1448±9 Ω  | 1270±8 Ω | 12.7±0.4 nF | 12.7±0.4 nF | 18.4±0.6 µs | 16.1±0.5 µs
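As a quick consistency check of Table I: for a parallel R_f-C_f element the relaxation time is τ = R_f·C_f, and the tabulated time constants indeed match the products of the fitted values:

```python
# Consistency check of Table I: for a parallel R_f-C_f element, tau = R_f * C_f.
# R_f values in ohms; C_f = 12.7 nF at all three temperatures; tau in microseconds.
rows = {
    "10 K":  (1725, 1400, 21.9, 17.8),
    "200 K": (1604, 1365, 20.3, 17.3),
    "300 K": (1448, 1270, 18.4, 16.1),
}
for T, (r_ap, r_p, tau_ap, tau_p) in rows.items():
    calc_ap = r_ap * 12.7e-9 * 1e6   # R*C converted to microseconds
    calc_p = r_p * 12.7e-9 * 1e6
    assert abs(calc_ap - tau_ap) < 0.1 and abs(calc_p - tau_p) < 0.1
```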
Acknowledgements

We are grateful to Prof. Hanan Dery for useful discussions. We also acknowledge financial support from the European Union's 7th Framework
|
[] |
[
"Comment on \"Systematic survey of high-resolution b-value imaging along Californian faults: inference on asperities\"",
"Comment on \"Systematic survey of high-resolution b-value imaging along Californian faults: inference on asperities\""
] |
[
"Yavor Kamer \nSwiss Seismological Service\nETH Zürich\nSwitzerland\n\nChair of Entrepreneurial Risks\nDepartment of Management, Technology and Economics\nETH Zürich\n\n"
] |
[
"Swiss Seismological Service\nETH Zürich\nSwitzerland",
"Chair of Entrepreneurial Risks\nDepartment of Management, Technology and Economics\nETH Zürich\n"
] |
[] |
[Tormann et al., 2014] propose a distance exponential weighted (DEW) b-value mapping approach as an improvement to previous methods of constant radius and nearest neighborhood. To test the performance of their proposed method the authors introduce a score function, where N is the total number of grid nodes, n is the number of non-empty nodes, and b_true and b_est are the true and estimated b-values. This score function is applied to a semi-synthetic earthquake catalog to make inference on the parameters of the method. In this comment we argue that the proposed methodology cannot be applied to seismic analysis since it requires a priori knowledge of the spatial b-value distribution, which it aims to reveal. The score function given in Equation 1 seeks to minimize the absolute difference between the generating (true) and estimated b-values. Like any statistical parameter, the estimation error of the b-value decreases with increasing sample size. However, since the b-value is a measure of slope on a log-log plot, for any given sample size the estimation error is not only asymmetric but its amplitude is also dependent on the b-value itself. To make a pedagogical analogy: conducting b-value analysis by limiting the sample size (Tormann et al. use 150) is similar to measuring temperature with a peculiarly short thermometer. For small sample sizes such a thermometer would measure lower temperatures more precisely than higher ones, to the extent that it would be impossible to distinguish between values on the upper end. Only when the sample size, and hence the resolution, is increased does
|
10.1002/2014jb011147
|
[
"https://arxiv.org/pdf/1403.7423v2.pdf"
] | 118,413,335 |
1403.7423
|
e454733a748c56d48770daf1158e20ec398e4884
|
Comment on "Systematic survey of high-resolution b-value imaging along Californian faults: inference on asperities"
Yavor Kamer
Swiss Seismological Service
ETH Zürich
Switzerland
Chair of Entrepreneurial Risks
Department of Management, Technology and Economics
ETH Zürich
Comment on "Systematic survey of high-resolution b-value imaging along Californian faults: inference on asperities"
[Tormann et al., 2014]
propose a distance exponential weighted (DEW) b-value mapping approach as an improvement to previous methods of constant radius and nearest neighborhood.
To test the performance of their proposed method the authors introduce a score function:
score = (N / n²) · Σᵢⁿ abs(b_true − b_est)    (1)
where N is the total number of grid nodes, n is the number of non-empty nodes, b true and b est are the true and estimated b-values. This score function is applied on a semi-synthetic earthquake catalog to make inference on the parameters of the method. In this comment we argue that the proposed methodology cannot be applied to seismic analysis since it requires a priori knowledge of the spatial b-value distribution, which it aims to reveal.
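Equation (1) is garbled in this extraction; under one plausible reading, the summed absolute error over non-empty nodes, scaled by N/n² so that empty nodes are penalized, a minimal implementation looks like:

```python
def score(b_true, b_est):
    """One plausible reading of Eq. (1): summed absolute b-value error over
    the n non-empty nodes, scaled by N / n**2 so that leaving many of the
    N grid nodes empty worsens (raises) the score."""
    N = len(b_true)
    pairs = [(t, e) for t, e in zip(b_true, b_est) if e is not None]
    n = len(pairs)
    return (N / n ** 2) * sum(abs(t - e) for t, e in pairs)

# Two estimates with the same per-node error: the one covering fewer
# nodes gets the worse (higher) score.
full = score([1.0] * 4, [1.1, 0.9, 1.1, 0.9])
sparse = score([1.0] * 4, [1.1, 0.9, None, None])
```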
The score function given in Equation 1 seeks to minimize the absolute difference between the generating (true) and estimated b-values. Like any statistical parameter, the estimation error of the b-value decreases with increasing sample size. However, since the b-value is a measure of slope on a log-log plot, for any given sample size the estimation error is not only asymmetric but also its amplitude is dependent on the b-value itself.
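This dependence can be checked numerically. The following is our illustration, not the authors' code: using the Aki maximum-likelihood estimator (magnitudes above completeness are exponentially distributed under Gutenberg-Richter), the scatter of b estimates grows with b and shrinks only as 1/√n:

```python
import math
import random

def estimate_b(mags, mc):
    # Aki maximum-likelihood b-value estimator.
    return 1.0 / (math.log(10) * (sum(mags) / len(mags) - mc))

def b_error(b_true, n, mc=1.0, trials=1000, seed=0):
    """Standard deviation of b estimates over many synthetic catalogs of
    n events above the completeness magnitude mc."""
    rng = random.Random(seed)
    beta = b_true * math.log(10)   # G-R magnitudes are exponential in beta
    ests = [estimate_b([mc + rng.expovariate(beta) for _ in range(n)], mc)
            for _ in range(trials)]
    mean = sum(ests) / trials
    return (sum((e - mean) ** 2 for e in ests) / trials) ** 0.5

# For a fixed sample size of 150, the error at b = 1.3 is roughly twice
# the error at b = 0.7, in line with the ~b/sqrt(n) scaling.
err_small_b = b_error(0.7, 150)
err_large_b = b_error(1.3, 150)
```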
To make a pedagogical analogy: conducting b-value analysis by limiting the sample size (Tormann et al. use 150) is similar to measuring temperature with a peculiarly short thermometer. For small sample sizes such a thermometer would measure lower temperatures more precisely than higher ones, to the extent that it would be impossible to distinguish between values on the upper end. Only when the sample size, and hence the resolution, is increased does the thermometer become reliable across the whole measurement range. For illustration, in Figure 1a we show the confidence intervals for 6 b-values, and accordingly we divide our thermometer into 6 intervals. The length of each interval is scaled with 1/σ_N as a proxy for resolution, where σ_N is the standard deviation for N samples. The analogous thermometers obtained for N = 150, 300 and 1500 are shown in Figure 1b.
In their optimization procedure the authors set N_max = 150, while in the application to real seismicity they set N_max = ∞, effectively designating the limiting parameter as R_max = 7.5. This parameter flip does not change our following conclusion, however, since for any spatial distribution N_max and R_max are coupled via its fractal dimension [Hirabayashi et al., 1992]. Nevertheless, we repeat the synthetic test setting N_max = ∞ and varying R_max in the range of [1-100 km]. The score surfaces are presented in Figure 2b. We observe that the minima vary as a function of the synthetic input: structures with b = 1 (which is one of the most well-established empirical laws in seismology [Gutenberg and Richter, 1954] [Shi and Bolt, 1982; Frohlich and Davis, 1993; Kagan, 1999, 2002, 2010; Amorese et al., 2010]. We remind the reader that in this comment we are not concerned with the question of whether the b-value on a certain fault is uniform or not; rather, we argue that arbitrary parameter choices and improper optimization schemes can indeed lead to under sampling.
To illustrate the sensitivity of the published results to parameter choices we consider the observed Parkfield seismicity. We apply the DEW method to obtain two b-value maps: (a) using the same parameter values as Tormann et al. (2014), R_max = 7.5, λ = 0.7; (b) using the parameters that would have been obtained if the input synthetic b-value had been chosen as b = 1 rather than b = 0.5: R_max = 40 km, λ = 0.01 (Figure 3). It is important to note that Tormann et al. (2014) use these maps to calculate annual probabilities of an M6 or larger event. Figure 3 suggests that the emergence of high and low b-value anomalies is a mere artifact of under sampling. These artifacts lead to differences of up to two orders of magnitude in the recurrence times; thus it would be precarious to use such maps for the assessment of probabilistic seismic hazard on faults.
Since one cannot know in advance what b-value the real data features, and thus cannot choose parameters accordingly, we maintain that the approach presented by Tormann et al. (2014) cannot be used on real datasets, as its results depend on the assumed input b-values chosen to derive its parameters. Our conclusion also applies to the similar, previously used b-value mapping methods of constant radius and nearest neighborhood.
We remind the reader that spatial/temporal b-value mapping has been previously tackled with likelihood-based approaches that take into account model complexities [Imoto, 1987; Ogata and Katsura, 1993].
Intersecting the confidence interval curves of different b-values, one can derive that the crudest measurement in the range [0.6-1.5] would require a resolution of Δb = 0.3, which is achieved with at least ~170 samples. An analysis in this constrained setting can correspond only to a mere classification of b-values as low (0.70±0.10), medium (1.00±0.13) or high (1.30±0.17). Such a graphical inference is, however, an oversimplification, because determining whether two GR distributions are significantly different (for a given p-value) would require nested-hypothesis tests (see Supporting Document). Compared to the previous sampling techniques, the proposed DEW method features two additional parameters: the maximum number of events (N_max) and the coefficient of exponential magnitude weight decay with distance (λ). The remaining parameters, common with the previous approaches, are set to the following values: minimum number of events (N_min = 50), maximum search radius (R_max = 7.5) and minimum search radius to find the closest event (R_min = 2.5 km). The authors propose a synthetic Monte-Carlo approach to identify which parameter values are superior in retrieving the input structure. Their synthetic catalog features three different b-value regions (0.5, 1.3 and 1.8) embedded in a background of b = 1. The input b-value of the dominant region (both in terms of number of events and non-empty nodes) is chosen as 0.5. The authors find the best value for λ to be 0.7, while they set N_max to a constant value of 150. Considering the confidence intervals for different b-values (Figure 1a), such small sample sizes and large weight decay may be sufficient for small b-values; however, for larger b-values the same parameter values would be sub-optimal. In order to illustrate this we obtain the same dataset used by Tormann et al. and conduct the same numerical test, varying the input b-value of the dominant region from 0.5 to 1.25 (Figure 2). We vary the parameters N_max and λ in the ranges of [150-1950] and [0.01-1].
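A minimal sketch of the DEW sampling idea, under our reading of the parameters above (weights decaying as exp(−λd) inside R_max, applied in a weighted Aki estimator); the authors' actual implementation may differ:

```python
import math
import random

def dew_b(node, events, lam, r_max, mc, n_min=50):
    """Distance-exponential-weighted b-value at one grid node (sketch of the
    DEW idea; parameter roles as described in the text, details assumed).

    events: list of ((x, y), magnitude) pairs with magnitude >= mc.
    """
    pairs = []
    for (x, y), m in events:
        d = math.hypot(x - node[0], y - node[1])
        if d <= r_max:
            pairs.append((math.exp(-lam * d), m))
    if len(pairs) < n_min:
        return None                      # node left empty
    wsum = sum(w for w, _ in pairs)
    wmean = sum(w * m for w, m in pairs) / wsum
    return 1.0 / (math.log(10) * (wmean - mc))   # weighted Aki estimator

# Sanity check: events drawn at the node itself from a b = 1
# Gutenberg-Richter distribution should be recovered with b close to 1.
rng = random.Random(1)
events = [((0.0, 0.0), 1.0 + rng.expovariate(math.log(10)))
          for _ in range(20000)]
b_hat = dew_b((0.0, 0.0), events, lam=0.7, r_max=7.5, mc=1.0)
```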
For each parameter pair the score function was averaged over 500 realizations of the synthetic catalog. The score function surfaces for the 4 different input b-values are presented in Figure 2a.
) require much larger N_max (i.e., R_max) and lower λ values to be retrieved correctly. This indicates that choosing parameter values based on a synthetic input with low b-values will lead to under sampling of regions with higher b-values. Felzer (2006) observed that for a uniform b-value of b = 1 such under sampling leads to the emergence of artifacts with high or low b-values. Similar concerns regarding the under sampling of the Gutenberg-Richter distribution have been brought up numerous times.
FIGURES
Figure 1. (a) Confidence intervals of 5% and 95% for b-value estimations with increasing sample size. (b) Analogous thermometers corresponding to sample sizes N = 150, 300 and 1500. The length of each thermometer is representative of its absolute resolution. When present, confidence interval overlaps between gradations are shown by sawtooth waves with amplitudes scaled accordingly.
Figure 2. Variations of the minimum (red dots) of the score function for different input b-values as a function of (a) the weight decay λ and the maximum allowed sample size N_max, and (b) λ and the maximum search radius R_max.
Figure 3. Two b-value maps obtained from the same M ≥ 1.3 Parkfield seismicity of the last 30 years: using the same parameters as Tormann et al. (top panel: 0.50 < b < 1.86) and the parameters minimizing the score function for a synthetic input of b = 1 (center panel: 0.80 < b < 0.96). The observed seismicity is superimposed as gray circles scaled according to magnitude. Notice that in the top panel the extreme b-values are observed mainly in regions with low seismic density. The bottom panel shows the ratio of the expected recurrence times for a magnitude M6 or greater event.
Yavor Kamer 1,2
1 Swiss Seismological Service, ETH Zürich, Switzerland
2 Chair of Entrepreneurial Risks, Department of Management, Technology and Economics, ETH Zürich, Switzerland
To put the term "model complexities" into perspective, we note that for the 4174 complete events in Parkfield, Tormann et al. report ~1000 spatially varying b-values saturated in the range [0.50-1.50], whereas Imoto (1987), using penalized likelihood optimization on a comparably sized dataset (3035 complete events in New Zealand), obtains only 28 b-values in the range [0.94-1.08].
Acknowledgements

The MATLAB codes and the Parkfield earthquake catalog used to generate Figure 2 and Figure 3 can be downloaded from the following link: http://www.mathworks.co.uk/matlabcentral/fileexchange/46016. We encourage the reader to try different parameter sets and explore the large variety of b-value maps one can obtain from a single dataset in the absence of an objective criterion.
Amitrano, D. (2012), Variability in the power-law distributions of rupture events, Eur. Phys. J. Spec. Top., 205(1), 199-215, doi:10.1140/epjst/e2012-01571-9.
Amorese, D., J.-R. Grasso, and P. A. Rydelek (2010), On varying b-values with depth: results from computer-intensive tests for Southern California, Geophys. J. Int., 180(1), 347-360, doi:10.1111/j.1365-246X.2009.04414.x.
Felzer, K. R. (2006), Calculating the Gutenberg-Richter b value, AGU Fall Meet. Abstr., S42C-08.
Frohlich, C., and S. Davis (1993), Teleseismic b values; or, much ado about 1.0, J. Geophys. Res., 98, 631-644.
Gutenberg, B., and C. F. Richter (1954), Seismicity of the earth and associated phenomena, 2nd ed., Princeton University Press, Princeton, N.J.
Hirabayashi, T., K. Ito, and T. Yoshii (1992), Multifractal analysis of earthquakes, Pure Appl. Geophys., 138(4), 591-610.
Imoto, M. (1987), A Bayesian method for estimating earthquake magnitude distribution and changes in the distribution with time and space in New Zealand, New Zeal. J. Geol. Geophys., 30(2), 103-116, doi:10.1080/00288306.1987.10422177.
Kagan, Y. Y. (1999), Universality of the seismic moment-frequency relation, Pure Appl. Geophys., 155(2-4), 537-573, doi:10.1007/s000240050277.
Kagan, Y. Y. (2002), Seismic moment distribution revisited: I. Statistical results, Geophys. J. Int., 148(3), 520-541, doi:10.1046/j.1365-246x.2002.01594.x.
Kagan, Y. Y. (2010), Earthquake size distribution: Power-law with exponent beta = 1/2?, Tectonophysics, 490(1-2), 103-114, doi:10.1016/j.tecto.2010.04.034.
Ogata, Y., and K. Katsura (1993), Analysis of temporal and spatial heterogeneity of magnitude frequency distribution inferred from earthquake catalogues, Geophys. J. Int., 113(3), 727-738, doi:10.1111/j.1365-246X.1993.tb04663.x.
Shi, Y., and B. Bolt (1982), The standard error of the magnitude-frequency b value, Bull. Seismol. Soc. Am., 72(5), 1677-1687.
Tormann, T., S. Wiemer, and A. Mignan (2014), Systematic survey of high-resolution b-value imaging along Californian faults: inference on asperities, J. Geophys. Res. Solid Earth, doi:10.1002/2013JB010867.
|
[] |
[
"Electryo, In-person Voting with Transparent Voter Verifiability and Eligibility Verifiability",
"Electryo, In-person Voting with Transparent Voter Verifiability and Eligibility Verifiability"
] |
[
"Peter B Rønne [email protected] \nUniversity of Luxembourg Esch-sur-Alzette\nLuxembourg\n",
"] \nUniversity of Luxembourg Esch-sur-Alzette\nLuxembourg\n",
"Peter Y A Ryan [email protected] \nUniversity of Luxembourg Esch-sur-Alzette\nLuxembourg\n",
"Marie-Laure Zollinger [email protected] \nUniversity of Luxembourg Esch-sur-Alzette\nLuxembourg\n"
] |
[
"University of Luxembourg Esch-sur-Alzette\nLuxembourg",
"University of Luxembourg Esch-sur-Alzette\nLuxembourg",
"University of Luxembourg Esch-sur-Alzette\nLuxembourg",
"University of Luxembourg Esch-sur-Alzette\nLuxembourg"
] |
[] |
Selene is an e-voting protocol that allows voters to directly check their individual vote, in cleartext, in the final tally via a tracker system, while providing good coercion mitigation. This is in contrast to conventional, end-to-end verifiable schemes in which the voter verifies the presence of an encryption of her vote on the bulletin board. The Selene mechanism can be applied to many e-voting schemes, but here we present an application to the polling station context, resulting in a voter-verifiable electronic tally with a paper audit trail. The system uses a smartcard-based public key system to provide the individual verification and universal eligibility verifiability. The paper record contains an encrypted link to the voter's identity, requiring stronger assumptions on ballot privacy than normal paper voting, but with the benefit of providing good auditability and dispute resolution as well as supporting (comparison) risk limiting audits.
| null |
[
"https://arxiv.org/pdf/2105.14783v1.pdf"
] | 86,583,247 |
2105.14783
|
01130967ae04f9717bb7502d8294246f352c7d87
|
Electryo, In-person Voting with Transparent Voter Verifiability and Eligibility Verifiability
Peter B Rønne [email protected]
University of Luxembourg Esch-sur-Alzette
Luxembourg
]
University of Luxembourg Esch-sur-Alzette
Luxembourg
Peter Y A Ryan [email protected]
University of Luxembourg Esch-sur-Alzette
Luxembourg
Marie-Laure Zollinger [email protected]
University of Luxembourg Esch-sur-Alzette
Luxembourg
Electryo, In-person Voting with Transparent Voter Verifiability and Eligibility Verifiability
Selene is an e-voting protocol that allows voters to directly check their individual vote, in cleartext, in the final tally via a tracker system, while providing good coercion mitigation. This is in contrast to conventional, end-to-end verifiable schemes in which the voter verifies the presence of an encryption of her vote on the bulletin board. The Selene mechanism can be applied to many e-voting schemes, but here we present an application to the polling station context, resulting in a voter-verifiable electronic tally with a paper audit trail. The system uses a smartcard-based public key system to provide the individual verification and universal eligibility verifiability. The paper record contains an encrypted link to the voter's identity, requiring stronger assumptions on ballot privacy than normal paper voting, but with the benefit of providing good auditability and dispute resolution as well as supporting (comparison) risk limiting audits.
Introduction
In this paper, we propose combining the highly transparent counted-as-intended verification mechanism of the e-voting scheme Selene [26] with paper ballot, in-person voting. The aim is to keep the vote casting experience close to paper ballot voting with optical scanning, while enabling the intuitive voter-verification of the Selene scheme. The resulting scheme provides improved dispute resolution and supports risk limiting audits.
For most end-to-end verifiable schemes the voter verifies the presence of an encryption of her vote in the input to the tally on the bulletin board. In contrast, Selene lets a voter check that her vote appears correctly, in the clear, in the final tally via a tracking number system. This provides a highly transparent and intuitive verification, but, if naively implemented, could lead to vote-selling and coercion.
The main idea of Selene is to mitigate the coercion threats by notifying the voters of their tracking number only after the full list of tracking numbers and votes has been published. Coerced voters can then simply choose a tracker pointing to the required vote and claim it as theirs. The notification provides the voter high assurance that it is the correct, i.e. unique, tracker while being deniable in the event of coercion.
In a paper ballot election the voters enjoy ballot secrecy thanks to the isolation of the voting booth at the polling station -giving good resistance against coercion and vote-buying attempts. Normally, but with UK as a prominent counter-example, the ballots are also anonymous and unmarked, extending the ballot secrecy to the tally phase. However, the integrity of the election relies on trust assumptions for the talliers, and many real attacks and errors are known, as shown in [16]. In Germany, for example, the tally process is public [3] and at least gives the voters the possibility to oversee the tally ceremony, however considerable trust is still required in the chain of custody of ballots.
To improve on this situation we propose here introducing the Selene mechanism to allow voters to verify that their vote is counted-as-intended. This requires the introduction of a carefully protected link between the ballot and the voter. The vote casting experience of the system is close to the optical scan paper ballot systems with the difference that the paper ballots will have a (QR-)code printed onto them which contains an encryption of the voter's identity. 1 We assume that voters have smartcards to store and prove their ID. Before getting into the details we recall the key elements of Selene.
The Essence of Selene
Selene revisits the old idea of enabling verification by posting the votes in the clear on the BB along with private tracking numbers. The new twist is that voters are only notified of their tracker some time after the vote/tracker pairs have been publicly posted, giving a coerced voter the opportunity to choose an alternative tracker that will placate the coercer. Notification of the trackers is carefully designed to provide assurance that it is the correctly assigned tracker, i.e. unique to the voter, while being deniable. The key goals of Selene are:
-Ensure that each voter is assigned a unique tracker number.
-Notify the voter of her tracker after the vote/tracker pairs have been published, in a manner that provides high assurance and yet is deniable in the event of coercion. This is achieved, in essence, by publishing a list of trackers, n_i, verifiably encrypting and shuffling these, and assigning them to the voters under trapdoor commitments according to the secret permutation π resulting from the shuffles. The commitment for the i-th voter takes the form:
C_i := pk_i^{r_i} · g^{n_{π(i)}},

where pk_i is the voter's public trapdoor key. C_i is a Pedersen commitment to the tracker, but it can also be thought of as the second term (β) of an exponential ElGamal encryption of the tracker under the i-th voter's trapdoor public key pk_i. The corresponding first term (α = g^{r_i}) is not published, but is communicated to the voter over a private channel at notification time. On receipt of the α-term, the voter can combine this with the β-term and decrypt using her trapdoor key.
If she is coerced, she can choose an alternative tracker that will satisfy the coercer and compute, using her trapdoor key, an alternative α. Without the trapdoor, it is intractable to compute an α that will decrypt to a given tracker. This observation simultaneously underpins the assurance that the tracker is correct, and removes the need to authenticate the α as communicated to the voter.
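The commitment and its deniable opening can be sketched in a few lines of Python. This is a toy group with small parameters, purely illustrative; a real deployment uses a large prime-order group, verifiable shuffles and distributed tellers:

```python
# Toy sketch of Selene's tracker-commitment mechanism (illustrative only).
p, q, g = 2039, 1019, 4          # p = 2q + 1; g generates the order-q subgroup

x = 123                          # voter's secret trapdoor key
pk = pow(g, x, p)                # published trapdoor public key

trackers = [101, 202, 303]       # public list of tracker numbers
n = trackers[1]                  # tracker secretly assigned to this voter
r = 777                          # teller randomness
C = (pow(pk, r, p) * pow(g, n, p)) % p   # published commitment C = pk^r * g^n
alpha = pow(g, r, p)             # alpha-term, sent privately at notification

def open_tracker(alpha, C, x):
    """Voter combines (alpha, C) and recovers g^n with her trapdoor key."""
    g_n = (C * pow(alpha, -x, p)) % p    # C / alpha^x = g^n
    return next(t for t in trackers if pow(g, t, p) == g_n)

assert open_tracker(alpha, C, x) == n

# Under coercion: fake an alpha' that opens C to a different tracker n'.
# Without the trapdoor x this computation is intractable.
n_fake = trackers[0]
x_inv = pow(x, -1, q)                    # exponent arithmetic is mod q
alpha_fake = pow(C * pow(g, -n_fake, p) % p, x_inv, p)  # (C / g^n')^(1/x)
assert open_tracker(alpha_fake, C, x) == n_fake
```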
The Essence of Electryo
The key innovation of Electryo is to introduce a protected link between the paper ballot and the voter ID, in such a way as to guarantee the integrity and the secrecy of the link. This link is used to associate the encrypted vote, scanned from the paper ballot, with the voter ID on the BB, thus enabling the Selene mechanism to kick in. An additional feature is that at the time of scanning the ballot, a fresh, random receipt code is generated and printed for the voter to retain. This is required later to access the tracker number, providing an extra layer of privacy, as explained in detail later. Now that voters are able to verify their vote in the clear, we can omit the usual checks required in cryptographic, end-to-end verifiable schemes: Benaloh challenges and correct posting to the BB of the encrypted vote. A corollary of this last observation is that the voter does not need to retain a copy of the encrypted vote, just the receipt code, which helps ensure receipt-freeness.
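The encrypted voter-ID link on the ballot can likewise be sketched with textbook exponential ElGamal. Again a toy group and a single key holder; in the actual scheme the election key would be threshold-shared among tellers:

```python
import random

# Toy exponential-ElGamal encryption of the voter ID printed on the ballot
# as a QR code (illustrative only: tiny group, single trusted key holder).
p, q, g = 2039, 1019, 4          # p = 2q + 1; g generates the order-q subgroup
sk = 357                         # election secret key (threshold-shared in practice)
pk = pow(g, sk, p)

def encrypt_id(voter_id, rng=random):
    r = rng.randrange(1, q)
    return pow(g, r, p), (pow(g, voter_id, p) * pow(pk, r, p)) % p

def decrypt_id(ct, id_space):
    a, b = ct
    g_id = (b * pow(a, -sk, p)) % p      # strip the pk^r mask
    return next(v for v in id_space if pow(g, v, p) == g_id)

ballot_code = encrypt_id(42)     # what the ballot's QR code would carry
```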
The voting system provides individual verifiability via the Selene check, allows universal verifiability of the setup phase and of the tally, as well as eligibility verifiability via the digital signatures. The paper record provides a basis for dispute resolution, while risk-limiting audits strengthen the link between the paper and digital records, all of this while preserving a good measure of coercion mitigation.
The outline of the paper is as follows: Below we give a brief overview of related work. In section 2 we list the parties involved, as well as the primitives used. Section 3 will give details of the voting ceremony from the voter's point of view. In section 4, we will give further details of the scheme including the cryptography. Section 5 gives a brief analysis of the scheme, describing some potential attacks and counter-measures.
Related Work

Several in-person voting protocols mix paper ballots or a paper audit trail with a public digital record of the votes:
Prêt-à-Voter [25] is a paper-based voting scheme with voter-verifiability, a version of which has been trialled in a state election [13]. In contrast to the present scheme, it does not provide transparent verification or directly support RLAs; see, however, [20] for a version with a human-readable paper audit trail.
Wombat [6] combines paper-ballot voting with cryptographic tabulation and end-to-end verifiability. A voting machine delivers a paper ballot containing a plaintext vote as well as the encrypted version as a QR-code. The voter can check the correctness of the plaintext vote before putting it in a ballot box. The encrypted version is scanned and posted to a BB and the paper copy is kept by the voter as a receipt.
Another polling station e-voting scheme is STAR-Vote [5], which combines electronic voting machines (DREs) with a paper trail to achieve end-to-end verifiability and allow for efficient risk limiting audits (RLAs). The correctness of the encryption of the vote can be tested by the voter via a sort of Benaloh challenge, where discarded ballots are decrypted in public. We will give a more detailed comparison in the long version of this paper (to appear). Note that it was not a design goal of STAR-Vote to have eligibility verifiability.
None of these schemes provides transparent verification of the plaintext vote in the final tally, as we do here.
We also note that there are other schemes based on trackers, specifically sElect [18] and the boardroom scheme analysed in [4], but they are not paper-based and do not provide the level of coercion-resistance and receipt-freeness that we aim for here.
Participants and Primitives
The main participants of the protocols are:
- The voters V_i. We assume that they are provided with electronic ID cards, e.g. as part of a national electronic ID infrastructure like in Estonia [1]. The card stores a secret signing key together with the ID, which is associated with the corresponding public verification key vk_i. We assume that the card can perform an encryption of the ID with the election key and sign input using the secret signing key. Further, the voter has a public and secret key pair (pk_i, sk_i) for the Selene mechanism, where the latter is stored in a Vote Supporting Device (VSD), e.g. a smartphone or a computer, and perhaps also on the smartcard.
- The Election Authority EA manages the election and protocol setup.
- The Tally Tellers TT create the public election key PK_T and threshold-share the secret key. They also facilitate a Mixnet M, which is used to ensure privacy and performs parallel verifiable re-encryption mixes, see e.g. [27].
- A public web Bulletin Board BB is used for verifiable communication, and is assumed to be append-only and to have a consistent public view.
- The Tracker Retrieval Authority (TRA) is responsible for relaying communication between the voters and TT. Specifically, TRA will send the so-called α-term to the voter, which can be turned into a tracker for her vote using the secret key sk_i.
- Registration Clerks and Talliers assisting at the polling station.
- Printers with Card Readers. These print a ballot code, bcode, onto the paper ballots in the form of a QR-code containing the re-encrypted ID and digital signature of the voter.
- Optical Scanners with a Receipt Printer. The scanner reads out the voter's choice on the paper ballot and the bcode, and sends an encryption of the vote to BB together with a re-encryption of the ballot code and an encrypted receipt code. It delivers a ballot proof to the voter, which contains the receipt code in plain text together with a digital signature for accountability.
Some primitives used are:

- Encryption. We assume an IND-CPA secure homomorphic encryption scheme allowing re-encryption and verifiable mixing. To be explicit we choose ElGamal encryption, which was used in Selene, and the homomorphic properties are needed for the Selene mechanism. We denote encryption under the key PK by {·}_PK; re-encryption rerandomizes such a ciphertext with fresh randomness. For some parts the homomorphic properties are not necessary and we use an RCCA-secure scheme instead, i.e. the only malleability of the ciphertext is the ability to re-encrypt, which is necessary for privacy and mixnets. To be explicit we can use the OAEP 3-round transformation [23,22] of ElGamal. A single ciphertext then basically consists of two ElGamal ciphertexts and is RCCA secure under the Gap Diffie-Hellman assumption. We denote this encryption {·}_{OAEP,PK}. The parallel mixing is easily adapted to this encryption scheme since it basically consists of two ElGamal ciphertexts.
- Zero-Knowledge Proofs. We use zero-knowledge proofs and proofs of knowledge, as well as signatures, to ensure universal verifiability. For non-malleability the strong form [11] of the Fiat-Shamir transform [15] is used to obtain non-interactive proofs, and we further include election identifiers in the hash to avoid malleability across elections.
- Plaintext equivalence tests (PETs). A PET [17] produces a publicly verifiable test of whether two ciphertexts are encryptions of the same plaintext message, without revealing the plaintexts to anybody. The test requires a threshold set of the Tellers TT.
- QR-codes. A QR-code is a matrix barcode containing information for reliable and easy scanning. The encryption schemes used here can be based on elliptic curves requiring on the order of 512-bit strings. A ciphertext could then e.g. be stored in a QR-code version 6 (up to 1088 bits), or a version 10 for two OAEP ciphertexts.
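As a concrete illustration of the homomorphic, re-encryptable primitive, here is a minimal exponential-ElGamal sketch with toy parameters (the group, keys, and helper names `enc`/`reenc`/`dec_exp` are our own illustrative choices, not the paper's code):

```python
import secrets

# Toy exponential ElGamal over a small Schnorr group (illustrative only).
p, q, g = 23, 11, 4
sk_T = 6                       # Tellers' (here: single, non-threshold) key
PK_T = pow(g, sk_T, p)

def enc(m):
    """Encrypt g^m under PK_T: (g^r, PK_T^r * g^m)."""
    r = secrets.randbelow(q)
    return (pow(g, r, p), (pow(PK_T, r, p) * pow(g, m, p)) % p)

def reenc(ct):
    """Rerandomize a ciphertext without knowing the plaintext."""
    s = secrets.randbelow(q)
    a, b = ct
    return ((a * pow(g, s, p)) % p, (b * pow(PK_T, s, p)) % p)

def dec_exp(ct):
    """Decrypt to g^m and recover small m by exhaustive search."""
    a, b = ct
    gm = (b * pow(pow(a, sk_T, p), -1, p)) % p
    return next(m for m in range(q) if pow(g, m, p) == gm)

ct = enc(3)
assert dec_exp(reenc(ct)) == 3   # re-encryption preserves the plaintext
```

The re-encryption property is what the mixes rely on: a rerandomized ciphertext is unlinkable to its original without the secret key.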
Voting Experience
In this section we describe the protocol from the voter's perspective. The vote-casting ceremony is close to a paper-ballot election with optical scanning of the ballots. The entire voting experience is described in figure 1, and more cryptographic details are given in the next section.
Registration
We assume that all voters are in possession of ID smart-cards, e.g. as part of a national electronic ID infrastructure. The ID card can create signatures and will be used to authenticate voters. The registration of the voters could probably happen automatically if based on a national PKI, or alternatively be handled by a company, etc. Besides the ID card, the Selene mechanism assumes that each voter V_i holds a secret key sk_i. This may require a registration step by the voters, the details of which we omit, but note that these keys could potentially be used for multiple elections. The tracker authority TRA also needs to know where to contact the voter for the tracker retrieval phase, e.g. an email address. Such confirmed contact data is normally available in an electronic ID infrastructure. For improved usability, however, we assume that the voter is using an app (e.g. authenticated via the SIM card as in Estonia [1]) that accesses sk_i, e.g. via the smartcard; or, if properly authenticated, the app could itself be the creator of the Selene keys.
Voting phase
On the voting day, the voter presents her ID card to a poll worker to confirm ID and eligibility; as in standard elections, the voters showing up can be recorded in a paper log. The printer is equipped with a smart card reader and interacts with the card to retrieve an encryption, by the smart-card, of the ID and a signature. It prints an unfilled ballot with a QR-code which encodes a re-encryption of the voter ID and the digital signature, confirming the voter was present.
Then the voter enters the booth to fill in the ballot, and finally she heads to a ballot box with a scanner/printer. The latter delivers a receipt code RC_i on paper, without releasing it, before scanning the ballot. The scanner re-encrypts the ballot code, encrypts the vote and releases the receipt code to the voter. The data is stored and sent to BB after voting ends, and the paper ballot is retained in the ballot box.
Tracker retrieval
After the tally phase, cast votes and corresponding tracking numbers will appear on the BB. After a pause, allowing coerced voters to access this information, the voters will receive their α-term (see introduction) via their support device at randomised times, as in Selene. The device will calculate the voter's unique tracker using the received α-term, the public β-term and the trapdoor key sk.
However, in contrast to Selene, the α-term will only be sent to the voter if she, at some point after the election, enters a correct receipt code RC_i in her device. This adds a layer of privacy, as explained later in section 5.3.
Voting in case of coercion
Coerced voters can take steps to mitigate the coercion.
After the tally board is created with votes and corresponding trackers, the voter can choose a tracker pointing to a candidate of the coercer's choice. Further, the voter can calculate a fake α-term using sk that opens to this tracker. The voter can now show the coercer this tracker and α-term, if required.
Further, for improved coercion-resistance, the coerced voter can also contact TRA with authentication and request to not receive the real α-term, but only the fake. Now, even in the case where a coercer or vote buyer controls the interface to receive the α-term, he does not receive any convincing evidence of the cast vote. As mentioned above the essential assumption here is that the voter has access to sk, e.g. via multiple copies or the storage on the ID card.
In a longer version of this paper (to appear), we will present an alternative version of Selene where the coerced voter even before or during voting can contact TRA and request a faked α-term.
Comments on usability
The voting experience is close to a standard optical-scan scheme. As with an optical-scan system or STAR-Vote [5], the scanner and ballot box can be combined, so that the ballot is read before being fed automatically into the ballot box. The only aspect that might be a bit troubling for some voters is the printing of the QR code on the ballot form. This does not affect usability, as it is automatic as far as the voter is concerned, but might be worrying from a privacy perspective.
We avoid the verification steps such as Benaloh challenges [7] of the encryptions. Instead, we have the extra Selene verification phase with the receipt code and tracker check, which we believe is more understandable for voters.
For disabled persons, multi-lingual communities or generally complicated ballots, voting machines could also be used to fill out the ballots. Here the QR code created by the printer is scanned by the voting machine to produce the filled out ballot, which is kept as a paper record. A scanning step is not necessary in this case.
Protocol Description
We now describe the protocol in more technical detail including cryptography.
Pre-Election Setup
Let us recall Selene's set-up phase that we will also follow here [26].
The Tally Tellers set up a secure group and create the threshold election key PK_T for ElGamal encryption (or another homomorphic encryption scheme). We assume that all voters have PKs in the chosen group. Let pk_i = g^{x_i} be the public key of voter V_i, and x_i = sk_i their secret key. The Election Authority publishes on BB the set of tracking numbers n_i. These could just be 1, . . . , n with n the number of eligible voters. Using a verifiable re-encryption mix, each voter is associated with a unique secret encrypted tracker on BB: (ID_i, {g^{n_j}}_{PK_T}), where j := π(i), and π is the secret permutation resulting from the mixes. As described in detail in [26], the Tally Tellers TT_1, . . . , TT_t produce a trapdoor commitment C_i = pk_i^{r_i} · g^{n_j}, where r_i = Σ_{k=1}^{t} r_{i,k}, along with an α-term α_i = g^{r_i} that will be kept secret under encryption. Only TT_k knows g^{r_{i,k}}. Before vote casting, BB displays
(ID_i, vk_i, pk_i, {g^{n_j}}_{PK_T}, C_i)
Here vk_i is the verification key for voter V_i, and the corresponding secret key is stored along with ID_i on the voter's smartcard. The smartcard can produce signatures that can be verified via vk_i, and we assume the signature scheme to be existentially unforgeable. Further, the smartcard can produce encryptions that can be used in the mixnet construction and decrypted by TT. We here use ElGamal encryption {·}_{PK_T}, the OAEP version thereof discussed above, and e.g. Schnorr signatures; other choices are possible, and since the smartcards are used across elections it might be preferable to use a separate key for this part.
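The smartcard signatures could, for instance, be Schnorr signatures; the following toy version over the same kind of group (our own illustrative parameters and helper names, not the card's firmware) shows the sign/verify relation later used for eligibility checks:

```python
import hashlib
import secrets

# Toy Schnorr signature (illustrative small group; real cards use
# standardized curves and hash-to-scalar functions).
p, q, g = 23, 11, 4
sk = 7                         # card's signing key (hypothetical)
vk = pow(g, sk, p)             # public verification key vk_i

def H(*parts):
    """Hash to a scalar mod q (Fiat-Shamir challenge)."""
    data = "|".join(map(str, parts)).encode()
    return int(hashlib.sha256(data).hexdigest(), 16) % q

def sign(msg):
    k = 1 + secrets.randbelow(q - 1)   # fresh nonce
    R = pow(g, k, p)
    c = H(R, vk, msg)
    return (R, (k + c * sk) % q)

def verify(msg, sig):
    R, s = sig
    return pow(g, s, p) == (R * pow(vk, H(R, vk, msg), p)) % p

# Election identifier included in the signed message, as in the text.
sig = sign("ID_i|electionID")
assert verify("ID_i|electionID", sig)
```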
Voting
Voter V_i goes to the polling station and is identified and registered by a clerk. If her identity is confirmed and she has not voted yet, the clerk proceeds to the printing. The ID card is read and delivers an encryption of the voter's ID to the printer. The latter re-encrypts it (to avoid privacy attacks from a colluding ID card and scanner) and delivers a QR-code representing the ballot code bcode = ({ID_i}_{OAEP,PK_T}, {sign_i}_{OAEP,PK_T}). The signature is over the ID and the election ID, but can also include e.g. the date and the printer ID. The clerk should be screened from seeing the printed ballot, but can check that the correct ID card is read in the card reader.
After retrieving her ballot, the voter enters a booth and fills out the ballot with her vote vote i by hand.
The voter now proceeds to a ballot box that contains a scanner. The scanner first prints a receipt code, that is not yet detachable from the ballot box. This ensures that the receipt code does not depend on the vote and thus cannot be used as a subliminal channel. The receipt code is a random short pin, e.g. five digits, with check digits. The voter then puts her ballot in the box, the scanner reads it and releases the receipt code. It processes the data and re-encrypts the ballot code elements, encrypts the vote and receipt code, and publishes on BB (after election, if offline):
({ID_i}_{OAEP,PK_T}, {sign_i}_{OAEP,PK_T}, {bcode}_{PK_T}, {vote_i}_{PK_T}, {RC_i}_{PK_T}, Π_i)
Here Π_i is a zero-knowledge proof of plaintext knowledge for the vote and receipt code and of the correct message space; for less malleability we suggest including an AND-proof, proving that the first two encryptions are re-encryptions of the bcode in the third ciphertext. We include the election identifier in the hash of the Fiat-Shamir transform. The proofs will prevent vote copy attacks, also across elections. The reason to re-encrypt the ciphertexts in the ballot code is to prevent coercion attacks via taking a picture of the filled-in ballot as proof of the cast vote. In this case, a coerced voter can fill out a ballot as required by the coercer, photograph it, go back to the officials for a new ballot, and hand in the (photographed) one, which is destroyed. They then cast their intended vote using the new ballot form. The re-encryption means that the paper ballot won't be linkable to the public electronic record, which is also important in the RLAs. Finally, {bcode}_{PK_T} is an encryption of the ballot code, written here in shorthand, but it includes several ElGamal ciphertexts. If needed, these can be decrypted to allow cross-checking with the corresponding paper record.
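The "short pin with check digits" format of the receipt code mentioned above is not fixed by the scheme; one hypothetical instantiation is five random digits plus a Luhn check digit, so the voter's app can catch typos locally before any PET is run:

```python
import secrets

def luhn_digit(digits):
    """Standard Luhn check digit over a list of payload digits."""
    total = 0
    for i, d in enumerate(reversed(digits)):
        d = d * 2 if i % 2 == 0 else d
        total += d - 9 if d > 9 else d
    return (10 - total % 10) % 10

def make_receipt_code():
    """Five random digits followed by their check digit (hypothetical format)."""
    body = [secrets.randbelow(10) for _ in range(5)]
    return "".join(map(str, body + [luhn_digit(body)]))

def valid(code):
    ds = [int(c) for c in code]
    return luhn_digit(ds[:-1]) == ds[-1]

rc = make_receipt_code()
assert valid(rc)
assert not valid(rc[:-1] + str((int(rc[-1]) + 1) % 10))  # typo is caught
```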
Mix and decryption
These published tuples are now sent through a parallel mixnet (e.g. Verificatum [27]) on the BB, after checking the proofs Π_i. After decryption of the first term we get back the ID and signature, i.e. we get mixed tuples of
(ID_i, sign_i, {bcode}_{PK_T}, {vote_i}_{PK_T}, {RC_i}_{PK_T})
Now the signature can be checked for eligibility verification and with the previous data on the BB we construct (suppressing re-encryption for clarity)
(ID_i, {g^{n_π(i)}}_{PK_T}, C_i, {vote_i}_{PK_T})
As in Selene, the second and last terms, the encrypted tracker and vote, are put through a verifiable parallel mix, after which the Tellers perform a verifiable decryption to obtain the final tally board containing tracker/vote pairs:
(n_π(i), vote_i)
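The parallel mix of (tracker, vote) pairs can be sketched as a secret permutation plus row-wise re-encryption. This toy version omits the correctness proofs and threshold decryption of a real verifiable mixnet, and all parameters and names are illustrative:

```python
import random
import secrets

# Toy parallel re-encryption mix over exponential ElGamal (illustrative).
p, q, g = 23, 11, 4
sk_T = 6
PK_T = pow(g, sk_T, p)

def enc(m):
    r = secrets.randbelow(q)
    return (pow(g, r, p), (pow(PK_T, r, p) * pow(g, m, p)) % p)

def reenc(ct):
    s = secrets.randbelow(q)
    return ((ct[0] * pow(g, s, p)) % p, (ct[1] * pow(PK_T, s, p)) % p)

def dec(ct):
    gm = (ct[1] * pow(pow(ct[0], sk_T, p), -1, p)) % p
    return next(m for m in range(q) if pow(g, m, p) == gm)

rows = [(enc(1), enc(5)), (enc(2), enc(7))]      # (tracker, vote) pairs
perm = list(range(len(rows)))
random.shuffle(perm)                              # secret permutation
mixed = [tuple(reenc(ct) for ct in rows[i]) for i in perm]

# Rows stay internally linked, but are unlinkable to their input positions.
assert sorted(tuple(dec(c) for c in row) for row in mixed) == [(1, 5), (2, 7)]
```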
Tracker notification
Receipt verification

Before she can check her vote, the voter must enter the receipt code RC_i on her device (after logging in). The app will encrypt the receipt code, and a TT will perform a PET between this encryption and the one displayed on the bulletin board. This verification is also done to ensure that the paper ballot and the corresponding electronic record relate to the same voter, i.e. to prevent an attack in which the printer puts the wrong ID on the ballot; see section 5 for details.
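The receipt-code check boils down to a PET between two ElGamal ciphertexts: blind the quotient ciphertext with a secret exponent and decrypt; the result is 1 exactly when the plaintexts match. A toy, single-Teller sketch (illustrative parameters and names; a real PET is run by a threshold set of Tellers with proofs):

```python
import secrets

p, q, g = 23, 11, 4
sk_T = 6
PK_T = pow(g, sk_T, p)

def enc(m):
    r = secrets.randbelow(q)
    return (pow(g, r, p), (pow(PK_T, r, p) * pow(g, m, p)) % p)

def pet(ct1, ct2):
    """True iff ct1 and ct2 encrypt the same plaintext; reveals nothing else."""
    a = (ct1[0] * pow(ct2[0], -1, p)) % p   # quotient ciphertext encrypts
    b = (ct1[1] * pow(ct2[1], -1, p)) % p   # the ratio of the plaintexts
    z = 1 + secrets.randbelow(q - 1)        # secret blinding exponent
    a, b = pow(a, z, p), pow(b, z, p)       # blinds any non-unit ratio
    return (b * pow(pow(a, sk_T, p), -1, p)) % p == 1

assert pet(enc(7), enc(7))        # matching receipt codes
assert not pet(enc(7), enc(4))    # mismatch: decrypts to a random group element
```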
Tracker retrieval
The public commitment C_i and the corresponding α-term can be combined to form an encryption of the tracker under the voter's public key:

(α_i, C_i) = (g^{r_i}, pk_i^{r_i} · g^{n_π(i)})

If the voter has entered the correct receipt code, the α_i-term will be sent to the voter, and she can then compute the decryption using her secret key and retrieve g^{n_π(i)}, and hence her unique tracker n_π(i). The Tracker Retrieval Authority will get the α_i shares from each Tally Teller (authenticated for accountability), multiply these together to obtain α_i, and send this unauthenticated to the voter.
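The share reconstruction performed by the TRA is just a product of the Tellers' g^{r_{i,k}} contributions; a toy sketch with hypothetical share values:

```python
from functools import reduce

# Each Teller k holds r_{i,k}; the TRA multiplies the published shares
# g^{r_{i,k}} to reconstruct alpha_i = g^{sum_k r_{i,k}}. Toy parameters.
p, q, g = 23, 11, 4
shares = [3, 8, 5]                               # hypothetical r_{i,k} values
alpha_shares = [pow(g, r_k, p) for r_k in shares]
alpha = reduce(lambda x, y: (x * y) % p, alpha_shares, 1)

assert alpha == pow(g, sum(shares) % q, p)       # equals g^{r_i}
```

No single Teller knows r_i, so no single Teller can open the commitment on its own.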
As described in Selene [26], it is computationally hard, without knowing sk_i, to calculate an alternative α-term that opens to a valid tracker. Thus the α-terms can be transmitted unauthenticated to the voter. On the other hand, the voter can efficiently calculate such a fake α-term for any tracker (see [26]), and can thus show this to a coercer in case of coercion.
Risk Limiting Audits
A comparison Risk Limiting Audit (RLA) [19] is a method to confirm (or refute) the outcome of an election to any required confidence, by random sampling of the paper ballots. The digital and paper records of the vote are compared. Typically, for reasonably large margins, a small sample will suffice to achieve a good level of confidence, e.g. 95%. This technique requires a link between the digital and paper copies for every ballot.
The RLA testing can be used to monitor the behaviour of the scanners. The audit should be performed in both directions: first, start from tuples on BB, decrypt the bcode, find the corresponding paper ballot and check consistency. In the other direction, we can start from a paper ballot and find the corresponding encrypted ballot via PET tests, or more efficiently via an obfuscation of part of the ballot code, by lifting it to a secret power homomorphically and then decrypting.
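As a rough intuition for why a small sample suffices in a comparison audit, consider the probability that a uniform sample hits at least one mismatched record. This is a simplified hypergeometric detection model, not a full RLA risk calculation:

```python
from math import comb

def detect_prob(N, k, n):
    """P(a uniform sample of n ballots hits >= 1 of k mismatches among N)."""
    return 1 - comb(N - k, n) / comb(N, n)

# e.g. 10,000 ballots with 100 mismatched records: a sample of 300
# already detects a discrepancy with probability around 95%.
assert detect_prob(10_000, 100, 300) > 0.94
```

Real RLA procedures (e.g. [19]) additionally account for the reported margin and escalate the sample until the risk limit is met.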
Preliminary Security Analysis
In this section, we give an overview of the security properties and their corresponding assumptions, and a brief, informal analysis of potential attacks. A more rigorous, formal analysis will be the topic of future work.
As is standard with E2E-V schemes, we assume a trustworthy BB giving a consistent view for universal verifiability. This might be implemented as in, for example, [14], or perhaps using some form of Distributed Ledger Technology.
Verifiability
The current scheme has a strong emphasis on verifiability and integrity, and like STAR-Vote it has triple assurance, producing three lines of evidence for the result: first, via the tally board for the electronic ballots obtained via secure mixnets; second, via the Selene mechanism that lets voters check their cast vote in the clear in the tally; third, via the paper record, which enables comparison RLA checks between the paper and electronic records of the encrypted votes. Furthermore, the plaintext votes in the paper ballots can be hand counted if required. We also have universal eligibility verifiability via the ID system, and the ID-marked paper ballots provide a solid basis for dispute resolution.
The system remains secure even under large-scale collusion: even if the BB is malicious and can present different views to different voters, an honest RLA on paper ballots can detect a manipulated result. On the other hand, if all parties are corrupted, including auditors, but the BB remains trustworthy and the Selene secret trapdoor keys are not compromised (e.g. stored on malware-free devices), then voters who check their vote can be sure it is counted correctly (under a computational assumption, see [26]).
Universal Verifiability

Universal verifiability allows voters and even third parties to verify the correct tallying of the votes on the BB. Anyone, including an interested voter, can check the proofs of the verifiable mixing and decryption of the encrypted votes and trackers. Further, the cryptographic operations on the BB during the setup phase of the Selene mechanism can be verified by any observer. This guarantees that each voter is assigned, and later notified of, a unique tracker (assuming that the voter's Selene (trapdoor) keys are kept secret). As is true in general for e-voting schemes, the checks done by an individual voter are not enough to assure her that the overall outcome is correct, because she needs to be confident that all the other votes have also been correctly recorded and counted. This is assured if sufficient numbers of other voters have also checked their votes, or if other mechanisms guarantee the correct handling of the votes. Here we have the possibility of auditors checking that the paper record is consistent with the digital record of encrypted votes on the BB. As an additional feature, as in STAR-Vote, an interested party can also follow the evidence output by the RLAs, which puts a risk limit on the final result.
Eligibility Verifiability

Eligibility is first checked by a registration clerk at the polling station, who maintains a log of attending voters. Additionally, however, the signature produced by the smartcard and included in the ballot code can be publicly verified on the bulletin board, even by third parties. Assuming secrecy of the signing keys, ideally protected with a pin on the smartcard, this proves the presence of the voter due to the existential unforgeability of the signature scheme, and prevents ballot stuffing. If the ID cards are used often, one concern might be that an adversary maliciously obtains, before the election, a signature on the data to be signed at the election. This can be prevented if the election ID contains some public random data first published shortly before the election.
Note that the tracker retrieval implicitly checks that the voter's ID appeared on BB associated with some ballot, and the check of the tracker on the tally board then confirms the correct vote. However, the presence of their ID can also be checked independently by voters not using the Selene mechanism, or by people abstaining. And anybody can check that only valid IDs and corresponding signatures appear, giving universal eligibility verifiability, assuming an honest ID-card system.
Individual Verifiability

Individual verifiability ensures that voters can check that their vote is recorded as intended. Typically, this is done by allowing a voter to challenge the encryption of the vote, check the cast ciphertext on BB, and then rely on universal verifiability for the correct inclusion of the vote in the tally. The Selene mechanism used here, on the other hand, gives a more transparent, direct verification of the cast vote in plaintext in the final tally. We hope that this will help incentivise more voters to do the check.
Attacks in the spirit of the trash attack [9] might appear troublesome. If the printer could see an ID associated with a voter who is assumed not to perform any checks, it could choose the ballot code (e.g. the last digits) to signal to a colluding scanner that it can maliciously change the electronic output vote. That is one reason to let the ID card encrypt the ID, so that the printer cannot learn the voter's identity. This attack thus requires a collusion between the ID card, printer and scanner, and the identification of a voter not doing the check. Furthermore, the adversary also needs to collude with a tallier to miscount a vote in order to avoid a discrepancy between the paper and online records, and the attack could still be caught by the RLAs. We see that the adversary needs a large collusion and runs a risk of detection, whereas in normal paper voting the adversary only needs to control the tallier.
As with all E2E-V schemes, the verification performed by an individual voter provides assurance that her vote is correctly handled, but does not of itself provide assurance that the outcome of the election is correct. For that, the voter needs to be confident that all other votes have been correctly handled. Usually this depends on a reasonable number of voters being sufficiently diligent in performing the appropriate checks, in particular checking that their encrypted vote appears correctly on the BB. This is perhaps a questionable assumption, but here, as with [20], we have an independent (paper) record of the cast encrypted votes, which allows independent auditors to supplement the voter checks, lessening the dependence on voter diligence. This, along with the RLAs on the plaintext votes, should provide ample assurance to all that the outcome is correct.
Another attack vector is malware infection of the VSD used for the Selene check; however, this still requires a large collusion to actually change a vote.
Dispute resolution
One of the issues with the original (internet) Selene scheme was how to resolve a complaint when a voter claims that the vote they find on the BB is not the vote that they cast. If this occurs, it is important for a judge to be able to determine whether the complaint is genuine, arising from a corruption or malfunction of the system, or comes from a belligerent or forgetful voter. In Electryo, the election authorities can, in camera, request the corresponding ballot code on BB to be decrypted. The corresponding paper ballot can then be identified using the ballot code, allowing the vote, the ID and the corresponding signature to be checked.
Ballot privacy
Whereas we have high security guarantees for verifiability, the price for this has to be paid in terms of stronger assumptions for privacy. Ballot privacy essentially means that the voting system should not reveal a non-negligible amount of information about an honest voter's vote other than the outcome of the election. For a detailed overview of game-based definitions see [10], and also [24] for a recent work.
Clearly, we need to assume that at least one Mix Teller is honest for ballot privacy, and that we do not have a threshold set of corrupted Tally Tellers. Also, if the ID card, the printer and the scanner collude, they can trivially map votes to IDs. However, if one of these is honest, then the adversary cannot link the IDs to the plaintext votes.
Further, each voter's supporting device is trusted for the privacy of the corresponding vote, but only if the voter actually uses the mechanism and enters the receipt code. This gives an extra layer of security for voters with high privacy concerns who do not wish to trust the device and perform the check. It also safeguards voters with less technical knowledge, for whom the adversary has managed to set up and control the Selene keys and device. Note also that even if the secret Selene trapdoor key is leaked, the adversary still needs access to the α-term to break privacy.
The check of the additional receipt code via a PET is also used to guard against an attack where the adversary uses the Selene mechanism itself to check a vote, i.e. we must make sure that the Selene check is directed to the correct voter. Consider the case where the printer and smartcard are colluding, but the scanner is honest. To find the vote of the attacked voter, the printer uses the ID and signature key of a colluding voter and prints these on the ballot. Via the Selene mechanism this colluding voter can learn the vote of the attacked voter. This would be detectable via a Selene check done by the attacked voter; however, the check of the receipt code means that this attack is detected with high probability, and before the privacy leak happens, since the malicious voter does not know the correct RC_i.
The scanner itself can also carry out attacks on privacy by delivering a wrong output. Consider the case where the scanner sees an interesting vote and wants to know the corresponding ID. It could then try to put the ciphertext of the ID in two different output tuples and, after the mixing and decryption of the IDs and signatures, search for two identical IDs. Likewise, it could change the ciphertext of the signature to produce an invalid signature and then look for this on BB. Finally, if the PETs of the receipt codes are not done privately, it could also provoke an error here by encrypting a false RC_i. However, all of these attacks are immediately detectable with overwhelming probability, which will deter a risk-averse adversary. The RCCA security of the ciphertexts of the ID and signature means that the scanner cannot do more advanced attacks than replacing or copying ciphertexts. The marking of ciphertexts used in [22] could also be introduced to make sure that the ciphertexts come from the ballot printer. Another worry could be that the scanner uses ciphertexts of votes from earlier elections, where they are publicly related to the IDs. However, the ciphertexts in the output need to come with proofs of plaintext knowledge, so the IND-CPA security prevents attacks of this kind.
Let us finally turn to the paper record. A problem here arises if a registration clerk manages to see the ballot code at registration and is able to memorise part of it and communicate it to a colluding tallier to break privacy. This is a reason to use QR-codes, which are hard for humans to memorise, and we use printers with a shielding screen such that the clerk cannot see the printed ballot. In standard paper ballot voting, it is also not hard to make almost undetectable marks on the ballot. Alternatively, we speculate that high-resolution pin-hole cameras could be used by adversaries to photograph the ballots (backlit) before and after voting. Privacy could then be broken using the fact that paper has a unique fibre structure (see e.g. [28]). We can never do better than the ordinary paper ballot scheme in this respect.
Receipt-Freeness and Coercion-Resistance
Receipt-freeness means that the voting system should not give the voter any evidence with which to prove to a third party how she voted. The Selene e-voting protocol was carefully designed not to give such a receipt. However, it was only designed for use in low-level coercion settings.
Firstly, the talliers of the paper ballots are trusted for receipt-freeness, since the voter could make an almost invisible mark on the ballot, a mark which the tallier will then look for. Likewise, the scanner also needs to be trusted for receipt-freeness, since malware could be triggered by such a mark; something similar applies to the DRE machine of STAR-Vote, where a certain input pattern could alert the malware. However, the scanner could also mount more direct attacks: it could e.g. store the mapping between receipt values and votes. This could be repaired by having a separate scanner for the QR code, and letting this print the receipt code and publish the encryptions on BB.
During the verification phase the TRA is trusted, since it directly knows the true α-terms. The Tally Tellers, however, only need to be trusted not to collude in numbers above the threshold, which would break privacy.
We have also made sure that the ballot code only appears (re)encrypted on BB, and that the voters do not have a receipt showing the bcode.
The RLAs can also endanger receipt-freeness if not performed carefully; e.g. the ID should not be revealed in plain, but only the correct links via plaintext-equivalence tests. Italian attacks can also be countered, see [8].
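The plaintext-equivalence tests mentioned here can be sketched for ElGamal: the componentwise quotient of two ciphertexts is raised to a secret random exponent and decrypted, revealing only whether the plaintexts match. Below is a toy single-party version (small parameters of my own choosing, messages encoded in the order-q subgroup; a real PET distributes both the blinding and the decryption among the trustees):

```python
import random

# Toy ElGamal group; illustration only.
p, q, g = 467, 233, 4            # safe prime p = 2q + 1, g of order q
sk = random.randrange(1, q)
pk = pow(g, sk, p)

def encrypt(m):
    r = random.randrange(1, q)
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def decrypt(c):
    a, b = c
    return b * pow(pow(a, sk, p), p - 2, p) % p

def pet(c1, c2):
    """Plaintext-equivalence test: True iff c1 and c2 encrypt the same value.

    The blinding exponent z makes the decrypted value a random group
    element (leaking nothing about the quotient) whenever the
    plaintexts differ."""
    (a1, b1), (a2, b2) = c1, c2
    d = (a1 * pow(a2, p - 2, p) % p, b1 * pow(b2, p - 2, p) % p)
    z = random.randrange(1, q)
    return decrypt((pow(d[0], z, p), pow(d[1], z, p))) == 1

# Encode test messages in the subgroup so the blinding argument is exact.
m1, m2 = pow(g, 5, p), pow(g, 7, p)
assert pet(encrypt(m1), encrypt(m1))          # same ID: link confirmed
assert not pet(encrypt(m1), encrypt(m2))      # different IDs: only "no match" leaks
```

This is exactly the property needed for RLAs: the auditors learn whether two ciphertexts carry the same ID without ever seeing the ID in plain.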
On the other hand, side-channel attacks might occur. A coercer might be sufficiently convinced by a simple ballot selfie, and today little is done to actually guard against these. Alternatively, it could also be printed after vote-casting on the filled-in ballot, which has been folded to hide the voter's choice.
Finally, note that a symbolic model of Selene has been proven receipt-free [12] for the case where auxiliary trackers are used per voter; see the longer version of this paper (to appear), where it is also shown how to request fake trackers before the end of the voting phase. Standard Selene suffers from a mild lack of coercion-resistance in the case where the coerced voter chooses a fake tracker which turns out to be the adversary's.
Conclusion
In this paper we propose a polling station voting protocol that combines a paper ballot record with a digital record to provide individual verifiability via the transparent Selene mechanism, universal verifiability, and universal eligibility verifiability. Further, the paper ballots are marked with the encrypted ID of the voters and provide a strong basis for dispute resolution and risk-limiting audits. However, as privacy and integrity are dual requirements, the strong integrity guarantees, especially for eligibility, come at the cost of stronger assumptions for ballot privacy.
The hope is that the integrity guarantees, the transparent individual checks, and the good dispute resolution properties can be used to create trusted election results, e.g. in cases where the integrity of earlier elections has been questioned.
For future work, the scheme certainly needs a full security model and corresponding analysis and proofs. Here it would also be natural to use automated verification in a simplified symbolic model, to make sure that attacks have not been overlooked. Cryptographically, it would be interesting if the scheme could be improved so that we can move to a stronger version of the covert adversary model for privacy. Right now the adversary can attempt a privacy attack, but it will be detected. We could introduce a second authority which checks the decrypted IDs and signatures on BB before publishing them, so that the adversary would not gain any knowledge from the attempted privacy attack. However, for integrity, complicated zero-knowledge proofs are necessary when decryptions are not published, and efficient versions need to be found.
Another important aspect for future research is the usability and user experience of the scheme, which should also include the officials.
Finally, some variants of the scheme would be interesting to explore. Firstly, a postal version of the scheme: postal voting is an increasingly popular voting form, but with integrity problems that should be addressed. It would also be interesting to explore a version of the scheme with everlasting privacy or participation privacy, but perhaps milder guarantees for the universal part of the eligibility verifiability. Further, combining Prêt à Voter with the Selene check might have some advantages in privacy compared to the present scheme. Finally, the trusted platform problem is an important issue for the individual verification, and it would be interesting to explore the use of an extra device for the Selene check, e.g. utilizing the ID card and a card reader.
Fig. 1: Description of the voting phase. (1) The voter enters the polling station and goes to a registration clerk with her ID card to be identified. (2) Her ID card is read and the printer delivers the ballot with the encrypted ID contained in a QR-code. (3) The voter goes to a booth to fill in her ballot. (4) She puts her ballot into a ballot box containing the scanner, (5) which sends the encrypted vote to the bulletin board and prints the voter a take-home receipt code.
Footnotes:
1. This might be troublesome in some jurisdictions.
2. See [2] for an example of a smartcard implementing ElGamal encryption with Elliptic Curves.
3. See [21] for a recently found flaw in that system, demonstrating the importance of a secure implementation of this system.
4. A difference to [26] is that we do not introduce separate Tracker Tellers, but instead let the Tally Tellers handle this, and we introduce a single separate Tracker Retrieval Authority TRA.
5. Cryptographically it would suffice to leave out the encryption of IDi, since it can be determined from testing different vk's.
Acknowledgements

We would like to thank the Luxembourg National Research Fund (FNR) for funding; in particular, PBR was supported by the FNR INTER-Sequoia project, which is joint with the ANR project SEQUOIA ANR-14-CE28-0030-01, and MLZ was supported by the INTER-SeVoTe project.
References

1. Estonia ID card. http://www.id.ee/, accessed: 2017-11-10
2. An example of a smart card implementing ElGamal encryption with elliptic curves. https://www.nxp.com/docs/en/data-sheet/P40C040_C072_SMX2_FAM_SDS.pdf, accessed: 2018-05-11
3. German elections. http://www.dw.com/en/german-election-volunteers-organize-the-voting-and-count-the-ballots/a-40562388, accessed: 2017-11-10
4. Arnaud, M., Cortier, V., Wiedling, C.: Analysis of an electronic boardroom voting system. In: International Conference on E-Voting and Identity, pp. 109-126. Springer (2013)
5. Bell, S., Benaloh, J., Byrne, M.D., DeBeauvoir, D., Eakin, B., Fisher, G., Kortum, P., McBurnett, N., Montoya, J., Parker, M., et al.: STAR-Vote: A secure, transparent, auditable, and reliable voting system. USENIX Journal of Election Technology and Systems (JETS) 1(1), 18-37 (2013)
6. Ben-Nun, J., Fahri, N., Llewellyn, M., Riva, B., Rosen, A., Ta-Shma, A., Wikström, D.: A new implementation of a dual (paper and cryptographic) voting system. In: Electronic Voting, pp. 315-329 (2012)
7. Benaloh, J.: Simple verifiable elections. EVT 6, 5-5 (2006)
8. Benaloh, J., Jones, D.W., Lazarus, E., Lindeman, M., Stark, P.B.: SOBA: Secrecy-preserving observable ballot-level audit. In: Shacham, H., Teague, V. (eds.) 2011 Electronic Voting Technology Workshop / Workshop on Trustworthy Elections, EVT/WOTE '11, San Francisco, CA, USA, August 8-9, 2011. USENIX Association (2011), https://www.usenix.org/conference/evtwote-11/soba-secrecy-preserving-observable-ballot-level-audit
9. Benaloh, J., Lazarus, E.: The trash attack: An attack on verifiable voting systems and a simple mitigation (2016)
10. Bernhard, D., Cortier, V., Galindo, D., Pereira, O., Warinschi, B.: SoK: A comprehensive analysis of game-based ballot privacy definitions. In: Security and Privacy (SP), 2015 IEEE Symposium on, pp. 499-516. IEEE (2015)
11. Bernhard, D., Pereira, O., Warinschi, B.: How not to prove yourself: Pitfalls of the Fiat-Shamir heuristic and applications to Helios. In: International Conference on the Theory and Application of Cryptology and Information Security, pp. 626-643. Springer (2012)
12. Bruni, A., Drewsen, E., Schürmann, C.: Towards a mechanized proof of Selene receipt-freeness and vote-privacy. In: International Joint Conference on Electronic Voting, pp. 110-126. Springer (2017)
13. Culnane, C., Ryan, P.Y.A., Schneider, S., Teague, V.: vVote: A verifiable voting system (DRAFT). CoRR abs/1404.6822 (2014), http://arxiv.org/abs/1404.6822
14. Culnane, C., Schneider, S.A.: A peered bulletin board for robust use in verifiable voting systems. In: 2014 IEEE 27th Computer Security Foundations Symposium, pp. 169-183 (2014)
15. Fiat, A., Shamir, A.: How to prove yourself: Practical solutions to identification and signature problems. In: Conference on the Theory and Application of Cryptographic Techniques, pp. 186-194. Springer (1986)
16. Goggin, S.N., Byrne, M.D., Gilbert, J.E.: Post-election auditing: Effects of procedure and ballot type on manual counting accuracy, efficiency, and auditor satisfaction and confidence (2012)
17. Jakobsson, M., Juels, A.: Mix and match: Secure function evaluation via ciphertexts. In: International Conference on the Theory and Application of Cryptology and Information Security, pp. 162-177. Springer (2000)
18. Küsters, R., Müller, J., Scapin, E., Truderung, T.: sElect: A lightweight verifiable remote voting system. In: Computer Security Foundations Symposium (CSF), 2016 IEEE 29th, pp. 341-354. IEEE (2016)
19. Lindeman, M., Stark, P.B.: A gentle introduction to risk-limiting audits. IEEE Security & Privacy 10(5), 42-49 (2012). https://doi.org/10.1109/MSP.2012.56
20. Lundin, D., Ryan, P.Y.A.: Human readable paper verification of Prêt à Voter. In: Jajodia, S., López, J. (eds.) Computer Security - ESORICS 2008, 13th European Symposium on Research in Computer Security, Málaga, Spain, October 6-8, 2008, Proceedings. Lecture Notes in Computer Science, vol. 5283, pp. 379-395. Springer (2008). https://doi.org/10.1007/978-3-540-88313-5_25
21. Nemec, M., Sys, M., Svenda, P., Klinec, D., Matyas, V.: The return of Coppersmith's attack: Practical factorization of widely used RSA moduli. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 1631-1648. ACM (2017)
22. Pereira, O., Rivest, R.L.: Marked mix-nets. In: International Conference on Financial Cryptography and Data Security, pp. 353-369. Springer (2017)
23. Phan, D.H., Pointcheval, D.: OAEP 3-round: A generic and secure asymmetric encryption padding. In: International Conference on the Theory and Application of Cryptology and Information Security, pp. 63-77. Springer (2004)
24. Quaglia, E.A., Smyth, B.: A short introduction to secrecy and verifiability for elections. CoRR abs/1702.03168 (2017), http://arxiv.org/abs/1702.03168
25. Ryan, P.Y., Bismark, D., Heather, J., Schneider, S., Xia, Z.: Prêt à Voter: A voter-verifiable voting system. IEEE Transactions on Information Forensics and Security 4(4), 662-673 (2009)
26. Ryan, P.Y., Rønne, P.B., Iovino, V.: Selene: Voting with transparent verifiability and coercion-mitigation. In: International Conference on Financial Cryptography and Data Security, pp. 176-192. Springer (2016)
27. Wikström, D.: User manual for the Verificatum mix-net (2013)
28. Wong, C.W., Wu, M.: Counterfeit detection using paper PUF and mobile cameras. In: Information Forensics and Security (WIFS), 2015 IEEE International Workshop on, pp. 1-6. IEEE (2015)
On entropy change measurements around first order phase transitions in caloric materials

Luana Caron (Max Planck Institute for Chemical Physics of Solids, Nöthnitzer Str. 40, 01187 Dresden, Germany)
Nguyen Ba Doan (Univ. Grenoble Alpes, Inst NEEL, F-38000 Grenoble, France; CNRS, Inst NEEL, F-38000 Grenoble, France)
Laurent Ranno (Univ. Grenoble Alpes, Inst NEEL, F-38000 Grenoble, France; CNRS, Inst NEEL, F-38000 Grenoble, France)
DOI: 10.1088/1361-648x/aa50d1. arXiv:1609.09831 (https://arxiv.org/pdf/1609.09831v1.pdf)
30 Sep 2016 (Dated: October 3, 2016)
In this work we discuss the measurement protocols for indirect determination of the isothermal entropy change associated with first order phase transitions in caloric materials. The magnetostructural phase transitions giving rise to giant magnetocaloric effects in Cu-doped MnAs and FeRh are used as case studies to exemplify how badly designed protocols may affect isothermal measurements and lead to incorrect entropy change estimations. Isothermal measurement protocols which allow correct assessment of the entropy change around first order phase transitions in both direct and inverse cases are presented.
Caloric effects are present in materials where the entropy may be modified by the application of an external field, resulting in a change in temperature [1]. These may arise from the coupling between two different degrees of freedom: magnetocaloric (magneto-structural coupling) and electrocaloric (electro-structural coupling) effects; or from the strong response of a given variable, as is the case of the mechanocaloric effects where entropy and temperature changes are solely due to structural changes with little or no contribution from other degrees of freedom. In the magnetocaloric and electrocaloric cases, magnetic or electric polarizations are strongly influenced by the application of magnetic and electric fields, respectively. Mechanocaloric effects result from the application of stresses and are called barocaloric in the isotropic case and elastocaloric in the uniaxial case. All caloric effects are characterized by the entropy and temperature changes caused by the application of an external field.
Entropy and temperature changes are particularly pronounced around first order phase transitions, making materials presenting large discontinuities in their polarization, volume or strain state as the result of the application of the corresponding conjugated field (the first order derivatives of the Gibbs free energy) the main focus of research in caloric effects.
For example, in magnetocaloric materials, a second order phase transition may give rise to entropy changes around 10 J/kgK in a 0-5 T magnetic field change, as is the case of the benchmark material, Gd [2]. Yet, a first order magneto-elastic phase transition yields twice as much in a 0-2 T magnetic field change, as is the case for La(Fe1-xSix)13 [3] and Fe2P-based [4] materials. Therefore the latter are considerably more interesting than Gd for ferroic cooling applications, the chief driving force in the research for novel caloric materials.
However, discontinuous changes come with a high energetic cost which is manifest in thermal and field hysteresis (for isofield and isothermal measurements, respectively) as well as latent heat. This means that first order phase transitions are always partially irreversible and that not all of the entropy and temperature change observed in a material can be used in a cyclic manner. The issue of reversibility of both quantities upon cycling has been discussed by Basso et al. [5] and more recently by Kaeswurm et al. [6] in the magnetocaloric case.
However, hysteresis has a more immediate and fundamental consequence in the way we measure entropy change. In fact, entropy change, or entropy for that matter, cannot be measured. It can be obtained indirectly from specific heat measurements or from isothermal/isofield curves via the Maxwell equations:
$$\left(\frac{\partial S}{\partial Y_i}\right)_T = \left(\frac{\partial x_i}{\partial T}\right)_{Y_i}$$
where S is the entropy, T the temperature, and x_i and Y_i are generalized displacements and their conjugate fields, respectively. In the latter case, the thermodynamical history of the sample prior to measurement, i.e. the measurement protocol, may have a profound influence on the calculated entropy change.
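In practice the entropy change is obtained from a grid of isotherms M(H) at neighboring temperatures by a finite-difference, trapezoid-rule discretization of the magnetic Maxwell relation, ΔS(T) = ∫ (∂M/∂T)_H dH. A minimal sketch on synthetic, second-order-like data (a toy model chosen here for illustration, not the measured curves):

```python
import numpy as np

def entropy_change(T, H, M):
    """Discretized Maxwell relation: DeltaS at the midpoints of consecutive
    temperatures, from magnetization M of shape (len(T), len(H))."""
    dMdT = np.diff(M, axis=0) / np.diff(T)[:, None]          # (dM/dT)_H
    dH = np.diff(H)[None, :]
    # Trapezoid-rule integration over the field axis.
    return np.sum(0.5 * (dMdT[:, 1:] + dMdT[:, :-1]) * dH, axis=1)

# Synthetic data: M = M0(T) * tanh(H), with M0 dropping smoothly near 300 K.
T = np.linspace(280.0, 320.0, 41)            # temperatures (K)
H = np.linspace(0.0, 2.0, 21)                # fields (T)
M = (1.0 / (1.0 + np.exp((T[:, None] - 300.0) / 5.0))) * np.tanh(H[None, :])

dS = entropy_change(T, H, M)
# dS is negative (direct MCE) and peaks in magnitude near the transition.
```

For a second order transition this arithmetic is protocol-independent; the rest of this section shows why the same arithmetic applied to hysteretic first order data is only meaningful when every isotherm starts from a well-defined equilibrium branch.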
The controversy first arose within the magnetocalorics community, the oldest and better established of all caloric research lines. In the early 2000's, as the focus shifted from second to first order phase transitions, increasingly higher entropy changes derived from isothermal magnetization measurements started being reported, culminating in the extreme and unphysical report of the "colossal" magnetocaloric effect in MnAs [7] and MnAs-based [8] materials. MnAs is the textbook example of first order magneto-structural phase transition, showing extremely sharp magnetization changes and a pronounced thermal hysteresis. In these materials entropy changes above the magnetic limit ΔS = R ln(2J+1) were reported, raising concerns about the validity of using the Maxwell equations. At the time, the Clausius-Clapeyron relation was put forward as an alternative [9], and even a geometric argument [10] was proposed to remove the spurious peak.
This controversy was resolved in a previous work by one of the present authors [11]. The Clausius-Clapeyron and Maxwell relations are equivalent [12] and valid as long as they are applied to measurements performed between equilibrium states, which is exactly where the measurement protocols used at the time failed. A new protocol was proposed that took into account the thermodynamical history of a material and provided physical results. However, that work was left incomplete as the so-called loop process only addresses the transition from low to high magnetization in the conventional magnetocaloric case (where entropy decreases with increasing magnetic field). Moreover, this issue remains relevant to date as, in spite of the development of in-field differential scanning calorimeters (DSC), entropy change is still predominantly calculated from isothermal polarization measurements.
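The equivalence invoked here can be stated compactly: for an ideally sharp first order transition with critical field line H_c(T) and magnetization discontinuity ΔM, integrating the Maxwell relation across the step in M reduces to the Clausius-Clapeyron relation,

```latex
% Entropy change at an ideally sharp first order magnetic transition:
% integrating (\partial S/\partial H)_T = (\partial M/\partial T)_H
% across the magnetization step \Delta M at H_c(T) gives
\Delta S = -\,\Delta M \,\frac{\mathrm{d}H_c}{\mathrm{d}T}
```

so both relations give the same ΔS, provided the magnetization data connect equilibrium states on the same branch of the hysteretic transition.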
In this work we once more discuss how the isothermal measurement protocol may affect the calculated entropy change, leading to spurious and non-physical results. The entropy change due to the magnetocaloric effect in Cu-doped MnAs and in FeRh are used as case studies for the direct and inverse caloric effects, respectively. Three isothermal measurement protocols are presented for each case: the commonly used second order protocol, and reset protocols for cooling and heating transitions for both direct and inverse MCE.
Preparation details of the Mn 0.99 Cu 0.01 As sample used here are reported in a previous work and references therein 11 . A Ta (10 nm) / Fe 49.5 Rh 50.5 (250 nm) / Ta (1 nm) trilayer was deposited by magnetron sputtering on a thermally oxidized Si substrate (SiO 2 surface layer thickness = 100 nm). The polycrystalline layers were deposited at room temperature and subsequently ex-situ annealed at 923 K for 90 minutes. All magnetization measurements were performed using a Quantum Design MPMS XL, in fields up to 2 T in the case of Mn 0.99 Cu 0.01 As and up to 5 T for Fe 49.5 Rh 50.5 (applied parallel to the plane of the sample, no background was subtracted), in the reciprocating sample option (RSO).
DIRECT MCE: Mn0.99Cu0.01As
As mentioned before, the measurement protocol initially used to measure isothermal magnetization for entropy change calculation was inherited from the second order case. In their 1999 paper 13 Pecharsky and Gschneidner give a detailed account of the indirect measurement methods for the magnetocaloric effect and explicitly show how the numeric integration of the Maxwell relation may be performed to obtain the entropy change. Although no explicit measurement protocol (other than the measurements being isothermal) is presented, a consensus formed in the community whereby isotherms should be measured with increasing magnetic field, at temperature steps taken from low to high temperature. That does not represent a problem when there is only one value of magnetization for each temperature-field pair M(T,H), whatever the magneto-thermal history of the sample, as is the case for second order phase transitions. However, around first order phase transitions there will be two possible magnetization values for every (T,H) pair, depending on the magneto-thermal history of the material.
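The numeric integration referred to above can be sketched in a few lines: given a grid of isotherms M(T_i, H_j), the Maxwell relation ∆S(T) = ∫ (∂M/∂T)_H dH is evaluated with finite differences in T and the trapezoidal rule in H. The function and the synthetic data below are illustrative only; they stand in for measured isotherms.

```python
# Numeric integration of the Maxwell relation: Delta_S evaluated midway
# between consecutive isotherms, using a finite difference for (dM/dT)_H
# and trapezoidal integration over H.

def entropy_change(temps, fields, M):
    """M[i][j] is the magnetization at temperature temps[i] and field fields[j].
    Returns a list of Delta_S values, one per pair of consecutive isotherms."""
    dS = []
    for i in range(len(temps) - 1):
        dT = temps[i + 1] - temps[i]
        total = 0.0
        for j in range(len(fields) - 1):
            dH = fields[j + 1] - fields[j]
            dMdT_lo = (M[i + 1][j] - M[i][j]) / dT
            dMdT_hi = (M[i + 1][j + 1] - M[i][j + 1]) / dT
            total += 0.5 * (dMdT_lo + dMdT_hi) * dH  # trapezoid in H
        dS.append(total)
    return dS

# Synthetic check: for M = -a*T*H, dM/dT = -a*H, so Delta_S = -a*Hmax^2/2.
temps = [300.0, 301.0]
fields = [0.0, 1.0, 2.0]
M = [[-0.1 * T * H for H in fields] for T in temps]
dS = entropy_change(temps, fields, M)  # expected: [-0.2]
```

The integration itself is harmless; the controversy discussed in this work is entirely about whether the M[i][j] grid fed into it was measured between equilibrium states.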
First let us analyze what goes wrong when applying a protocol devised for second order processes to a first order phase transition. To make this analysis easier to grasp, we use Mn 0.99 Cu 0.01 As as an example. This material shows a first order magneto-structural phase transition from a low temperature hexagonal ferromagnetic phase to a high temperature orthorhombic paramagnetic state, therefore displaying a direct MCE. Fig. 1 shows the isofield magnetization curves and Fig. 2 the critical field/temperature diagram derived from the isofield data in Fig. 1. In the case of a direct transition such as that of Mn 0.99 Cu 0.01 As, the M - m transition (red line in Fig. 2) is crossed on increasing temperature or decreasing field, and the m - M transition the reverse, on decreasing temperature or increasing field. If the second order protocol is adopted to probe this first order process, isotherms are measured with increasing field from low to high temperature. Notice that, as long as T < 316 K, only the m - M transition may be crossed on increasing field. Since the material is already fully in the ferromagnetic (FM) state, the response will be that of simply rotating the magnetic domains and saturating the material, as indicated in Fig. 2 by the blue dashed arrows and clearly seen in Fig. 3. However, once the temperature is increased to 316 K at zero field, the sample will transform almost fully to the paramagnetic (PM) state as the M - m transition is crossed in temperature. Once the field is increased at 316 K, the FM fraction of the sample will be saturated, giving rise to a plateau, while the PM fraction can only transform to the FM phase when it crosses the m - M line at a field around 2.5 T, as indicated by the red dashed arrows in Fig. 2.
Notice that increasing field implies that the m - M transition should be probed, but the second order protocol just described actually measures two different processes: the field increase probes the m - M transition, while the temperature increase crosses the M - m transition. This has the effect of overestimating the isothermal entropy change calculated from these data, as it concentrates the change in a very narrow temperature range. In extreme cases this gives rise to the so-called colossal MCE, which surpasses the theoretical magnetic upper limit for the entropy, as can be observed in Fig. 4 (green curve).
When measuring magnetic isotherms in order to calculate the magnetic entropy change, one must make sure to probe only one process: either M - m or m - M. This can be achieved by bringing the material back to the starting point (or state) of the isothermal measurement once it has been performed. Thus one needs to devise a reversible thermodynamical cycle for each isotherm to be measured, literally going around the intrinsic irreversibility manifest in the thermal/field hysteresis. In 2009 one of the authors of the present paper published the so-called loop process 11 . The loop process resets the sample to the PM state in the case of a direct transition. This is done by increasing the temperature at zero field to far above the transition temperature in between isotherms, and then measuring with increasing field. This guarantees that only the m - M transition is crossed, both in temperature and in field. In the Mn 0.99 Cu 0.01 As case, the sample was heated to 350 K at zero field. The results are in stark contrast to those obtained using the second order protocol (compare Figs. 3 and 5). A well developed field induced transition is observed, which moves at a rate of 4 K/T as would be expected from the diagram presented in Fig. 2, while the entropy change calculated from this measurement is well within the theoretical magnetic upper limit (see the blue isoT cooling curve in Fig. 4). A similar protocol must be used to probe the M - m transition. In this case the transition is crossed on increasing temperature and/or decreasing field. Thus the sample must be taken well below the transition temperature, into the M state, at the maximum field being used for measurements. In the case presented here the Mn 0.99 Cu 0.01 As sample was cooled to 280 K at 2 T in between isotherms, and only then heated to the next measurement temperature, where magnetization was recorded upon decreasing magnetic field. The isotherms obtained (Fig. 6) are very similar to those in Fig. 5, shifted by 10 K due to thermal hysteresis, as is the entropy change (see the orange isoT heating curve in Fig. 4).
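As a concrete illustration, the loop (reset) protocol for the m - M transition of a direct-MCE material can be written as a short driver loop. The instrument interface here (set_temperature, set_field, measure) is a hypothetical stand-in, not a real magnetometer API, and the dummy callables only record setpoints.

```python
# Loop/reset protocol for probing only the m -> M transition of a
# direct-MCE material: between isotherms the sample is reset deep into
# the PM state (T_reset, zero field), so that both the temperature step
# and the increasing-field ramp cross only the m -> M line.

def loop_protocol_m_to_M(meas_temps, fields, set_temperature, set_field,
                         measure, T_reset=350.0):
    log = []
    for T in meas_temps:
        set_field(0.0)            # remove the field ...
        set_temperature(T_reset)  # ... and reset fully into the PM state
        set_temperature(T)        # approach the isotherm in zero field
        for H in fields:          # record with increasing field only
            set_field(H)
            log.append((T, H, measure()))
    return log

# Dummy instrument that just tracks its setpoints.
state = {"T": 0.0, "H": 0.0}
log = loop_protocol_m_to_M(
    meas_temps=[312.0, 316.0], fields=[0.0, 1.0, 2.0],
    set_temperature=lambda T: state.update(T=T),
    set_field=lambda H: state.update(H=H),
    measure=lambda: state["H"],   # placeholder "magnetization" reading
)
```

The M - m counterpart follows by symmetry: reset at maximum field well below the transition, heat at that field, then record on decreasing field.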
Notice that the second order measurement protocol, when applied to first order phase transitions, does not produce extra entropy. It was believed that in order to achieve the so-called colossal MCE it was necessary to tap into other entropy reservoirs, such as the lattice entropy 14 . That is not the case, and this can be easily verified by calculating the area under the entropy change vs. temperature curve: for all three measurement protocols presented, the entropy content of the measured curves is, within error, the same.
The entropy change was also calculated from isofield measurements (as presented in Fig. 1). Isofield measurements have the advantage of always crossing the phase transition completely and unambiguously: they cross either M - m or m - M. The entropy change calculated using isofield curves is shown as continuous lines (blue for cooling and orange for heating) in Fig. 4 and is found to be in excellent agreement with those calculated from the isothermal curves measured using the reset protocols.

INVERSE MCE: Fe49.5Rh50.5

The inverse transition stands as a very different case.
Here the field still has the role of taking the material from the low to the high magnetization state when increased, and vice versa. But the temperature dependence of the magnetization behaves inversely compared to that of the direct transition: magnetization increases with increasing temperature. Thus, to cross one or the other phase transition, temperature and field must be changed in the same direction. To exemplify this case, measurements on a thin film sample of Fe 49.5 Rh 50.5 were performed. This material shows an isosymmetric phase transition between a low temperature antiferromagnetic and a high temperature ferromagnetic state 15 . The temperature dependence of the magnetization under different applied magnetic fields is shown in Fig. 7 and the field dependence of the transition temperature in Fig. 8. The transition from low to high magnetization, i.e. m - M, is crossed by increasing field and/or temperature (in red). Similarly, to go from high to low magnetization, the magnetic field and/or the temperature must be decreased (in blue).

FIG. 8. Critical field and temperature for both cooling/decreasing field M - m and heating/increasing field m - M transitions for Fe49.5Rh50.5, derived from the data in Fig. 7. Notice that dTC/dH = −9.8 K/T, which is above the value originally reported by Annaorazov et al. 15

As with the direct case, first the sample was measured using the standard second order protocol. Interestingly, the way temperature and magnetic field are changed in the second order protocol corresponds to crossing only the heating m - M transition in the inverse case. This may mislead one into thinking that the protocol correctly probes this transition without any problems with the magneto-thermal history of the sample.
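For a sharp first order transition, the phase-diagram slope just quoted also fixes the transition entropy through the Clausius-Clapeyron relation, ∆S = −∆M/(dT_c/dH). The sketch below uses dT_c/dH = −9.8 K/T as read from the FeRh diagram; the magnetization jump ∆M is a placeholder value, not one extracted from the measured data.

```python
# Clausius-Clapeyron estimate of the entropy change across a first order
# magnetic transition: Delta_S = -Delta_M / (dTc/dH).
# With Delta_M in A/m and dTc/dH in K/T, Delta_S comes out in J/(K m^3).

def clausius_clapeyron_dS(delta_M, dTc_dH):
    return -delta_M / dTc_dH

dTc_dH = -9.8      # K/T, slope from the FeRh phase diagram
delta_M = 1.3e5    # A/m, hypothetical magnetization jump (placeholder)
dS_est = clausius_clapeyron_dS(delta_M, dTc_dH)  # positive for the inverse MCE
```

Note the sign: an inverse transition (negative dT_c/dH, positive ∆M) yields a positive ∆S, consistent with the positive entropy change curves of Fig. 10.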
However, in order to increase the magnetic field at a given temperature, the field must be decreased in between measurements, making the sample cross the M -m cooling transition every time as well and leading to the same problem observed around direct transitions. In this case, since the transition itself is considerably broad, the effect is less clear than in the case of Mn 0.99 Cu 0.01 As, and may easily induce error. The isotherms obtained using the 2 nd order protocol are shown in Fig. 9 (isotherms for all protocols were measured at 4 K steps, as indicated in the figures). While a plateau is observed, changes are much more gradual and the entropy change calculated from this data falls within reasonable values for a giant MCE (see open symbols in Fig. 10). Only a more detailed analysis reveals that the entropy change maximum is not only overestimated, but also has its temperature dependence changed. Analogous to the direct case, in order to probe the m -M transition the isotherms must be measured with increasing field while the reset protocol must take the sample to the m state at low temperature at zero field. Thus the m -M line is always crossed either by increasing the field or by increasing temperature to the next measurement temperature after the reset. In this case the isotherms for the m -M transition were measured on increasing temperature and field while cycling the material well below the cooling transition, at 150 K and zero applied magnetic field in between measurements. The isotherms obtained are shown in Fig. 11.
Here the plateau found in Fig. 9 for the 2 nd order protocol is no longer observed and the entropy change (positive entropy change curves, represented using closed symbols in Fig. 10) shows a completely different temperature dependence as well as a lower maximum. A similar logic is used to design the reset protocol for the M - m transition, which must be measured with decreasing field steps. The reset protocol must take the sample above the transition, at the maximum field being used for measurements, to the point of highest magnetization, and only then cool it, at the same maximum field, to the next measurement temperature. This guarantees that the M - m transition alone is crossed: when decreasing the field to record the isotherm and/or when decreasing the temperature after the temperature loop at maximum field above the transition. For Fe 49.5 Rh 50.5 the sample was cycled at 5 T up to 390 K between isotherms. These isotherms are shown in Fig. 13, where well-developed metamagnetic phase transitions can be observed, as in the measurement of the m - M transition using the reset protocol shown in Fig. 11. The entropy change is shown in Fig. 10 (closed symbols, negative entropy change curves). The maximum entropy change here is lower than for the heating transition, as the transition is slightly broader, and it is shifted in temperature by thermal hysteresis. Finally, the entropy change curves computed from isotherms measured using the reset protocols are in excellent agreement with the entropy change computed from the isofield measurements shown in Fig. 7 (represented as continuous lines in Fig. 10), as expected.
Albeit simple, these protocols ensure that phase transitions are crossed univocally during isothermal measurements, in both direct and inverse cases. In this context, it is worthwhile to mention that isofield and properly reset isothermal magnetization measurements both probe equilibrium states and can be used as input to the Maxwell relation for the calculation of the entropy change. In the case of magnetocaloric materials isofield measurements have the advantage of always probing the phase transition correctly, and are found to be in excellent agreement with isothermal measurements (see Figs. 4 and 10). However, not all caloric materials can be measured at a constant applied field. That is the case of electrocaloric materials where high electric fields would have to be sustained long enough for the temperature to be swept up and down, which may result in voltage breakdown.
Furthermore, by determining how isothermal measurements should be performed, these protocols are a first step towards standardizing entropy change calculation for caloric effects. Isothermal measurements performed using the correct protocol should produce reliable results that can be easily and readily compared in the literature, a crucial issue for the development of these materials towards applications.
In summary, we have discussed the issue of isothermal measurements for the calculation of entropy changes using the Maxwell relation around first order phase transitions. Magnetocaloric Cu-doped MnAs and Fe 49.5 Rh 50.5 were used as case studies of direct and inverse first order phase transitions, respectively, to exemplify the reset protocols necessary for both heating and cooling transitions. For the first time, measurement protocols for heating and cooling in both the direct and inverse transition cases have been created and demonstrated. Moreover, these protocols apply to all caloric effects, being especially relevant for electrocaloric materials. This work not only aims at discussing and showing how to probe first order phase transitions correctly, but also at standardizing the entropy change determination from isothermal measurements, a crucial step towards application of caloric materials.
ACKNOWLEDGMENTS
The FeRh film used in this study was developed in the framework of the European community's FP7 project No. 310748 (DRREAM).
FIG. 1. Temperature dependence of the magnetization at different applied fields. Between 0.5 T and 2 T, curves are measured in 0.5 T steps.

In a first order magnetic phase transition, be it direct or inverse, two transitions are observed: one from the high to the low magnetization state and another from the low to the high magnetization state. Let us call these transitions M - m and m - M, respectively, where M - m and m - M are obviously separated by the intrinsic thermal/field hysteresis.
FIG. 2. Critical field and temperature for both cooling/increasing field m - M and heating/decreasing field M - m transitions for Mn0.99Cu0.01As.
FIG. 3. Magnetic isotherms measured using the protocol created to probe second order processes, in the case of Mn0.99Cu0.01As.
FIG. 4. Entropy change for 0 - 1 T (open symbols) and 0 - 2 T (closed symbols) field changes using different protocols. Solid dark blue and red lines (without points) are calculated from the isofield curves in Fig. 1 for a 0 - 2 T field change, on cooling and heating, respectively. The theoretical magnetic upper limit for the entropy, given by R ln(2J+1), is indicated as a dashed line (J eff = 2 for the Mn atom).
FIG. 5. Magnetic isotherms measured using the loop protocol probing the m - M transition, in the case of Mn0.99Cu0.01As.
FIG. 6. Magnetic isotherms measured using the loop protocol probing the M - m transition, in the case of Mn0.99Cu0.01As.
FIG. 7. Temperature dependence of the magnetization in 0.1 T and from 0.5 T to 5 T in 0.5 T steps in Fe49.5Rh50.5 upon cooling and heating.
FIG. 9. Magnetization isotherms measured using the 2 nd order protocol for Fe49.5Rh50.5.
FIG. 10. Temperature dependence of the entropy change at the first order phase transition of Fe49.5Rh50.5, computed from isotherms measured using the 2 nd order protocol (open symbols), using reset protocols (closed symbols) for the m - M transition (positive entropy change) and the M - m transition (negative entropy change), and from the isofield curves presented in Fig. 7 (continuous lines).
FIG. 11. Magnetization isotherms measured using the reset protocol for the m - M (heating) transition for Fe49.5Rh50.5. The sample was cycled down to 150 K at zero applied field between isotherms.

The differences in these two processes can only be appreciated when curves at the same temperature for both protocols are shown together. In Fig. 12 nine isotherms are presented for both protocols; closed symbols are used for the 2 nd order protocol, while open symbols show isotherms measured using the reset protocol for the m - M (or heating) transition.

FIG. 12. Magnetization isotherms measured using the 2 nd order protocol (closed symbols) and the reset protocol for the m - M (heating) transition (open symbols) for Fe49.5Rh50.5.
FIG. 13. Magnetization isotherms measured using the reset protocol for the M - m (cooling) transition for Fe49.5Rh50.5. The sample was cycled up to 390 K at 5 T between isotherms.
1. L. Mañosa, A. Planes, and M. Acet, Journal of Materials Chemistry A 1, 4925 (2013).
2. S. Y. Dankov, A. M. Tishin, V. K. Pecharsky, and K. A. Gschneidner, Phys. Rev. B 57, 3478 (1998).
3. A. Fujita, S. Fujieda, Y. Hasegawa, and K. Fukamichi, Phys. Rev. B 67, 104416 (2003).
4. N. H. Dung, Z. Q. Ou, L. Caron, L. Zhang, D. T. C. Thanh, G. A. de Wijs, R. A. de Groot, K. H. J. Buschow, and E. Brück, Adv. Energy Mater. 1, 1215 (2011).
5. V. Basso, C. P. Sasso, K. P. Skokov, O. Gutfleisch, and V. V. Khovaylo, Phys. Rev. B 85, 014430 (2012).
6. B. Kaeswurm, V. Franco, K. Skokov, and O. Gutfleisch, J. Magn. Magn. Mater. 406, 259 (2016).
7. S. Gama, A. Coelho, A. de Campos, A. Carvalho, F. Gandra, P. von Ranke, and N. de Oliveira, Phys. Rev. Lett. 93, 237202 (2004).
8. A. de Campos, D. L. Rocco, A. M. G. Carvalho, L. Caron, A. A. Coelho, S. Gama, L. M. da Silva, F. C. G. Gandra, A. O. dos Santos, L. P. Cardoso, P. J. von Ranke, and N. A. de Oliveira, Nat. Mater. 5, 802 (2006).
9. A. Giguère, M. Foldeaki, B. Ravi Gopal, R. Chahine, T. K. Bose, A. Frydman, and J. A. Barclay, Phys. Rev. Lett. 83, 2262 (1999).
10. G. J. Liu, J. R. Sun, J. Shen, B. Gao, H. W. Zhang, F. X. Hu, and B. G. Shen, Appl. Phys. Lett. 90, 032507 (2007).
11. L. Caron, Z. Q. Ou, T. T. Nguyen, D. T. C. Thanh, O. Tegus, and E. Brück, J. Magn. Magn. Mater. 321, 3559 (2009).
12. J. R. Sun, F. X. Hu, and B. G. Shen, Phys. Rev. Lett. 85, 4191 (2000).
13. V. K. Pecharsky and K. A. Gschneidner, J. Appl. Phys. 86, 565 (1999).
14. P. J. von Ranke, N. A. de Oliveira, C. Mello, A. M. G. Carvalho, and S. Gama, Phys. Rev. B 71, 054410 (2005).
15. M. P. Annaorazov, S. A. Nikitin, A. L. Tyurin, K. A. Asatryan, and A. K. Dovletov, Journal of Applied Physics 79, 1689 (1996).
Performance Bounds for Expander-Based Compressed Sensing in Poisson Noise

Maxim Raginsky, Member, IEEE, Sina Jafarpour, Student Member, IEEE, Zachary T. Harmany, Member, IEEE, Roummel F. Marcia, Member, IEEE, Rebecca M. Willett, Member, IEEE, and Robert Calderbank, Fellow, IEEE

DOI: 10.1109/TSP.2011.2157913. arXiv:1007.2377.

Index Terms—compressive measurement, expander graphs, RIP-1, photon-limited imaging, packet counters

Abstract—This paper provides performance bounds for compressed sensing in the presence of Poisson noise using expander graphs. The Poisson noise model is appropriate for a variety of applications, including low-light imaging and digital streaming, where the signal-independent and/or bounded noise models used in the compressed sensing literature are no longer applicable. In this paper, we develop a novel sensing paradigm based on expander graphs and propose a MAP algorithm for recovering sparse or compressible signals from Poisson observations. The geometry of the expander graphs and the positivity of the corresponding sensing matrices play a crucial role in establishing the bounds on the signal reconstruction error of the proposed algorithm. We support our results with experimental demonstrations of reconstructing average packet arrival rates and instantaneous packet counts at a router in a communication network, where the arrivals of packets in each flow follow a Poisson process.
However, photon-limited measurements [3] and arrivals/departures of packets at a router [4] are commonly modeled with a Poisson probability distribution, posing significant theoretical and practical challenges in the context of CS. One of the key challenges is the fact that the measurement error variance scales with the true intensity of each measurement, so that we cannot assume constant noise variance across the collection of measurements. Furthermore, the measurements, the underlying true intensities, and the system models are all subject to certain physical constraints, which play a significant role in performance.
Recent works [5]- [8] explore methods for CS reconstruction in the presence of impulsive, sparse or exponential-family noise, but do not account for the physical constraints associated with a typical Poisson setup and do not contain the related performance bounds emphasized in this paper. In previous work [9], [10], we showed that a Poisson noise model combined with conventional dense CS sensing matrices (properly scaled) yielded performance bounds that were somewhat sobering relative to bounds typically found in the literature. In particular, we found that if the number of photons (or packets) available to sense were held constant, and if the number of measurements, m, was above some critical threshold, then larger m in general led to larger bounds on the error between the true and the estimated signals. This can intuitively be understood as resulting from the fact that dense CS measurements in the Poisson case cannot be zero-mean, and the DC offset used to ensure physical feasibility adversely impacts the noise variance.
The approach considered in this paper hinges, like most CS methods, on reconstructing a signal from compressive measurements by optimizing a sparsity-regularized goodness-of-fit objective function. In contrast to many CS approaches, however, we measure the fit of an estimate to the data using the Poisson log-likelihood instead of a squared error term. This paper demonstrates that the bounds developed in previous work can be improved for some sparsity models by considering alternatives to dense sensing matrices with random entries. In particular, we show that deterministic sensing matrices given by scaled adjacency matrices of expander graphs have important theoretical characteristics (especially an ℓ1 version of the restricted isometry property [11]) that are ideally suited to controlling the performance of Poisson CS.
Formally, suppose we have a signal θ* ∈ R^n_+ with known ℓ1 norm ‖θ*‖1 (or a known upper bound on ‖θ*‖1). We aim to find a matrix Φ ∈ R^{m×n}_+ with m, the number of measurements, as small as possible, so that θ* can be recovered efficiently from the measured vector y ∈ R^m_+, which is related to Φθ* through a Poisson observation model. The restriction that elements of Φ be nonnegative reflects the physical limitations of many sensing systems of interest (e.g., packet routers and counters or linear optical systems). The original approach employed dense random matrices [11], [12]. It has been shown that if the matrix Φ acts nearly isometrically on the set of all k-sparse signals, thus obeying what is now referred to as the Restricted Isometry Property with respect to the ℓ2 norm (RIP-2) [11], then the recovery of θ* from Φθ* is indeed possible. It has also been shown that dense random matrices constructed from Gaussian, Bernoulli, or partial Fourier ensembles satisfy the required RIP-2 property with high probability [11].
Adjacency matrices of expander graphs [13] have recently been proposed as an alternative to dense random matrices within the compressed sensing framework, leading to computationally efficient recovery algorithms [14]-[16]. It has been shown that variations of the standard recovery approaches, such as basis pursuit [2] and matching pursuit [17], are consistent with the expander sensing approach and can recover the original sparse signal successfully [18], [19]. In the presence of Gaussian or sparse noise, random dense sensing and expander sensing are known to provide similar performance in terms of the number of measurements and recovery computation time. Berinde et al. proved that expander graphs with sufficiently large expansion are near-isometries on the set of all k-sparse signals in the ℓ1 norm; this is referred to as a Restricted Isometry Property for the ℓ1 norm (RIP-1) [18]. Furthermore, expander sensing requires less storage whenever the signal is sparse in the canonical basis, while random dense sensing provides slightly tighter recovery bounds [16].
The approach described in this paper consists of the following key elements:
• expander sensing matrices and the RIP-1 associated with them;
• a reconstruction objective function which explicitly incorporates the Poisson likelihood;
• a countable collection of candidate estimators; and
• a penalty function defined over the collection of candidates, which satisfies the Kraft inequality and which can be used to promote sparse reconstructions.
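A minimal sketch of how these elements fit together: measurements y are Poisson with mean Φθ, and each candidate estimate is scored by the negative Poisson log-likelihood plus a penalty. The ℓ0-style penalty used below is only a stand-in for a codelength penalty satisfying the Kraft inequality, and the tiny matrix is not an actual expander adjacency matrix.

```python
# Penalized Poisson likelihood objective: candidates theta are scored by
# -log p(y | Phi @ theta) plus a sparsity penalty; the MAP estimate is the
# candidate minimizing this score over the countable collection.
import math

def neg_poisson_loglik(y, mu):
    """-log p(y | mu) for independent Poisson counts (dropping log y! terms)."""
    total = 0.0
    for yi, m in zip(y, mu):
        if m > 0:
            total += m - yi * math.log(m)
        elif yi > 0:
            return float("inf")   # a zero-mean Poisson cannot produce counts
    return total

def objective(y, Phi, theta, tau=1.0):
    mu = [sum(Pji * ti for Pji, ti in zip(row, theta)) for row in Phi]
    penalty = tau * sum(1 for t in theta if t != 0)  # stand-in l0 penalty
    return neg_poisson_loglik(y, mu) + penalty

Phi = [[1.0, 0.0], [0.0, 1.0]]   # toy sensing matrix (not an expander)
y = [3, 0]                        # observed Poisson counts
score_true = objective(y, Phi, [3.0, 0.0])
score_bad = objective(y, Phi, [0.0, 3.0])   # infinite: mu_1 = 0 but y_1 = 3
```

Note how the nonnegativity of Φ and θ enters even in this toy setting: the likelihood is only finite when the candidate mean Φθ can explain every observed count.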
In general, the penalty function is selected to be small for signals of interest, which leads to theoretical guarantees that errors are small with high probability for such signals. In this paper, exploiting the RIP-1 property and the non-negativity of the expander-based sensing matrices, we show that, in contrast to random dense sensing, expander sensing empowered with a maximum a posteriori (MAP) algorithm can approximately recover the original signal in the presence of Poisson noise, and we prove bounds which quantify the MAP performance. As a result, in the presence of Poisson noise, expander graphs not only provide general storage advantages, but they also allow for efficient MAP recovery methods with performance guarantees comparable to the best k-term approximation of the original signal. Finally, the bounds are tighter than those for specific dense matrices proposed by Willett and Raginsky [9], [10] whenever the signal is sparse in the canonical domain, in that a log term in the bounds in [10] is absent from the bounds presented in this paper.
A. Relationship with dense sensing matrices for Poisson CS
In recent work, the authors established performance bounds for CS in the presence of Poisson noise using dense sensing matrices based on appropriately shifted and scaled Rademacher ensembles [9], [10]. Several features distinguish that work from the present paper:
• The dense sensing matrices used in [9], [10] require more memory to store and more computational resources to apply to a signal in a reconstruction algorithm. The expander-based approach described in this paper, in contrast, is more efficient. • The expander-based approach described in this paper works only when the signal of interest is sparse in the canonical basis. In contrast, the dense sensing matrices used in [9], [10] can be applied to arbitrary sparsity bases (though the proof technique there needs to be altered slightly to accommodate sparsity in the canonical basis). • The bounds in both this paper and [9], [10] reflect a sobering tradeoff between performance and the number of measurements collected. In particular, more measurements (after some critical minimum number) can actually degrade performance as a limited number of events (e.g., photons) are distributed among a growing number of detectors, impairing the SNR of the measurements.
B. Notation
Nonnegative reals (respectively, integers) will be denoted by R+ (respectively, Z+). Given a vector u ∈ R^n and a set S ⊆ {1, . . . , n}, we will denote by u_S the vector obtained by setting to zero all coordinates of u that are in S^c, the complement of S: for all 1 ≤ i ≤ n, (u_S)_i = u_i 1{i∈S}. Given some 1 ≤ k ≤ n, let S be the set of positions of the k largest (in magnitude) coordinates of u. Then u^(k) = u_S will denote the best k-term approximation of u (in the canonical basis of R^n), and

σ_k(u) = ‖u − u^(k)‖1 = Σ_{i∈S^c} |u_i|

will denote the resulting ℓ1 approximation error. The ℓ0 quasinorm measures the number of nonzero coordinates of u: ‖u‖0 = Σ_{i=1}^n 1{u_i ≠ 0}. For a subset S ⊆ {1, . . . , n} we will denote by I_S the vector with components 1{i∈S}, 1 ≤ i ≤ n. Given a vector u, we will denote by u^+ the vector obtained by setting to zero all negative components of u: for all 1 ≤ i ≤ n, u^+_i = max{0, u_i}. Given two vectors u, v ∈ R^n, we will write u ⪰ v if u_i ≥ v_i for all 1 ≤ i ≤ n. If u ⪰ α I_{1,...,n} for some α ∈ R, we will simply write u ⪰ α. We will write ≻ instead of ⪰ if the inequalities are strict for all i.
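The notation above translates directly into code; the helper functions below (our own names, not from the paper) make the coordinate restriction u_S, the best k-term approximation u^(k), the error σ_k(u), and the positive part u^+ concrete.

```python
# Vector notation of this subsection, written out explicitly.

def restrict(u, S):
    """u_S: zero out every coordinate outside the index set S."""
    return [ui if i in S else 0.0 for i, ui in enumerate(u)]

def best_k_term(u, k):
    """u^(k): keep the k largest-magnitude coordinates, zero the rest."""
    S = set(sorted(range(len(u)), key=lambda i: -abs(u[i]))[:k])
    return restrict(u, S)

def sigma_k(u, k):
    """l1 error of the best k-term approximation."""
    return sum(abs(a - b) for a, b in zip(u, best_k_term(u, k)))

def positive_part(u):
    """u^+: clip negative components to zero."""
    return [max(0.0, ui) for ui in u]

u = [3.0, -0.5, 0.2, -4.0]
u2 = best_k_term(u, 2)        # keeps the entries of magnitude 4.0 and 3.0
err = sigma_k(u, 2)           # 0.5 + 0.2 = 0.7
```

A k-sparse vector is exactly one with σ_k(u) = 0, i.e. ‖u‖0 ≤ k.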
C. Organization of the paper
This paper is organized as follows. In Section II, we summarize the existing literature on expander graphs applied to compressed sensing and the RIP-1 property. Section III describes how the problem of compressed sensing with Poisson noise can be formulated in a way that explicitly accounts for nonnegativity constraints and flux preservation (i.e., we cannot detect more events than have occurred); this section also contains our main theoretical result bounding the error of a sparsity penalized likelihood reconstruction of a signal from compressive Poisson measurements. These results are illustrated and further analyzed in Section IV, in which we focus on the specific application of efficiently estimating packet arrival rates. Several technical discussions and proofs have been relegated to the appendices.
II. BACKGROUND ON EXPANDER GRAPHS
We start by defining an unbalanced bipartite vertex-expander graph.
Definition II.1. We say that a bipartite simple graph G = (A, B, E) with (regular) left degree d is a (k, ε)-expander if, for any S ⊂ A with |S| ≤ k, the set of neighbors N(S) of S has size |N(S)| > (1 − ε)d|S|. Figure 1 illustrates such a graph. Intuitively, a bipartite graph is an expander if any sufficiently small subset of its variable nodes has a sufficiently large neighborhood. In the CS setting, A (resp., B) will correspond to the components of the original signal (resp., its compressed representation). Hence, for a given |A|, a "high-quality" expander should have |B|, d, and ε as small as possible, while k should be as close as possible to |B|. The following proposition, proved using the probabilistic method [20], is well-known in the literature on expanders:
Proposition II.2 (Existence of high-quality expanders). For any 1 ≤ k ≤ n/2 and any ε ∈ (0, 1), there exists a (k, ε)-expander with left degree d = O(log(n/k)/ε) and right set size m = O(k log(n/k)/ε²).
Unfortunately, there is no explicit construction of expanders from Definition II.1. However, it can be shown that, with high probability, any d-regular random graph with d = O(log(n/k)/ε) and m = O(k log(n/k)/ε²) satisfies the required expansion property. Moreover, the graph may be assumed to be right-regular as well, i.e., every node in B will have the same (right) degree D [21]. Counting the number of edges in two ways, we conclude that

|E| = |A|d = |B|D  ⟹  D = O(n/k).
Thus, in practice it may suffice to use random bipartite regular graphs instead of expanders². Moreover, there exists an explicit construction, due to Guruswami et al. [23] and based on Parvaresh-Vardy codes [24], for a class of expander graphs that comes very close to the guarantees of Proposition II.2.

Expanders have been recently proposed as a means of constructing efficient compressed sensing algorithms [15], [18], [19], [22]. In particular, it has been shown that any n-dimensional vector that is k-sparse can be fully recovered using O(k log(n/k)) measurements in O(n log(n/k)) time [15], [19]. It has also been shown that, even in the presence of noise in the measurements, if the noise vector has low ℓ1 norm, expander-based algorithms can approximately recover any k-sparse signal [16], [18], [19]. One reason why expander graphs are good sensing candidates is that the adjacency matrix of any (k, ε)-expander almost preserves the ℓ1 norm of any k-sparse vector [18]. In other words, if the adjacency matrix of an expander is used for measurement, then the ℓ1 distance between two sufficiently sparse signals is preserved by measurement. This property is known as the "Restricted Isometry Property for ℓ1 norms," or the "RIP-1" property. Berinde et al. have shown that this condition is sufficient for sparse recovery using ℓ1 minimization [18].
The precise statement of the RIP-1 property, whose proof can be found in [15], goes as follows:
Lemma II.4 (RIP-1 property of expander graphs). Let F be the m × n adjacency matrix of a (k, ε)-expander graph G. Then for any k-sparse vector x ∈ R^n we have

(1 − 2ε)d ||x||_1 ≤ ||Fx||_1 ≤ d ||x||_1.   (1)
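Two pieces of Lemma II.4 can be checked deterministically (the construction below is our own sketch, not the paper's code): every column of F sums to d, so ||Fx||_1 ≤ d||x||_1 for every x, with equality whenever x ≽ 0, since no cancellation can occur; the lower bound in (1) is the part that relies on the expansion property and holds with high probability for a random left-regular graph.

```python
import numpy as np

def random_left_regular_adjacency(n, m, d, rng):
    """m x n 0/1 adjacency matrix of a bipartite graph in which every
    left (variable) node connects to d distinct right (measurement) nodes."""
    F = np.zeros((m, n), dtype=float)
    for i in range(n):
        rows = rng.choice(m, size=d, replace=False)
        F[rows, i] = 1.0
    return F

rng = np.random.default_rng(0)
n, m, d = 200, 40, 8
F = random_left_regular_adjacency(n, m, d, rng)

x = np.zeros(n)
x[[3, 50, 120]] = [2.0, 1.5, 0.5]              # nonnegative 3-sparse signal
print(np.abs(F @ x).sum(), d * np.abs(x).sum())  # equal for x >= 0: no cancellation
```

For signed vectors the upper bound still holds by the triangle inequality, while the lower bound of (1) is what makes sparse recovery possible.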
The following proposition is a direct consequence of the above RIP-1 property. It states that if v is a vector whose ℓ1 norm is not much larger than that of an almost k-sparse vector³ u, and if Fv approximates Fu, then v also approximates u. Our results of Section III exploit the fact that the proposed MAP decoding algorithm outputs a vector satisfying the two conditions above, and hence approximately recovers the desired signal.
Proposition II.5. Let F be the adjacency matrix of a (2k, ε)-expander and let u, v be two vectors in R^n such that ||u||_1 ≥ ||v||_1 − ∆ for some ∆ > 0. Then ||u − v||_1 is upper-bounded by

||u − v||_1 ≤ ((1 − 2ε)/(1 − 6ε)) (2σ_k(u) + ∆) + (2/(d(1 − 6ε))) ||Fu − Fv||_1.

In particular, if we let ε = 1/16, then we get the bound

||u − v||_1 ≤ 4σ_k(u) + (4/d) ||Fu − Fv||_1 + 2∆.
Proof: See Appendix B.

For future convenience, we will introduce the following piece of notation. Given n and 1 ≤ k ≤ n/4, we will denote by G_{k,n} a (2k, 1/16)-expander with left set size n whose existence is guaranteed by Proposition II.2. Then G_{k,n} = (A, B, E) has |A| = n, |B| = m = O(k log(n/k)), and d = O(log(n/k)).
III. COMPRESSED SENSING IN THE PRESENCE OF POISSON NOISE

A. Problem statement
We wish to recover an unknown vector θ* ∈ R_+^n of Poisson intensities from a measured vector y ∈ Z_+^m, sensed according to the Poisson model

y ∼ Poisson(Φθ*),   (2)

where Φ ∈ R_+^{m×n} is a positivity-preserving sensing matrix⁴. That is, for each j ∈ {1, . . . , m}, y_j is sampled independently from a Poisson distribution with mean (Φθ*)_j:

P_{Φθ*}(y) = Π_{j=1}^m P_{(Φθ*)_j}(y_j),   (3)
where, for any z ∈ Z_+ and λ ∈ R_+, we have

P_λ(z) = (λ^z / z!) e^{−λ} if λ > 0, and P_λ(z) = 1_{z=0} otherwise,   (4)

where the λ = 0 case is a consequence of the fact that

lim_{λ→0} (λ^z / z!) e^{−λ} = 1_{z=0}.
We assume that the ℓ1 norm of θ* is known, ||θ*||_1 = L (although later we will show that this assumption can be relaxed). We are interested in designing a sensing matrix Φ and an estimator θ̂ = θ̂(y), such that θ* can be recovered with small expected ℓ1 risk

R(θ̂, θ*) = E_{Φθ*} ||θ̂ − θ*||_1,

where the expectation is taken w.r.t. the distribution P_{Φθ*}.
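A quick simulation of the observation model (2) is instructive (an illustrative sketch with parameter choices of our own; it anticipates the normalization Φ = F/d adopted in the next subsection). Since each column of Φ then sums to one, the flux is preserved for nonnegative signals, ||Φθ*||_1 = ||θ*||_1: the expected total number of detected events equals the total signal intensity.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, d = 100, 25, 4

# Random bipartite d-left-regular adjacency matrix, normalized so that each
# column of Phi sums to one (flux preservation: ||Phi x||_1 = ||x||_1 for x >= 0).
F = np.zeros((m, n))
for i in range(n):
    F[rng.choice(m, size=d, replace=False), i] = 1.0
Phi = F / d

theta = np.zeros(n)
theta[[7, 42, 77]] = [300.0, 200.0, 100.0]  # sparse nonnegative intensity vector

y = rng.poisson(Phi @ theta)                # observation model (2)
print(y.sum(), theta.sum())                 # E[sum(y)] = ||Phi theta||_1 = ||theta||_1
```

This is the "sobering tradeoff" in action: increasing m spreads the same total flux over more detectors, lowering the per-measurement SNR.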
B. The proposed estimator and its performance
To recover θ*, we will use a penalized Maximum Likelihood Estimation (pMLE) approach. Let us choose a convenient 1 ≤ k ≤ n/4 and take Φ to be the normalized adjacency matrix of the expander G_{k,n} (cf. Section II for definitions): Φ = F/d. Moreover, let us choose a finite or countable set Θ_L of candidate estimators θ ∈ R_+^n with ||θ||_1 ≤ L, and a penalty pen : Θ_L → R_+ satisfying the Kraft inequality⁵

Σ_{θ∈Θ_L} e^{−pen(θ)} ≤ 1.   (5)
For instance, we can impose less penalty on sparser signals or construct a penalty based on any other prior knowledge about the underlying signal.
With these definitions, we consider the following penalized maximum likelihood estimator (pMLE):
θ̂ = argmin_{θ∈Θ_L} [− log P_{Φθ}(y) + 2 pen(θ)].   (6)
One way to think about the procedure in (6) is as a Maximum a posteriori Probability (MAP) algorithm over the set of estimates Θ L , where the likelihood is computed according to the Poisson model (4) and the penalty function corresponds to a negative log prior on the candidate estimators in Θ L . Our main bound on the performance of the pMLE is as follows:
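For a small finite candidate set, (6) can be evaluated by exhaustive search. The sketch below is our own illustration (a practical implementation would use the optimization machinery discussed in Section III-D); it minimizes the penalized negative Poisson log-likelihood, dropping the θ-independent Σ_j log(y_j!) term:

```python
import numpy as np

def neg_log_lik(mu, y):
    """-log P_mu(y) for independent Poisson coordinates, up to the
    theta-independent additive constant sum_j log(y_j!); the mu_j = 0
    case contributes 0 if y_j = 0 and +inf otherwise, as in (4)."""
    mu = np.asarray(mu, dtype=float)
    safe = np.where(mu > 0, mu, 1.0)
    out = np.where(mu > 0, mu - y * np.log(safe), np.where(y == 0, 0.0, np.inf))
    return float(out.sum())

def pmle(candidates, penalties, Phi, y):
    """Penalized MLE (6): argmin over a finite candidate set."""
    scores = [neg_log_lik(Phi @ th, y) + 2.0 * pen
              for th, pen in zip(candidates, penalties)]
    return candidates[int(np.argmin(scores))]

# Toy example: y equals the exact mean under theta_true, so theta_true
# maximizes the likelihood among candidates carrying equal penalties.
Phi = np.array([[0.5, 0.5, 0.0, 0.0],
                [0.0, 0.0, 0.5, 0.5]])
theta_true = np.array([4.0, 4.0, 0.0, 0.0])
theta_alt  = np.array([0.0, 0.0, 4.0, 4.0])
y = (Phi @ theta_true).astype(int)          # y = [4, 0]
est = pmle([theta_true, theta_alt], [0.0, 0.0], Phi, y)
print(est)
```

Unequal penalties tilt the same search toward a priori more plausible (e.g., sparser) candidates, which is exactly the MAP reading of (6).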
Theorem III.1. Let Φ be the normalized adjacency matrix of G_{k,n}, let θ* ∈ R_+^n be the original signal compressively sampled in the presence of Poisson noise, and let θ̂ be obtained through (6). Then

R(θ̂, θ*) ≤ 4σ_k(θ*) + 8 √( L min_{θ∈Θ_L} [KL(P_{Φθ*} || P_{Φθ}) + 2 pen(θ)] ),   (7)

where

KL(P_g || P_h) = Σ_{y∈Z_+^m} P_g(y) log( P_g(y)/P_h(y) )

is the Kullback-Leibler divergence (relative entropy) between P_g and P_h [25].
Proof: Since θ̂ ∈ Θ_L, we have L = ||θ*||_1 ≥ ||θ̂||_1. Hence, using Proposition II.5 with ∆ = 0, we can write

||θ* − θ̂||_1 ≤ 4σ_k(θ*) + 4 ||Φ(θ* − θ̂)||_1.

Taking expectations, we obtain

R(θ̂, θ*) ≤ 4σ_k(θ*) + 4 E_{Φθ*} ||Φ(θ* − θ̂)||_1 ≤ 4σ_k(θ*) + 4 √( E_{Φθ*} ||Φ(θ* − θ̂)||_1² ),   (8)
where the second step uses Jensen's inequality. Using Lemmas C.1 and C.2 in Appendix C, we have

E_{Φθ*} ||Φ(θ* − θ̂)||_1² ≤ 4L min_{θ∈Θ_L} [KL(P_{Φθ*} || P_{Φθ}) + 2 pen(θ)].

⁵ Many penalization functions can be modified slightly (e.g., scaled appropriately) to satisfy the Kraft inequality. All that is required is a finite collection of estimators (i.e., Θ_L) and an associated prefix code for each candidate estimate in Θ_L. For instance, this would certainly be possible for a total variation penalty, though the details are beyond the scope of this paper.
Substituting this into (8), we obtain (7).
The bound of Theorem III.1 is an oracle inequality: it states that the ℓ1 error of θ̂ is (up to multiplicative constants) the sum of the k-term approximation error of θ* and √L times the square root of the minimum penalized relative entropy over the set of candidate estimators Θ_L. The first term in (7) is smaller for sparser θ*, and the second term is smaller when there is a θ ∈ Θ_L which is simultaneously a good approximation to θ* (in the sense that the distributions P_{Φθ*} and P_{Φθ} are close) and has a low penalty.
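For product Poisson measures the relative entropy appearing in (7) has a simple closed form: for strictly positive mean vectors g, h ≻ 0, KL(P_g || P_h) = Σ_j [h_j − g_j + g_j log(g_j/h_j)]. This makes the oracle term easy to evaluate numerically; a minimal sketch (the function name is our own):

```python
import numpy as np

def kl_poisson(g, h):
    """KL(P_g || P_h) between products of Poisson distributions with
    strictly positive mean vectors g and h:
    sum_j [h_j - g_j + g_j * log(g_j / h_j)]."""
    g = np.asarray(g, dtype=float)
    h = np.asarray(h, dtype=float)
    return float(np.sum(h - g + g * np.log(g / h)))

g = np.array([2.0, 5.0, 1.0])
print(kl_poisson(g, g))                # 0: the divergence vanishes when g = h
print(kl_poisson(g, g + 1.0) >= 0.0)   # nonnegativity
```

The divergence blows up as some h_j → 0 while g_j > 0, which is precisely the pathology motivating the positivity condition (9) below.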
Remark III.2. So far we have assumed that the ℓ1 norm of θ* is known a priori. If this is not the case, we can still estimate it with high accuracy using noisy compressive measurements. Observe that, since each measurement y_j is a Poisson random variable with mean (Φθ*)_j, Σ_j y_j is Poisson with mean ||Φθ*||_1. Therefore, √(Σ_j y_j) is approximately normally distributed with mean ≈ √(||Φθ*||_1) and variance ≈ 1/4 [26, Sec. 6.2].⁶ Hence, Mill's inequality [27, Thm. 4.7] guarantees that, for every positive t,

Pr( | √(Σ_j y_j) − √(||Φθ*||_1) | > t ) ≲ e^{−2t²} / (√(2π) t),

where ≲ is meant to indicate the fact that this is only an approximate bound, with the approximation error controlled by the rate of convergence in the central limit theorem. Now we can use the RIP-1 property of the expander graphs to obtain the estimates

( √(Σ_j y_j) − t )² ≤ ||Φθ*||_1 ≤ ||θ*||_1  and  ( √(Σ_j y_j) + t )² / (1 − 2ε) ≥ ||Φθ*||_1 / (1 − 2ε) ≥ ||θ*||_1,

that hold with (approximate) probability at least 1 − (√(2π) t)^{−1} e^{−2t²}.
C. A bound in terms of ℓ1 error
The bound of Theorem III.1 is not always useful, since it bounds the ℓ1 risk of the pMLE in terms of the relative entropy. A bound purely in terms of ℓ1 errors would be more desirable. However, such a bound is not easy to obtain without imposing extra conditions either on θ* or on the candidate estimators in Θ_L. This follows from the fact that the divergence KL(P_{Φθ*} || P_{Φθ}) may take the value +∞ if there exists some y such that P_{Φθ}(y) = 0 but P_{Φθ*}(y) > 0.
One way to eliminate this problem is to impose an additional requirement on the candidate estimators in Θ_L: there exists some c > 0, such that

Φθ ≽ c,  ∀θ ∈ Θ_L.   (9)
Under this condition, we will now develop a risk bound for the pMLE purely in terms of the 1 error.
Theorem III.3. Suppose that all the conditions of Theorem III.1 are satisfied. In addition, suppose that the set Θ_L satisfies the condition (9). Then

R(θ̂, θ*) ≤ 4σ_k(θ*) + 8 √( L min_{θ∈Θ_L} [ ||θ* − θ||_1² / c + 2 pen(θ) ] ).   (10)
Proof: Using Lemma C.3 in Appendix C, we get the bound

KL(P_{Φθ*} || P_{Φθ}) ≤ (1/c) ||θ* − θ||_1²,  ∀θ ∈ Θ_L.
Substituting this into Eq. (7), we get (10).
Remark III.4. Because every θ ∈ Θ_L satisfies ||θ||_1 ≤ L, the constant c cannot be too large. In particular, if (9) holds, then for every θ ∈ Θ_L we must have

||Φθ||_1 ≥ m min_j (Φθ)_j ≥ mc.

On the other hand, by the RIP-1 property we have ||Φθ||_1 ≤ ||θ||_1 ≤ L. Thus, a necessary condition for (9) to hold is c ≤ L/m. Since m = O(k log(n/k)), the best risk we may hope to achieve under some condition like (9) is on the order of

R(θ̂, θ*) ≤ 4σ_k(θ*) + C min_{θ∈Θ_L} √( k log(n/k) ||θ − θ*||_1² + L pen(θ) )   (11)

for some constant C, e.g., by choosing c ∝ L/(k log(n/k)). Effectively, this means that, under the positivity condition (9), the ℓ1 error of θ̂ is the sum of the k-term approximation error of θ* plus √m = √(k log(n/k)) times the best penalized ℓ1 approximation error. The first term in (11) is smaller for sparser θ*, and the second term is smaller when there is a θ ∈ Θ_L which is simultaneously a good ℓ1 approximation to θ* and has a low penalty.
D. Empirical performance
Here we present a simulation study that validates our method. In this experiment, compressive Poisson observations are collected from a randomly generated sparse signal passed through a sensing matrix generated from the adjacency matrix of an expander. We then reconstruct the signal using an algorithm that minimizes the objective function in (6), and assess the accuracy of the resulting estimate. We repeat this procedure over several trials to estimate the average performance of the method.
More specifically, we generate our length-n sparse signal θ * through a two-step procedure. First we select k elements of {1, . . . , n} uniformly at random, then we assign these elements an intensity I. All other components of the signal are set to zero. For these experiments, we chose a length n = 100,000 and varied the sparsity k among three different choices of 100, 500, and 1,000 for two intensity levels I of 10,000 and 100,000. We then vary the number m of Poisson observations from 100 to 20,000 using an expander graph sensing matrix with degree d = 8. Recall that the sensing matrix is normalized such that the total signal intensity is divided amongst the measurements, hence the seemingly high choices of I.
To reconstruct the signal, we utilize the SPIRAL-ℓ1 algorithm [28], which solves (6) when pen(θ) = τ||θ||_1. We design the algorithm to optimize over the continuous domain R_+^n instead of the discrete set Θ_L. This is equivalent to the proposed pMLE formulation in the limit as the discrete set of estimates becomes increasingly dense in the set of all θ ∈ R_+^n with ||θ||_1 ≤ L, i.e., we quantize this set on an ever finer scale, increasing the bit allotment to represent each θ. In this high-resolution limit, the Kraft inequality requirement (5) on the penalty pen(θ) translates to ∫ e^{−pen(θ)} dθ < ∞. If we select a penalty proportional to the negative log of a prior probability distribution for θ, this requirement will be satisfied. From a Bayesian perspective, the ℓ1 penalty arises by assuming each component θ_i is drawn i.i.d. from a zero-mean Laplace prior p(θ_i) = e^{−|θ_i|/b}/(2b). Hence the regularization parameter τ is inversely related to the scale parameter b of the prior, as a larger τ (smaller b) will promote solutions with more zero-valued components.
This relaxation results in a computationally tractable convex program over a continuous domain, albeit implemented on a machine with finite precision. The SPIRAL algorithm utilizes a sequence of quadratic subproblems derived by using a second-order Taylor expansion of the Poisson log-likelihood at each iteration. These subproblems are made easier to solve by using a separable approximation, whereby the second-order Hessian matrix is approximated by a scaled identity matrix. For the particular case of the ℓ1 penalty, these subproblems can be solved quickly, exactly, and noniteratively by a soft-thresholding rule.
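The separable subproblem with an ℓ1 penalty and a nonnegativity constraint, min_{θ≥0} (α/2)||θ − s||² + τ||θ||_1, is solved coordinatewise by the one-sided soft-thresholding rule θ_i = max(s_i − τ/α, 0). A minimal sketch of that rule (our own function name, not SPIRAL's API):

```python
import numpy as np

def soft_threshold_nonneg(s, tau_over_alpha):
    """Exact minimizer of (alpha/2)||theta - s||^2 + tau ||theta||_1 over
    theta >= 0: shift each coordinate down by tau/alpha, then clip at zero."""
    return np.maximum(s - tau_over_alpha, 0.0)

s = np.array([3.0, 0.5, -2.0, 1.2])
print(soft_threshold_nonneg(s, 1.0))   # -> [2.0, 0.0, 0.0, 0.2]
```

Coordinates below the threshold are zeroed exactly, which is how the ℓ1 penalty produces sparse iterates without any iterative inner solve.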
After reconstruction, we assess the estimate θ̂ according to the normalized ℓ1 error ||θ* − θ̂||_1 / ||θ*||_1. We select the regularization weight τ in the SPIRAL-ℓ1 algorithm to minimize this quantity for each randomly generated experiment indexed by (I, k, m). To ensure that the results are not biased in our favor by considering only a single random experiment for each (I, k, m), we repeat this experiment several times. The reconstruction accuracy averaged over 10 trials is presented in Figure 2.
These results show that the proposed method is able to accurately estimate sparse signals when the signal intensity is sufficiently high; however, the performance of the method degrades at lower signal strengths. More interesting is the behavior as we vary the number of measurements. There is a clear phase transition where accurate signal reconstruction becomes possible; beyond it, however, performance gently degrades with the number of measurements, since there is a lower signal-to-noise ratio per measurement. This effect is more pronounced at lower intensity levels, as we more quickly enter the regime where only a few photons are collected per measurement. These findings support the error bounds developed in Section III-B.
IV. APPLICATION: ESTIMATING PACKET ARRIVAL RATES
[Figure 2: Average performance (as measured by the normalized ℓ1 error ||θ* − θ̂||_1/||θ*||_1) of the proposed expander-based observation method for recovering sparse signals under Poisson noise, sweeping over a range of measurements for several sparsity (k) and intensity (I) levels of the true signal.]

This section describes an application of the pMLE estimator of Section III: an indirect approach for reconstructing average
packet arrival rates and instantaneous packet counts for a given number of streams (or flows) at a router in a communication network, where the arrivals of packets in each flow are assumed to follow a Poisson process. All packet counting must be done in hardware at the router, and any hardware implementation must strike a delicate balance between speed, accuracy, and cost. For instance, one could keep a dedicated counter for each flow, but, depending on the type of memory used, one could end up with an implementation that is either fast but expensive and unable to keep track of a large number of flows (e.g., using SRAMs, which have low access times, but are expensive and physically large) or cheap and high-density but slow (e.g., using DRAMs, which are cheap and small, but have longer access times) [29], [30].
However, there is empirical evidence [31], [32] that flow sizes in IP networks follow a power-law pattern: just a few flows (say, 10%) carry most of the traffic (say, 90%). Based on this observation, several investigators have proposed methodologies for estimating flows using a small number of counters by either (a) keeping track only of the flows whose sizes exceed a given fraction of the total bandwidth (the approach suggestively termed "focusing on the elephants, ignoring the mice") [29] or (b) using sparse random graphs to aggregate the raw packet counts and recovering flow sizes using a message passing decoder [30].
We consider an alternative to these approaches based on Poisson CS, assuming that the underlying Poisson rate vector is sparse or approximately sparse -and, in fact, it is the approximate sparsity of the rate vector that mathematically describes the power-law behavior of the average packet counts. The goal is to maintain a compressed summary of the process sample paths using a small number of counters, such that it is possible to reconstruct both the total number of packets in each flow and the underlying rate vector. Since we are dealing here with Poisson streams, we would like to push the metaphor further and say that we are "focusing on the whales, ignoring the minnows."
A. Problem formulation
We wish to monitor a large number n of packet flows using a much smaller number m of counters. Each flow is a homogeneous Poisson process (cf. [4] for details pertaining to Poisson processes and networking applications). Specifically, let λ* ∈ R_+^n denote the vector of rates, and let U denote the random process U = {U_t}_{t∈R_+} with sample paths in Z_+^n, where, for each i ∈ {1, . . . , n}, the ith component of U is a homogeneous Poisson process with rate λ*_i arrivals per unit time, and all the component processes are mutually conditionally independent given λ*.

The goal is to estimate the unknown rate vector λ* based on y. We will focus on performance bounds for power-law network traffic, i.e., for λ* belonging to the class

Σ_{α,L_0} = { λ ∈ R_+^n : ||λ||_1 = L_0; σ_k(λ) = O(k^{−α}) }   (12)
for some L_0 > 0 and α ≥ 1, where the constant hidden in the O(·) notation may depend on L_0. Here, α is the power-law exponent that controls the tail behavior; in particular, the extreme regime α → +∞ describes the fully sparse setting. As in Section III, we assume the total arrival rate ||λ*||_1 to be known (and equal to a given L_0) in advance, but this assumption can easily be dispensed with (cf. Remark III.2). As before, we evaluate each candidate estimator λ̂ = λ̂(y) based on its expected ℓ1 risk,

R(λ̂, λ*) = E_{λ*} ||λ̂ − λ*||_1.
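For intuition, a power-law rate vector and the sampled counting processes x_ν = U_{ντ} are easy to simulate (an illustrative sketch with parameter choices of our own): increments of a homogeneous Poisson process over disjoint intervals of length τ are independent Poisson(τλ_i) variables, so the sampled paths are cumulative sums of such increments.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, alpha, L0 = 50, 5, 1.5, 100.0

# Power-law rates: k "whales" with lambda_i proportional to i^{-alpha},
# rescaled so that ||lambda||_1 = L0; the remaining flows are silent.
lam = np.zeros(n)
lam[:k] = np.arange(1, k + 1, dtype=float) ** (-alpha)
lam *= L0 / lam.sum()

# Sampled counting process x_nu = U_{nu * tau}: cumulative sums of
# independent Poisson(tau * lambda_i) increments.
tau, num_updates = 0.1, 20
increments = rng.poisson(tau * lam, size=(num_updates, n))
x = np.cumsum(increments, axis=0)
print(x[-1].sum())   # total packets across all flows after num_updates periods
```

Sample paths are nondecreasing in ν, and x_ν,i / (ντ) is an unbiased estimate of λ_i, which is the observation underlying the direct method below.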
B. Two estimation strategies
We consider two estimation strategies. In both cases, we let our measurement matrix F be the adjacency matrix of the expander G k,n for a fixed k ≤ n/4 (see Section II for definitions). The first strategy, which we call the direct method, uses standard expander-based CS to construct an estimate of λ * . The second is the pMLE strategy, which relies on the machinery presented in Section III and can be used when only the rates are of interest.
1) The direct method: In this method, which will be used as a "baseline" for assessing the performance of the pMLE, the counters are updated in discrete time, every τ time units. Let x = {x_ν}_{ν∈Z_+} denote the sampled version of U, where x_ν = U_{ντ}. The update takes place as follows. We have a binary matrix F ∈ {0, 1}^{m×n}, and at each time ν we let y_ν = F x_ν. In other words, y is obtained by passing a sampled n-dimensional homogeneous Poisson process with rate vector λ* through the linear transformation F.
The direct method uses expander-based CS to obtain an estimate x̂_ν of x_ν from y_ν = F x_ν, followed by letting

λ̂_ν^dir = x̂_ν^+ / (ντ).   (13)

This strategy is based on the observation that x_ν/(ντ) is the maximum-likelihood estimator of λ*. To obtain x̂_ν, we need to solve the convex program

minimize ||u||_1 subject to F u = y_ν,

which can be cast as a linear program [33]. The resulting solution x̂_ν may have negative coordinates,⁷ hence the use of the (·)^+ operation in (13). We then have the following result:
Theorem IV.1.

R(λ̂_ν^dir, λ*) ≤ 4σ_k(λ*) + ||(λ*)^{1/2}||_1 / √(ντ),   (14)

where (λ*)^{1/2} is the vector with components √(λ*_i), ∀i.

Remark IV.2. Note that the error term in (14) is O(1/√ν), assuming everything else is kept constant, which coincides with the optimal rate of ℓ1 error decay in parametric estimation problems.
Proof: We first observe that, by construction, x̂_ν satisfies the relations F x̂_ν = F x_ν and ||x̂_ν||_1 ≤ ||x_ν||_1. Hence,

E ||x̂_ν − ντλ*||_1 ≤ E ||x̂_ν − x_ν||_1 + E ||x_ν − ντλ*||_1 ≤ 4 E σ_k(x_ν) + E ||x_ν − ντλ*||_1,   (15)
where the first step uses the triangle inequality, while the second step uses Proposition II.5 with ∆ = 0. To bound the first term in (15), let S ⊂ {1, . . . , n} denote the positions of the k largest entries of λ*. Then, by definition of the best k-term representation,

σ_k(x_ν) ≤ ||x_ν − (x_ν)_S||_1 = Σ_{i∈S^c} |x_{ν,i}| = Σ_{i∈S^c} x_{ν,i}.

Therefore,

E σ_k(x_ν) ≤ E Σ_{i∈S^c} x_{ν,i} = ντ Σ_{i∈S^c} λ*_i ≡ ντ σ_k(λ*).
To bound the second term, we can use concavity of the square root, as well as the fact that each x_{ν,i} ∼ Poisson(ντλ*_i), to write

E ||x_ν − ντλ*||_1 = E Σ_{i=1}^n |x_{ν,i} − ντλ*_i| = E Σ_{i=1}^n √( (x_{ν,i} − ντλ*_i)² ) ≤ Σ_{i=1}^n √( E(x_{ν,i} − ντλ*_i)² ) = Σ_{i=1}^n √(ντλ*_i).

Now, it is not hard to show that ||x̂_ν^+ − ντλ*||_1 ≤ ||x̂_ν − ντλ*||_1. Therefore,

R(λ̂_ν^dir, λ*) ≤ E ||x̂_ν − ντλ*||_1 / (ντ) ≤ 4σ_k(λ*) + ||(λ*)^{1/2}||_1 / √(ντ),
which proves the theorem.
2) The penalized MLE approach: In the penalized MLE approach the counters are updated in a slightly different manner. Here the counters are still updated in discrete time, every τ time units; however, each counter i ∈ {1, . . . , m} is updated at times {ντ + (i/m)τ}_{ν∈Z_+}, and only aggregates the packets that have arrived during the time period (ντ + ((i−1)/m)τ, ντ + (i/m)τ]. Therefore, in contrast to the direct method, here each arriving packet is registered by at most one counter. Furthermore, since the packets arrive according to a homogeneous Poisson process, conditioned on the vector λ*, the values measured by distinct counters are independent.⁸ Therefore, the vector of counts at time ν obeys

y_ν ∼ Poisson(Φθ*),  where θ* = (ντd/m) λ*,
which is precisely the sensing model we have analyzed in Section III. Now assume that the total average arrival rate ||λ*||_1 = L_0 is known. Let Λ be a finite or a countable set of candidate estimators with ||λ||_1 ≤ L_0 for all λ ∈ Λ, and let pen(·) be a penalty functional satisfying the Kraft inequality over Λ. Given ν and τ, consider the scaled set

Λ_{ν,τ} = (ντd/m) Λ ≡ { (ντd/m) λ : λ ∈ Λ }
with the same penalty function, pen((ντd/m)λ) = pen(λ) for all λ ∈ Λ. We can now apply the results of Section III. Specifically, let

λ̂_ν^pMLE = m θ̂ / (ντd),

where θ̂ is the corresponding pMLE estimator obtained according to (6). The following theorem is a consequence of Theorem III.3 and the remark following it:
Theorem IV.3. If the set Λ satisfies the strict positivity condition (9), then there exists some absolute constant C > 0, such that

R(λ̂_ν^pMLE, λ*) ≤ 4σ_k(λ*) + C min_{λ∈Λ} √( k log(n/k) ||λ − λ*||_1² + k L_0 pen(λ)/(ντ) ).   (16)
We now develop risk bounds under the power-law condition. To this end, let us suppose that λ* is a member of the power-law class Σ_{α,L_0} defined in (12). Fix a small positive number δ, such that L_0/√δ is an integer, and define the set

Λ = { λ ∈ R_+^n : ||λ||_1 ≤ L_0; λ_i ∈ {s√δ}_{s=0}^{L_0/√δ}, ∀i }.
These will be our candidate estimators of λ*. We define the penalty function pen(λ) ∝ ||λ||_0 log(δ^{−1}). For any λ ∈ Σ_{α,L_0} and any 1 ≤ r ≤ n we can find some λ^{(r)} ∈ Λ, such that ||λ^{(r)}||_0 ≲ r and ||λ − λ^{(r)}||_1² ≲ r^{−2α} + rδ.

⁸ The independence follows from the fact that if X_1, . . . , X_m are conditionally independent random variables, then for any choice of functions g_1, . . . , g_m, the random variables g_1(X_1), . . . , g_m(X_m) are also conditionally independent.
Here we assume that δ is sufficiently small, so that the penalty term √(kr log(δ^{−1})/(ντ)) dominates the quantization error term √(rδ). In order to guarantee that the penalty function satisfies Kraft's inequality, we need to ensure that

Σ_{r=1}^n Σ_{λ^{(r)}∈Λ: ||λ^{(r)}||_0 = r} δ^r ≤ 1.
For every fixed r, there are exactly (n choose r) subspaces of dimension r, and each subspace contains exactly (L_0/√δ)^r distinct elements of Λ. Therefore, as long as

δ ≤ (2nL_0)^{−2},   (17)

then

Σ_{r=1}^n (n choose r) (L_0√δ)^r ≤ Σ_{r=1}^n (nL_0√δ)^r ≤ Σ_{r=1}^n 2^{−r} ≤ 1,

and Kraft's inequality is satisfied.
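The Kraft condition under (17) is easy to verify numerically: with δ = (2nL_0)^{−2}, each factor L_0√δ equals 1/(2n), so the whole sum telescopes to (1 + 1/(2n))^n − 1 < 1 by the binomial theorem. A quick check with toy parameters of our own:

```python
from math import comb, sqrt

n, L0 = 20, 5.0
delta = (2 * n * L0) ** (-2)       # condition (17), taken with equality

# Kraft sum: sum over support sizes r of C(n, r) * (L0 * sqrt(delta))^r
total = sum(comb(n, r) * (L0 * sqrt(delta)) ** r for r in range(1, n + 1))
print(total)                        # provably at most 1
```

Since L_0√δ = 1/(2n), the bound is dimension-free: it holds for every n and every L_0.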
Using the fact that k log(n/k) = O(kd), we can bound the minimum over λ ∈ Λ in (16) from above by

min_{1≤r≤n} [ kd r^{−2α} + rk log(δ^{−1})/(ντ) ] = O( k d^{1/(2α+1)} (log(δ^{−1})/(ντ))^{2α/(2α+1)} ) = O( k d^{1/(2α+1)} (log n/(ντ))^{2α/(2α+1)} ).
We can now particularize Theorem IV.3 to the power-law case:
Theorem IV.4.

sup_{λ*∈Σ_{α,L_0}} R(λ̂_ν^pMLE, λ*) = O(k^{−α}) + O( k^{1/2} d^{1/(4α+2)} (log n/(ντ))^{α/(2α+1)} ),
where the constants implicit in the O(·) notation depend on L 0 and α.
Note that the risk bound here is slightly worse than the benchmark bound of Theorem IV.1. However, it should be borne in mind that this bound is based on Theorem III.3, rather than on the potentially much tighter oracle inequality of Theorem III.1, since our goal was to express the risk of the pMLE purely in terms of the ℓ1 approximation properties of the power-law class Σ_{α,L_0}. In general, we expect the actual risk of the pMLE to be much lower than what the conservative bound of Theorem IV.4 predicts. Indeed, as we will see in Section IV-D, the pMLE approach obtains higher empirical accuracy than the direct method. But first we show how the pMLE can be approximated efficiently with proper preprocessing of the observed counts y_ν based on the structure of G_{k,n}.
C. Efficient pMLE approximation
In this section we present an efficient algorithm for approximating the pMLE estimate. The algorithm consists of two phases: (1) first, we preprocess y ν to isolate a subset A 1 of A = {1, . . . , n} which is sufficiently small and is guaranteed to contain the locations of the k largest entries of λ * (the whales); (2) then we construct a set Λ of candidate estimators whose support sets lie in A 1 , together with an appropriate penalty, and perform pMLE over this reduced set.
The success of this approach hinges on the assumption that the magnitude of the smallest whale is sufficiently large compared to the magnitude of the largest minnow. Specifically, we make the following assumption: let S ⊂ A contain the locations of the k largest coordinates of λ*. Then we require that

min_{i∈S} λ*_i > 9D ||λ* − λ*^{(k)}||_∞.   (18)
Recall that D = O(nd/m) = O(n/k) is the right degree of the expander graph. One way to think about (18) is in terms of a signal-to-noise ratio, which must be strictly larger than 9D. We also require ντ to be sufficiently large, so that

(ντ/m) D ||λ* − λ*^{(k)}||_∞ ≥ log(mn)/2.   (19)

Finally, we perturb our expander a bit as follows: choose an integer k′ > 0 so that

k′ ≥ max{ 16(kd + 1)/(15d), 2k }.   (20)
Then we replace our original (2k, 1/16)-expander G_{k,n} with left degree d by a (k′, 1/16)-expander G_{k′,n} with the same left degree. The resulting procedure, displayed below as Algorithm 1, has the following guarantees:
Algorithm 1 Efficient pMLE approximation algorithm
Input: measurement vector y_ν, and the sensing matrix F.
Output: an approximation λ̂.
1: Let B_1 consist of the locations of the kd largest elements of y_ν, and let B_2 = B \ B_1.
2: Let A_2 contain the set of all variable nodes that have at least one neighbor in B_2, and let A_1 = A \ A_2.
3: Construct a candidate set of estimators Λ with support in A_1 and a penalty pen(·) over Λ.
4: Output the pMLE λ̂.
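Phase 1 of Algorithm 1 amounts to a sort of the counts followed by neighbor elimination on the bipartite graph. A small self-contained sketch (toy graph and names of our own making):

```python
import numpy as np

def isolate_whales(y, F, k, d):
    """Phase 1 of the whale-isolation procedure: keep exactly the variable
    nodes all of whose neighbors lie among the k*d largest measurements."""
    m, n = F.shape
    order = np.argsort(y)                 # O(m log m) sort of the counts
    B1 = set(order[-k * d:].tolist())     # locations of the k*d largest elements
    # A variable node survives iff none of its neighbors falls outside B1.
    return [i for i in range(n)
            if all(j in B1 for j in np.flatnonzero(F[:, i]))]

# Toy graph: variable node 0 is the "whale"; its two neighbors carry large counts.
F = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 0],
              [0, 0, 1]])
y = np.array([90, 80, 3, 2])
print(isolate_whales(y, F, k=1, d=2))   # -> [0]
```

The subsequent pMLE search then runs only over candidates supported on the returned index set, which is what makes the overall procedure efficient.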
Theorem IV.5. Suppose the assumptions (18), (19), and (20) hold. Then with probability at least 1 − 1/n the set A_1 constructed by Algorithm 1 has the following properties: (1) S ⊂ A_1; (2) |A_1| ≤ kd; (3) A_1 can be found in time O(m log m + nd).
Proof: (1) First fix a measurement node j ∈ B. Recall that y_{ν,j} is a Poisson random variable with mean (ντ/m)(Fλ*)_j. By the same argument as in Remark III.2, √(y_{ν,j}) is approximately normally distributed with mean ≈ √((ντ/m)(Fλ*)_j) and variance ≈ 1/4. Hence, it follows from Mill's inequality and the union bound that, for every positive t,

Pr( ∃j : | √(y_{ν,j}) − √((ντ/m)(Fλ*)_j) | > t ) ≲ m e^{−2t²} / (√(2π) t).
If j is a neighbor of S, then (Fλ*)_j ≥ min_{i∈S} λ*_i; whereas if j is not connected to S, then

(Fλ*)_j ≤ D ||λ* − λ*^{(k)}||_∞.
Hence, by setting t = √(log(mn)/2) (where w.l.o.g. we assume that t ≥ 1), we conclude that, with probability at least 1 − 1/n, for every measurement node j the following holds:

• If j is a neighbor of S, then √(y_{ν,j}) ≥ √((ντ/m) min_{i∈S} λ*_i) − √(log(mn)/2).

• If j is not connected to S, then √(y_{ν,j}) ≤ √((ντ/m) D ||λ* − λ*^{(k)}||_∞) + √(log(mn)/2).
Consequently, by virtue of (18) and (19), with probability at least 1 − 1/n every element of y_ν that is a neighbor of S has larger magnitude than every element of y_ν that is not a neighbor of S.
(2) Suppose, to the contrary, that |A_1| > kd. Let A′_1 ⊆ A_1 be any subset of size kd + 1. Now, Lemma 3.6 in [34] states that, provided ε ≤ 1 − 1/d, every (l, ε)-expander with left degree d is also an (l(1 − ε)d, 1 − 1/d)-expander with left degree d. We apply this result to our (k′, 1/16)-expander, where k′ satisfies (20), to see that it is also a (kd + 1, 1 − 1/d)-expander. Therefore, for the set A′_1 we must have |N(A′_1)| ≥ |A′_1| = kd + 1. On the other hand, N(A′_1) ⊂ B_1, so |N(A′_1)| ≤ kd. This is a contradiction, hence we must have |A_1| ≤ kd.
(3) Finding the sets B_1 and B_2 can be done in O(m log m) time by sorting y_ν. The set A_1 can then be found in time O(nd), by sequentially eliminating all nodes connected to each node in B_2.
Having identified the set A_1, we can reduce the pMLE optimization to only those candidates whose support sets lie in A_1. More precisely, if we originally start with a sufficiently rich class of estimators Λ̃, then the new feasible set can be reduced to

Λ = { λ ∈ Λ̃ : Supp(λ) ⊂ A_1 }.
Hence, by extracting the set A_1, we can significantly reduce the complexity of finding the pMLE estimate. If |Λ| is small, the optimization can be performed by brute-force search in O(|Λ|) time. Otherwise, since |A_1| ≤ kd, we can use the quantization technique from the preceding section with quantizer resolution √δ to construct a Λ of size at most (L_0/√δ)^{kd}. In this case, we can even assign the uniform penalty pen(λ) = log|Λ| = O(k log(n/k) log(δ^{−1})), which amounts to a vanilla MLE over Λ.
D. Empirical performance
Here we compare penalized MLE with $\ell_1$-magic [35], a universal $\ell_1$-minimization method, and with SSMP [36], an alternative method that employs combinatorial optimization.
$\ell_1$-magic and SSMP both compute the "direct" estimator. The pMLE estimate is computed using Algorithm 1 above. For ease of computation, the candidate set $\Lambda$ is approximated by the convex set of all positive vectors with bounded $\ell_1$ norm, and the CVX package [37], [38] is used to directly solve the pMLE objective function with $\mathrm{pen}(\theta) = \|\theta\|_1$. Figures 3(a) through 5(b) report the results of numerical experiments, where the goal is to identify the $k$ largest entries in the rate vector from the measured data. Since a random graph is, with overwhelming probability, an expander graph, each experiment was repeated 30 times using independent sparse random graphs with $d = 8$.
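As a rough stand-in for the CVX step, the pMLE objective with pen(θ) = ‖θ‖₁ can also be minimized by projected gradient descent on the Poisson negative log-likelihood. Everything below (function names, step size, penalty weight, iteration count) is our own illustrative sketch, not the paper's implementation.

```python
import math

def neg_log_lik(theta, Phi, y, eps=1e-12):
    """Poisson negative log-likelihood (up to the log y! constant)."""
    mu = [sum(Phi[j][i] * theta[i] for i in range(len(theta)))
          for j in range(len(y))]
    return sum(m_ - yj * math.log(m_ + eps) for m_, yj in zip(mu, y))

def pmle(Phi, y, n, tau=0.1, lr=0.05, iters=200):
    """Minimize neg_log_lik + tau * ||theta||_1 over theta >= 0
    by projected gradient descent (toy solver)."""
    theta = [1.0] * n
    for _ in range(iters):
        mu = [sum(Phi[j][i] * theta[i] for i in range(n))
              for j in range(len(y))]
        # Gradient of the likelihood term plus the l1 subgradient tau.
        grad = [sum(Phi[j][i] * (1.0 - y[j] / (mu[j] + 1e-12))
                    for j in range(len(y))) + tau for i in range(n)]
        # Projection onto the nonnegative orthant.
        theta = [max(0.0, t - lr * g) for t, g in zip(theta, grad)]
    return theta
```

On a trivial identity sensing matrix, the solver shrinks each coordinate toward y_j/(1 + tau) and zeroes out coordinates with no counts, which is the expected soft-thresholding-like behavior.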
We also used the following process to generate the rate vector. First, given the power-law exponent α, the magnitudes of the k whales were chosen according to a power-law distribution with parameter α. The positions of the k whales were then chosen uniformly at random. Finally, the n − k minnows were sampled independently from a N(0, 10⁻⁶) distribution (negative samples were replaced by their absolute values). Thus, given the locations of the k whales, their magnitudes decay according to a truncated power law (with the cutoff at k), while the magnitudes of the minnows represent a noisy background. Figure 3 shows the relative $\ell_1$ error ($\|\lambda - \hat\lambda_\nu\|_1/\|\lambda\|_1$) of the three algorithms above as a function of k. Note that in all cases α = 1, α = 1.5, and α = 2, the pMLE algorithm provides lower $\ell_1$ errors. Similarly, Figure 4 reports the probability of exact recovery as a function of k. Again, it turns out that in all three cases the pMLE algorithm has a higher probability of exact support recovery compared to the two direct algorithms.
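The signal-generation process just described can be reproduced with a short script. The exact normalization of the power law and the random seed are our own choices; the paper only specifies the qualitative model.

```python
import random

def make_rate_vector(n, k, alpha, seed=0):
    """Toy re-creation of the experiment's signal model: k power-law
    'whales' at random positions on a |N(0, 1e-6)| 'minnow' background."""
    rng = random.Random(seed)
    # Minnows: absolute values of N(0, 1e-6) draws (std = 1e-3).
    lam = [abs(rng.gauss(0.0, 1e-3)) for _ in range(n)]
    # Whales: magnitudes rank**(-alpha), truncated power law with cutoff k,
    # placed at k positions chosen uniformly at random.
    pos = rng.sample(range(n), k)
    for rank, i in enumerate(pos, start=1):
        lam[i] = rank ** (-alpha)
    return lam, set(pos)
```

With α = 1 the smallest whale has magnitude 1/k, which is orders of magnitude above the minnow background, matching the "few large flows" regime the experiments target.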
We also analyzed the impact of changing the number of updates on the accuracy of the three algorithms above. The results are demonstrated in Figure 5. Here we fixed the number of whales to k = 30, and changed the number of updates from 10 to 200. It turned out that as the number of updates ν increases, the relative $\ell_1$ errors of all three algorithms decrease and their probabilities of exact support recovery consistently increase. Moreover, the pMLE algorithm always outperforms the $\ell_1$-magic (LP) and SSMP algorithms.
V. CONCLUSIONS
In this paper we investigated expander-based sensing as an alternative to dense random sensing in the presence of Poisson noise. Even though the Poisson model is essential in some applications, it presents several challenges: the noise is not bounded, nor even as concentrated as Gaussian noise, and it is signal-dependent. Here we proposed using normalized adjacency matrices of expander graphs as an alternative construction of sensing matrices, and we showed that the binary nature and the RIP-1 property of these matrices yield provable consistency for a MAP reconstruction algorithm.
The compressed sensing algorithms based on Poisson observations and expander-graph sensing matrices provide a useful mechanism for accurately and robustly estimating a collection of flow rates with relatively few counters. These techniques have the potential to significantly reduce the cost of hardware required for flow rate estimation. While previous approaches assumed packet counts matched the flow rates exactly or that flow rates were i.i.d., the approach in this paper accounts for the Poisson nature of packet counts with relatively mild assumptions about the underlying flow rates (i.e., that only a small fraction of them are large).
The "direct" estimation method (in which first the vector of flow counts is estimated using a linear program, and then the underlying flow rates are estimated using Poisson maximum likelihood) is juxtaposed with an "indirect" method (in which the flow rates are estimated in one pass from the compressive Poisson measurements using penalized likelihood estimation).
The methods in this paper, along with related results in this area, are designed for settings in which the flow rates are sufficiently stationary, so that they can be accurately estimated in a fixed time window. Future directions include extending these approaches to a more realistic setting in which the flow rates evolve over time. In this case, the time window over which packets should be counted may be relatively short, but this can be mitigated by exploiting estimates of the flow rates in earlier time windows.
APPENDIX A OBSERVATION MODELS IN POISSON INVERSE PROBLEMS
In (2) and all the subsequent analysis in this paper, we assume y ∼ Poisson(Φθ * ).
However, one might question how accurately this models the physical systems of interest, such as a photon-limited imaging system or a router. In particular, we may prefer to think of only a small number of events (e.g., photons or packets) being incident upon our system, and the system then rerouting those events to a detector. In this appendix, we compare the statistical properties of these two models. Let z j,i denote the number of events traveling from location i in the source (θ * ) to location j on the detector. Also, in this appendix let us assume Φ is a stochastic matrix, i.e., each column of Φ sums to one; in general, most elements of Φ are going to be less than one. Physically, this assumption means that every event incident on the system hits some element of the detector array. Armed with these assumptions, we can think of Φ j,i as the probability of events from location i in θ * being transmitted to location j in the observation vector y.
We consider two observation models:
Model A:
$$z_{j,i} \sim \mathrm{Poisson}(\Phi_{j,i}\,\theta^*_i), \qquad y_j = \sum_{i=1}^n z_{j,i}.$$
Model B:
$$w \sim \mathrm{Poisson}(\theta^*), \qquad \{z_{j,i}\}_{j=1}^m \sim \mathrm{Multinomial}\big(w_i,\, \{\Phi_{j,i}\}_{j=1}^m\big), \qquad y_j = \sum_{i=1}^n z_{j,i},$$
where in both models all the components z j,i of z are mutually conditionally independent given the appropriate parameters. Model A roughly corresponds to the model we consider throughout the paper; Model B corresponds to considering Poisson realizations with intensity θ * (denoted w) incident upon our system and then redirected to different detector elements via Φ. We model this redirection process with a multinomial distribution. While the model y ∼ Poisson(Φθ * ) is slightly different from Model A, the following analysis will provide valuable insight into discrete event counting systems. We now show that the distribution of z is the same in Models A and B. First note that
$$y_j \equiv \sum_{i=1}^n z_{j,i} \quad\text{and}\quad w_i \equiv \sum_{j=1}^m z_{j,i}. \tag{21}$$
Under Model A, we have
$$p(z|\theta^*) = \prod_{i=1}^n \prod_{j=1}^m \frac{e^{-\Phi_{j,i}\theta^*_i}\,(\Phi_{j,i}\theta^*_i)^{z_{j,i}}}{z_{j,i}!} = \prod_{i=1}^n \left[\prod_{j=1}^m \frac{\Phi_{j,i}^{z_{j,i}}}{z_{j,i}!}\right] e^{-\sum_{j=1}^m \Phi_{j,i}\theta^*_i}\,(\theta^*_i)^{\sum_{j=1}^m z_{j,i}} = \prod_{i=1}^n \left[\prod_{j=1}^m \frac{\Phi_{j,i}^{z_{j,i}}}{z_{j,i}!}\right] e^{-\theta^*_i}(\theta^*_i)^{w_i}, \tag{22}$$
where in the last step we used (21) and the assumption that $\sum_{j=1}^m \Phi_{j,i} = 1$. Under Model B, we have
$$p(z|w) = \begin{cases} \displaystyle\prod_{i=1}^n w_i! \prod_{j=1}^m \frac{\Phi_{j,i}^{z_{j,i}}}{z_{j,i}!}, & \text{if } \sum_{j=1}^m z_{j,i} = w_i \ \forall i,\\[1ex] 0, & \text{otherwise,}\end{cases} \qquad p(w|\theta^*) = \prod_{i=1}^n \frac{e^{-\theta^*_i}(\theta^*_i)^{w_i}}{w_i!},$$
so that
$$p(z|\theta^*) = \sum_{v\,\in\,\mathbb{Z}_+^n:\ \sum_j z_{j,i}=v_i} p(z|v)\,p(v|\theta^*) = \prod_{i=1}^n w_i!\left[\prod_{j=1}^m \frac{\Phi_{j,i}^{z_{j,i}}}{z_{j,i}!}\right] \frac{e^{-\theta^*_i}(\theta^*_i)^{w_i}}{w_i!} = \prod_{i=1}^n \left[\prod_{j=1}^m \frac{\Phi_{j,i}^{z_{j,i}}}{z_{j,i}!}\right] e^{-\theta^*_i}(\theta^*_i)^{w_i}. \tag{23}$$
The fourth line uses (21). Since (22) and (23) are the same, we have shown that Models A and B are statistically equivalent. While Model B may be more intuitively appealing based on our physical understanding of how these systems operate, using Model A for our analysis and algorithm development is just as accurate and mathematically more direct.

APPENDIX B
PROOF OF PROPOSITION II.5

Let $y = u - v$, let $S \subset \{1, \ldots, n\}$ denote the positions of the $k$ largest (in magnitude) coordinates of $y$, and enumerate the complementary set $S^c$ as $i_1, i_2, \ldots, i_{n-k}$ in decreasing order of magnitude of $|y_{i_j}|$, $j = 1, \ldots, n-k$. Let us partition the set $S^c$ into adjacent blocks $S_1, \ldots, S_t$, such that all blocks (but possibly $S_t$) have size $k$. Also let $S_0 = S$. Let $\widetilde F$ be the submatrix of $F$ containing the rows indexed by $N(S)$. Then, following the argument of Berinde et al. [18], which also goes back to Sipser and Spielman [21], we have the following chain of inequalities:
$$\|Fy\|_1 \ge \|\widetilde F y\|_1 \ge \|\widetilde F y_S\|_1 - \sum_{i=1}^t \sum_{\substack{(j,l)\in E:\\ j\in S_i,\ l\in N(S)}} |y_j| \ge d(1-2\epsilon)\|y_S\|_1 - \sum_{i=1}^t \sum_{\substack{(j,l)\in E:\\ j\in S_i,\ l\in N(S)}} \frac{\|y_{S_{i-1}}\|_1}{k} \ge d(1-2\epsilon)\|y_S\|_1 - 2\epsilon k d \sum_{i=1}^t \frac{\|y_{S_{i-1}}\|_1}{k} \ge d(1-2\epsilon)\|y_S\|_1 - 2\epsilon d\|y\|_1.$$
Most of the steps are straightforward consequences of the definitions, the triangle inequality, or the RIP-1 property. The fourth inequality follows from the following fact: since we are dealing with a $(2k, \epsilon)$-expander and since $|S \cup S_i| \le 2k$ for every $i = 0, \ldots, t$, we must have $|N(S \cup S_i)| \ge d(1-\epsilon)|S \cup S_i|$. Therefore, at most $2\epsilon k d$ edges can cross from each $S_i$ to $N(S)$. From the above estimate, we obtain
$$\|Fu - Fv\|_1 + 2\epsilon d\,\|y\|_1 \ge (1 - 2\epsilon)\,d\,\|y_S\|_1. \tag{24}$$
Using the assumption that $\|u\|_1 \ge \|v\|_1 - \Delta$, the triangle inequality, and the fact that $\|u_{S^c}\|_1 = \sigma_k(u)$, we obtain
$$\|u\|_1 \ge \|v\|_1 - \Delta = \|u - y\|_1 - \Delta = \|(u-y)_S\|_1 + \|(u-y)_{S^c}\|_1 - \Delta \ge \|u_S\|_1 - \|y_S\|_1 + \|y_{S^c}\|_1 - \|u_{S^c}\|_1 - \Delta = \|u\|_1 - 2\|u_{S^c}\|_1 + \|y\|_1 - 2\|y_S\|_1 - \Delta = \|u\|_1 - 2\sigma_k(u) + \|y\|_1 - 2\|y_S\|_1 - \Delta,$$
which yields $\|y\|_1 \le 2\sigma_k(u) + 2\|y_S\|_1 + \Delta$.
Using (24) to bound $\|y_S\|_1$, we further obtain
$$\|y\|_1 \le 2\sigma_k(u) + \frac{2\|Fu - Fv\|_1 + 4\epsilon d\,\|y\|_1}{(1-2\epsilon)\,d} + \Delta.$$
Rearranging this inequality completes the proof.

Lemma C.1. Any $\theta \in \Theta_L$ satisfies the bound
$$\|\Phi(\theta^* - \theta)\|_1^2 \le 4L \sum_{i=1}^m \Big((\Phi\theta^*)_i^{1/2} - (\Phi\theta)_i^{1/2}\Big)^2.$$
Proof: From Lemma II.4 it follows that
$$\|\Phi\theta\|_1 \le \|\theta\|_1 \le L, \qquad \forall \theta \in \Theta_L. \tag{25}$$
Let β * = Φθ * and β = Φθ. Then
$$\|\beta^* - \beta\|_1^2 = \left(\sum_{i=1}^m |\beta^*_i - \beta_i|\right)^2 = \left(\sum_{i=1}^m \big|\beta_i^{*1/2} - \beta_i^{1/2}\big|\,\big(\beta_i^{*1/2} + \beta_i^{1/2}\big)\right)^2 \le \sum_{i,j=1}^m \big(\beta_i^{*1/2} - \beta_i^{1/2}\big)^2 \big(\beta_j^{*1/2} + \beta_j^{1/2}\big)^2 \le 2\sum_{i=1}^m \big(\beta_i^{*1/2} - \beta_i^{1/2}\big)^2 \sum_{j=1}^m (\beta^*_j + \beta_j) = 2\sum_{i=1}^m \big(\beta_i^{*1/2} - \beta_i^{1/2}\big)^2\,\big(\|\beta^*\|_1 + \|\beta\|_1\big) \le 4L \sum_{i=1}^m \Big((\Phi\theta^*)_i^{1/2} - (\Phi\theta)_i^{1/2}\Big)^2.$$
The first and the second inequalities are by Cauchy-Schwarz, while the third inequality is a consequence of Eq. (25).
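The final bound $\|\beta^* - \beta\|_1^2 \le 4L \sum_i (\sqrt{\beta^*_i} - \sqrt{\beta_i})^2$ can be sanity-checked numerically. This is an illustrative check with arbitrary random vectors rescaled so that both satisfy the $\ell_1$ bound of Eq. (25); the dimensions and the value of L are our own test choices.

```python
import math
import random

rng = random.Random(1)
m, L = 50, 5.0
beta_star = [rng.uniform(0.0, 1.0) for _ in range(m)]
beta = [rng.uniform(0.0, 1.0) for _ in range(m)]
# Rescale so that ||beta*||_1 = ||beta||_1 = L, as allowed by Eq. (25).
s1, s2 = sum(beta_star), sum(beta)
beta_star = [L * b / s1 for b in beta_star]
beta = [L * b / s2 for b in beta]

# Left-hand side: squared l1 distance between the two intensity vectors.
lhs = sum(abs(a - b) for a, b in zip(beta_star, beta)) ** 2
# Right-hand side: 4L times the squared Hellinger-type distance.
rhs = 4.0 * L * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                    for a, b in zip(beta_star, beta))
assert lhs <= rhs + 1e-12
```

The inequality holds for any nonnegative vectors with $\ell_1$ norms at most $L$, since the Cauchy–Schwarz steps above are fully general.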
Lemma C.2. Let $\hat\theta$ be a minimizer in Eq. (6). Then
$$\mathbb{E}_{\Phi\theta^*}\left[\sum_{j=1}^m \Big((\Phi\theta^*)_j^{1/2} - (\Phi\hat\theta)_j^{1/2}\Big)^2\right] \le \min_{\theta\in\Theta_L}\Big[\mathrm{KL}(P_{\Phi\theta^*}\|P_{\Phi\theta}) + 2\,\mathrm{pen}(\theta)\Big]. \tag{26}$$
Proof: Using Lemma C.4 below with $g = \Phi\theta^*$ and $h = \Phi\hat\theta$, we have
$$\sum_{j=1}^m \Big((\Phi\theta^*)_j^{1/2} - (\Phi\hat\theta)_j^{1/2}\Big)^2 = -2\log \int \sqrt{P_{\Phi\theta^*}(y)\,P_{\Phi\hat\theta}(y)}\,d\nu(y).$$
Clearly $\int \sqrt{P_{\Phi\theta^*}(y)P_{\Phi\hat\theta}(y)}\,d\nu(y) = \mathbb{E}_{\Phi\theta^*}\sqrt{P_{\Phi\hat\theta}(y)/P_{\Phi\theta^*}(y)}$. We now provide a bound for this expectation. Let $\tilde\theta$ be a minimizer of $\mathrm{KL}(P_{\Phi\theta^*}\|P_{\Phi\theta}) + 2\,\mathrm{pen}(\theta)$ over $\theta\in\Theta_L$. Then, by definition of $\hat\theta$, we have $P_{\Phi\hat\theta}(y)e^{-\mathrm{pen}(\hat\theta)} \ge P_{\Phi\tilde\theta}(y)e^{-\mathrm{pen}(\tilde\theta)}$ for every $y$. Consequently, one splits the quantity $2\,\mathbb{E}_{\Phi\theta^*}\log\sqrt{P_{\Phi\hat\theta}(y)/P_{\Phi\theta^*}(y)}$ into three terms: the first equals $-\big[\mathbb{E}_{\Phi\theta^*}\log\big(P_{\Phi\theta^*}(y)/P_{\Phi\tilde\theta}(y)\big) + 2\,\mathrm{pen}(\tilde\theta)\big]$, while the remaining two are nonpositive by Jensen's inequality, $\mathbb{E}\log(\cdot) \le \log\mathbb{E}(\cdot)$, together with the bound $\mathbb{E}_{\Phi\theta^*}\big[P_{\Phi\hat\theta}(y)\,e^{-\mathrm{pen}(\hat\theta)}/P_{\Phi\theta^*}(y)\big] \le \sum_{\theta\in\Theta_L} e^{-\mathrm{pen}(\theta)} \le 1$. Since $\mathbb{E}_{\Phi\theta^*}\log\big(P_{\Phi\theta^*}(y)/P_{\Phi\tilde\theta}(y)\big) = \mathrm{KL}(P_{\Phi\theta^*}\|P_{\Phi\tilde\theta})$, we obtain
$$\mathbb{E}_{\Phi\theta^*}\left[-2\log\int\sqrt{P_{\Phi\theta^*}(y)P_{\Phi\hat\theta}(y)}\,d\nu(y)\right] \le \mathrm{KL}(P_{\Phi\theta^*}\|P_{\Phi\tilde\theta}) + 2\,\mathrm{pen}(\tilde\theta) = \min_{\theta\in\Theta_L}\Big[\mathrm{KL}(P_{\Phi\theta^*}\|P_{\Phi\theta}) + 2\,\mathrm{pen}(\theta)\Big],$$
which proves the lemma.
Lemma C.3. If the estimators in $\Theta_L$ satisfy the condition (9), then the following inequality holds:
$$\mathrm{KL}(P_{\Phi\theta^*}\|P_{\Phi\theta}) \le \frac{1}{c}\,\|\theta^* - \theta\|_1^2, \qquad \forall \theta \in \Theta_L.$$
Proof: By definition of the KL divergence,
$$\mathrm{KL}(P_{\Phi\theta^*}\|P_{\Phi\theta}) = \mathbb{E}_{\Phi\theta^*}\log\frac{P_{\Phi\theta^*}(y)}{P_{\Phi\theta}(y)} = \sum_{j=1}^m \mathbb{E}_{(\Phi\theta^*)_j}\left[y_j \log\frac{(\Phi\theta^*)_j}{(\Phi\theta)_j}\right] - \sum_{j=1}^m \big[(\Phi\theta^*)_j - (\Phi\theta)_j\big] = \sum_{j=1}^m \left[(\Phi\theta^*)_j \log\frac{(\Phi\theta^*)_j}{(\Phi\theta)_j} - (\Phi\theta^*)_j + (\Phi\theta)_j\right] \le \sum_{j=1}^m \left[(\Phi\theta^*)_j\left(\frac{(\Phi\theta^*)_j}{(\Phi\theta)_j} - 1\right) - (\Phi\theta^*)_j + (\Phi\theta)_j\right] = \sum_{j=1}^m \frac{1}{(\Phi\theta)_j}\big((\Phi\theta^* - \Phi\theta)_j\big)^2 \le \frac{1}{c}\,\|\Phi(\theta^*-\theta)\|_2^2 \le \frac{1}{c}\,\|\Phi(\theta^*-\theta)\|_1^2 \le \frac{1}{c}\,\|\theta^*-\theta\|_1^2.$$
The first inequality uses $\log t \le t - 1$, the second is by (9), the third uses the fact that the $\ell_1$ norm dominates the $\ell_2$ norm, and the last one is by the RIP-1 property (Lemma II.4).
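Both elementary steps in this chain — the $\log t \le t - 1$ bound on the Poisson KL divergence and the $\ell_2$-by-$\ell_1$ domination — can be verified numerically. This is an illustrative check; the dimension and the lower bound c on the intensities are our own test values.

```python
import math
import random

rng = random.Random(2)
m, c = 20, 0.5
# Two intensity vectors with every coordinate at least c, as in (9).
g = [c + rng.random() for _ in range(m)]
h = [c + rng.random() for _ in range(m)]

# Poisson KL divergence between product measures with means g and h.
kl = sum(gj * math.log(gj / hj) - gj + hj for gj, hj in zip(g, h))
# Chi-square-type upper bound from log t <= t - 1.
chi2 = sum((gj - hj) ** 2 / hj for gj, hj in zip(g, h))
# Squared l1 distance.
l1sq = sum(abs(gj - hj) for gj, hj in zip(g, h)) ** 2

assert 0.0 <= kl <= chi2 + 1e-12      # termwise log t <= t - 1
assert chi2 <= l1sq / c + 1e-12       # h_j >= c and ||.||_2 <= ||.||_1
```

The same chain with $g = \Phi\theta^*$ and $h = \Phi\theta$ gives exactly the bound of Lemma C.3.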
Lemma C.4. Given two Poisson parameter vectors $g, h \in \mathbb{R}^m_+$, the following equality holds:
$$\int \sqrt{P_g(y)\,P_h(y)}\,d\nu(y) = \exp\left(-\frac{1}{2}\sum_{j=1}^m \big(\sqrt{g_j} - \sqrt{h_j}\big)^2\right).$$
Proof: The integral factorizes over the coordinates, and for each $j$
$$\int \sqrt{P_{g_j}(y_j)\,P_{h_j}(y_j)}\,d\nu_j(y_j) = \sum_{y_j=0}^\infty \frac{(g_j h_j)^{y_j/2}}{y_j!}\,e^{-(g_j+h_j)/2} = e^{-\frac{1}{2}(g_j - 2(g_j h_j)^{1/2} + h_j)} \int P_{(g_j h_j)^{1/2}}(y_j)\,d\nu_j(y_j) = e^{-\frac{1}{2}(\sqrt{g_j}-\sqrt{h_j})^2}.$$
Taking logs, we obtain the lemma.
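Lemma C.4 reduces, coordinate by coordinate, to the scalar identity $\sum_y \sqrt{P_g(y)P_h(y)} = e^{-(\sqrt{g}-\sqrt{h})^2/2}$, which a truncated sum confirms. This is an illustrative check; the parameter values and the truncation point are our own choices.

```python
import math

def affinity(g, h, ymax=200):
    """Truncated sum of sqrt(P_g(y) * P_h(y)) for scalar Poisson masses."""
    total, logfact = 0.0, 0.0
    for y in range(ymax + 1):
        if y > 0:
            logfact += math.log(y)  # running log(y!)
        # sqrt of the product of the two Poisson pmfs at y, in log space.
        total += math.exp(0.5 * (y * math.log(g * h) - g - h) - logfact)
    return total

g, h = 3.0, 1.5
lhs = affinity(g, h)
rhs = math.exp(-0.5 * (math.sqrt(g) - math.sqrt(h)) ** 2)
assert abs(lhs - rhs) < 1e-9
```

For $g = h$ the affinity is 1, consistent with the right-hand side being $e^0$; the truncation at $y_{\max} = 200$ leaves a negligible tail for moderate intensities.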
Fig. 1. A $(k, \epsilon)$-expander. In this example, the green nodes correspond to $A$, the blue nodes correspond to $B$, the yellow oval corresponds to the set $S \subset A$, and the orange oval corresponds to the set $N(S) \subset B$. There are three colliding edges.
Proposition II.3 (Explicit construction of high-quality expanders). For any positive constant $\beta$, and any $n$, $k$, $\epsilon$, there exists a deterministic explicit construction of a $(k, \epsilon)$-expander graph with $d = O\big((\log n)^{(1+\beta)/\beta}\big)$ and $m = O(d^2 k^{1+\beta})$.
Fig. 2. Average performance (as measured by the normalized $\ell_1$ error $\|\theta^* - \hat\theta\|_1/\|\theta^*\|_1$) of the proposed expander-based observation method for recovering sparse signals under Poisson noise. In this experiment, we sweep over a range of measurements and
Fig. 3. Relative $\ell_1$ error as a function of the number of whales $k$, for $\ell_1$-magic (LP), SSMP and pMLE for different choices of the power-law exponent $\alpha$. The number of flows is $n = 5000$, the number of counters is $m = 800$, and the number of updates is 40.
Fig. 4. Probability of successful support recovery as a function of the number of whales $k$, for $\ell_1$-magic (LP), SSMP and pMLE for different choices of the power-law exponent $\alpha$. The number of flows is $n = 5000$, the number of counters is $m = 800$, and the number of updates is 40.
Fig. 5. Performance of $\ell_1$-magic, SSMP and pMLE algorithms as a function of the number of updates $\nu$: relative $\ell_1$ error and probability of successful support recovery. The number of flows is $n = 5000$, the number of counters is $m = 800$, and the number of whales is $k = 30$.
That is, each node in A has the same number of neighbors in B.
Briefly, we can first generate a random left-regular graph with left degree d (by choosing each edge independently). That graph is, with overwhelming probability, an expander graph. Then, given an expander graph which is only left-regular, a paper by Guruswami et al. [22] shows how to construct an expander graph with almost the same parameters, which is both left-regular and right-regular.
By "almost sparsity" we mean that the vector has at most k significant entries.
Our choice of this observation model as opposed to a "shot-noise" model based on Φ operating on Poisson observations of θ * is discussed in Appendix A.
This observation underlies the use of variance-stabilizing transforms.
Khajehnejad et al.[34] have recently proposed the use of perturbed adjacency matrices of expanders to recover nonnegative sparse signals.
ACKNOWLEDGMENTThe authors would like to thank Piotr Indyk for his insightful comments on the performance of the expander graphs, and the anonymous referees whose constructive criticism and numerous suggestions helped improve the quality of the paper.
REFERENCES
[1] D. Donoho, "Compressed sensing," IEEE Trans. Inform. Theory, vol. 52, no. 4, pp. 1289-1306, April 2006.
[2] E. Candès, J. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Commun. Pure Appl. Math., vol. 59, no. 8, pp. 1207-1223, 2006.
[3] D. Snyder, A. Hammond, and R. White, "Image recovery from data acquired with a charge-coupled-device camera," J. Opt. Soc. Amer. A, vol. 10, pp. 1014-1023, 1993.
[4] D. Bertsekas and R. Gallager, Data Networks. Prentice-Hall, 1992.
[5] I. Rish and G. Grabarnik, "Sparse signal recovery with exponential-family noise," in Allerton Conference on Communication, Control, and Computing, 2009.
[6] L. Jacques, D. K. Hammond, and M. J. Fadili, "Dequantizing compressed sensing with non-Gaussian constraints," in Proc. ICIP, 2009.
[7] R. E. Carrillo, K. E. Barner, and T. C. Aysal, "Robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise," IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 392-408, 2010.
[8] J. N. Laska, M. A. Davenport, and R. G. Baraniuk, "Exact signal recovery from sparsely corrupted measurements through the pursuit of justice," in 43rd Asilomar Conference on Signals, Systems and Computers, 2009.
[9] R. Willett and M. Raginsky, "Performance bounds on compressed sensing with Poisson noise," in Proc. IEEE Int. Symp. on Inform. Theory, Seoul, Korea, Jun/Jul 2009, pp. 174-178.
[10] M. Raginsky, Z. Harmany, R. Marcia, and R. Willett, "Compressed sensing performance bounds under Poisson noise," IEEE Trans. Signal Process., vol. 58, pp. 3990-4002, August 2010.
[11] E. Candès and T. Tao, "Near optimal signal recovery from random projections: Universal encoding strategies," IEEE Trans. Inform. Theory, vol. 52, no. 12, pp. 5406-5425, December 2006.
[12] E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inform. Theory, vol. 52, no. 2, pp. 489-509, 2006.
[13] S. Hoory, N. Linial, and A. Wigderson, "Expander graphs and their applications," Bull. Amer. Math. Soc. (New Series), vol. 43, 2006.
[14] R. Berinde and P. Indyk, "Sparse recovery using sparse random matrices," Technical Report, MIT, 2008.
[15] S. Jafarpour, W. Xu, B. Hassibi, and R. Calderbank, "Efficient and robust compressed sensing using optimized expander graphs," IEEE Trans. Inform. Theory, vol. 55, no. 9, pp. 4299-4308, September 2009.
[16] R. Berinde, P. Indyk, and M. Ruzic, "Practical near-optimal sparse recovery in the l1 norm," in 46th Annual Allerton Conf. on Comm., Control, and Computing, 2008.
[17] J. Tropp and A. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inform. Theory, vol. 53, no. 12, pp. 4655-4666, December 2007.
[18] R. Berinde, A. Gilbert, P. Indyk, H. Karloff, and M. Strauss, "Combining geometry and combinatorics: a unified approach to sparse signal recovery," in 46th Annual Allerton Conference on Communication, Control, and Computing, September 2008, pp. 798-805.
[19] P. Indyk and M. Ruzic, "Near-optimal sparse recovery in the l1 norm," in Proc. 49th Ann. IEEE Symp. on Foundations of Computer Science (FOCS), 2008, pp. 199-207.
[20] N. Alon and J. Spencer, The Probabilistic Method. Wiley-Interscience, 2000.
[21] M. Sipser and D. Spielman, "Expander codes," IEEE Trans. Inform. Theory, vol. 42, no. 6, pp. 1710-1722, 1996.
[22] V. Guruswami, J. Lee, and A. Razborov, "Almost Euclidean subspaces of l1 via expander codes," in Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), January 2008, pp. 353-362.
[23] V. Guruswami, C. Umans, and S. Vadhan, "Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes," in IEEE Conference on Computational Complexity (CCC), 2007.
[24] F. Parvaresh and A. Vardy, "Correcting errors beyond the Guruswami-Sudan radius in polynomial time," in Proc. 46th Ann. IEEE Symp. on Foundations of Computer Science (FOCS), 2005, pp. 285-294.
[25] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. Wiley, 2006.
[26] P. McCullagh and J. Nelder, Generalized Linear Models, 2nd ed. London: Chapman and Hall, 1989.
[27] L. Wasserman, All of Statistics: A Concise Course in Statistical Inference. Springer, 2003.
[28] Z. Harmany, R. Marcia, and R. Willett, "Sparse Poisson intensity reconstruction algorithms," in Proc. IEEE Stat. Sig. Proc. Workshop, 2009.
[29] C. Estan and G. Varghese, "New directions in traffic measurement and accounting: focusing on the elephants, ignoring the mice," ACM Trans. Computer Sys., vol. 21, no. 3, pp. 270-313, 2003.
[30] Y. Lu, A. Montanari, B. Prabhakar, S. Dharmapurikar, and A. Kabbani, "Counter Braids: a novel counter architecture for per-flow measurement," in Proc. ACM SIGMETRICS, 2008.
[31] W. Fang and L. Peterson, "Inter-AS traffic patterns and their implications," in Proc. IEEE GLOBECOM, 1999.
[32] A. Feldmann, A. Greenberg, C. Lund, N. Reingold, J. Rexford, and F. True, "Deriving traffic demands for operational IP networks: methodology and experience," in Proc. ACM SIGCOMM, 2000.
[33] R. Berinde, A. Gilbert, P. Indyk, H. Karloff, and M. Strauss, "Combining geometry and combinatorics: a unified approach to sparse signal recovery," in Proc. Allerton Conf., September 2008, pp. 798-805.
[34] M. A. Khajehnejad, A. G. Dimakis, W. Xu, and B. Hassibi, "Sparse recovery of positive signals with minimal expansion," submitted, 2009.
[35] E. Candès and J. Romberg, "l1-MAGIC: Recovery of sparse signals via convex programming," available at http://www.acm.caltech.edu/l1magic, 2005.
[36] R. Berinde and P. Indyk, "Sequential sparse matching pursuit," in Allerton, 2009.
[37] M. Grant and S. Boyd, "CVX: Matlab software for disciplined convex programming, version 1.21," http://cvxr.com/cvx, Jan. 2011.
[38] M. Grant and S. Boyd, "Graph implementations for nonsmooth convex programs," in Recent Advances in Learning and Control, ser. Lecture Notes in Control and Information Sciences, V. Blondel, S. Boyd, and H. Kimura, Eds. Springer-Verlag Limited, 2008, pp. 95-110, http://stanford.edu/~boyd/graph_dcp.html.
Nonequilibrium Keldysh Formalism for Interacting Leads - Application to Quantum Dot Transport Driven by Spin Bias

Yuan Li (Information Storage Materials Laboratory, Department of Electronic and Computer Engineering, National University of Singapore, 1 Engineering Drive 3, 117576, Singapore; Department of Physics, Hangzhou Dianzi University, Hangzhou 310018, P. R. China), M. B. A. Jalil (Information Storage Materials Laboratory, Department of Electronic and Computer Engineering, National University of Singapore, 1 Engineering Drive 3, 117576, Singapore; Data Storage Institute, DSI Building, 5 Engineering Drive 1, National University of Singapore, 117608, Singapore), and Seng Ghee Tan (Data Storage Institute, DSI Building, 5 Engineering Drive 1, National University of Singapore, 117608, Singapore)

arXiv:1103.4920v2 [cond-mat.str-el], 19 Mar 2012. DOI: 10.1016/j.aop.2012.01.003

Keywords: Kondo effect; spin transport; differential conductance

Abstract: The conductance through a mesoscopic system of interacting electrons coupled to two adjacent leads is conventionally derived via the Keldysh nonequilibrium Green's function technique, in the limit of noninteracting leads [see Y. Meir et al., Phys. Rev. Lett. 68, 2512]. We extend the standard formalism to cater for a quantum dot system with Coulombic interactions between the quantum dot and the leads. The general current expression is obtained by considering the equation of motion of the time-ordered Green's function of the system. The nonequilibrium effects of the interacting leads are then incorporated by determining the contour-ordered Green's function over the Keldysh loop and applying Langreth's theorem. The dot-lead interactions significantly increase the height of the Kondo peaks in the density of states of the quantum dot. This translates into two Kondo peaks in the spin differential conductance when the magnitude of the spin bias equals that of the Zeeman splitting. There also exists a plateau in the charge differential conductance due to the combined effect of the spin bias and the Zeeman splitting. The low-bias conductance plateau with sharp edges is also a characteristic of the Kondo effect. The conductance plateau disappears for the case of asymmetric dot-lead interaction.
Introduction
With the advancement of nanotechnology, a small number of interacting electrons can be confined in a quantum dot to mimic the behavior of impurity atoms in a metal, thus enabling the study of many-body correlations between electrons. One such many-body phenomenon is the Kondo effect, whose discovery in quantum dot systems [1,2] has generated tremendous theoretical [3,4,5,6] and experimental [7,8] interest. Early works on the Kondo effect largely focused on the effect of a charge bias applied across the two leads. Later, attention shifted to the Kondo effect in systems with ferromagnetic leads [9,10,11,12] and in leads with spin accumulation [13]. Spin accumulation is the nonequilibrium splitting of the chemical potentials of spin-up and spin-down electrons. Accordingly, a spin bias can be realized experimentally, for instance, by controlling the spin accumulation at the biased contacts between ferromagnetic and nonmagnetic leads, and can in turn generate a pure spin current without any accompanying charge current [14,15]. Recent experimental progress has opened new possibilities for the study of spin-bias-induced transport in strongly correlated systems [16,17,18]. However, the effect of the spin bias on the Kondo effect remains largely unexplored. More importantly, the Kondo effect in the presence of Coulombic interactions between the quantum dot and the two adjacent leads has not been studied.
Such dot-lead interactions warrant an investigation owing to the increasing influence of the leads as the lead-dot separation is reduced. The nonequilibrium (bias-driven) transport through an interacting mesoscopic system is usually modeled by the Keldysh nonequilibrium Green's function (NEGF) method. The standard Keldysh NEGF method for a central quantum dot system coupled to two biased leads was first developed by Meir et al. [19], and widely applied thereafter [20,21,22,23,24].
In this paper, we derive the general analytical expression for the current in a quantum dot system with interacting leads. The dot-lead Coulombic interactions are incorporated by determining the contour-ordered Green's function over the Keldysh loop and applying Langreth's theorem. Based on the current formula, and the retarded Green's function of the quantum dot system, we study the low-temperature spin transport and the Kondo physics induced by a spin bias, in the presence of dot-lead Coulombic interactions. Two sharp peaks occur in the density of states in the Kondo regime. This translates into Kondo peaks in the spin differential conductance when the magnitude of the spin bias becomes equal to that of the Zeeman split of the quantum dot energy levels. There also exists a conductance plateau in the charge differential conductance at low bias, due to the combined effect of the spin bias and the Zeeman splitting. Since the Kondo effect primarily arises from intra-dot Coulomb interactions between electrons of opposite spins, the Kondo peaks, and hence the positions of the conductance peaks and plateau, are largely unaffected by the strength of the lead-dot Coulombic interactions. The conductance plateau disappears when the symmetry between the dot-lead interactions in the left and right junctions is broken.
The organization of the rest of the paper is as follows. In Sec. 2, the Hamiltonian of the system is introduced, and the current formula and corresponding Green's functions are derived. In Sec. 3, we present the results of our numerical calculations of the density of states and the differential conductance in the presence of the dot-lead Coulombic interactions. Finally, a brief summary is given in Sec. 4.
Model and formulation
The Hamiltonian of the quantum dot system can be written as H = H c + H cen + H T + H ld . The first term is the Hamiltonian of the contacts
H_c = \sum_{kσ, α∈L,R} ε_{αkσ} c†_{αkσ} c_{αkσ} ,    (1)
while the Hamiltonian of the central region is described by:
H_cen = \sum_σ ε_σ d†_σ d_σ + U d†_↑ d_↑ d†_↓ d_↓ ,    (2)
where c † αkσ (c αkσ ) and d † σ (d σ ) are the creation (annihilation) operators of an electron with spin σ(=↑, ↓) in the lead α(= L, R) and the quantum dot, respectively. The second term in H cen is the Anderson term where U refers to the on-site Coulombic repulsion, while ε αkσ and ε σ are the energy levels of the two leads and the quantum dot, respectively. For simplicity, we consider only a single pair of levels with energies ε σ = ε 0 + σ∆ε/2. The third term represents tunneling coupling between the leads and the central region:
H_T = \sum_{kσ, α∈L,R} ( t_{ασ} c†_{αkσ} d_σ + H.c. ) ,    (3)
where t ασ refers to the tunneling coupling constant.
In the conventional NEGF method, the interactions in the leads are disregarded so that the semi-infinite leads can be described by the simple noninteracting Green's function (see Meir et al. [19]). In this paper, however, we incorporate the Coulombic interaction between the charges on the quantum dot and the two leads, which may be significant given the proximity of the two regions. This interaction term in the Hamiltonian H ld is given by [25]
H_ld = \sum_{αkσσ′} I_{αk} d†_σ d_σ c†_{αkσ′} c_{αkσ′} ,    (4)
where I_{αk} denotes the interaction strength. We first derive the general formula of the current by considering the equation-of-motion method and the contour-ordered Green's function over the Keldysh loop, as will be shown explicitly below. The current from the lead α through the barrier to the central region can be calculated from the time evolution of the occupation number operator of the lead α:
J_{ασ} = −e ⟨Ṅ_{ασ}⟩ = 2e Re \sum_k t_{ασ} G^<_{αkσ}(t, t) ,    (5)
where N_{ασ} = \sum_k c†_{αkσ} c_{αkσ} and the lesser Green's function is defined as
G^<_{αkσ}(t, t′) ≡ i ⟨c†_{αkσ}(t′) d_σ(t)⟩ .    (6)
According to the equation of motion satisfied by the time-ordered Green's function G t αkσ , namely,
−i ∂/∂t′ G^t_{αkσ}(t, t′) = ε_{αkσ} G^t_{αkσ}(t, t′) + t*_{ασ} G^t_σ(t, t′) + \sum_{σ″} I_{αk} (−i) ⟨T{ d_σ(t) d†_{σ″}(t′) d_{σ″}(t′) c†_{αkσ}(t′) }⟩ ,    (7)
where we define the central-region time-ordered Green's function G^t_σ(t, t′) = −i ⟨T{ d_σ(t) d†_σ(t′) }⟩, with T being the time-ordering operator. Note that the Coulombic interaction between the quantum dot and the leads results in the last term in Eq. (7). This additional term prevents the closure of the equation of motion satisfied by G^t_{αkσ}, except by utilizing certain approximations. We adopt the Hartree-Fock approximation to deal with this term and obtain a closed set of equations. Using the Hartree-Fock approximation, the last term can be expressed as
−i ⟨T{ d_σ(t) d†_{σ″}(t′) d_{σ″}(t′) c†_{αkσ}(t′) }⟩ ≃ ⟨n_{σ″}⟩ (−i) ⟨T{ d_σ(t) c†_{αkσ}(t′) }⟩ , where n_{σ″} = d†_{σ″} d_{σ″}.
Thus, the equation of motion of Eq. (7) can be rewritten as
( −i ∂/∂t′ − ε_{αkσ} − A_{αk} ) G^t_{αkσ}(t, t′) = t*_{ασ} G^t_σ(t, t′) ,    (8)
where A_{αk} = ⟨n⟩ I_{αk} with ⟨n⟩ = ⟨n_↑⟩ + ⟨n_↓⟩. Thus, the Green's function G^<_{αkσ}(t, t′) can be derived by applying Langreth's theorem, namely
G^<_{αkσ}(t, t′) = ∫ dt_1 t*_{ασ} [ G^r_σ(t, t_1) g^<_{αkσ}(t_1, t′) + G^<_σ(t, t_1) g^a_{αkσ}(t_1, t′) ] ,    (9)
where the time-dependent Green's functions of the leads for the uncoupled system, with the modified energy level ε′_{αkσ}, are

g^<_{αkσ}(t, t′) = i f_{ασ}(ε′_{αkσ}) exp[ −i ε′_{αkσ}(t − t′) ] ,    (10)

g^{(r,a)}_{αkσ}(t, t′) = ∓ i θ(±t ∓ t′) exp[ −i ε′_{αkσ}(t − t′) ] ,    (11)

where f_{ασ}(ε′_{αkσ}) = { exp[ (ε′_{αkσ} − µ_{ασ}) / k_B T ] + 1 }^{−1}. Substituting the Green's function G^<_{αkσ}(t, t′) into Eq. (5), we obtain the formula of the current

J_{ασ} = i e ∫ (dǫ/2π) Γ_{ασ}(ǫ) { G^<_σ(ǫ + A_α(ǫ)) + f_{ασ}(ǫ + A_α(ǫ)) [ G^r_σ(ǫ + A_α(ǫ)) − G^a_σ(ǫ + A_α(ǫ)) ] } ,    (12)
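As a small numerical aside, the Fermi function f_{ασ} entering Eqs. (10) and (12) can be evaluated in an overflow-safe way. The helper below is our own sketch (names and parameter values are not from the paper); the branch on the sign of the argument avoids overflow of exp for energies far above the chemical potential.

```python
import math

def fermi(eps, mu, kT):
    """Fermi-Dirac occupation f(eps) = 1/(exp((eps - mu)/kT) + 1).

    Written to avoid overflow when (eps - mu)/kT is large and positive.
    """
    x = (eps - mu) / kT
    if x > 0:
        e = math.exp(-x)          # small number, never overflows
        return e / (1.0 + e)
    return 1.0 / (1.0 + math.exp(x))

# At eps = mu the occupation is exactly 1/2, independent of temperature:
print(fermi(0.0, 0.0, 0.005))  # → 0.5
```

The same stabilization is equivalently written as 0.5*(1 - tanh(x/2)), a form often preferred in numerical work.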
where
Γ_{ασ}(ǫ) = 2π \sum_k |t_{ασ}|² δ(ǫ − ε_{αkσ}) is the linewidth function and A_α(ǫ) = ⟨n⟩ I_α(ǫ). The Green's functions G^{r,<}_σ(ǫ) are the Fourier transforms of G^{r,<}_σ(t), with

G^r_σ(t) = −i θ(t) ⟨{ d_σ(t), d†_σ(0) }⟩ ≡ ⟨⟨d_σ(t) | d†_σ(0)⟩⟩^r , G^<_σ(t) = i ⟨d†_σ(0) d_σ(t)⟩ ≡ ⟨⟨d_σ(t) | d†_σ(0)⟩⟩^< .
For the case of proportionate coupling to the leads, i.e., Γ Lσ = λΓ Rσ , the formula of the current can be simplified. Defining the current J = xJ L − (1 − x)J R with J α = J α↑ + J α↓ , we can obtain the simplified formula of the charge current, namely
J = e \sum_σ ∫ dǫ [ f_{Lσ}(ǫ + A_α(ǫ)) − f_{Rσ}(ǫ + A_α(ǫ)) ] × [ Γ_{Lσ}(ǫ) Γ_{Rσ}(ǫ) / ( Γ_{Lσ}(ǫ) + Γ_{Rσ}(ǫ) ) ] [ −(1/π) Im G^r_σ(ǫ + A_α(ǫ)) ] .    (13)
Thus, finally, the current can be expressed solely in terms of the retarded Green's function.
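To illustrate how an expression like Eq. (13) is evaluated in practice, the sketch below integrates a Landauer-style formula on an energy grid for one spin channel with symmetric couplings. The retarded Green's function here is a toy Lorentzian standing in for the paper's self-consistent G^r_σ, and all parameter choices and prefactors (units e = ħ = 1) are ours, so this is an illustration of the numerical procedure, not the paper's implementation.

```python
import math

def fermi(eps, mu, kT):
    # Fermi-Dirac function, written via tanh for numerical stability.
    return 0.5 * (1.0 - math.tanh(0.5 * (eps - mu) / kT))

def current(mu_L, mu_R, eps0, gamma, kT=0.005, lo=-6.0, hi=6.0, n=4001):
    """Grid evaluation of a Landauer-type integral in the spirit of Eq. (13),
    for ONE spin channel and Gamma_L = Gamma_R = gamma.  The retarded
    Green's function is a TOY Lorentzian, G^r = 1/(e - eps0 + i*gamma/2),
    not the paper's self-consistent one.
    """
    de = (hi - lo) / (n - 1)
    J = 0.0
    for i in range(n):
        e = lo + i * de
        # -(1/pi) Im G^r for the toy Lorentzian:
        dos = (gamma / (2.0 * math.pi)) / ((e - eps0) ** 2 + (gamma / 2.0) ** 2)
        transmission = (gamma / 2.0) * dos  # Gamma_L*Gamma_R/(Gamma_L+Gamma_R) * dos
        J += (fermi(e, mu_L, kT) - fermi(e, mu_R, kT)) * transmission * de
    return J

# A symmetric bias window around a resonant level carries a finite current:
print(current(0.1, -0.1, 0.0, 1.0) > 0.0)  # → True
```

Reversing the bias reverses the current exactly, which is a useful sanity check on any such integrator.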
We now proceed to solve for the retarded Green's function G^r_σ(ǫ). The standard equation-of-motion technique yields the following general relation
ǫ ⟨⟨F̂_1 | F̂_2⟩⟩^r = ⟨{ F̂_1, F̂_2 }⟩ + ⟨⟨[ F̂_1, H ] | F̂_2⟩⟩^r ,
where F̂_1 and F̂_2 are arbitrary operators. Note that the energy ǫ includes an infinitesimal imaginary part i0^+. By considering the model Hamiltonian, the equation of motion of the Green's function G^r_σ(t) can thus be written as
(ǫ − ε_σ) ⟨⟨d_σ | d†_σ⟩⟩^r = 1 + \sum_{αkσ′} I_{αk} ⟨⟨d_σ c†_{αkσ′} c_{αkσ′} | d†_σ⟩⟩^r + \sum_{αk} t*_{ασ} ⟨⟨c_{αkσ} | d†_σ⟩⟩^r + U ⟨⟨d_σ d†_σ̄ d_σ̄ | d†_σ⟩⟩^r .    (14)
The above equation of motion generates three new Green's functions, i.e.,
⟨⟨d_σ c†_{αkσ′} c_{αkσ′} | d†_σ⟩⟩^r , ⟨⟨d_σ d†_σ̄ d_σ̄ | d†_σ⟩⟩^r and ⟨⟨c_{αkσ} | d†_σ⟩⟩^r .
We then consider the respective equations of motion of these Green's functions:
(ǫ − ε_{αkσ}) ⟨⟨c_{αkσ} | d†_σ⟩⟩^r = t_{ασ} ⟨⟨d_σ | d†_σ⟩⟩^r + \sum_{σ′} I_{αk} ⟨⟨d†_{σ′} d_{σ′} c_{αkσ} | d†_σ⟩⟩^r .    (15)

(ǫ − ε_{αkσ}) ⟨⟨d†_{σ′} d_{σ′} c_{αkσ} | d†_σ⟩⟩^r = \sum_{α′k′} t_{α′σ′} ⟨⟨c†_{α′k′σ′} c_{αkσ} d_{σ′} | d†_σ⟩⟩^r + t_{ασ} ⟨⟨d†_{σ′} d_{σ′} d_σ | d†_σ⟩⟩^r + \sum_{α′k′} t*_{α′σ′} ⟨⟨d†_{σ′} c_{α′k′σ′} c_{αkσ} | d†_σ⟩⟩^r + \sum_{σ″} I_{αk} ⟨⟨d†_{σ″} d_{σ″} d†_{σ′} d_{σ′} c_{αkσ} | d†_σ⟩⟩^r ,    (16)

(ǫ − ε_σ − U) ⟨⟨d_σ d†_σ̄ d_σ̄ | d†_σ⟩⟩^r = ⟨n_σ̄⟩ + \sum_{α′k′} t_{α′σ̄} ⟨⟨c†_{α′k′σ̄} d_σ d_σ̄ | d†_σ⟩⟩^r + \sum_{α′k′} t*_{α′σ̄} ⟨⟨d_σ d†_σ̄ c_{α′k′σ̄} | d†_σ⟩⟩^r + \sum_{α′k′} t*_{α′σ} ⟨⟨d†_σ̄ d_σ̄ c_{α′k′σ} | d†_σ⟩⟩^r + \sum_{α′k′σ′} I_{α′k′} ⟨⟨d_σ d†_σ̄ d_σ̄ c†_{α′k′σ′} c_{α′k′σ′} | d†_σ⟩⟩^r ,    (17)

(ǫ + ε_{α′k′σ̄} − ε_σ̄ − ε_σ − U) ⟨⟨c†_{α′k′σ̄} d_σ d_σ̄ | d†_σ⟩⟩^r = \sum_{α″k″} t*_{α″σ̄} ⟨⟨c†_{α′k′σ̄} d_σ c_{α″k″σ̄} | d†_σ⟩⟩^r − \sum_{α″k″} t*_{α″σ} ⟨⟨c†_{α′k′σ̄} d_σ̄ c_{α″k″σ} | d†_σ⟩⟩^r + t*_{α′σ̄} ⟨⟨d_σ d†_σ̄ d_σ̄ | d†_σ⟩⟩^r + \sum_{α″k″σ″} 2 I_{α″k″} ⟨⟨c†_{α′k′σ̄} d_σ d_σ̄ c†_{α″k″σ″} c_{α″k″σ″} | d†_σ⟩⟩^r ,    (18)

(ǫ − ε_{α′k′σ̄} + ε_σ̄ − ε_σ) ⟨⟨d_σ d†_σ̄ c_{α′k′σ̄} | d†_σ⟩⟩^r = − ⟨c_{α′k′σ̄} d†_σ̄⟩ − U ⟨⟨(d_σ̄ d†_σ̄ + d†_σ d_σ) d_σ d†_σ̄ c_{α′k′σ̄} | d†_σ⟩⟩^r + \sum_{α″k″} t_{α″σ̄} ⟨⟨c_{α′k′σ̄} c†_{α″k″σ̄} d_σ | d†_σ⟩⟩^r + \sum_{α″k″} t*_{α″σ} ⟨⟨c_{α′k′σ̄} c_{α″k″σ} d†_σ̄ | d†_σ⟩⟩^r + t_{α′σ̄} ⟨⟨d_σ̄ d_σ d†_σ̄ | d†_σ⟩⟩^r + \sum_{σ″} I_{α′k′} ⟨⟨c_{α′k′σ̄} d†_{σ″} d_{σ″} d_σ d†_σ̄ | d†_σ⟩⟩^r .    (19)
To obtain a closed set of equations from the above relations, we adopt a decoupling approximation based on the following rules [5,26,27]:
⟨Ŷ X̂⟩ = 0 and ⟨⟨Ŷ X̂_1 X̂_2 | d†_σ⟩⟩^r ≈ ⟨X̂_1 X̂_2⟩ ⟨⟨Ŷ | d†_σ⟩⟩^r
, where X̂ and Ŷ represent operators of the leads and of the quantum dot, respectively. We shall also assume that higher-order spin-correlation terms can be neglected. With these approximations, Eqs. (16)-(19) simplify to
(ǫ − ε_{αkσ}) ⟨⟨d†_{σ′} d_{σ′} c_{αkσ} | d†_σ⟩⟩^r = t_{ασ} f_{ασ}(ε_{αkσ}) [ ⟨⟨d_σ | d†_σ⟩⟩^r δ_{σσ′} + ⟨⟨d_σ d†_σ̄ d_σ̄ | d†_σ⟩⟩^r δ_{σ′σ̄} ] ,    (20)

(ǫ + ε_{α′k′σ̄} − ε_σ̄ − ε_σ − U) ⟨⟨c†_{α′k′σ̄} d_σ d_σ̄ | d†_σ⟩⟩^r = t*_{α′σ̄} [ ⟨⟨d_σ d†_σ̄ d_σ̄ | d†_σ⟩⟩^r − f_{α′σ̄}(ε_{α′k′σ̄}) ⟨⟨d_σ | d†_σ⟩⟩^r ] ,    (21)

(ǫ − ε_{α′k′σ̄} + ε_σ̄ − ε_σ) ⟨⟨d_σ d†_σ̄ c_{α′k′σ̄} | d†_σ⟩⟩^r = t_{α′σ̄} [ ⟨⟨d_σ d†_σ̄ d_σ̄ | d†_σ⟩⟩^r − f_{α′σ̄}(ε_{α′k′σ̄}) ⟨⟨d_σ | d†_σ⟩⟩^r ] ,    (22)
after ignoring all of the higher-order terms. Substituting Eq. (20), Eq. (21) and Eq. (22) into Eq. (17), we then obtain the following:
(ǫ − ε_σ − U − Σ_{0σ} − Σ_{1σ}) ⟨⟨d_σ d†_σ̄ d_σ̄ | d†_σ⟩⟩^r = ⟨n_σ̄⟩ − Σ_{2σ} ⟨⟨d_σ | d†_σ⟩⟩^r .    (23)
In the above, the self-energies Σ_{0σ}, Σ_{1σ} and Σ_{2σ} are defined as

Σ_{0σ} = \sum_{α′k′} |t_{α′σ}|² / (ǫ − ε_{α′k′σ}) ,

Σ_{iσ} = \sum_{α′k′} A^{(i)}_{α′k′σ̄} |t_{α′σ̄}|² [ 1/(ǫ − ε_{α′k′σ̄} − ε_σ + ε_σ̄) + 1/(ǫ + ε_{α′k′σ̄} − ε_σ − ε_σ̄ − U) ] ,

where i = 1, 2, A^{(1)}_{α′k′σ̄} = 1 and A^{(2)}_{α′k′σ̄} = f_{α′σ̄}(ε_{α′k′σ̄}). Subsequently, by substituting Eq. (15), Eq. (20) and Eq. (23) into Eq. (14), we obtain a relation involving the Green's function G^r_σ(ǫ) = ⟨⟨d_σ | d†_σ⟩⟩^r:

(ǫ − ε_σ − Σ_{3σ} − Σ_{4σ}) ⟨⟨d_σ | d†_σ⟩⟩^r = 1 + (U + Σ_{5σ}) ⟨⟨d_σ d†_σ̄ d_σ̄ | d†_σ⟩⟩^r ,    (24)
where
Σ_{3σ} = \sum_{αkσ′} I_{αk} f_{ασ′}(ε_{αkσ′}) , Σ_{4σ} = \sum_{αk} [ |t_{ασ}|² / (ǫ − ε_{αkσ}) ] [ 1 + I_{αk} f_{ασ}(ε_{αkσ}) / (ǫ − ε_{αkσ}) ] , Σ_{5σ} = \sum_{αk} |t_{ασ}|² I_{αk} / (ǫ − ε_{αkσ})² .
Finally we can obtain the analytic form of the required Green's function, namely,
G^r_σ(ǫ) ≡ ⟨⟨d_σ | d†_σ⟩⟩^r = [ ǫ − ε_σ − U − Σ_{0σ} − Σ_{1σ} + (U + Σ_{5σ}) ⟨n_σ̄⟩ ] / [ (ǫ − ε_σ − Σ_{3σ} − Σ_{4σ})(ǫ − ε_σ − U − Σ_{0σ} − Σ_{1σ}) + (U + Σ_{5σ}) Σ_{2σ} ] .    (25)
In the absence of any interaction between the quantum dot and the two leads, two of the self-energy terms reduce to zero, i.e., Σ_{3σ} = Σ_{5σ} = 0, while
Σ_{4σ} = Σ_{0σ} = \sum_{αk} |t_{ασ}|² / (ǫ − ε_{αkσ}) .
Eq. (25) is the Green's function of a quantum dot system with Coulombic interactions within the dot, and between the dot and the two leads. For simplicity, we assume the case of strong intra-dot Coulomb interaction (i.e., the infinite-U limit), for which the Green's function G^r_σ(ǫ) reduces to

G^r_σ(ǫ) = ⟨⟨d_σ | d†_σ⟩⟩^r = (1 − ⟨n_σ̄⟩) / (ǫ − ε_σ − Σ′_{2σ} − Σ_{3σ} − Σ_{4σ}) ,    (26)

where the new self-energy Σ′_{2σ} is defined as

Σ′_{2σ} = \sum_{α′k′} f_{α′σ̄}(ε_{α′k′σ̄}) |t_{α′σ̄}|² / (ǫ − ε_{α′k′σ̄} − ε_σ + ε_σ̄) .    (27)
Obviously, the self-energy terms Σ_{3σ} and Σ_{4σ} represent the effects of the interaction between the quantum dot and the two leads. Note that the Green's function depends on the occupation number n_σ, which is given by the formula n_σ = −i ∫ (dǫ/2π) G^<_σ(ǫ). The lesser Green's function G^<_σ(ǫ) in this formula may be obtained by adopting certain approximations, for instance, the noncrossing approximation [3] or the ansatz method [28]. Alternatively, one can directly calculate the integral ∫ dǫ G^<_σ(ǫ) via the following relation:
n_σ = − ∫ (dǫ/π) Im G^r_σ(ǫ) [ Γ_{Lσ} f_{Lσ}(ǫ) + Γ_{Rσ} f_{Rσ}(ǫ) ] / ( Γ_{Lσ} + Γ_{Rσ} ) .    (28)
Thus, from Eqs. (26) and (28), one can evaluate G r σ (ǫ) and n σ self-consistently. Finally, the converged value of n σ is then used to calculate the charge current via Eq. (13).
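The self-consistent loop between Eqs. (26) and (28) can be sketched as follows. We stress the simplifications relative to the paper: the self-energies are replaced by a constant broadening γ, the couplings are symmetric, and shallow dot levels are chosen only so that a plain fixed-point iteration converges quickly; none of these parameter values come from the paper.

```python
import math

def fermi(e, mu, kT=0.005):
    # Fermi function via tanh, for numerical stability.
    return 0.5 * (1.0 - math.tanh(0.5 * (e - mu) / kT))

def occupation(eps, n_bar, mu_L, mu_R, gamma=1.0, lo=-20.0, hi=20.0, m=4001):
    """Evaluate Eq. (28) on a grid for a TOY retarded Green's function
    G^r = (1 - n_bar)/(e - eps + i*gamma/2), i.e. Eq. (26) with the
    self-energies replaced by a constant broadening (our simplification),
    and symmetric couplings Gamma_L = Gamma_R."""
    de = (hi - lo) / (m - 1)
    n = 0.0
    for i in range(m):
        e = lo + i * de
        # -(1/pi) Im G^r for the toy G^r above:
        dos = (1.0 - n_bar) * (gamma / (2.0 * math.pi)) / ((e - eps) ** 2 + (gamma / 2.0) ** 2)
        n += dos * 0.5 * (fermi(e, mu_L) + fermi(e, mu_R)) * de
    return n

def solve_self_consistent(eps_up=-0.05, eps_dn=0.05, Vs=0.3, tol=1e-9, itmax=200):
    """Iterate Eqs. (26) and (28) to a fixed point for (n_up, n_dn).
    Spin bias as in the paper: mu_L(up) = -mu_L(dn) = Vs/2 and
    mu_R(dn) = -mu_R(up) = Vs/2."""
    n_up = n_dn = 0.5
    for _ in range(itmax):
        new_up = occupation(eps_up, n_dn, Vs / 2.0, -Vs / 2.0)
        new_dn = occupation(eps_dn, n_up, -Vs / 2.0, Vs / 2.0)
        converged = abs(new_up - n_up) + abs(new_dn - n_dn) < tol
        n_up, n_dn = new_up, new_dn
        if converged:
            break
    return n_up, n_dn

n_up, n_dn = solve_self_consistent()
print(0.0 < n_dn < n_up < 1.0)  # the lower level ends up more occupied → True
```

In a production calculation one would replace the toy G^r by Eq. (26) with the full energy-dependent self-energies, and typically add mixing or Anderson acceleration to stabilize the iteration.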
Numerical results
Having derived the nonequilibrium transport for the general case of a central region coupled to interacting leads, we investigate the spin transport through a quantum dot system with a spin bias V_s applied at the two leads, namely µ_L↑ = −µ_L↓ = V_s/2 and µ_R↓ = −µ_R↑ = V_s/2. Our focus is to analyze the effect of the lead-dot Coulomb interaction on the spin-transport properties. First, we study the density of states of the quantum dot according to the relation ρ_σ(ǫ) = −(1/π) Im G^r_σ(ǫ). In the following numerical calculations, we assume that the quantum dot symmetrically couples to two leads with a Lorentzian linewidth of 2D, namely Γ_{Lσ}(ǫ) = Γ_{Rσ}(ǫ) = γ_0 D² / [2(ǫ² + D²)], with γ_0 = 1 as the unit of energy and D = 500. As for the dot-lead interaction, we adopt a flat-band profile, i.e., I_α(ǫ) = I_α θ(D − |ǫ|).

As shown in Figure 1, the spin-up and spin-down density of states are plotted in the presence of dot-lead Coulombic interactions described by I_L and I_R. In the absence of the dot-lead interaction, i.e., I_L = I_R = 0 [solid line], a broad main peak is observed at ǫ ∼ −2.5, which is associated with the renormalized level ε↑ of the quantum dot. In addition, there are two sharp Kondo peaks at energies ǫ ≈ 0 and ǫ ≈ 0.35 [see Figure 1(a)]. The Kondo peaks for spin σ arise from the contribution of the self-energy Σ′_{2σ}, due to virtual intermediate states in which the site is occupied by an electron of opposite spin σ̄ [3]. The real part of Σ′_{2σ} grows logarithmically near the energies ǫ_σ = µ_{ασ} + ∆ε_σ, with ∆ε_σ = ε_σ − ε_σ̄, due to the sharp Fermi surface at low temperature. This logarithmic increase translates into peaks in the density of states near those energies. By the same physical reasoning, we can deduce that the Kondo peaks of the spin-down density of states ρ↓ should occur at energies ǫ = −0.05 and ǫ = −0.35, as can be confirmed from Figure 1(b).
In the presence of the dot-lead interaction, for example I_L = I_R = 0.001 (dashed lines), the position of the broad main peak is shifted. However, the positions of the Kondo peaks are not affected, because the dot-lead interaction induces the self-energies Σ_{3σ} and Σ_{4σ}, which do not contribute to the Kondo effect. When the interaction strength is increased, e.g., I_R = 0.005 (dotted line), the position of the broad main peak is shifted further to higher energy compared to the previous case, but the positions of the two Kondo peaks remain invariant.

Next, we investigate the effect of the dot-lead interaction on the charge and spin differential conductances, which are defined as G_c = dJ/dV_s and G_s = dJ_s/dV_s with J_s = (J↑ − J↓)/2e, respectively. The differential conductances G_c and G_s are plotted as functions of the spin bias V_s for different interaction strengths in Figure 2. To explain the observed trends in the spin and charge differential conductances, we sketch the electrochemical potentials in the two leads and superimpose on them the Kondo peaks in the density of states [see Figs. 2(a) and (b)]. We observe a plateau in G_c over the bias interval |V_s| ≤ 0.2 for the cases of I_L = I_R = 0 and I_L = I_R = 0.001. However, the conductance plateau is destroyed in the case of asymmetric lead-dot interaction, i.e., I_L ≠ I_R, and the charge differential conductance assumes a negative value over the same bias interval [see Figure 2(c)]. The spin differential conductance G_s shows two Kondo peaks at |V_s| = 0.2, irrespective of the symmetry or strength of the lead-dot interactions [shown in Figure 2(d)]. G_s is also enhanced with increasing strength of the dot-lead interactions. We note that the spin-up and spin-down electrons flow along opposite directions, i.e., J = |J↑| − |J↓| and J_s ∝ |J↑| + |J↓|, and that the two currents J↑ and J↓ can differ due to the energy splitting ∆ε↑.
To explain the above conductance dependence on V_s, we refer to the schematic diagram of the Kondo density-of-states peaks [see Figure 2(a)-(b)]. The Kondo peaks begin to enter the spin-bias conduction window when the spin bias is increased beyond |V_s| = ∆ε↑. This results in the spin differential conductance G_s having two Kondo peaks at V_s = ±∆ε↑. The entry of the two Kondo peaks into the conduction window also reduces the difference in the magnitudes of J↑ and J↓, thus resulting in a sharp drop (plateau step) in the charge conductance at V_s = ±∆ε↑. The conductance plateau can thus be attributed to the combined effect of the spin bias in the leads and the Zeeman splitting in the quantum dot (QD). The dot-lead interaction tends to increase the coupling between the leads and the QD, thus resulting in a general increase in the charge and spin differential conductances. In the presence of asymmetrical dot-lead interaction, i.e., I_L = 0.001 and I_R = 0.005, the symmetry in the transport across the QD is broken, and thus the conductance plateau disappears. Two conductance dips occur at |V_s| < ∆ε↑ due to the contribution from the Kondo peaks in the density of states.
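The differential conductances G_c = dJ/dV_s and G_s = dJ_s/dV_s discussed above are obtained numerically from current-voltage data. A minimal central-difference sketch is shown below; the current-voltage curve J_toy is a toy of our own, a smooth step mimicking the plateau edges at |V_s| = ∆ε↑ = 0.2, and is not the paper's computed current.

```python
import math

def diff_conductance(J, V, dV=1e-4):
    """Central-difference estimate of a differential conductance dJ/dV,
    the numerical analogue of G_c = dJ/dV_s and G_s = dJ_s/dV_s."""
    return (J(V + dV) - J(V - dV)) / (2.0 * dV)

def J_toy(V):
    # Toy I-V curve (NOT from the paper): smooth steps of width 0.01
    # centred at V = +/-0.2, mimicking plateau edges at |V_s| = 0.2.
    return 0.05 * (math.tanh((V - 0.2) / 0.01) + math.tanh((V + 0.2) / 0.01))

G_center = diff_conductance(J_toy, 0.0)
G_edge = diff_conductance(J_toy, 0.2)
print(G_center < G_edge)  # the step edge shows up as a conductance peak → True
```

The central difference has O(dV²) error, so dV should be small compared with the width of the sharpest conductance feature being resolved.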
Finally, we study the temperature dependence of the differential conductances. As shown in Figure 3, the two Kondo peaks in G_s at V_s = ±0.2 and the conductance plateau in G_c in the bias interval |V_s| < 0.2 can be clearly observed at a low temperature of T = 0.005. With increasing temperature, e.g., at T = 0.05, the Kondo peaks become thermally broadened, while the plateau in G_c sharpens into a peak profile. With a further increase in temperature to T = 0.1, the plateau in G_c is almost completely suppressed. These changes can be largely attributed to the thermal distribution of electrons about the electrochemical potential in the leads. The thermal distribution in turn affects the self-energy Σ′_{2σ} of the intra-dot Coulomb interaction, which is primarily responsible for the Kondo resonances in the density of states [see Eq. (27)].
Summary
In this work, we analyze the spin-transport properties of a quantum dot system driven by spin bias in the presence of dot-lead Coulombic interactions. The transport property is discussed on the basis of Keldysh nonequilibrium Green's function framework. According to the equationof-motion technique and Langreth's theorem, we derive the analytical expression of the current through the quantum dot in the presence of the dot-lead Coulombic interaction. Our numerical results show that although the interaction can renormalize the energy levels of the quantum dot, they leave the position of the Kondo peaks in the density of states unchanged. This is because the Kondo effect arises primarily for intra-dot Coulomb interactions involving electrons of opposite spins. The Kondo resonances in the density of states translate into peaks in the spin differential conductance when the magnitude of the spin bias is equal to that of the Zeeman energy split in the quantum dot. There also exists a plateau in the charge differential conductance at low bias, due to the combined effect of spin bias and the Zeeman energy splitting. The position of the steps of the conductance plateau can also be attributed to the Kondo effect. The strength of the Coulombic lead-dot interactions affects the magnitude of both the spin and charge conductances. Furthermore, in the presence of asymmetrical dot-lead interaction strengths, the plateau in the charge conductance disappears, and is replaced by conductance peaks. Finally, the temperature dependence of the differential conductances is qualitatively discussed.
Figure 1: (a) and (b) are schematic diagrams of the density of states ρ_σ(ǫ) for a quantum dot symmetrically coupled to two leads with a Lorentzian linewidth of 2D. The quantum dot has two spin states with energies ε↑ = −2.9, ε↓ = −3.1 and an on-site interaction U → ∞. The linewidth is chosen to be D = 500 and the temperature is T = 0.005. The spin bias is V_s = 0.3 and the chemical potentials are µ_L↑ = −µ_L↓ = V_s/2 and µ_R↓ = −µ_R↑ = V_s/2. Solid, dashed and dotted curves correspond to interaction parameters of I_L = I_R = 0, I_L = I_R = 0.001 and I_L = 0.001, I_R = 0.005, respectively.
Figure 2: (a)-(b) Schematic energy diagrams of the Kondo density-of-states peaks and the electrochemical potentials of the two leads, for the cases of ∆ε↑ > V_s and ∆ε↑ < V_s. (c)-(d) The differential charge and spin conductances G_c and G_s plotted as functions of the spin bias V_s for different lead-dot interaction strengths. Solid, dashed and dotted curves correspond to the cases of I_L = I_R = 0, I_L = I_R = 0.001 and I_L = 0.001, I_R = 0.005, respectively. The energies of the quantum dot are chosen to be ε↑ = −2.9, ε↓ = −3.1, so that ∆ε↑ = 0.2. Other parameters are the same as those of Figure 1.
Figure 3: The differential charge and spin conductances G_c (a) and G_s (b) versus the spin bias V_s, for temperatures T = 0.005 (solid line), T = 0.05 (dashed line) and T = 0.1 (dotted line). The lead-dot interaction strength is set at I_L = I_R = 0.001. Other parameters are the same as those of Figure 1.
Acknowledgments

The authors would like to thank the Agency for Science, Technology and Research (A*STAR) of Singapore, the National University of Singapore (NUS) Grant No. R-398-000-061-305 and the NUS Nanoscience and Nanotechnology Initiative for financially supporting this work. The work was also supported by the Innovation Research Team for Spintronic Materials and Devices of Zhejiang Province.
References

[1] T. K. Ng and P. A. Lee, Phys. Rev. Lett. 61 (1988) 1768; S. Hershfield, J. H. Davies and J. W. Wilkins, Phys. Rev. Lett. 67 (1991) 3720.
[2] L. I. Glazman and M. E. Raikh, JETP Lett. 47 (1988) 378.
[3] Y. Meir, N. S. Wingreen and P. A. Lee, Phys. Rev. Lett. 70 (1993) 2601.
[4] M. Pustilnik and L. I. Glazman, Phys. Rev. Lett. 87 (2001) 216601.
[5] Q. F. Sun and H. Guo, Phys. Rev. B 66 (2002) 155308.
[6] J. Martinek, Y. Utsumi, H. Imamura, J. Barnaś, S. Maekawa, J. König, and G. Schön, Phys. Rev. Lett. 91 (2003) 127203.
[7] D. Goldhaber-Gordon, J. Göres, M. A. Kastner, H. Shtrikman, D. Mahalu, and U. Meirav, Phys. Rev. Lett. 81 (1998) 5225.
[8] S. Sasaki, S. De Franceschi, J. M. Elzerman, W. G. van der Wiel, M. Eto, S. Tarucha, and L. P. Kouwenhoven, Nature (London) 405 (2000) 764.
[9] N. Sergueev, Q. F. Sun, H. Guo, B. G. Wang, and J. Wang, Phys. Rev. B 65 (2002) 165303.
[10] P. Zhang, Q. K. Xue, Y. P. Wang, and X. C. Xie, Phys. Rev. Lett. 89 (2002) 286803.
[11] A. N. Pasupathy, R. C. Bialczak, J. Martinek, J. E. Grose, L. A. K. Donev, P. L. McEuen, and D. C. Ralph, Science 306 (2004) 86.
[12] Y. Utsumi, J. Martinek, G. Schön, H. Imamura, and S. Maekawa, Phys. Rev. B 71 (2005) 245116.
[13] E. J. Koop, B. J. van Wees, D. Reuter, A. D. Wieck, and C. H. van der Wal, Phys. Rev. Lett. 101 (2008) 056602.
[14] Y. K. Kato, R. C. Myers, A. C. Gossard, and D. D. Awschalom, Science 306 (2004) 1910.
[15] S. O. Valenzuela and M. Tinkham, Nature (London) 442 (2006) 176.
[16] Y. J. Bao, N. H. Tong, Q. F. Sun, and S. Q. Shen, Europhys. Lett. 83 (2008) 37007.
[17] R. Świrkovicz, J. Barnaś, and M. Wilczyński, J. Magn. Magn. Mater. 321 (2009) 2414.
[18] T. Kobayashi, S. Tsuruta, S. Sasaki, T. Fujisawa, Y. Tokura, and T. Akazaki, Phys. Rev. Lett. 104 (2010) 036804.
[19] Y. Meir and N. S. Wingreen, Phys. Rev. Lett. 68 (1992) 2512.
[20] A. P. Jauho, N. S. Wingreen, and Y. Meir, Phys. Rev. B 50 (1994) 5528.
[21] M. Buongiorno Nardelli, Phys. Rev. B 60 (1999) 7828.
[22] Q. F. Sun, J. Wang, and H. Guo, Phys. Rev. B 71 (2005) 165310.
[23] T. C. Li and S. P. Lu, Phys. Rev. B 77 (2008) 085408.
[24] S. Bala Kumar, M. B. A. Jalil, S. G. Tan, and G. C. Liang, J. Appl. Phys. 108 (2010) 033709.
[25] M. Sade, Y. Weiss, M. Goldstein, and R. Berkovits, Phys. Rev. B 71 (2005) 153301.
[26] Y. Meir, N. S. Wingreen, and P. A. Lee, Phys. Rev. Lett. 66 (1991) 3048.
[27] H. Haug and A. P. Jauho, Quantum Kinetics in Transport and Optics of Semiconductors, Springer Series in Solid-State Sciences, edited by M. Cardona, P. Fulde, K. von Klitzing, and H. J. Queisser (Springer, 1998).
[28] T. K. Ng, Phys. Rev. Lett. 76 (1996) 487.
NON-SEPARATING COCIRCUITS AND GRAPHICNESS IN MATROIDS

João Paulo Costalonga

25 Nov 2012 (arXiv:1211.5823)

Abstract. Let M be a 3-connected binary matroid and let Y(M) be the set of elements of M avoiding at least r(M) + 1 non-separating cocircuits of M. Lemos proved that M is non-graphic if and only if Y(M) ≠ ∅. We generalize this result by establishing that Y(M) is very large when M is non-graphic and, if M is regular, has no M*(K′′′_{3,3})-minor; more precisely, |E(M) − Y(M)| ≤ 1 in this case. We conjecture that when M is a regular matroid with an M*(K_{3,3})-minor, then r*_M(E(M) − Y(M)) ≤ 2. The proof of such a conjecture is reduced to a computational verification.
INTRODUCTION
A cocircuit in a connected matroid is said to be non-separating if its deletion results in a connected matroid. For a 3-connected graphic matroid, note that the non-separating cocircuits correspond to the stars of the vertices in its graphic representation.
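At the graph level, the remark above can be checked directly on M(K_4): deleting the star of a vertex leaves a triangle, whose cycle matroid is connected, whereas deleting the 4-edge bond separating two vertex pairs leaves two disjoint edges, a disconnected matroid. The sketch below (all names ours) tests only connectivity of the support subgraph, which suffices to distinguish these two cuts; matroid connectivity is the stronger 2-connectivity condition on the support.

```python
from collections import defaultdict, deque

K4 = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("b", "d"), ("c", "d")]

def support_connected(edges):
    """BFS connectivity of the subgraph spanned by `edges`
    (only vertices incident to some remaining edge count)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    verts = list(adj)
    if not verts:
        return True
    seen, queue = {verts[0]}, deque([verts[0]])
    while queue:
        for w in adj[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(verts)

star_a = {("a", "b"), ("a", "c"), ("a", "d")}           # vertex star: a cocircuit of M(K4)
cut_ab_cd = {("a", "c"), ("a", "d"), ("b", "c"), ("b", "d")}  # the bond separating {a,b} | {c,d}

print(support_connected([e for e in K4 if e not in star_a]))      # → True
print(support_connected([e for e in K4 if e not in cut_ab_cd]))   # → False
```

So the vertex star is non-separating in this example, while the 4-edge bond is separating, matching the correspondence stated above for 3-connected graphic matroids.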
Non-separating cocircuits play an important role in the understanding of the structure of graphic matroids, as we can see in some instances that follow. Non-separating cocircuits were first studied by Tutte [18], in the cographic case, to give a characterization of the planar graphs. Tutte also proved that the non-separating cocircuits of the bond matroid of a 3-connected graph span, over GF(2), its cycle space. Tutte's results were generalized by Bixby and Cunningham [1], as we summarize in the following theorem, which was conjectured by Edmonds: Lemos, in [10] and [11], proved results of a similar nature, as synthesized in the next theorem: There are other characterizations of graphicness in binary matroids using non-separating cocircuits in Kelmans [7], Lemos, Reid, and Wu [13] and Mighton [12]. Kelmans [6] gave a simple proof of Whitney's 2-isomorphism Theorem using non-separating cocircuits. Some algorithms for recognizing graphicness in binary matroids are based on concepts related to non-separating cocircuits; see Tutte [19], Cunningham [4], Mighton [12] and Wagner [16]. In this paper we reduce the proofs of Theorem 1.3 and Conjecture 1.4. The theoretical part of the proof of the conjecture is included in this paper. The computational part is in preparation. The computational part of the proof of Theorem 1.3 is ready, but not properly written yet. Because of these missing pieces this paper is a preliminary report. More precisely, the theoretical part of the proof of Conjecture 1.4 reduces its proof to the verification of the following:
SOME RESULTS IN CRITICAL 3-CONNECTIVITY
Let M and N be 3-connected matroids. We say that an element e ∈ E(M) is vertically N-removable in M if co(M\e) is a 3-connected matroid with an N-minor.
We can summarize the dual versions of Lemma 3.4 and Theorem 3.1 of [17] and Theorem 1.3 of [3] in the following theorem: If M and N are 3-connected matroids, we say that a set X ⊆ E(M) is N-removable if M\X is a 3-connected matroid with an N-minor. We also say that an element e ∈ E(M) is N-removable if {e} is N-removable. Now, we define a special structure that will be largely used along this article. When M is binary, we say that a list of distinct elements e_1, e_2, e_3, f_1, f_2, f_3 is an N-pyramid of M with top T* = {e_1, e_2, e_3} when the restriction of M to its elements is isomorphic to M(K_4), as illustrated in the figure below, where {e_1, e_2, e_3} is an N-removable triad of M and f_1, f_2 and f_3 are also N-removable in M.
✫✪ ✬✩ r r r r ❜ ❜ ✧ ✧ e 1 e 2 e 3 f 1 f 2 f 3
Theorem 2.1 does not hold for k > 3 but, in the binary case, it can be extended by the following theorem, which is the dual version of Theorem 1.4 from [3], written in terms of N-pyramids. Next we establish:
Lemma 3.2. Suppose that M is a 3-connected binary matroid satisfying r*(M) ≥ 4. If e is an element of M such that co(M\e) is 3-connected, then we may choose the ground set of co(M\e) so that:
(2) Y (co(M\e)) ⊆ Y (M); and { f ∈ E (M) : there is g ∈ Y (co(M\e)) with { f , g } ∈ C * (M\e)} ⊆ Y (M).
Proof. Choose the ground set of co(M\e) satisfying condition (1)
(3) Y(M) ⊆ cl*_M( Y(co[M\e]) ∪ e ).

(a) If D* ∈ R*(M), then D* = T*, D* ∩ F = ∅, or D* ∩ F = {e_i, f_j, f_k} for some distinct elements i, j, k ∈ {1, 2, 3}.
(b) If C* ∈ R*(M\T*), then there is a unique D* ∈ R*(M) such that C* ⊆ D* ⊆ C* ∪ T*. Moreover, either
(b1) C* ∩ T = ∅ and D* = C*, or
(b2) there are distinct elements i, j, k of {1, 2, 3} such that C* ∩ F = {f_i, f_j} and D* = C* ∪ e_k.
(c) Y(M\T*) ⊆ Y(M). Moreover, if f_i ∈ Y(M\T*), then T_i ⊆ Y(M\T*) and f ∉ cl*_M( Y(M) ).
(d) If r*_{M\T*}( Y(M\T*) ) ≤ 2 and, for each i ∈ {1, 2, 3}, r*_{M\f_i}( Y(M\f_i) ) ≤ 2, then r*_M( Y(M) ) ≤ 2.
Proof. In this proof we set, for {i, j, k} = {1, 2, 3}, T_k := {e_i, e_j, f_k}.
Let us prove (a). Suppose that D* ∈ R*(M), D* ≠ T* and D* ∩ F ≠ ∅. As D* intersects F, it follows that D* intersects T_k for some k. By orthogonality with T_k, D* intersects T*. Hence |D* ∩ T*| = 1, since M\D* is connected; say e_1 ∈ D*. By orthogonality with T_2 and T_3, it follows that {e_1, f_2, f_3} ⊆ D*. As M is binary, none of T, T_2 or T_3 is contained in D*. It yields that D* ∩ F = {e_1, f_2, f_3}. We proved (a).
Let us prove (b). First, we examine the case where C* ∩ T = ∅. In this case, it is straightforward to check that M\C* is connected. It is left to show that C* is a cocircuit of M. Consider a cocircuit D* of M such that C* ⊆ D* ⊊ C* ∪ T*; say e_1 ∉ D*. As C* ∩ T = ∅, then f_2, f_3 ∉ D*. By orthogonality with T_2 and T_3, T* ∩ D* = ∅. Thus D* = C* and C* is a cocircuit of M. Moreover, in this case, (b1) holds. So we may assume that C* ∩ T ≠ ∅. By orthogonality with T, we may suppose that C* ∩ T = {f_1, f_2}. Let D*_0 be a cocircuit of M such that C* ⊆ D*_0 ⊊ C* ∪ T*. By orthogonality with T_1 and T_2, either D*_0 = C* ∪ e_3 or D*_0 = C* ∪ {e_1, e_2}. Note that C* ∪ {e_1, e_2} = (C* ∪ e_3) ∆ T*. Thus both C* ∪ e_3 and C* ∪ {e_1, e_2} are cocircuits of M. But it is easy to check that M\(C* ∪ e_3) is connected and M\(C* ∪ {e_1, e_2}) is disconnected. Define D* := C* ∪ e_3 to conclude (b) and (b2).
To prove (c), let e ∈ Y(M\T*). As R*_e(M\T*) is linearly dependent, there are distinct non-separating cocircuits C*_1, ..., C*_n of M\T* avoiding e such that:
(4) C*_1 ∆ ⋯ ∆ C*_n = ∅.
For each l = 1, . . . , n, define D * l as the non-separating cocircuit of M such that C * l ⊆ D * l ⊆ C * l ∪T * , as described in (b). Consider, for {i , j , k} = {1, 2, 3}, the following subsets of {1, . . . , n}:
B_i := {l : f_i ∈ C*_l}, and A_ij := {l : f_i, f_j ∈ C*_l} = {l : e_k ∈ D*_l} (this equality holds by (a)). By (4), each B_i has even cardinality. By orthogonality with T, B_i is equal to the disjoint union of A_ij and A_ik. Thus |A_12|, |A_13| and |A_23| are congruent modulo 2. Hence D*_1 ∆ ⋯ ∆ D*_n is equal to ∅ or to T*. Therefore D*_1, ..., D*_n or D*_1, ..., D*_n, T* is a list of linearly dependent non-separating cocircuits of M avoiding e. This proves the first part of (c).
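Over GF(2), condition (4) is exactly linear dependence of the indicator vectors of the cocircuits: a family is dependent iff some nonempty subfamily has empty symmetric difference. A tiny brute-force check of that criterion (a sketch; the function name is ours, not the paper's):

```python
from itertools import combinations

def dependent(cocircuits):
    """Over GF(2), a family of cocircuits (given as sets of elements) is
    linearly dependent iff some nonempty subfamily has empty symmetric
    difference, as in (4)."""
    for k in range(1, len(cocircuits) + 1):
        for sub in combinations(cocircuits, k):
            acc = set()
            for C in sub:
                acc ^= set(C)   # symmetric difference over the subfamily
            if not acc:
                return True
    return False
```

For instance, three cocircuits pairwise covering three elements, such as {0,1}, {1,2}, {0,2}, are dependent, while any two of them alone are not.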
For the second part of (c), say that f_1 ∈ Y(M\T*). Note that, as above, for e = f_1 we have A_12 = A_13 = ∅. Thus |A_23| is even and, therefore, D*_1 ∆ ⋯ ∆ D*_n = ∅. In particular, this implies that D*_1, ..., D*_n is a list of linearly dependent cocircuits of M avoiding T_1, so T_1 ⊆ Y(M). To finish, note that, in this case, Ȳ(M) is contained in the cohyperplane E(M) − T_1 of M and, therefore, f_1 ∉ cl*_M(Ȳ(M)).

Now, we prove (d). Suppose, for a contradiction, that r*_M(Ȳ(M)) ≥ 3. As M\T* is 3-connected and binary, T is a triangle of M\T* and, by hypothesis, r*_{M\T*}(Ȳ(M\T*)) ≤ 2, it follows that T ⊈ Ȳ(M\T*); say f_1 ∉ Ȳ(M\T*). Thus f_1 ∈ Y(M\T*). By (c), f_1 ∉ Ȳ(M). As M\f_1 is 3-connected, by Corollary 3.3 for e = f_1, we have that Ȳ(M) ⊆ cl*_M(Ȳ(M\f_1) ∪ f_1). But r*_M(Ȳ(M\f_1) ∪ f_1) ≤ 3, because r*_{M\f_1}(Ȳ(M\f_1)) ≤ 2. Thus, as r*_M(Ȳ(M)) ≥ 3, Ȳ(M) spans f_1 in M*.
A contradiction to the second part of (c). This proves (d) and, therefore, the lemma. The next lemma is a direct consequence of the submodularity of the rank function of a matroid.
Lemma 3.5. Let M be a matroid, X_1, ..., X_m ⊆ E(M) and n := max_i r_M(X_i). Suppose that r_M(X_1 ∪ ⋯ ∪ X_m) ≥ n + 1. Then r_M(X_1 ∩ ⋯ ∩ X_m) ≤ n − 1.

each i, Ȳ(M) ⊆ X_i := cl*_M(Ȳ(M_i) ∪ e_i). Our hypothesis implies that, for each i, r*_M(X_i) ≤ l + 1. Define X := X_1 ∪ ⋯ ∪ X_{l+2}. As I* ⊆ X and r*_M(X) ≥ l + 2, then, by the dual version of Lemma 3.5 for n = l + 1, r*_M(X_1 ∩ ⋯ ∩ X_{l+2}) ≤ l. Thus r*_M(Ȳ(M)) ≤ l, since Ȳ(M) ⊆ X_1 ∩ ⋯ ∩ X_{l+2}.
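For m = 2, the submodularity computation behind Lemma 3.5 can be written out explicitly; the general case follows, for instance, by considering a maximal independent subset of the intersection. A sketch (ours, not the paper's own proof):

```latex
% With n := \max_i r_M(X_i) and r_M(X_1 \cup X_2) \ge n + 1,
% submodularity of the rank function gives
\begin{align*}
  r_M(X_1 \cap X_2) &\le r_M(X_1) + r_M(X_2) - r_M(X_1 \cup X_2)\\
                    &\le n + n - (n + 1) = n - 1.
\end{align*}
```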
SOME INITIAL CASES
The proof of the next lemma is just a routine check. Consider the partition of the vertices of K_{3,3} into two stable sets V_1 and V_2. For 0 ≤ j ≤ i ≤ 3, we define K^{(i,j)}_{3,3} as the simple graph obtained from K_{3,3} by the addition of i edges joining vertices of V_1 and j edges joining vertices of V_2. We also keep the already established notations K′_{3,3} := K^{(1,0)}_{3,3}, K′′_{3,3} := K^{(2,0)}_{3,3} and K′′′_{3,3} := K^{(3,0)}_{3,3}. We define a circuit C in a connected matroid M to be non-separating if C is a non-separating cocircuit of M*, that is, if M/C is connected. We also say that a circuit C of a 3-connected graph G is non-separating if E(C) is a non-separating circuit of M(G). Item (c) follows from the facts that K′′′_{3,3} has no K_5-minor and that K^{(1,1)}_{3,3} has a K_5-minor. Indeed, |si(K′′′_{3,3}/e)| < 10 for all e ∈ E(K′′′_{3,3}) and si(K^{(1,1)}_{3,3}/f) ≅ K_5, where f is the edge joining the two degree-3 vertices of K^{(1,1)}_{3,3}. Let us verify (a). First suppose that M ≅ M*(G) for some connected graph G extending K^{(1,1)}_{3,3}. Consider the edge f such that co(M\f) ≅ M*(K_5), as before. By Corollary 3.3 and Lemma 4.1, it follows that Ȳ(M) ⊆ {f}. It remains to show that f avoids a linearly dependent set of non-separating circuits in G to finish this case. In fact, note that the triangles of G that do not contain the end-vertices of f constitute such a set. Now, we verify (a) for the remaining graphs:
(1) K_{3,3}: note that each 4-circuit of K_{3,3} is non-separating. Let g ∈ E(K_{3,3}) and let v be an end-vertex of g. Note that the set of the 4-circuits of K_{3,3} avoiding v is linearly dependent. Thus g ∈ Y(M*(K_{3,3})), and so Y(M*(K_{3,3})) = E(M*(K_{3,3})).

(2) K′_{3,3}: this graph has three orbits of edges under the automorphism group of its bond matroid [shown pictorially in the original]. Representatives of the first two orbits avoid a list of three linearly dependent non-separating circuits, while a representative of the third orbit avoids a list of four such circuits.

(3) K′′_{3,3}: analogously, there are three orbits of edges [shown pictorially in the original]. A list of three linearly dependent non-separating circuits avoids the first two orbits, while a representative of the third orbit avoids a list of five such circuits.

It remains to prove (b). Note that in K′′′_{3,3} one of the two orbit representatives avoids exactly six non-separating circuits, which constitute a linearly independent set; so its orbit is contained in Ȳ(M*(K′′′_{3,3})). But a representative of the other orbit avoids a list of four linearly dependent non-separating circuits [all shown pictorially in the original]. This finishes the proof of (b) and of the lemma. The next lemma has a computer-assisted proof that will be approached in Section 8.
PROOF OF THE MAIN THEOREM
The statement of Theorem 1.3 just summarizes the lemmas proved in this section. Proof. Suppose that M is a minimal counter-example to the lemma. By Lemma 4.3, r*(M) ≥ 6. So, by Lemma 3.7 for l = 0, M has a vertically M*(K_5)-removable element e such that Ȳ(co(M\e)) is non-empty; a contradiction to the minimality of M.
Lemma 5.2. If M is a non-graphic regular matroid with no M * (K ′′′ 3,3 )-minor then E (M) = Y (M).
Proof. Suppose that M is a minimal counter-example for the lemma. By Lemma 5.1, M has no M*(K_5)-minor. Thus, since M is not graphic, M has an M*(K_{3,3})-minor. If r*(M) ≥ 7, then, by Lemma 3.7 for l = 0, it follows that M has a vertically M*(K_{3,3})-removable element e such that Ȳ(co(M\e)) ≠ ∅. But this contradicts the minimality of M. Hence r*(M) ≤ 6.
Note that M is not isomorphic to R_10 nor has an R_10-minor, since R_10 is a splitter for the class of the regular matroids. If r*(M) = 6, M has a vertically M*(K_{3,3})-removable element e. As co(M\e) has no minor isomorphic to R_10 or R_12, it follows that co(M\e) is cographic. In particular, co(M\e) is a corank-5 cographic matroid extending M*(K_{3,3}). By

Proof. Note that F_7 satisfies the lemma. But F_7 and F*_7 are the unique excluded minors for regularity in the class of the binary matroids. Moreover, F*_7 is a splitter for the class of binary matroids with no F_7-minor. Thus we may assume that M has an F*_7-minor. First suppose that M is a minimal counter-example for the first part of the lemma. By

Proof of Theorem 1.6: It is clear that Conjecture 1.5 is a particular case of Conjecture 1.4. On the other hand, if we consider a minimal counter-example M for the converse, then, by Lemma 3.7 (for l = 2), analogously to the preceding proofs, M has a minor that contradicts the minimality of M.
EXTREMAL CASES FOR THE MAIN THEOREM
Here we denote by Z_r the binary rank-r spike: a matroid represented by a binary r × (2r + 1) matrix of the form [I_r | Ī_r | 1], where Ī_r is I_r with the symbols 0 and 1 interchanged and 1 is a column full of ones. We use the respective labels a_1, ..., a_r, b_1, ..., b_r, c in this representation. We also define, for n ≥ 4, S_{2n} := Z_n\b_n. Proof. By Theorem 1.3(a), |Ȳ(S_{2n})| ≤ 1. So it is enough to verify that Ȳ(S_{2n}) ≠ ∅. As S_{2n} is self-dual, we prove the proposition by showing that a_n avoids at most n − 1 non-separating circuits of S_{2n}. Note that the spanning circuits of S_{2n} are not non-separating. It is not hard to verify that the non-spanning circuits of S_{2n} avoiding a_n are those of the form {c, a_i, b_i}, for some 1 ≤ i ≤ n − 1, or of the form {a_i, b_i, a_j, b_j}, for some 1 ≤ i < j ≤ n − 1 (the reader may also see [15], page 662). But c is a loop of the matroids of the form S_{2n}/{a_i, b_i, a_j, b_j}, which are, therefore, disconnected. Thus a_n avoids at most n − 1 non-separating circuits.
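As a quick sanity check of the matrix shape, a hypothetical helper building this representation of Z_r and of S_{2n} might look like this (function names are ours, not from the paper):

```python
import numpy as np

def spike_matrix(r):
    """Binary r x (2r + 1) matrix [I_r | Ibar_r | 1] representing the
    rank-r binary spike Z_r; columns are labelled a_1..a_r, b_1..b_r, c."""
    I = np.eye(r, dtype=int)
    Ibar = 1 - I                       # I_r with the symbols 0 and 1 interchanged
    ones = np.ones((r, 1), dtype=int)  # the column c, full of ones
    return np.hstack([I, Ibar, ones])

def s2n_matrix(n):
    """S_{2n} = Z_n \\ b_n: drop the column of b_n (index 2n - 1)."""
    return np.delete(spike_matrix(n), 2 * n - 1, axis=1)
```

Note that each pair a_i, b_i sums to the all-ones vector over GF(2), which is the defining feature of the spike columns.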
Let n ≥ 3 and let V_1 and V_2 be the members of a partition of the vertex set of K_{3,n} into two stable sets, where |V_1| = 3. Define K′′′_{3,n} as the graph obtained from K_{3,n} by adding an edge joining each pair of vertices in V_1. Note that the only non-separating cocircuits of M*(K′′′_{3,n}) are the triangles of K′′′_{3,n} meeting both V_1 and V_2. So, Ȳ(M*(K′′′_{3,n})) is the triad E(K′′′_{3,n}) − E(K_{3,n}). Thus we have an infinite set of matroids attaining the bound r*_M(Ȳ(M)) = 2 of Conjecture 1.4.
COMPLEMENTARY MATROIDS IN RELATION TO PROJECTIVE GEOMETRIES AND A HANDMADE CLASSIFICATION OF THE RANK-4 3-CONNECTED BINARY MATROIDS
From the uniqueness of representability of binary and ternary matroids, and from [15, 6.3.15], we may conclude that:

Lemma 7.1. Let q ∈ {2, 3}, s ≥ 2, and X, Y ⊆ E(PG(s, q)). Suppose that there is a matroid isomorphism ϕ : PG(s, q)|X → PG(s, q)|Y. Then there is an automorphism Φ of PG(s, q) that extends ϕ and whose restriction to E(PG(s, q)) − X is a matroid isomorphism between PG(s, q)\X and PG(s, q)\Y.
If, for q ∈ {2, 3}, M is a rank-r simple matroid representable over GF(q), we have, for s ≥ r

Proof. In this proof, when we talk about equality and uniqueness, it is up to isomorphisms. We denote by M an arbitrary 3-connected rank-4 binary matroid.
By the main result of Oxley [14], the only 3-connected rank-4 binary matroids with no M(W_4)-minor are F*_7, AG(3, 2), S_8, and Z_4. So, the rank-4 binary 3-connected matroids with up to 8 elements are F*_7, AG(3, 2), S_8 and M(W_4). Let us find the complementaries of these matroids in relation to P.
It is easy to see that AG(3, 2) = P\F_7. As the unique single-element deletion of F_7 is M(K_4), it follows that the unique rank-4 simple binary single-element extension of AG(3, 2) is P\M(K_4) = Z_4. So, the complementary of S_8 = Z_4\b_4 is the unique single-element extension of M(K_4) different from F_7, that is, M(K_4) ⊕ U_{1,1}. We also have that M(W_4) = P\M(W_4\b), where b is an edge in the rim of W_4, as shown in the graphs and the respective matrices that represent them below. The only possible degree sequences for simple connected graphs with 6 edges and 4 or 5 vertices are (3,3,3,3), (2,2,2,2,4), (2,2,2,3,3), (1,2,2,3,4) and (1,2,3,3,3). Indeed, if 4 appeared twice in such a sequence, then the graph would have at least 7 edges; so 4 appears at most once. The sum of the degrees must be 12. So, as 4 appears at most once, 1 appears at most once too. Now it is easy to check that the possible sequences are those listed above. This implies that the unique simple matroids with 6 elements and rank up to 4 are M(K_4), P(U_{2,3}, U_{2,3}), M(K_{2,3}), U_{2,3} ⊕ U_{3,4} and M(K_4 − e) ⊕ U_{1,1}.
Below we can see, in this order, a drawing of K_{2,3} and matrices representing M(K_{2,3}) and P\M(K_{2,3}):
[Figure: drawing of K_{2,3} and the two representing matrices.]
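For q = 2, the complementary can be computed column-wise: embed the columns of a representing matrix into the point set of PG(s, 2) and delete them. A hypothetical sketch (the function name is ours; by Lemma 7.1 the choice of embedding is immaterial):

```python
import numpy as np
from itertools import product

def pg_deletion(A, s):
    """Ground set (as 0/1 tuples) of PG(s, 2)\\M[A]: delete from the
    point set of PG(s, 2) one copy of each column of the binary matrix A,
    padded with zero rows up to s + 1 coordinates."""
    r, n = A.shape
    padded = {tuple(np.pad(A[:, j], (0, s + 1 - r))) for j in range(n)}
    return [v for v in product([0, 1], repeat=s + 1)
            if any(v) and v not in padded]
```

For example, deleting the 6 columns of a representation of M(K_4) from the 15 points of PG(3, 2) leaves 9 points, matching Z_4 = P\M(K_4).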
COMPUTATIONAL RESULTS
In this section we briefly describe the methods and procedures used to prove Lemma 4.4. To get the list of non-separating cocircuits of a binary matroid M and count how many of them avoid each element, we just use a brute-force algorithm that examines each linear combination of rows in a standard matrix representing M. The subroutines for this are based on well-known algorithms.
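A minimal sketch of such a brute-force routine (ours, not the author's code): cocircuits of a binary matroid are the minimal nonempty supports of vectors in the GF(2) row space of a representing matrix, and a cocircuit is non-separating when deleting it leaves a connected matroid, which can itself be tested by brute force on ranks.

```python
from itertools import combinations, product
import numpy as np

def gf2_rank(A):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    A = A.copy() % 2
    rank = 0
    for c in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]
        for r in range(A.shape[0]):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]
        rank += 1
    return rank

def cocircuits(A):
    """Cocircuits of the binary matroid M[A]: minimal nonempty supports
    of the vectors in the GF(2) row space of A."""
    supports = set()
    for coeffs in product([0, 1], repeat=A.shape[0]):
        v = np.zeros(A.shape[1], dtype=int)
        for i, ci in enumerate(coeffs):
            if ci:
                v ^= A[i]
        s = frozenset(np.flatnonzero(v))
        if s:
            supports.add(s)
    return [s for s in supports if not any(t < s for t in supports)]

def is_connected(A, ground):
    """Brute-force matroid connectivity of M[A] restricted to `ground`:
    no partition (X, ground - X) with r(X) + r(ground - X) = r(ground)."""
    ground = sorted(ground)
    if len(ground) <= 1:
        return True
    rk = lambda X: gf2_rank(A[:, sorted(X)]) if X else 0
    total = rk(ground)
    for k in range(1, len(ground) // 2 + 1):
        for X in combinations(ground, k):
            if rk(set(X)) + rk(set(ground) - set(X)) == total:
                return False
    return True

def non_separating_cocircuits(A):
    """Cocircuits C* of M[A] such that M[A] \\ C* is connected."""
    n = A.shape[1]
    return [C for C in cocircuits(A)
            if is_connected(A, set(range(n)) - C)]
```

On F_7, for example, this finds the seven 4-element cocircuits, all non-separating, each element avoiding exactly three of them.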
Consider a binary matrix A with columns c_1, ..., c_n, and v = (v_1, ..., v_n) ∈ {0, 1, 2}^n. Let, for each i = 1, ..., n, v̄_i be the remainder of the division of v_i by two. Moreover, let i_1 < ⋯ < i_k be the elements of {j : 1 ≤ j ≤ n and v_j = 2}. We define

Γ(A, v) := [ 1  v̄_1 ⋯ v̄_n  1 ⋯ 1 ]
           [ 0  c_1 ⋯ c_n  c_{i_1} ⋯ c_{i_k} ].

The definition of Γ may look awkward at a first moment, but it is easy to deal with computationally, and it makes it easier to enumerate such matroids N as above. Another attractive property of Γ is equation (5) below.
For a binary matrix A with columns labelled by 1, ..., n and for an automorphism σ of M[A], it is straightforward to check that:
(5) M[Γ(A, (v_1, ..., v_n))] ≅ M[Γ(A, (v_{σ(1)}, ..., v_{σ(n)}))].
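A small sketch of the Γ construction (array layout as in the definition above; the function name is ours): a new first column (1, 0, ..., 0)^T, a new first row carrying v_i mod 2 over the original columns, and one duplicated copy of c_i, with top entry 1, for each i with v_i = 2.

```python
import numpy as np

def gamma(A, v):
    """Gamma(A, v) for an r x n binary matrix A and v in {0, 1, 2}^n."""
    r, n = A.shape
    vbar = [vi % 2 for vi in v]                  # v_i modulo 2
    dup = [i for i in range(n) if v[i] == 2]     # i_1 < ... < i_k
    top = np.array([1] + vbar + [1] * len(dup))  # new first row
    bottom = np.hstack([np.zeros((r, 1), dtype=int), A, A[:, dup]])
    return np.vstack([top, bottom])
```

Together with (5), enumerating Γ(A, v) over representatives of v up to the automorphism group of M[A] is enough to reach every cosimplification-preimage, which is what makes the construction convenient computationally.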
The next lemma is a direct consequence of Bixby's theorem on the decomposition of non-3-connected matroids into 2-sums. The following lemma describes the procedures used to prove Conjecture 1.5.

isomorphic to a matroid in A_8. Hence r = 9. Analogously we prove that M′* is isomorphic to a matroid in A_9 and arrive at a contradiction.
Theorem 1.1. Let M be a 3-connected binary matroid with at least 4 elements. Then (a) the non-separating cocircuits of M span its cocircuit space, (b) each element of M is in at least two non-separating cocircuits, and (c) M is graphic if and only if each element of M is in at most two non-separating cocircuits.
Theorem 1.2. Let M be a 3-connected binary matroid with at least 4 elements. Then (a) for e ∈ E(M), the non-separating cocircuits of M that avoid e span a hyperplane in the cocircuit space of M; in particular, e avoids at least r(M) − 1 non-separating cocircuits of M. Moreover, e avoids more than r(M) − 1 non-separating cocircuits of M if and only if the set of the non-separating cocircuits of M avoiding e is linearly dependent, and (b) M is graphic if and only if each element of M avoids at most r(M) − 1 non-separating cocircuits.
Theorems 1.1 and 1.2 identify two sets of obstructions for graphicness in a 3-connected binary matroid M. We define the set X(M) as the set of elements of M meeting more than two non-separating cocircuits and the set Y(M) of the elements of M avoiding more than r(M) − 1 non-separating cocircuits. Equivalently, by Theorem 1.2(a), we may define Y(M) as the set of the elements of M avoiding all the members of a linearly dependent family of non-separating cocircuits of M. Some sharp lower bounds for |X(M)| and |X(M) ∩ Y(M)| are given by Lai, Lemos, Reid and Shao [8] when M is not graphic. In this work we prove that Y(M) contains almost all elements of E(M) if M is not graphic. Define Ȳ(M) := E(M) − Y(M). The main result we establish here is:

Theorem 1.3. Let M be a 3-connected non-graphic binary matroid. Then
(a) if M is non-regular, then |Ȳ(M)| ≤ 1. Moreover, Ȳ(M) = ∅ if M has no S_8-minor or M has a PG(3, 2)*-minor.
(b) if M is regular with no M*(K′′′_{3,3})-minor or with an M*(K_5)-minor, then Ȳ(M) = ∅.

The following conjecture generalizes this last theorem.

Conjecture 1.4. Let M be a 3-connected non-graphic binary matroid. Then r*_M(Ȳ(M)) ≤ 2.
Conjecture 1.5. Let M be a 3-connected non-graphic regular matroid with an M*(K′′′_{3,3})-minor, with no M*(K_5)-minor and satisfying r*(M) ≤ 9. Then r*_M(Ȳ(M)) ≤ 2.

More precisely, we prove in this version of this paper:

Theorem 1.6. Conjectures 1.4 and 1.5 are equivalent.

Conjecture 1.4 yields the following generalization of Lemos' graphicness criterion:

Conjecture 1.7. Let M be a 3-connected binary matroid and let S ⊆ E(M) satisfy r*_M(S) ≥ 3. Then the following assertions are equivalent:
(a) M is graphic;
(b) each element of S avoids at most r(M) − 1 non-separating cocircuits of M; and
(c) each element of S avoids no linearly dependent set of non-separating cocircuits of M.
Theorem 2.1. Let M and N be 3-connected matroids. For k ∈ {1, 2, 3}, if M has an N-minor and r*(M) − r*(N) ≥ k, then there is a k-coindependent set of M whose elements are vertically N-removable in M.
f_2, f_3 of M is an N-pyramid with top T* := {e_1, e_2, e_3} and base T := {f_1, f_2, f_3} provided: (a) T* is an N-removable triad of M, (b) T is a triangle of M, (c) for {i, j, k} = {1, 2, 3}, {e_i, e_j, f_k} is a triangle of M, and (d) the elements f_1, f_2 and f_3 are N-removable in M.
Theorem 2.2. Let M be a 3-connected binary matroid with a 3-connected minor N such that r*(M) − r*(N) ≥ 5. Then (a) M has a 4-coindependent set whose elements are vertically N-removable, or (b) M has an N-pyramid.

3. LIFTING NON-SEPARATING COCIRCUITS FROM MINORS

We define R*_A(M) as the set of non-separating cocircuits of M avoiding A. We may write R*_e(M) instead of R*_{{e}}(M). We write dep_A(M) := |R*_A(M)| − dim(R*_A(M)), where dim(R*_A(M)) is the dimension of the space spanned by R*_A(M) in the cocircuit space of M. We simplify the notation dep_{{e}}(M) to dep_e(M). Observe that an element e of a 3-connected binary matroid M is in Y(M) if and only if dep_e(M) > 0. The next result is Lemma 3.1 from [11].
Lemma 3.1. Suppose that e is an element of a 3-connected binary matroid M such that the cosimplification of M\e is 3-connected. If r*(M) ≥ 4, then it is possible to choose the ground set of co(M\e) so that, for each A ⊆ E(co(M\e)),
(1) dep_A(M) ≥ dep_{A′}(M) ≥ dep_A(co(M\e)),
where A′ is the minimal subset of E(M) satisfying A ⊆ A′ and, for each triad T* of M that meets both e and A, T* − e ⊆ A′.
Lemma 3.4. Let M be a 3-connected binary matroid with an N-minor, satisfying r*(M) ≥ 4. Suppose that e_1, e_2, e_3, f_1, f_2, f_3 is an N-pyramid of M, having top T* and base T. Denote F := T ∪ T*. (a) If D* ∈ R*(M), then either D
Lemma 3.6. Let l ∈ {0, 1, 2} and let M and N be 3-connected binary matroids. Suppose that M has an N-minor and r*(M) ≥ 4. If M has an (l + 2)-coindependent set I* such that each e ∈ I* is vertically N-removable and r*_{co(M\e)}(Ȳ(co(M\e))) ≤ l, then r*_M(Ȳ(M)) ≤ l.

Proof. Let I* := {e_1, ..., e_{l+2}}. For i = 1, ..., l + 2, we choose the ground set of M_i := co(M\e_i) satisfying equation (3) in Corollary 3.3. So we have that, for
Lemma 3.7. Let M be a 3-connected binary matroid with a 3-connected minor N such that r*(M) ≥ 4, and let l ∈ {0, 1, 2}. If r*(M) − r*(N) ≥ 2 + l + l^2 and r*_M(Ȳ(M)) ≥ l + 1, then
(a) M has a vertically N-removable element e such that Ȳ(co[M\e]) has corank at least l + 1 in co(M\e), or
(b) l = 2 and M has an N-pyramid e_1, e_2, e_3, f_1, f_2, f_3 such that there is K ∈ {M\{e_1, e_2, e_3}, M\f_1, M\f_2, M\f_3} satisfying r*_K(Ȳ(K)) ≥ 3.

Proof. By Theorems 2.1 and 2.2, one of the two statements holds:
(i) M has an (l + 2)-coindependent set I := {e_1, ..., e_{l+2}} whose elements are vertically N-removable; or
(ii) l = 2 and M has an N-pyramid e_1, e_2, e_3, f_1, f_2, f_3, with top T*.
By Lemma 3.6, (i) implies (a). By Lemma 3.4(d), (ii) implies (b). The lemma is proved.
Lemma 4.1. If M ∈ {F_7, F*_7, M*(K_5), R_10}, then Y(M) = E(M).
Lemma 4.2. Suppose that M is a simple cographic matroid with an M*(K_{3,3})-minor such that r*(M) = 5. Then
(a) If M ≇ M*(K′′′_{3,3}), then Y(M) = E(M).
(b) If M ≅ M*(K′′′_{3,3}), then Ȳ(M) is a triad of M and Y(M) = E(K_{3,3}).
(c) M has an M*(K_5)-minor if and only if M ≅ M*(K^{(i,j)}_{3,3}) for some i, j ∈ {1, 2, 3}.
Proof. First note that M ≅ M*(K^{(i,j)}_{3,3}) for some 0 ≤ j ≤ i ≤ 3.
Lemma 4.3. If M is a 3-connected regular matroid with an M*(K_5)-minor and r*(M) ≤ 5, then M is cographic and Y(M) = E(M).

Proof. First let us verify that M is cographic. As r*(M) ≤ 5, M has no R_12-minor. If M has an R_10-minor then, as R_10 is a splitter for the class of the regular matroids, M ≅ R_10; a contradiction. Thus M is cographic. If r*(M) = 4, then M ≅ M*(K_5) and the lemma follows. So, we may assume that r*(M) = 5. As M*(K_5) is a splitter for the class of the cographic matroids with no M*(K_{3,3})-minor, M is isomorphic to the bond matroid of a graph with 6 vertices extending K_{3,3}. The lemma follows from items (a) and (c) of Lemma 4.2.
Lemma 4.4. Let M be a 3-connected binary matroid and e ∈ E(M).
(a) If co(M\e) ≅ S_8, then |Ȳ(M)| ≤ 1.
(b) If r*(M) = 4 and M ≇ S_8, then Y(M) = E(M). Moreover, |Ȳ(S_8)| = 1.
(c) If co(M\e) is isomorphic to M*(K^{(i,0)}_{3,3}) for some i ∈ {0, 1, 2}, then Y(M) = E(M).
(d) If M has an element e such that co(M\e) ≅ PG(3, 2)*, then E(M) = Y(M).
Lemma 5.1. If M is a regular matroid with an M*(K_5)-minor, then E(M) = Y(M).
Lemma 4.2(c), co(M\e) is isomorphic to M * (K (i ,0) 3,3 ) for some i ∈ {0, 1, 2}. In this case the result follows from Lemma 4.4(c). Thus r * (M) = 5. As before, M has no R 10 or R 12 -minor and M is cographic. Now the result follows from Lemma 4.2, (a). Lemma 5.3. Suppose that M is a 3-connected non-regular binary matroid. Then | Y (M)| ≤ 1. Moreover, if M has no S 8 -minor, then Y (M) = E (M).
Lemma 4.4(b), r*(M) ≥ 5. If r*(M) = 5 then, by Lemma 3.7 (for l = 0), M has an element e such that Ȳ(co(M\e)) ≠ ∅. By Lemma 4.4(b) again, co(M\e) ≅ S_8. But this contradicts Lemma 4.4(a). Thus r*(M) ≥ 6. It follows, by Lemma 3.7 (for l = 1), that M has a vertically F*_7-removable element e such that |Ȳ(co(M\e))| ≥ 2. Thus co(M\e) contradicts the minimality of M. Now suppose that M is a minimal counter-example for the second part of the lemma. If r*(M) ≥ 5 then, by Lemma 3.7 for l = 0, M has a vertically F*_7-removable element e such that Ȳ(co(M\e)) ≠ ∅, a contradiction to the minimality of M. Thus r*(M) = 4. Now, the result follows from Lemma 4.4(b).

Lemma 5.4. If M is a binary 3-connected matroid with a PG(3, 2)*-minor, then Y(M) = E(M).

Proof. Let M be a minimal counter-example for the lemma. If r*(M) ≥ 6 then, by Lemma 3.6 for l = 0, we have a contradiction to the minimality of M. So, r*(M) ≤ 5; but, by Lemma 4.4(b), r*(M) ≥ 5. Thus r*(M) = 5. By Theorem 2.1, M has a vertically PG(3, 2)*-removable element e. Thus co(M\e) ≅ PG(3, 2)*, since PG(3, 2)* is the maximal cosimple corank-4 binary matroid. But this contradicts Lemma 4.4(d).
Proposition 6.1. For n ≥ 4, S_{2n} attains the bound |Ȳ(S_{2n})| = 1 in Theorem 1.3.
− 1, well defined up to isomorphisms, the complementary of M in relation to PG(s, q) as the matroid PG(s, q)\M := PG(s, q)\X, where X ⊆ E(PG(s, q)) is a set that satisfies M ≅ PG(s, q)|X. Lemma 7.1 implies:

Corollary 7.2. Let q ∈ {2, 3}, let M and N be simple matroids representable over GF(q) and let s ≥ max{r(M), r(N)} − 1. Then (a) M ≅ N if, and only if, PG(s, q)\M ≅ PG(s, q)\N; and (b) M is isomorphic to a minor of N if, and only if, PG(s, q)\N is isomorphic to a minor of PG(s, q)\M.

Theorem 7.3. Let P := PG(3, 2). Up to isomorphisms, all the rank-4 binary 3-connected matroids are:
(i) F*_7, S_8, AG(3, 2) and M(W_4), up to 8 elements;
(ii) Z_4 ≅ P\M(K_4), P_9 ≅ P\[M(K_4 − e) ⊕ U_{1,1}], M*(K_{3,3}) ≅ P\[U_{2,3} ⊕ U_{2,3}] and M(K_5\e) ≅ P\P(U_{2,3}, U_{3,4}), with 9 elements;
(iii) P\M(K_4\e), P\[U_{2,3} ⊕ U_{2,2}], P\[U_{3,4} ⊕ U_{1,1}] and M(K_5) ≅ P\U_{4,5}, with 10 elements;
(iv) P\[U_{2,3} ⊕ U_{1,1}], P\U_{3,4} and P\U_{4,4}, with 11 elements; and
(v) P\U_{1,1}, P\U_{2,2}, P\U_{2,3} and P\U_{3,3}, with more than 11 elements.
As the only excluded minors for graphicness in binary matroids are F_7, F*_7, M*(K_5) and M*(K_{3,3}), all binary matroids with up to 6 elements are graphic. So, if |E(M)| ≥ 9, then P\M is graphic.
Note that the last row in the second matrix corresponds to a 2-cocircuit of P\M(K_{2,3}), which therefore is not 3-connected. Note that all proper restrictions of M(K_{2,3}) are restrictions of M(K_4) ⊕ U_{1,1}. So all 3-connected rank-4 binary matroids with at least 9 elements are complementaries of some restriction of M(K_4) ⊕ U_{1,1} or M(W_4\b). On the other hand, if M is a restriction of M(K_4) ⊕ U_{1,1} or M(W_4\b), then P\M is a rank-4 extension of S_8 or M(W_4) and, therefore, P\M is 3-connected. This description corresponds to the matroids listed in the theorem.
Γ(A, v) := [ 1  v̄_1 ⋯ v̄_n  1 ⋯ 1 ]
           [ 0  c_1 ⋯ c_n  c_{i_1} ⋯ c_{i_k} ].

It is easy to check that, for binary matroids M and N and a binary matrix A, if M ≅ M[A] and there is e ∈ E(N) such that M ≅ si(N/e), then there is v ∈ {0, 1, 2}^n such that N ≅ M[Γ(A, v)].
Let A := [I_r | D] be an r × n binary matrix. We define L(A) := {Γ(A, v) : v ∈ {0, 2}^r × {0, 1, 2}^{n−r}}. Let M be a family of binary matroids. We define a family A of binary matrices to be a standard vector representation of M if each matroid of M is isomorphic to M[A] for some A ∈ A and all matrices in A are in the standard form. For a family A of binary matrices we denote M[A] := {M[A] : A ∈ A} and M*[A] := {M*[A] : A ∈ A}. We simplify the language by saying that a standard vector representation of M[A] is a standard vector representation of A.

Lemma 8.3. If A is a binary matrix and M is a matroid with an element e such that M[A] ≅ si(M/e) and |E(M)| − |E(M[A])| ≤ k, then M is isomorphic to a matroid represented by a matrix in L^k(A).
Lemma 8.4. If M is a 3-connected regular matroid, with rank 5, with an M(K′′′_{3,3})-minor and with no M(K_5)-minor, then M ≅ M(K′′′_{3,3}).

Proof. Since M has no R_12-minor, M ≇ R_10 and M is not cographic, M is graphic. So, if M ≇ M(K′′′_{3,3}), then M is the cycle matroid of a 6-vertex simple graph properly extending K′′′_{3,3}. The result follows from Lemma 4.2.

Lemma 8.5. Suppose that M′ is a 3-connected regular matroid with no M*(K_5)-minor and with an M*(K′′′_{3,3})-minor such that r*(M′) ≤ 9 and r*_{M′}(Ȳ(M′)) ≥ 3. Then a matroid isomorphic to M′ can be found with the following procedure:
(1) Let A_0 be a standard binary matrix representing M(K′′′_{3,3}). Let A_6 be a standard vector representation of {A ∈ L(A_0) : M*[A] is regular, 3-connected, has no M*(K_5)-minor and Ȳ(M*[A]) ≠ ∅}. Check whether all the matroids M ∈ M*[A_6] satisfy r*_M(Ȳ(M)) ≤ 2.
(2) Let L_6 = A_6. For i = 7, 8, 9 do the following step:
(3) Let A_i be a set of representatives of the following family: {A ∈ L_i : M[A] is regular, 3-connected, has no M(K_5)-minor and r*(Ȳ(M*[A])) ≥ 2}. Check whether all the matroids M ∈ M*[A_i] satisfy r*_M(Ȳ(M)) ≤ 2.

Proof of Lemma 8.5: We have to prove that, if such an M′ exists, then M′* is isomorphic to a matroid in M*[A_6] ∪ ⋯ ∪ M*[A_9]. Suppose for a contradiction that this does not hold. Let r = r*(M′). By Theorem 2.1 and by Lemma 8.4, there is a chain of matroids M(K′′′_{3,3}) = M_5, M_6, ..., M_r = M′*, such that for each i = 6, ..., r there is an element e_i ∈ E(M_i) such that M_{i−1} = si(M_i/e_i). If r = 6, it is clear that there is a matroid in A_6 isomorphic to M′*. So r ≥ 7. If r = 7, since r*(M′) − r*(M*(K′′′_{3,3})) ≥ 2, by Lemma 3.7 for l = 0, r*_{M_6}(Ȳ(M_6)) ≥ 1. So M′* is isomorphic to a matroid in A_7. Hence r ≥ 8. If r = 8, since r*(M′) − r*(M*(K′′′_{3,3})) ≥ 3, by Lemma 3.7 for l = 1, r*_{M_7}(Ȳ(M_7)) ≥ 2. So M′* is
Lemma 8.1. Let N be a coloopless simple non-3-connected binary matroid with at least 4 elements. Suppose that e is an element of N such that si(N/e) is 3-connected. Then e belongs to a non-trivial series class of N.

From this lemma we conclude:

Corollary 8.2. Let A be a binary matrix with n ≥ 4 columns and v ∈ {0, 1, 2}^n. If M[A] is 3-connected and M[Γ(A, v)] is cosimple, then M[Γ(A, v)] is 3-connected.
ACKNOWLEDGMENTS

The author thanks Manoel Lemos for suggesting an approach to one of his conjectures, whose generalization gave rise to the results established here.
REFERENCES

[1] R.E. Bixby, W.H. Cunningham, Matroids, graphs, and 3-connectivity, in: J.A. Bondy, U.S.R. Murty (Eds.), Graph Theory and Related Topics, Academic Press, New York, 1979, pp. 91-103.
[2] J.P. Costalonga, Cocircuitos não-separadores que evitam um elemento e graficidade em matroides binárias, Ph.D. Thesis, Universidade Federal de Pernambuco, 2011.
[3] J.P. Costalonga, 3-connected minors of 3-connected matroids and graphs, European J. Combin. 33 (2012) 72-81.
[4] W.H. Cunningham, Separating cocircuits in binary matroids, Linear Algebra Appl. 43 (1982) 69-86.
[5] I. Heller, On linear systems with integral valued solutions, Pacific J. Math. 7 (1957) 1351-1364.
[6] A.K. Kelmans, The concepts of a vertex in a matroid, the non-separating circuits and a new criterion for graph planarity, in: Algebraic Methods in Graph Theory, Vol. 1, Colloq. Math. Soc. János Bolyai, Vol. 25, Szeged, Hungary, 1978, North-Holland, Amsterdam, 1981, pp. 345-388.
[7] A.K. Kelmans, Graph planarity and related topics, in: N. Robertson, P.D. Seymour (Eds.), Graph Structure Theory, Contemporary Mathematics, Vol. 147, 1991, pp. 635-667.
[8] H.-J. Lai, M. Lemos, T.J. Reid, Y. Shao, H. Wu, Obstructions to a binary matroid being graphic, European J. Combin. 32 (2011) 853-860.
[9] M. Lemos, T.R.B. Melo, Non-separating cocircuits in matroids, Discrete Appl. Math. 156 (2008) 1019-1024.
[10] M. Lemos, Non-separating cocircuits in binary matroids, Linear Algebra Appl. 382 (2004) 171-178.
[11] M. Lemos, A characterization of graphic matroids using non-separating cocircuits, Adv. in Appl. Math. 42 (2009) 75-81.
[12] J. Mighton, A new characterization of graphic matroids, J. Combin. Theory Ser. B 98 (2008) 1253-1258.
[13] M. Lemos, T.J. Reid, H. Wu, Characterizing 3-connected planar graphs and graphic matroids, J. Graph Theory 64 (2010) 165-174.
[14] J.G. Oxley, The binary matroids with no 4-wheel minor, Trans. Amer. Math. Soc. 301 (1987) 63-75.
[15] J.G. Oxley, Matroid Theory, second ed., Oxford University Press, New York, 2011.
[16] D.K. Wagner, On Mighton's characterization of graphic matroids, J. Combin. Theory Ser. B 100 (2010) 493-496.
[17] G. Whittle, Stabilizers of classes of representable matroids, J. Combin. Theory Ser. B 77 (1999) 39-72.
[18] W.T. Tutte, How to draw a graph, Proc. London Math. Soc. 13 (1963) 734-768.
[19] W.T. Tutte, An algorithm for determining whether a given binary matroid is graphic, Proc. Amer. Math. Soc. 11 (1960) 905-917.
An algorithm for determining whether a given binary matroid is graphic. W T Tutte, Proc. Amer. Math. Soc. 11W.T. Tutte, An algorithm for determining whether a given binary matroid is graphic, Proc. Amer. Math. Soc. 11 (1960) 905-917.
. Cachoeiro Bricio Mesquita, 17, De Itapemirim, Es, [email protected], RUA BRICIO MESQUITA, 17, CACHOEIRO DE ITAPEMIRIM, ES, 29300-750, BRAZIL.
Large-Amplitude Spin Dynamics Driven by a THz Pulse in Resonance with an Electromagnon

M. Trigo, T. Kubacka, J. A. Johnson, M. C. Hoffmann, C. Vicario, S. Jong, P. Beaud, S. Grübel, S.-W. Huang, L. Huber, L. G. Patthey, Y.-D. Chuang, J. J. Turner, G. L. Dakovski, W.-S. Lee, M. P. Minitti, W. J. Schlotter, R. G. Moore, C. P. Hauri, S. M. Koohpayeh, V. Scagnoli, G. Ingold, S. L. Johnson, and U. Staub

Phys. Rev. Lett.

Abstract: We demonstrate a silicon-based, single-layer anti-reflection coating that suppresses the reflectivity of metals at near-infrared frequencies, enabling optical probing of nano-scale structures embedded in highly reflective surroundings. Our design does not affect the interaction of terahertz radiation with metallic structures that can be used to achieve terahertz near-field enhancement. We have verified the functionality of the design by calculating and measuring the reflectivity of both infrared and terahertz radiation from a silicon/gold double layer as a function of the silicon thickness. We have also fabricated the unit cell of a terahertz meta-material, a dipole antenna comprising two 20-nm thick extended gold plates separated by a 2 µm gap, where the terahertz field is locally enhanced. We used the time-domain finite element method to demonstrate that such near-field enhancement is preserved in the presence of the anti-reflection coating. Finally, we performed magneto-optical Kerr effect measurements on a single 3-nm thick, 1-µm wide magnetic wire placed in the gap of such a dipole antenna. The wire only occupies 2% of the area probed by the laser beam, but its magneto-optical response can be clearly detected. Our design paves the way for ultrafast time-resolved studies, using table-top femtosecond near-infrared lasers, of dynamics in nano-structures driven by strong terahertz radiation.

DOI: 10.1364/oe.26.002917
arXiv: 1711.05670 (https://arxiv.org/pdf/1711.05670v2.pdf)
35. COMSOL Multiphysics® v. 5.3, COMSOL AB, Stockholm, Sweden, https://www.comsol.com.
36. Since we are impinging on the air/silicon interface at normal incidence with a wavelength of ≈ 300 µm, the amplitude of the electric field at z = 10 nm above the silicon substrate is almost equal to the amplitude of the transmitted electric field, because of the n̂ × (E_Si − E_Air) = 0 boundary condition.
37. The discrepancy between the two values has to do with the fact that the single-cycle field is broadband, and different frequencies are amplified differently by a fixed-geometry design.
38. Z. Q. Qiu and S. D. Bader, "Surface magneto-optic Kerr effect," Rev. Sci. Instrum. 71(3), 1243 (2000).
39. P. Vavassori, "Polarization modulation technique for magneto-optical quantitative vector magnetometry," Appl. Phys. Lett. 77(11), 1605 (2000).
Introduction
Following the development of intense, coherent laser-based sources of terahertz radiation [1], the past decade has witnessed an increased interest in the use of this type of radiation to coherently control the properties of materials on the sub-picosecond time scale. Terahertz photons, with energies in the meV range, can drive nonlinear dynamics without significantly increasing the entropy of the system [2][3][4][5][6][7][8][9][10][11][12][13][14]. In the field of condensed matter physics, the investigations of ultrafast dynamics driven by strong terahertz fields are frequently performed using terahertz-pump (usually in the 1 -10 THz range, 300 to 30 µm in wavelength) and visible or near-infrared probing light (typically a sub-100 fs pulse).
To study the effects in the strong-field limit, the strength of the terahertz field can be locally enhanced exploiting near-field effects in meta-materials [3,[15][16][17][18][19][20], which typically consist of micrometer-sized metallic structures deposited on the sample surface. However, since the area of the sample is often significantly smaller than the area of metallic structures in meta-materials, the reflectivity in the visible or near-infrared frequency range of the probe is dominated by the latter. As a consequence, it is extremely challenging to isolate the sample response, despite the enhancement provided by the meta-material. This problem can be mitigated by using dielectric and absorbing coatings, for instance to enhance the magneto-optical activity in magnetic thin films, and to reduce the background reflections [21][22][23][24][25][26]. This solution greatly boosts the signal up to a point where single nano-structures can be measured. The drawback of this approach is that it imposes constraints on the choice of layers underneath the target structure. This limitation can become crucial if these underlayers are utilized to tune the important properties of the studied thin films. In this case, a more suitable solution is to deposit an anti-reflection (AR) coating only on the metal structures forming the meta-material, to minimize the reflection from those areas, which is the main factor affecting the strength of the measured signal. At the same time, the AR layer should not perturb the terahertz radiation that still needs to be enhanced by the metal layers.
In this work, we propose a simple, but until now unexplored, single-layer anti-reflection coating design that can be implemented on arbitrary meta-material structures comprising highly conducting and reflective metallic layers. The coating suppresses the near-infrared reflection typically utilized to probe the response of the sample, without noticeably affecting the terahertz radiation at much larger wavelengths. We have performed transfer matrix method calculations, as well as measurements of the reflectivity both in the near-infrared and terahertz range, to demonstrate the functionality of our design. We have also investigated, using time-domain finite element simulations, the near-field enhancement properties of a dipole antenna (a template for the terahertz meta-materials) covered with the anti-reflection coating. Finally, we experimentally measured the magneto-optical Kerr effect from a magnetic wire placed in the gap of the antenna.

Fig. 1. Design of the dipole antenna for terahertz near-field enhancement in the gap between two metallic electrodes (α-Si on metal), covered with an anti-reflection coating for near-infrared and visible radiation. A single cycle of the terahertz field (E_THz, H_THz), with the suitable polarization for the optimal coupling to the antenna, is sketched, together with the NIR/VIS probing laser and the THz E-field enhancement region. The pink arrows schematically show the working principle of the anti-reflection coating for a metal, where destructive interference (zig-zag arrows within the top layer) is combined with the dielectric losses to compensate for the forbidden transmission through the metallic electrodes (crossed-out arrows in the metal layer), as described in detail in the text.
Anti-reflection coating design
Terahertz meta-materials can be formed by depositing metallic (typically gold) layers that can locally enhance the electromagnetic field of incident radiation. One of the simplest realizations of such a structure consists of two metallic strips separated by a small gap, i.e., a dipole antenna [27]. For a suitable geometry of the antenna and polarization of the incident radiation, opposite charges can be induced by the electromagnetic field at the opposite edges of the gap, producing a strong local enhancement of the electric field within the gap. Intuitively, but incorrectly, this charge motion is often attributed to the current driven by the electric field. However, the correct explanation is that the local electric field enhancement in the gap is caused by the screening of the magnetic field, which induces a current flow in the metal in the direction orthogonal to it (parallel to the electric field). This current flow, known as the eddy current, can penetrate within the skin depth of the material (75 nm at 1 THz for gold). In contrast, the electric field component of the radiation is screened virtually instantaneously at the surface of the conductor by the charge redistribution, and at terahertz frequencies it provides a negligible contribution to the net current flow in the bulk of the material, even in non-ideal metals.
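The quoted skin depth can be checked with the classical normal skin-depth formula δ = (2ρ/μ₀ω)^{1/2}. The short sketch below is our illustration (not from the paper), using an assumed textbook room-temperature resistivity of gold, ρ ≈ 2.44 × 10⁻⁸ Ω·m; it gives ≈ 79 nm at 1 THz, consistent with the 75 nm quoted above.

```python
import math

def skin_depth(resistivity, freq_hz):
    """Classical normal skin depth: delta = sqrt(2 * rho / (mu0 * omega))."""
    mu0 = 4e-7 * math.pi               # vacuum permeability (H/m)
    omega = 2 * math.pi * freq_hz      # angular frequency (rad/s)
    return math.sqrt(2 * resistivity / (mu0 * omega))

rho_au = 2.44e-8                       # assumed room-temperature resistivity of gold (Ohm*m)
delta = skin_depth(rho_au, 1e12)       # skin depth at 1 THz, roughly 79 nm
print(f"{delta * 1e9:.0f} nm")
```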
In the standard AR coatings, designed to minimize the reflection from dielectric materials, one exploits the phenomenon of destructive interference of the waves reflected at the two interfaces, to cancel the total electric field that propagates in the backward direction. Since the energy of the electromagnetic wave is conserved, the transmission through the dielectric is maximized. However, this mechanism cannot be implemented for coatings on metals, since the wave cannot propagate through the metal, and hence the reflection cannot be eliminated.
For an AR coating to work for a metal, it is necessary to create destructive interference (to suppress Fresnel reflections), and to simultaneously absorb the radiation, as shown schematically in Fig. 1. In other words, the dielectric layer needs to be sufficiently lossy in the visible/near-infrared region, so that the wave decays after multiple reflections at the interfaces. This idea was proposed decades ago by Hass et al. [28], who demonstrated that lossy double dielectric layers can suppress the reflectivity of aluminum and copper in the visible range, while maintaining high reflectivity in the mid-infrared range, up to the wavelength of 10 µm. Moreover, they also highlighted the fact that absorption in single-layer AR coatings is necessary to reduce the high reflectance of metals in the visible range. In a later related work by Yoshida [29], a single-layer AR coating for metals was described mathematically. He first considered a non-absorbing dielectric layer with real refractive index $n_1 > 1$ and thickness $d_1$, deposited on top of a metallic substrate characterized by $\tilde{n}_2 = n_2 + ik_2$. The reflectance $R$ for monochromatic light with wavelength $\lambda$, impinging at normal incidence on the three-layer stack composed of air ($n_0 = 1$), the non-absorbing dielectric coating, and the metal substrate, is given by [29]

$$R = \left| \frac{r_{01} + r_{12} \exp(2i\delta_1)}{1 + r_{01} r_{12} \exp(2i\delta_1)} \right|^2, \tag{1}$$

where $r_{01} = (1 - n_1)/(1 + n_1)$ is the Fresnel reflection coefficient for the air-dielectric interface, $r_{12} = (n_1 - \tilde{n}_2)/(n_1 + \tilde{n}_2)$ is the Fresnel reflection coefficient for the dielectric-metal interface, and $\delta_1 = 2\pi n_1 d_1 / \lambda$. The reflectance reaches a minimum when [30]

$$n_1 d_1 = \frac{\lambda}{2} \left( m + 1 - \frac{\alpha_{12}}{2\pi} \right), \tag{2}$$

where $m$ is an integer value, and

$$R_{\min} = \left( \frac{r_{01} + \rho_{12}}{1 + r_{01} \rho_{12}} \right)^2, \tag{3}$$

$$\rho_{12} = |r_{12}| = \left[ \frac{(n_1 - n_2)^2 + k_2^2}{(n_1 + n_2)^2 + k_2^2} \right]^{1/2}, \tag{4}$$

$$\alpha_{12} = \mathrm{Arg}(r_{12}) = \arctan\left( \frac{-2 n_1 k_2}{n_1^2 - n_2^2 - k_2^2} \right). \tag{5}$$

The minimum reflectance $R_{\min}$ is zero when $\rho_{12} = |r_{01}|$, giving

$$n_1 = \left( n_2 + \frac{k_2^2}{n_2 - 1} \right)^{1/2}. \tag{6}$$

If $k_2 = 0$, which corresponds to a dielectric coating on top of a dielectric substrate, Eq. (6) gives $n_1 = n_2^{1/2}$. Since $n_2 > n_1$, the solution for $\alpha_{12} = \arctan(0)$ in Eq. (5) should be $\alpha_{12} = \pi$, because of the $\pi$ phase shift introduced by the dielectric coating-dielectric substrate interface in this case. Thus, Eq. (2) gives $n_1 d_1 = \lambda/4$ for $m = 0$, defining a quarter-wave coating, in which the reflectance is minimized by destructive interference in the coating layer. Moreover, Eq. (6) imposes a constraint on the values of $n_2$ and $k_2$, since $n_1 > 1$. The effect of this constraint is that, according to Yoshida [29], "zero reflection cannot be achieved with a single dielectric film coating for metals" with a large extinction coefficient $k \gtrsim 3$, such as silver and gold. However, in this case, zero reflection can be obtained by allowing the dielectric coating to be slightly absorbing.
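These two limiting statements are easy to verify numerically. The sketch below (our illustration, not the authors' code) evaluates Eq. (1) directly: a lossless quarter-wave layer with n1 = n2^{1/2} nulls the reflection from a dielectric substrate, while for gold at 800 nm (n2 ≈ 0.15, k2 ≈ 4.91, the values used later in the text) the right-hand side of Eq. (6) is negative, so no lossless single-layer coating can do the same.

```python
import cmath

def reflectance(n1, n2, d1, lam):
    """Eq. (1): normal-incidence reflectance of an air / coating(n1, d1) / substrate(n2) stack."""
    r01 = (1 - n1) / (1 + n1)            # air-coating Fresnel coefficient
    r12 = (n1 - n2) / (n1 + n2)          # coating-substrate coefficient (n2 may be complex)
    phase = cmath.exp(2j * (2 * cmath.pi * n1 * d1 / lam))   # exp(2 i delta_1)
    return abs((r01 + r12 * phase) / (1 + r01 * r12 * phase)) ** 2

lam = 800e-9                             # wavelength (m)

# Lossless dielectric substrate: a quarter-wave coating with n1 = sqrt(n2) gives R = 0.
n2 = 4.0
n1 = n2 ** 0.5
R_qw = reflectance(n1, n2, lam / (4 * n1), lam)   # essentially zero

# Gold at 800 nm: Eq. (6) requires n1^2 = n2 + k2^2/(n2 - 1), which is negative here,
# i.e. there is no real coating index that nulls the reflection without absorption.
n2_au, k2_au = 0.15, 4.91
n1_squared = n2_au + k2_au ** 2 / (n2_au - 1)

print(R_qw, n1_squared)
```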
In the following, we experimentally confirm that the reflection from gold, and hence from any good metal, can be suppressed by using a single layer of sputtered amorphous silicon (α-Si). In the visible/near-infrared range, a thin α-Si film acts as a dielectric with a relatively large imaginary part of the refractive index, since the electronic states are not characterized by well-defined momentum, enhancing the radiation absorption in α-Si as compared to its crystalline form [31]. On the other hand, low absorption in the terahertz range (λ ∼ 100 µm), and the small thickness compared to the radiation wavelength, make these layers practically invisible, thus maintaining the high-reflectivity characteristics of gold in this range.
We first used the transfer matrix method (TMM) [32] to assess the feasibility of this approach. We simulated the Air/α-Si/Au/Si(substrate)/Air multilayer, where the outermost Air layers were semi-infinite and the substrate was 500 µm thick. The radiation was assumed to be monochromatic with a wavelength of 800 nm, the typical center wavelength of a Ti:sapphire laser, at normal incidence to the multilayer stack. We used the refractive indices n_Air = 1, n_α-Si ≈ 3.90 + 0.11j, n_Au ≈ 0.15 + 4.91j, and n_Si ≈ 3.681 + 0.005j [33].
In Fig. 2 we plot the reflectance of the stack as a function of the α-Si thickness, for two gold layers of different thicknesses. For thin gold (20 nm), part of the radiation can be transmitted into the substrate, and ≈ 30 nm of amorphous silicon on top of it can efficiently suppress the reflectivity. For thick gold (100 nm), enough to prevent any transmission, a thicker amorphous silicon layer (≈ 230 nm) is needed to achieve the same suppression.

In the same figure, we also plot the reflectivity (dashed lines) of a fictitious dielectric layer with zero imaginary part of the refractive index and its magnitude equal to that of the amorphous silicon, representing a conventional dielectric with negligible losses. It is evident that such a layer on top of a 100 nm thick gold layer cannot efficiently suppress the reflectivity, demonstrating that the cumulative losses after multiple reflections are necessary to realize the anti-reflection configuration.
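To make the TMM calculation concrete, the following Python sketch (our illustration, not the authors' code) implements the standard characteristic-matrix method at normal incidence and scans the α-Si thickness for the 20 nm Au case. For simplicity it treats the thick, weakly absorbing Si substrate as semi-infinite, an assumption that matters little here because radiation reaching the back face of a 500 µm absorbing substrate is negligible. With the refractive indices quoted above, the reflectance minimum should land near 30 nm of α-Si, as in Fig. 2.

```python
import numpy as np

def reflectance(layers, n_exit, lam, n_in=1.0):
    """Normal-incidence reflectance of a thin-film stack via the characteristic-matrix TMM.

    layers: list of (complex refractive index, thickness) pairs, entered from the n_in side.
    All lengths in the same units as lam.
    """
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2 * np.pi * n * d / lam
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    (m11, m12), (m21, m22) = M
    num = n_in * m11 + n_in * n_exit * m12 - m21 - n_exit * m22
    den = n_in * m11 + n_in * n_exit * m12 + m21 + n_exit * m22
    return abs(num / den) ** 2

lam = 800.0                      # wavelength (nm)
n_asi = 3.90 + 0.11j             # amorphous Si at 800 nm (values from the text)
n_au = 0.15 + 4.91j              # gold at 800 nm
n_si = 3.681 + 0.005j            # Si substrate, treated as semi-infinite (simplification)

thicknesses = np.arange(0.0, 80.0, 0.5)           # alpha-Si thickness scan (nm)
R = [reflectance([(n_asi, t), (n_au, 20.0)], n_si, lam) for t in thicknesses]
t_opt = float(thicknesses[int(np.argmin(R))])
print(f"optimal alpha-Si thickness ~ {t_opt:.1f} nm, R_min = {min(R):.3f}")
```

Note that with no layers at all the function reduces to the bare air/Si Fresnel reflectance, which is a convenient sanity check of the matrix bookkeeping.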
We note that a single α-Si anti-reflection coating remains efficient over a broad range of incidence angles. We used TMM to check that, when varying the incidence angle from 0 to 37.5 degrees, the optimal thickness for the α-Si layer varies by less than 2%. Furthermore, at the optimal thickness, the 800 nm reflectance remains below 0.05 (an acceptable value for the coating to properly work), at angles of incidence as high as 50 degrees. This can be understood as a consequence of the large refractive index of silicon, which causes the electromagnetic wave to strongly refract when entering the AR layer. As a result, the optical path in the silicon layer noticeably increases only at very large angles of incidence.
Experimental and numerical verification
In Fig. 3, we plot the calculated and the measured reflectance for several Air/α-Si(t)/Au(20 nm)/Si(substrate)/Air multilayers, as a function of t, both for 800 nm and for the terahertz radiation impinging on the sample at 10 degrees incidence. The 800 nm radiation was produced by a Ti:sapphire-based regenerative amplifier (Coherent Legend) in 40 fs pulses, with a 30 nm FWHM bandwidth around the 800 nm center wavelength, as measured by a grating spectrometer. The reflectance at 800 nm was measured directly using a photodiode. The signal was scaled using the known reflectance value of a commercial gold mirror. The reflectance R of the terahertz radiation, generated by optical rectification in an OH1 organic crystal [34], was determined from the measured transmittance T, using R = 1 − A − T, where the absorption A was calculated based on the TMM. The transmittance was taken to be proportional to the square of the normalized amplitude of the maximum electro-optical sampling signal in a 100 µm thick, (110)-cut GaP crystal.
The excellent agreement between the data and the calculations directly demonstrates the functionality of our design in suppressing the near-infrared reflectivity, with the appropriate thickness of α-Si, 27 nm and 127 nm in the studied case of 800 nm radiation. We emphasize that our experiment demonstrates efficient suppression of the broadband 40 fs pulses of 800 nm radiation. This is not surprising, considering that the bandwidth-to-carrier ratio is less than 4%. This result confirms the suitability of our design for conventional ultrafast experiments. On the other hand, the terahertz reflectivity is unchanged by the α-Si layer, suggesting that the terahertz near-field enhancement is also likely unaffected by the silicon layer. However, since the measured reflectivity is a far-field property, and near-field properties are in general very sensitive to interface effects, one needs to perform a more detailed investigation of the possible effects of the anti-reflection coating on the terahertz radiation in the near-field regime. To analyze the near-field effects of the coating, we have performed finite-element numerical calculations using the COMSOL Multiphysics® software [35]. In Fig. 4(a), we plot the electric field enhancement at a frequency of 1 THz, for a set of two infinitely long, 65 µm wide, 20 nm thick gold plates separated by a gap of 2 µm. The terahertz electric field is applied along the x-axis in this figure. The field enhancement is computed by dividing the electric field value in the gap region by the electric field value at the air/silicon interface in the absence of the gold plates. In Fig. 4(b), we plot the time dependence of the x-component of the terahertz electric field in the middle of the gap (x = 0), at z = 10 nm above the silicon substrate. The simulation used the experimental time profile of the impinging terahertz field, measured by electro-optical sampling, with a peak value of ≈ 300 kV/cm.
For the bare silicon-air interface, the terahertz field is reduced to approximately half of its free-space magnitude, consistent with the relative amplitude of the transmitted wave at an air/silicon interface computed as t = 2/(n + 1), with n ≈ 3.4 [36]. The presence of the gold plates introduces a slight temporal shift, and enhances the amplitude of the terahertz field to more than 500 kV/cm, consistent with the ≈ 4 times enhancement observed in the frequency domain simulations of Fig. 4(a) [37]. Most importantly, the addition of the α-Si layer on top of the gold plates does not noticeably affect the field, confirming our intuitive conclusions based on the negligible effect of the α-Si layer on the terahertz reflectivity.
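The factor-of-two reduction at the bare interface follows directly from the normal-incidence Fresnel amplitude transmission quoted above; a one-function check (our illustration):

```python
def transmitted_amplitude(n):
    """Normal-incidence Fresnel amplitude transmission t = 2 / (n + 1) from air into index n."""
    return 2.0 / (n + 1.0)

# Air -> silicon at ~1 THz (n ~ 3.4): the field just inside the substrate
# is reduced to roughly half of its free-space amplitude.
print(transmitted_amplitude(3.4))
```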
To test the functionality of the AR coating in a practical configuration, we measured the polar magneto-optical Kerr effect (MOKE) [38] from a 3 nm thick CoNi film patterned into a 1 µm wide, 100 µm long wire. The CoNi stack is formed by a Ta(2)|Cu(2)|[Co(0.2)|Ni(1)]×3|Ni(0.5)|Ta(3) multilayer (thicknesses in nm) with perpendicular magnetic anisotropy. The wire is located in the 2 µm wide gap between two 100 µm long and 65 µm wide gold plates, coated with 27 nm of α-Si. By analyzing the magneto-optic response, we can unambiguously identify the signal coming from the embedded CoNi wire, with no contribution from the non-magnetic electrodes. As the MOKE signal typically results in a tiny intensity variation on top of a large background, this specific system implementation also demonstrates the general suitability of our design for the detection of small effects other than magneto-optical ones.
The polar MOKE loops from the embedded wire are plotted in the main panel of Fig. 5 for two different wavelengths of the probing radiation, 550 nm and 800 nm. At 800 nm, the AR coating optimized for this wavelength is expected to completely suppress the reflectivity of the gold electrodes, while at 550 nm, a substantial reflection from the metallic pads is expected. Using radiation of different wavelengths is geometrically equivalent to studying samples with different AR coating thickness, with the advantage that the very same sample can be used and the wavelength can be tuned very accurately.
The plotted MOKE signals reflect the change in the polarization ellipticity of the probing light, determined with a suitable polarization analyzing system. In these measurements, we utilized a polarization modulation technique [39] to provide the sensitivity necessary for testing the effectiveness of the proposed AR coating. In this setup, the polarization of the incident light was modulated at the frequency ω, while both the total measured intensity I 0 and its variation I ω at the modulation frequency were simultaneously recorded. The polarization ellipticity was determined from I ω normalized by the total intensity I 0 reflected from the sample. While the variations of I ω were affected only by the magnetic structure, the magnitude of I 0 is determined by the whole probed area, including the gold plates.
The data plotted with symbols in Fig. 5 clearly show that the AR coating significantly enhances the signal-to-background ratio, resulting in a more than ten-fold increase of the relative amplitude of the loop at the design wavelength. We have checked that the increase in the relative signal is not caused by the difference between the magneto-optical constants of CoNi at the 800 and 550 nm wavelengths, by measuring the MOKE signal from a 100 µm × 100 µm CoNi square of 3 nm thickness, with no gold electrodes surrounding the structure. The resulting hysteresis loops are shown with solid curves in the same Fig. 5.
To quantitatively analyze our observations, we note that if the areas and the reflectivities of the different reflecting regions are known, one can predict the ratio of the total measured Kerr ellipticity $\epsilon_K$ between the wire-shaped sample and a CoNi square larger than the probing spot, according to [24]

$$\frac{\epsilon_{K,\mathrm{wire}}}{\epsilon_{K,\mathrm{square}}} = \frac{A_{\mathrm{wire}}}{A_{\mathrm{wire}} + A_{\mathrm{Si}} \dfrac{R_{\mathrm{Si}}}{R_{\mathrm{CoNi}}} + A_{\mathrm{CoatedAu}} \dfrac{R_{\mathrm{CoatedAu}}}{R_{\mathrm{CoNi}}}}, \tag{7}$$

where $A_m$ is the total area occupied by a certain material $m$ illuminated by the laser beam, and $R_m$ is the corresponding reflectivity, which can be measured experimentally or calculated using the Fresnel equations. Table 1 summarizes the relationship between the ellipticity of the wire and the ellipticity of the square, assuming that the probing light is focused into a uniform circular spot with diameter φ = 75 µm, and that the various probed areas are A_wire = A_Si ≈ φh (h = 1 µm) and A_CoatedAu ≈ π(φ/2)² − 2φh. The area occupied by the wire is therefore about 2% of the total area. Indeed, for the wavelength of 550 nm, at which the Au reflectivity is not suppressed by the AR coating, the ellipticity signal for the wire is about 1.6% of that for the large square, both theoretically and experimentally. In contrast, for the 800 nm wavelength, we expect and observe an increase of this ratio by an order of magnitude. The deviation between the theoretically expected value (24%) and the experimental value (19%) can be explained by a combination of a few-nm uncertainty in the deposited material thickness, deviations of the optical properties of different layers from their nominal values, and the effects of the nanowire and gold electrode edges, whose scattering properties were not taken into account in the calculations reported in Table 1.
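As an illustration (our own sketch, not from the paper), Eq. (7) combined with the spot and area estimates above reproduces the theoretical ratios of Table 1:

```python
import math

def ellipticity_ratio(r_coni, r_si, r_coated_au, phi=75.0, h=1.0):
    """Eq. (7): wire/square Kerr-ellipticity ratio for a uniform circular probing spot.

    phi: spot diameter (um); h: wire width = exposed-Si width (um).
    """
    a_wire = phi * h                                 # CoNi wire area inside the spot
    a_si = phi * h                                   # exposed silicon beside the wire
    a_au = math.pi * (phi / 2) ** 2 - 2 * phi * h    # coated gold plates
    return a_wire / (a_wire + a_si * r_si / r_coni + a_au * r_coated_au / r_coni)

# Theoretical reflectivities from Table 1
print(f"550 nm: {ellipticity_ratio(0.49, 0.40, 0.50):.3f}")   # -> 0.017
print(f"800 nm: {ellipticity_ratio(0.47, 0.36, 0.020):.2f}")  # -> 0.24
```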
Conclusion
In summary, we have designed and experimentally demonstrated an anti-reflection coating for highly reflective metals, which are typically utilized in the fabrication of terahertz meta-materials. The anti-reflection coating can efficiently suppress the reflection of light in the visible and infrared ranges, typically used in the studies of ultrafast phenomena by pump-probe techniques. At the same time, the coating does not perturb the propagation of terahertz radiation, and does not affect the near-field enhancement in the meta-materials. Our results are expected to open a path for time-resolved experiments aimed at probing the ultrafast dynamics driven in nano-scale structures by strong terahertz fields, by using table-top femtosecond near-infrared laser sources.
28. G. Hass et al., J. Opt. Soc. Am. 46(1), 31-35 (1956).
29. S. Yoshida, "Antireflection coatings on metals for selective solar absorbers," Thin Solid Films 56(3), 321-329 (1979).
30. K. C. Park, "The Extreme Values of Reflectivity and the Conditions for Zero Reflection from Thin Dielectric Films on Metal," Appl. Opt. 3(7), 877-881 (1964).
31. S. Adachi and H. Mori, "Optical properties of fully amorphous silicon," Phys. Rev. B 62(15), 10158 (2000).
32. M. Born and E. Wolf, Principles of Optics (Cambridge University, 1999).
33. M. N. Polyanskiy, "Refractive index database," https://refractiveindex.info.
34. F. D. J. Brunner, O.-P. Kwon, S.-J. Kwon, M. Jazbinšek, A. Schneider, and P. Günter, "A hydrogen-bonded organic nonlinear optical crystal for high-efficiency terahertz generation and detection," Opt. Express 16(21), 16496-16508 (2008).
Fig. 2. Solid curves: calculated reflectance at the wavelength of 800 nm for the Air/α-Si/Au/Si(substrate)/Air multilayer, as a function of the α-Si thickness, for two different Au thicknesses at normal incidence. Dashed curve: calculated reflectance at a wavelength of 800 nm for an ideal dielectric on top of a 100 nm Au layer, characterized by n = 3.9 and zero imaginary part of the refractive index.

Fig. 3. Experimental (symbols) and calculated (line) reflectance for an α-Si/Au/Si(substrate) sample based on a 20 nm thick Au layer, at wavelengths of 800 nm (magenta) and 300 µm (black), the latter corresponding to the radiation frequency of 1 THz.

Fig. 4. (a) Frequency-domain finite element analysis of the enhancement map for a monochromatic electromagnetic field with frequency f = 1 THz incident on two gold plates separated by a gap. (b) Time-domain finite element simulations of the x-component of the electric field for a single-cycle (broadband) terahertz field at the center of the gap, without Au plates (solid black curve), with Au plates (solid gray curve), and with α-Si/Au plates (filled gray dots). In all the calculations, the electric field of the incident radiation is polarized along the x-axis, and the propagation direction is along the z-axis, normal to the sample plane.

Fig. 5. Polar Kerr ellipticity as a function of the magnetic field applied orthogonal to the sample plane, for the 800 nm and 550 nm wavelengths of the probing light. Symbols: average of 25 hysteresis loops for a 1 µm wide, 100 µm long CoNi wire. Solid curves: average of 4 hysteresis loops for a 100 µm × 100 µm CoNi square, at the same wavelengths. Inset: zoom in on the 550 nm wavelength hysteresis loop for the CoNi wire.
Table 1. Summary of the ellipticity ratio between a CoNi wire and a CoNi square, calculated according to Eq. (7).

                  R_CoNi   R_Si   R_CoatedAu   ε_K,wire / ε_K,square
550 nm (theor.)   0.49     0.40   0.50         0.017
550 nm (exp.)     0.46     0.40   0.50         0.016
800 nm (theor.)   0.47     0.36   0.020        0.24
800 nm (exp.)     0.49     0.34   0.031        0.19
1. M. C. Hoffmann and J. A. Fülöp, "Intense ultrashort terahertz pulses: generation and applications," J. Phys. D: Appl. Phys. 44(8), 083001 (2011).
2. M. Trigo, Y. M. Sheu, D. A. Arms, J. Chen, S. Ghimire, R. S. Goldman, E. Landahl, R. Merlin, E. Peterson, M. Reason, and D. A. Reis, "Probing Unfolded Acoustic Phonons with X Rays," Phys. Rev. Lett. 101(2), 025505 (2008).
3. M. Liu, H. Y. Hwang, H. Tao, A. C. Strikwerda, K. Fan, G. R. Keiser, A. J. Sternbach, K. G. West, S. Kittiwatanakul, J. Lu, S. A. Wolf, F. G. Omenetto, X. Zhang, K. A. Nelson, and R. D. Averitt, "Terahertz-field-induced insulator-to-metal transition in vanadium dioxide metamaterial," Nature 487(7407), 345-348 (2012).
4. D. Daranciang, M. J. Highland, H. Wen, S. M. Young, N. C. Brandt, H. Y. Hwang, M. Vattilana, M. Nicoul, F. Quirin, J. Goodfellow, T. Qi, I. Grinberg, D. M. Fritz, M. Cammarata, D. Zhu, H. T. Lemke, D. A. Walko, E. M. Dufresne, Y. Li, J. Larsson, D. A. Reis, K. Sokolowski-Tinten, K. A. Nelson, A. M. Rappe, P. H. Fuoss, G. B. Stephenson, and A. M. Lindenberg, "Ultrafast Photovoltaic Response in Ferroelectric Nanolayers," Phys. Rev. Lett. 108(8), 087601 (2012).
5. R. Mankowsky, A. Subedi, M. Först, S. O. Mariager, M. Chollet, H. T. Lemke, J. S. Robinson, J. M. Glownia, M. P. Minitti, A. Frano, M. Fechner, N. A. Spaldin, T. Loew, B. Keimer, A. Georges, and A. Cavalleri, "Nonlinear lattice dynamics as a basis for enhanced superconductivity in YBa2Cu3O6.5," Nature 516(7529), 71-73 (2014).
6. U. Staub, R. A. de Souza, P. Beaud, E. Möhr-Vorobeva, G. Ingold, A. Caviezel, V. Scagnoli, B. Delley, W. F. Schlotter, J. J. Turner, O. Krupin, W.-S. Lee, Y.-D. Chuang, L. Patthey, R. G. Moore, D. Lu, M. Yi, P. S. Kirchmann, M. Trigo, P. Denes, D. Doering, Z. Hussain, Z. X. Shen, D. Prabhakaran, A. T. Boothroyd, and S. L. Johnson, "Persistence of magnetic order in a highly excited Cu2+ state in CuO," Phys. Rev. B 89(22), 220401(R) (2014).
A time-dependent order parameter for ultrafast photoinduced phase transitions. P Beaud, A Caviezel, S O Mariager, L Rettig, G Ingold, C Dornes, S.-W Huang, J A Johnson, M Radovic, T Huber, T Kubacka, A Ferrer, H T Lemke, M Chollet, D Zhu, J M Glownia, M Sikorski, A Robert, H Wadati, M Nakamura, M Kawasaki, Y Tokura, S L Johnson, U Staub, Nat. Mater. 1310P. Beaud, A. Caviezel, S. O. Mariager, L. Rettig, G. Ingold, C. Dornes, S.-W. Huang, J. A. Johnson, M. Radovic, T. Huber, T. Kubacka, A. Ferrer, H. T. Lemke, M. Chollet, D. Zhu, J. M. Glownia, M. Sikorski, A. Robert, H. Wadati, M. Nakamura, M. Kawasaki, Y. Tokura, S. L. Johnson, and U. Staub, "A time-dependent order parameter for ultrafast photoinduced phase transitions," Nat. Mater. 13(10), 923-927 (2014).
Large-Amplitude Spin Dynamics Driven by a THz Pulse in Resonance with an Electromagnon. T Kubacka, J A Johnson, M C Hoffmann, C Vicario, S Jong, P Beaud, S Grübel, S.-W Huang, L Huber, L Patthey, Y.-D Chuang, J J Turner, G L Dakovski, W.-S Lee, M P Minitti, W Schlotter, R G Moore, C P Hauri, S M Koohpayeh, V Scagnoli, G Ingold, S L Johnson, U Staub, Science. 3436177T. Kubacka, J. A. Johnson, M. C. Hoffmann, C. Vicario, S. de Jong, P. Beaud, S. Grübel, S.-W. Huang, L. Huber, L. Patthey, Y.-D. Chuang, J. J. Turner, G. L. Dakovski, W.-S. Lee, M. P. Minitti, W. Schlotter, R. G. Moore, C. P. Hauri, S. M. Koohpayeh, V. Scagnoli, G. Ingold, S. L. Johnson, and U. Staub, "Large-Amplitude Spin Dynamics Driven by a THz Pulse in Resonance with an Electromagnon," Science 343(6177), 1333-1336 (2014).
Enhanced coherent oscillations in the superconducting state of underdoped YBa 2 Cu 3 O 6+x induced via ultrafast terahertz excitation. G L Dakovski, W.-S Lee, D G Hawthorn, N Garner, D Bonn, W Hardy, R Liang, M C Hoffmann, J J Turner, Phys. Rev. B. 9122220506G. L. Dakovski, W.-S. Lee, D. G. Hawthorn, N. Garner, D. Bonn, W. Hardy, R. Liang, M. C. Hoffmann, and J. J. Turner, "Enhanced coherent oscillations in the superconducting state of underdoped YBa 2 Cu 3 O 6+x induced via ultrafast terahertz excitation," Phys. Rev. B 91(22), 220506(R) (2015).
THz-Driven Ultrafast Spin-Lattice Scattering in Amorphous Metallic Ferromagnets. S Bonetti, M C Hoffmann, M.-J Sher, Z Chen, S.-H Yang, M G Samant, S S P Parkin, H A Dürr, Phys. Rev. Lett. 117887205S. Bonetti, M. C. Hoffmann, M.-J. Sher, Z. Chen, S.-H. Yang, M. G. Samant, S. S. P. Parkin, and H. A. Dürr, "THz-Driven Ultrafast Spin-Lattice Scattering in Amorphous Metallic Ferromagnets," Phys. Rev. Lett. 117(8), 087205 (2016).
Ultrafast energy-and momentum-resolved dynamics of magnetic correlations in the photo-doped Mott insulator Sr 2 IrO 4. M P M Dean, Y Cao, X Liu, S Wall, D Zhu, R Mankowsky, V Thampy, X M Chen, J G Vale, D Casa, J Kim, A H Said, P Juhas, R Alonso-Mori, J M Glownia, A Robert, J Robinson, M Sikorski, S Song, M Kozina, H Lemke, L Patthey, S Owada, T Katayama, M Yabashi, Y Tanaka, T Togashi, J Liu, C Serrao, B J Kim, L Huber, C.-L Chang, D F Mcmorrow, M Först, J P Hill, Nat. Mater. 156M. P. M. Dean, Y. Cao, X. Liu, S. Wall, D. Zhu, R. Mankowsky, V. Thampy, X. M. Chen, J. G. Vale, D. Casa, J. Kim, A. H. Said, P. Juhas, R. Alonso-Mori, J. M. Glownia, A. Robert, J. Robinson, M. Sikorski, S. Song, M. Kozina, H. Lemke, L. Patthey, S. Owada, T. Katayama, M. Yabashi, Y. Tanaka, T. Togashi, J. Liu, C. Rayan Serrao, B. J. Kim, L. Huber, C.-L. Chang, D. F. McMorrow, M. Först, and J. P. Hill, "Ultrafast energy-and momentum-resolved dynamics of magnetic correlations in the photo-doped Mott insulator Sr 2 IrO 4 ," Nat. Mater. 15(6), 601-605 (2016).
Control of two-phonon correlations and the mechanism of high-wavevector phonon generation by ultrafast light pulses. T Henighan, M Trigo, M Chollet, J N Clark, S Fahy, J M Glownia, M P Jiang, M Kozina, H Liu, S Song, D Zhu, D A Reis, Phys. Rev. B. 94220302T. Henighan, M. Trigo, M. Chollet, J. N. Clark, S. Fahy, J. M. Glownia, M. P. Jiang, M. Kozina, H. Liu, S. Song, D. Zhu, and D. A. Reis, "Control of two-phonon correlations and the mechanism of high-wavevector phonon generation by ultrafast light pulses," Phys. Rev. B 94(2), 020302(R) (2016).
Ultrafast x-ray diffraction of a ferroelectric soft mode driven by broadband terahertz pulses. S Grübel, J A Johnson, P Beaud, C Dornes, A Ferrer, V Haborets, L Huber, T Huber, A Kohutych, T Kubacka, M Kubli, S O Mariager, J Rittmann, J I Saari, Y Vysochanskii, G Ingold, S L Johnson, arXiv:1602.05435v1S. Grübel, J. A. Johnson, P. Beaud, C. Dornes, A. Ferrer, V. Haborets, L. Huber, T. Huber, A. Kohutych, T. Kubacka, M. Kubli, S. O. Mariager, J. Rittmann, J. I. Saari, Y. Vysochanskii, G. Ingold, and S. L. Johnson, "Ultrafast x-ray diffraction of a ferroelectric soft mode driven by broadband terahertz pulses," arXiv:1602.05435v1 (2016).
. F Chen, Y Zhu, S Liu, Y Qi, H Y Hwang, N C Brandt, J Lu, F Quirin, H Enquist, P Zalden, T Hu, J Goodfellow, M.-J Sher, M C Hoffmann, D Zhu, H Lemke, J Glownia, M Chollet, A R Damodaran, J Park, Z Cai, I W , F. Chen, Y. Zhu, S. Liu, Y. Qi, H. Y. Hwang, N. C. Brandt, J. Lu, F. Quirin, H. Enquist, P. Zalden, T. Hu, J. Goodfellow, M.-J. Sher, M. C. Hoffmann, D. Zhu, H. Lemke, J. Glownia, M. Chollet, A. R. Damodaran, J. Park, Z. Cai, I. W.
Ultrafast terahertz-field-driven ionic response in ferroelectric BaTiO 3. M J Jung, D A Highland, J W Walko, P G Freeland, A Evans, J Vailionis, K A Larsson, A M Nelson, K Rappe, L W Sokolowski-Tinten, H Martin, A M Wen, Lindenberg, Phys. Rev. B. 9418180104Jung, M. J. Highland, D. A. Walko, J. W. Freeland, P. G. Evans, A. Vailionis, J. Larsson, K. A. Nelson, A. M. Rappe, K. Sokolowski-Tinten, L. W. Martin, H. Wen, and A. M. Lindenberg, "Ultrafast terahertz-field-driven ionic response in ferroelectric BaTiO 3 ," Phys. Rev. B 94(18), 180104(R) (2016).
Active terahertz metamaterial devices. H.-T Chen, W J Padilla, J M O Zide, A C Gossard, A J Taylor, R D Averitt, Nature. 4447119H.-T. Chen, W. J. Padilla, J. M. O. Zide, A. C. Gossard, A. J. Taylor, and R. D. Averitt, "Active terahertz metamaterial devices," Nature 444(7119), 597-600 (2006).
Extremely large extinction efficiency and field enhancement in terahertz resonant dipole nanoantennas. L Razzari, A Toma, M Shalaby, M Clerici, R Zaccaria, C Liberale, S Marras, I A I Al-Naib, G Das, F De Angelis, M Peccianti, A Falqui, T Ozaki, R Morandotti, E Di Fabrizio, Opt. Express. 1927L. Razzari, A. Toma, M. Shalaby, M. Clerici, R. Proietti Zaccaria, C. Liberale, S. Marras, I. A. I. Al-Naib, G. Das, F. De Angelis, M. Peccianti, A. Falqui, T. Ozaki, R. Morandotti, and E. Di Fabrizio, "Extremely large extinction efficiency and field enhancement in terahertz resonant dipole nanoantennas," Opt. Express 19(27), 26088-26094 (2011).
Time-resolved imaging of near-fields in THz antennas and direct quantitative measurement of field enhancements. C A Werley, K Fan, A C Strikwerda, S M Teo, X Zhang, R D Averitt, K A Nelson, Opt. Express. 208C. A. Werley, K. Fan, A. C. Strikwerda, S. M. Teo, X. Zhang, R. D. Averitt, and K. A. Nelson, "Time-resolved imaging of near-fields in THz antennas and direct quantitative measurement of field enhancements," Opt. Express 20(8), 8551-8567 (2012).
Terahertz radiation-induced sub-cycle field electron emission across a split-gap dipole antenna. J Zhang, X Zhao, K Fan, X Wang, G.-F Zhang, K Geng, X Zhang, R D Averitt, Appl. Phys. Lett. 10723231101J. Zhang, X. Zhao, K. Fan, X. Wang, G.-F. Zhang, K. Geng, X. Zhang, and R. D. Averitt, "Terahertz radiation-induced sub-cycle field electron emission across a split-gap dipole antenna," Appl. Phys. Lett. 107(23), 231101 (2015).
THz near-field enhancement by means of isolated dipolar antennas: the effect of finite sample size. M Savoini, S Grübel, S Bagiante, H Sigg, T Feurer, P Beaud, S L Johnson, Opt. Express. 245M. Savoini, S. Grübel, S. Bagiante, H. Sigg, T. Feurer, P. Beaud, and S. L. Johnson, "THz near-field enhancement by means of isolated dipolar antennas: the effect of finite sample size," Opt. Express 24(5), 4552-4562 (2016).
Local Terahertz Field Enhancement for Time-Resolved X-ray Diffraction. M Kozina, M Pancaldi, C Bernhard, T Van Driel, J M Glownia, P Marsik, M Radovic, C A F Vaz, D Zhu, S Bonetti, U Staub, M C Hoffmann, Appl. Phys. Lett. 110881106M. Kozina, M. Pancaldi, C. Bernhard, T. van Driel, J.M. Glownia, P. Marsik, M. Radovic, C. A. F. Vaz, D. Zhu, S. Bonetti, U. Staub, and M.C. Hoffmann, "Local Terahertz Field Enhancement for Time-Resolved X-ray Diffraction," Appl. Phys. Lett. 110(8), 081106 (2017).
Modeling magneto-optical thin film media for optical data storage. K Balasubramanian, A S Marathay, H A Macleod, Thin Solid Films. 164K. Balasubramanian, A. S. Marathay, and H. A. Macleod, "Modeling magneto-optical thin film media for optical data storage," Thin Solid Films 164, 391-403 (1988).
Quadrilayer magneto-optic enhancement with zero Kerr ellipticity. R Atkinson, I W Salter, J Xu, J. Magn. Magn. Mater. 1023R. Atkinson, I. W. Salter, and J. Xu, "Quadrilayer magneto-optic enhancement with zero Kerr ellipticity," J. Magn. Magn. Mater. 102(3), 357-364 (1991).
Cavity enhancement of the magneto-optic Kerr effect for optical studies of magnetic nanostructures. N Qureshi, H Schmidt, A R Hawkins, Appl. Phys. Lett. 853431N. Qureshi, H. Schmidt, and A. R. Hawkins, "Cavity enhancement of the magneto-optic Kerr effect for optical studies of magnetic nanostructures," Appl. Phys. Lett. 85(3), 431 (2004).
Cavity-Enhanced Magnetooptical Observation of Magnetization Reversal in Individual Single-Domain Nanomagnets. N Qureshi, S Wang, M A Lowther, A R Hawkins, S Kwon, A Liddle, J Bokor, H Schmidt, Nano Lett. 57N. Qureshi, S. Wang, M. A. Lowther, A. R. Hawkins, S. Kwon, A. Liddle, J. Bokor, and H. Schmidt, "Cavity-Enhanced Magnetooptical Observation of Magnetization Reversal in Individual Single-Domain Nanomagnets," Nano Lett. 5(7), 1413-1417 (2005).
Magneto-Optical Observation of Picosecond Dynamics of Single Nanomagnets. A Barman, S Wang, J D Maas, A R Hawkins, S Kwon, A Liddle, J Bokor, H Schmidt, Nano Lett. 612A. Barman, S. Wang, J. D. Maas, A. R. Hawkins, S. Kwon, A. Liddle, J. Bokor, and H. Schmidt, "Magneto-Optical Observation of Picosecond Dynamics of Single Nanomagnets," Nano Lett. 6(12), 2939-2944 (2006).
Optimization of nano-magneto-optic sensitivity using dual dielectric layer enhancement. S Wang, A Barman, H Schmidt, J D Maas, A R Hawkins, S Kwon, B Harteneck, S Cabrini, J Bokor, Appl. Phys. Lett. 9025252504S. Wang, A. Barman, H. Schmidt, J. D. Maas, A. R. Hawkins, S. Kwon, B. Harteneck, S. Cabrini, and J. Bokor, "Optimization of nano-magneto-optic sensitivity using dual dielectric layer enhancement," Appl. Phys. Lett. 90(25), 252504 (2007).
Nanoantennas for visible and infrared radiation. P Biagioni, J.-S Huang, B Hecht, Rep. Prog. Phys. 75224402P. Biagioni, J.-S. Huang, and B. Hecht, "Nanoantennas for visible and infrared radiation," Rep. Prog. Phys. 75(2), 024402 (2012).
Mirror Coatings for Low Visible and High Infrared Reflectance. G Hass, H H Schroeder, A F Turner, J. Opt. G. Hass, H. H. Schroeder, and A. F. Turner, "Mirror Coatings for Low Visible and High Infrared Reflectance," J. Opt.
DOI: 10.1016/j.physa.2015.05.084
MaxEnt, second variation, and generalized statistics

A. Plastino and M. C. Rocca
La Plata National University and Argentina's National Research Council (IFLP-CCT-CONICET), C.C. 727, 1900 La Plata, Argentina

arXiv:1503.07580v2 [cond-mat.stat-mech], 30 Apr 2015 (v2: May 4, 2015)
Keywords: MaxEnt; second variation; generalized statistics
There are two kinds of Tsallis-probability distributions: heavy tail ones and compact support distributions. We show here, by appeal to functional analysis' tools, that for lower bound Hamiltonians, the second variation's analysis of the entropic functional guarantees that the heavy tail q-distribution constitute a maximum of Tsallis' entropy. On the other hand, in the compact support instance, a case by case analysis is necessary in order to tackle the issue.
Introduction
For more than 25 years, an important topic in statistical mechanics has revolved around the notion of generalized q-statistics, pioneered by Tsallis [1]. It has been amply demonstrated that, on many occasions, the celebrated Boltzmann-Gibbs logarithmic entropy does not yield a correct description of the system under scrutiny [2]. Other entropic forms, called q-entropies, produce a much better performance [2]. One may cite a large number of such instances, for example non-ergodic systems exhibiting a complex dynamics [2]. Tsallis' non-extensive statistical mechanics has been employed to fruitfully discuss phenomena in variegated fields. One may mention, for instance, high-energy physics [3]-[4], spin-glasses [5], cold atoms in optical lattices [6], trapped ions [7], anomalous diffusion [8], [9], dusty plasmas [10], low-dimensional dissipative and conservative maps in dynamical systems [11], [12], [13], turbulent flows [14], Levy flights [15], the QCD-based Nambu-Jona-Lasinio model of a many-body field theory [16], etc. Notions related to q-statistical mechanics have been found useful not only in physics but also in chemistry, biology, mathematics, economics, and informatics [17], [18], [19].
In this work we revisit the subject by appeal, in a classical MaxEnt phase-space framework, to the second variation of functionals. We find that such an analysis guarantees a maximum of the Tsallis entropy only in the case of heavy-tail distributions. Our present treatment makes it advisable, from a more general MaxEnt standpoint, to always look at the second functional variation. We begin our discussion by recalling the concept of second variation.
Second variation of a functional
The essential concept that we need here is that of increment h of a functional. Note that the general theory of Variational Calculus has been developed in a Banach Space (BS) [20]. Particularly important BS instantiations are, of course, Hilbert's space and classical phase-space. The MaxEnt approach in Banach space requires a first variation that should vanish and a second one that ascertains the nature of the pertinent extremum. This second variation is not usually encountered in MaxEnt practice, since one believes that the entropy possesses a global maximum. This second functional variation is the protagonist of the present endeavor. The approach is described in detail, for instance in the canonical book by Shilov [20] (for local minima). It is simply explained. One needs to evaluate the increment h of a functional F at the point y of the Banach space one is dealing with. One has
$$F(y+h) - F(y) = \delta^1 F(y,h) + \frac{1}{2}\,\delta^2 F(y,h^2) + \varepsilon(h), \qquad (2.1)$$

where

$$\lim_{h\to 0} \frac{\|\varepsilon(h)\|}{\|h\|^2} = 0. \qquad (2.2)$$

By definition, $\delta^1 F(y,h)$ is the first variation of $F$ (if it is linear in $h$), and $\delta^2 F(y,h)$ is $F$'s second variation, quadratic in $h$. If $y$ is an extremum of $F$, then

$$\delta^1 F(y,h) = 0, \qquad (2.3)$$

and it is a local minimum if

$$\delta^2 F(y,h) \ge C\|h\|^2, \qquad C > 0, \qquad (2.4)$$

or a local maximum if

$$\delta^2 F(y,h) \le C\|h\|^2, \qquad C < 0, \qquad (2.5)$$

where $C$ is a constant of the indicated sign and $\|h\|$ stands for the norm of $h$. In phase-space, the object of our present concerns,

$$\|h\|^2 = \int_M h^2\, d\mu, \qquad (2.6)$$

where $M$ is the region of phase-space one is interested in and $\mu$ the associated measure for that space. We start our considerations with the orthodox instance.
Motivation
The following considerations should motivate the reader to seriously consider the importance of elementary notions of functional analysis in q-statistics.
q-Exponentials as linear functionals or distributions
A generalized function (or distribution) is a continuous functional defined on a space of test functions [21]. A typical such test space is Schwartz' so-called K space of infinitely differentiable functions with compact support. One can prove [21] that $x_+^\alpha$, defined by

$$x_+^\alpha = x^\alpha \ \ \text{for } x > 0, \qquad x_+^\alpha = 0 \ \ \text{for } x \le 0, \qquad (3.1)$$

is a distribution possessing simple poles at the integers $\alpha = -k$, with residues (at the pole)

$$R = \frac{(-1)^{k-1}}{(k-1)!}\,\delta^{(k-1)}(x), \qquad (3.2)$$

with $k = 1, 2, \ldots, n, \ldots$ [21]. A function is a particular instance of a distribution, called a regular distribution. A singular distribution is one that cannot be represented as a function; Dirac's delta is such a singular distribution. Tsallis' q-exponentials $e_q$, defined as

$$e_q(x) = [1+(1-q)x]_+^{\frac{1}{1-q}}, \qquad (3.3)$$

clearly become distributions via

$$e_q(x) = [1+(1-q)x]_+^{\frac{1}{1-q}} = \begin{cases}[1+(1-q)x]^{\frac{1}{1-q}} & \text{if } 1+(1-q)x > 0,\\ 0 & \text{otherwise.}\end{cases} \qquad (3.4)$$
Computations involving e q should fruitfully appeal to distribution theory.
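As a quick sanity check of definitions (3.3)-(3.4) and of the cutoff convention, the following minimal Python sketch (all names are ours, not the paper's) implements $e_q$ and verifies the $q \to 1$ limit and the heavy-tail/compact-support dichotomy:

```python
import math

def e_q(x, q):
    """Tsallis q-exponential e_q(x) = [1 + (1-q)x]^(1/(1-q)),
    with Tsallis' cutoff: the result is 0 when the bracket is not positive."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)               # ordinary exponential recovered at q = 1
    bracket = 1.0 + (1.0 - q) * x
    if bracket <= 0.0:
        return 0.0                       # compact-support (cutoff) regime
    return bracket ** (1.0 / (1.0 - q))

print(e_q(1.0, 1.0 - 1e-8))             # close to e = 2.71828...
print(e_q(-3.0, 0.5))                   # 0.0: cutoff operative (compact support)
print(e_q(-3.0, 2.0))                   # 0.25: heavy tail, no cutoff for q > 1
```

For $q > 1$ and negative argument the bracket stays positive for all $x$, which is precisely the heavy-tail regime that the second-variation analysis below singles out.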
An instructive example for second variations [20]
Generally, the minimum condition

$$\delta^2 F(y,h) \ge C\|h\|^2, \qquad (3.5)$$

with $C > 0$, cannot be naively replaced by the weaker restriction

$$\delta^2 F(y,h) \ge 0. \qquad (3.6)$$

Consider, for instance, minimizing the functional

$$F(y) = \int_0^1 y^2(x)\,[x - y(x)]\,dx. \qquad (3.7)$$

It is easily seen that

$$y(x) \equiv 0 \qquad (3.8)$$

gives a functional extremum for $F(y)$; that is, for $y$ given by (3.8) one has

$$\delta^1 F(y,h) = 0. \qquad (3.9)$$

The second variation,

$$\delta^2 F(y,h) = \int_0^1 x\,h^2(x)\,dx, \qquad (3.10)$$

is $> 0$ for any function $h(x) \not\equiv 0$. Thus, one may naively assume that (3.8) yields a minimum for $F(y)$.

To disprove such an assertion it is enough, given $\epsilon > 0$, to consider as $y(x)$ any non-negative function that is positive at $x = 0$, does not exceed $\epsilon - x$ for $x < \epsilon$, and vanishes for $x \ge \epsilon$. For example, let $y(x) = \epsilon - x$ for $x < \epsilon$ and $y(x) = 0$ for $x \ge \epsilon$. Then

$$F(y) = \int_0^\epsilon (\epsilon-x)^2\,(2x-\epsilon)\,dx = -\frac{\epsilon^4}{6}\,! \qquad (3.11)$$

For $y(x) \equiv 0$ one has $F = 0$, but the functional does not possess a minimum there. Similar considerations apply to local maxima if one considers the restriction

$$\delta^2 F(y,h) \le C\|h\|^2, \qquad (3.12)$$

with $C < 0$, which cannot be replaced by the weaker condition

$$\delta^2 F(y,h) \le 0. \qquad (3.13)$$

Choose, for instance, the functional

$$F(y) = \int_0^1 y^2(x)\,[y(x) - x]\,dx, \qquad (3.14)$$

and repeat the above analysis.
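The failure of the weak condition (3.6) is easy to verify numerically. A minimal sketch (NumPy assumed; the trapezoid helper is ours) evaluates the functional (3.7) at the trial function $y(x) = \epsilon - x$ on $[0,\epsilon]$:

```python
import numpy as np

def trapz(f, x):
    # simple trapezoidal quadrature
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

def F(y, x):
    # F(y) = \int_0^1 y^2(x) [x - y(x)] dx, Eq. (3.7)
    return trapz(y**2 * (x - y), x)

eps = 0.3
x = np.linspace(0.0, 1.0, 200001)
y = np.where(x < eps, eps - x, 0.0)      # admissible trial function

print(F(np.zeros_like(x), x))            # 0.0 at the extremum y = 0
print(F(y, x), -eps**4 / 6)              # negative, matching Eq. (3.11)
```

The second print reproduces $-\epsilon^4/6$ to quadrature accuracy, showing that $\delta^2 F \ge 0$ alone does not secure a minimum.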
Applying second variation

Boltzmann-Gibbs' Statistics
In the general case the prior information consists of N mean values corresponding to the observables $\langle R_i\rangle$, $i = 1, \ldots, N$. However, the points we are about to make here emerge already at the simplest level of just one observable, the Hamiltonian $H$ (canonical ensemble). We limit ourselves to this instance in this work. Additionally, we assume that $H$ is lower bounded. The MaxEnt variational functional becomes

$$F_S(P) = -\int_M P\ln(P)\,d\mu + \alpha\left[\int_M P H\,d\mu - \langle U\rangle\right] + \gamma\left[\int_M P\,d\mu - 1\right], \qquad (4.1)$$

with $P$ the probability density MaxEnt is designed to encounter, $H$ the Hamiltonian whose mean value is called $\langle U\rangle$, and $\alpha$ and $\gamma$ the Lagrange multipliers. We now consider $F_S$'s increment.
$$F_S(P+h) - F_S(P) = -\int_M (P+h)\ln(P+h)\,d\mu + \alpha\left[\int_M (P+h)H\,d\mu - \langle U\rangle\right] + \gamma\left[\int_M (P+h)\,d\mu - 1\right]$$
$$+ \int_M P\ln(P)\,d\mu - \alpha\left[\int_M P H\,d\mu - \langle U\rangle\right] - \gamma\left[\int_M P\,d\mu - 1\right]. \qquad (4.2)$$

We can also write

$$F_S(P+h) - F_S(P) = \int_M \left[(-1-\ln(P)+\alpha H+\gamma)\,h - \frac{h^2}{2P}\right] d\mu + O(h^3). \qquad (4.3)$$

From (4.3) we find the first variation, with its associated Euler-Lagrange equation, as well as the second variation. One has

$$-1 - \ln(P) + \alpha H + \gamma = 0, \qquad (4.4)$$

$$-\int_M \frac{h^2}{P}\,d\mu \le C\|h\|^2. \qquad (4.5)$$

From (4.4) one gathers that

$$P = \frac{e^{-\beta H}}{Z}, \qquad Z = \int_M e^{-\beta H}\,d\mu, \qquad (4.6)$$

with $Z$ the system's partition function and $\beta$ proportional to the inverse temperature. Eq. (4.5) yields

$$-\int_M \frac{h^2}{P}\,d\mu = -Z\int_M h^2 e^{\beta H}\,d\mu \le -Z\int_M h^2\,d\mu = -Z\|h\|^2. \qquad (4.7)$$

Notice that we can pass from the second to the third integral because $e^{\beta H} \ge 1$ for $H \ge 0$. This is a trivial point here, but not so when we consider Tsallis' statistics below. Looking at (4.7) we see that one can choose $C = -Z$. Remark that these are classical considerations; problems with (4.6) at $T = 0$ are thus not surprising, on account of Thermodynamics' third law. As a bonus, we discover here that the bound constant $C$ is (minus) the partition function itself.
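On a finite state space the bound (4.7) can be checked directly; the sketch below (toy spectrum and seed are illustrative choices of ours) builds the Gibbs distribution (4.6) and verifies that $C = -Z$ bounds the discrete analogue of the second variation for an arbitrary perturbation $h$:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 0.7
E = np.array([0.0, 0.5, 1.3, 2.0, 4.1])    # toy spectrum, bounded below by 0
Z = float(np.sum(np.exp(-beta * E)))
p = np.exp(-beta * E) / Z                   # Gibbs distribution, Eq. (4.6)

h = rng.normal(size=E.size)                 # arbitrary perturbation direction
W = -float(np.sum(h**2 / p))                # discrete analogue of the l.h.s. of (4.5)
bound = -Z * float(np.sum(h**2))            # discrete analogue of (4.7)
print(W <= bound)                           # True: C = -Z does the job
```

The inequality holds term by term, since $1/p_i = Z e^{\beta E_i} \ge Z$ whenever $E_i \ge 0$.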
Tsallis' Statistics
Here there is no single way of computing mean values for the theory [2]. Several options are available, which are today considered equivalent for all practical purposes [2,22], because there is a "dictionary" that univocally relates two given probability densities $P_1$ and $P_2$ obtained using two different mean-value choices [22]. We consider in this work the three most important such choices and restrict ourselves to quadratic Hamiltonians. We insist in stating that the q-exponential is defined, for $x = \beta H \ge 0$, as

$$e_q(-x) = [1-(1-q)x]^{1/(1-q)} \ \ \text{if } (1-q)x \le 1; \qquad e_q(-x) = 0 \ \ \text{otherwise (Tsallis' cutoff)}, \qquad (4.8)$$

which tends to the ordinary exponential as $q \to 1$. One speaks of long-tailed distributions when $1-(1-q)x \ge 0$ for all $x > 0$, and of compact-support ones whenever the Tsallis cutoff becomes operative for some $x$-values.
Orthodox linear choice
One evaluates mean values in the customary fashion, linear in $P$, i.e., $\langle R\rangle = \int_M R P\,d\mu$. The concomitant Tsallis functional is

$$F_S(P) = -\int_M P^q \ln_q(P)\,d\mu + \alpha\left[\int_M P H\,d\mu - \langle U\rangle\right] + \gamma\left[\int_M P\,d\mu - 1\right]. \qquad (4.9)$$

For the increment we have

$$F_S(P+h) - F_S(P) = -\int_M (P+h)^q \ln_q(P+h)\,d\mu + \alpha\left[\int_M (P+h)H\,d\mu - \langle U\rangle\right] + \gamma\left[\int_M (P+h)\,d\mu - 1\right]$$
$$+ \int_M P^q \ln_q(P)\,d\mu - \alpha\left[\int_M P H\,d\mu - \langle U\rangle\right] - \gamma\left[\int_M P\,d\mu - 1\right]. \qquad (4.10)$$

Eq. (4.10) can be recast as (see Appendix B)

$$F_S(P+h) - F_S(P) = \int_M \left[\frac{q}{1-q}P^{q-1} + \alpha H + \gamma\right] h\,d\mu - \int_M \frac{q P^{q-2} h^2}{2}\,d\mu + O(h^3). \qquad (4.11)$$

Eq. (4.11) leads to the following equations:

$$\frac{q}{1-q}P^{q-1} + \alpha H + \gamma = 0, \qquad (4.12)$$

$$-\int_M q P^{q-2} h^2\,d\mu \le C\|h\|^2. \qquad (4.13)$$

Eq. (4.12) is the Euler-Lagrange equation, while (4.13) gives bounds originating from the second variation. Thus, (4.12) entails (using the procedure given in [23])

$$\alpha = \beta q Z^{1-q}, \qquad (4.14)$$

$$\gamma = \frac{q}{q-1} Z^{1-q}, \qquad (4.15)$$

$$P = \frac{[1+\beta(1-q)H]^{\frac{1}{q-1}}}{Z} = e_{2-q}(-\beta H)/Z, \qquad (4.16)$$

$$Z = \int_M [1+\beta(1-q)H]^{\frac{1}{q-1}}\,d\mu. \qquad (4.17)$$
For Eq. (4.13) we have

$$W = -\int_M q P^{q-2} h^2\,d\mu = -\int_M q Z^{2-q}\,[1+\beta(1-q)H]^{\frac{q-2}{q-1}}\,h^2\,d\mu. \qquad (4.18)$$

In order to obtain a bound, we need to find a constant $C$ (independent, in particular, of $H$). Thus, one needs to make sure that the bracket satisfies

$$1+\beta(1-q)H \ge 0. \qquad (4.19)$$

This entails

$$0 < q \le 1. \qquad (4.20)$$

In such a case the bracket is $\ge 1$ and its exponent $(q-2)/(q-1)$ positive, so the integral with the bracket majorizes the one without it, and we have

$$W \le -qZ^{2-q}\,\|h\|^2 = C\|h\|^2, \qquad C = -qZ^{2-q}. \qquad (4.21)$$

The present arguments, based on (4.19), guarantee an entropic maximum only for long-tail (or heavy-tail) Tsallis distributions. For compact-support distributions, the present arguments are inconclusive: the maximum may or may not exist, and one should investigate further in a case-by-case fashion. For instance, if $q > 1$, the bracket $1+\beta(1-q)H$ might remain positive for low enough $\beta$. We discuss this possibility in a separate Section below.
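Inequality (4.21) can likewise be verified on a grid. A small numerical sketch (grid, $q$, $\beta$, and seed are illustrative choices of ours) for a quadratic, lower-bounded $H$ and $q < 1$:

```python
import numpy as np

rng = np.random.default_rng(1)
q, beta = 0.8, 1.0
x = np.linspace(-5.0, 5.0, 4001)
dx = x[1] - x[0]
H = x**2                                      # lower-bounded Hamiltonian

bracket = 1.0 + beta * (1.0 - q) * H          # >= 1 everywhere for q < 1, H >= 0
Z = float(np.sum(bracket ** (1.0/(q-1.0))) * dx)    # Eq. (4.17)
P = bracket ** (1.0/(q-1.0)) / Z              # Eq. (4.16)

h = rng.normal(size=x.size)
W = -float(np.sum(q * P**(q-2.0) * h**2) * dx)       # Eq. (4.18)
bound = -q * Z**(2.0-q) * float(np.sum(h**2) * dx)   # r.h.s. of Eq. (4.21)
print(W <= bound)                             # True: the heavy-tail bound holds
```

As in the analytic argument, the check succeeds pointwise: $q P^{q-2} \ge q Z^{2-q}$ at every grid node because the bracket never drops below unity.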
Curado-Tsallis mean values
A second alternative way of obtaining mean values has been advanced in Ref. [24], where the authors define

$$\langle U\rangle = \int_M P^q H\,d\mu, \qquad (4.22)$$

so that the MaxEnt Lagrangian $F_S$ becomes

$$F_S(P) = -\int_M P^q \ln_q(P)\,d\mu + \alpha\left[\int_M P^q H\,d\mu - \langle U\rangle\right] + \gamma\left[\int_M P\,d\mu - 1\right], \qquad (4.23)$$

and for the functional increment one writes (see Appendix B)

$$F_S(P+h) - F_S(P) = -\int_M (P+h)^q \ln_q(P+h)\,d\mu + \alpha\left[\int_M (P+h)^q H\,d\mu - \langle U\rangle\right] + \gamma\left[\int_M (P+h)\,d\mu - 1\right]$$
$$+ \int_M P^q \ln_q(P)\,d\mu - \alpha\left[\int_M P^q H\,d\mu - \langle U\rangle\right] - \gamma\left[\int_M P\,d\mu - 1\right]. \qquad (4.24)$$

Expansion up to order $h^2$ is now demanded for A) $(P+h)^q$, B) $\ln_q(P+h)$, and the product A)$\cdot$B). Keeping only terms of order $h$ and $h^2$, one deduces from (4.24) that the linear term in $h$ yields

$$-\frac{q}{q-1}\,[1-\alpha(q-1)H]\,P^{q-1} + \gamma = 0, \qquad (4.25)$$

while the $h^2$-term generates

$$-\int_M q\,[1-\alpha(q-1)H]\,P^{q-2} h^2\,d\mu \le C\|h\|^2. \qquad (4.26)$$
Using (4.25) produces here (with the procedure of [23]):

$$\alpha = -\beta, \qquad (4.27)$$

$$\gamma = -\frac{q}{q-1} Z^{1-q}, \qquad (4.28)$$

$$P = \frac{[1-\beta(1-q)H]^{\frac{1}{1-q}}}{Z} = e_q(-\beta H)/Z, \qquad (4.29)$$

$$Z = \int_M [1-\beta(1-q)H]^{\frac{1}{1-q}}\,d\mu, \qquad (4.30)$$

while (4.26) leads to

$$W = -\int_M q\,[1-\alpha(q-1)H]\,P^{q-2} h^2\,d\mu = -\int_M q Z^{2-q}\,[1+(q-1)\beta H]^{\frac{q-2}{1-q}+1}\,h^2\,d\mu$$
$$= -Z^{2-q} q \int_M [1+\beta(q-1)H]^{\frac{1}{q-1}}\,h^2\,d\mu. \qquad (4.31)$$

Again, in order to obtain a constant bound we need the bracket in the last line above to be positive. This entails, again, heavy-tail distributions, here implying

$$q \ge 1. \qquad (4.32)$$

Since the bracket is then $\ge 1$ and its exponent positive, one obtains

$$W \le -qZ^{2-q}\,\|h\|^2 = C\|h\|^2. \qquad (4.33)$$

The obvious demand $q > 0$ is satisfied given (4.32). As in the preceding subsection, the present arguments guarantee an entropic maximum only for long-tail Tsallis distributions. For compact-support ones, these arguments are inconclusive: the maximum may or may not exist, and one should investigate further in a case-by-case fashion. For instance, if $q < 1$, the bracket $1+\beta(q-1)H$ might remain positive for low enough $\beta$. We discuss this possibility in a Section below.
Tsallis-Mendes-Plastino (TMP) mean values
Following Ref. [23] we tackle the relationship

$$\langle U\rangle = \frac{\int_M P^q H\,d\mu}{\int_M P^q\,d\mu}. \qquad (4.34)$$

Our Lagrangian now reads

$$F_S(P) = -\int_M P^q \ln_q(P)\,d\mu + \alpha\left[\int_M P^q H\,d\mu - \langle U\rangle\int_M P^q\,d\mu\right] + \gamma\left[\int_M P\,d\mu - 1\right]. \qquad (4.35)$$

Thus (see Appendix B),

$$F_S(P+h) - F_S(P) = -\int_M (P+h)^q \ln_q(P+h)\,d\mu + \alpha\left[\int_M (P+h)^q H\,d\mu - \langle U\rangle\int_M (P+h)^q\,d\mu\right] + \gamma\left[\int_M (P+h)\,d\mu - 1\right]$$
$$+ \int_M P^q \ln_q(P)\,d\mu - \alpha\left[\int_M P^q H\,d\mu - \langle U\rangle\int_M P^q\,d\mu\right] - \gamma\left[\int_M P\,d\mu - 1\right]. \qquad (4.36)$$
This simplifies to

$$F_S(P+h) - F_S(P) = \int_M \left\{\left[-\frac{1}{q-1} + \alpha(H-\langle U\rangle)\right] q P^{q-1} + \gamma\right\} h\,d\mu$$
$$- \frac{1}{2}\int_M [1-\alpha(q-1)(H-\langle U\rangle)]\,q P^{q-2} h^2\,d\mu + O(h^3). \qquad (4.37)$$

Now we gather that

$$\left[-\frac{1}{q-1} + \alpha(H-\langle U\rangle)\right] q P^{q-1} + \gamma = 0, \qquad (4.38)$$

$$-\frac{1}{2}\int_M [1-\alpha(q-1)(H-\langle U\rangle)]\,q P^{q-2} h^2\,d\mu \le C\|h\|^2. \qquad (4.39)$$

From (4.38) one finds, with the usual procedure (see [23]):

$$\alpha = -\frac{\beta}{1-\beta(1-q)\langle U\rangle}, \qquad (4.40)$$

$$\gamma = \frac{qZ^{1-q} - [1-\beta(1-q)\langle U\rangle]}{(q-1)[1-\beta(1-q)\langle U\rangle]}, \qquad (4.41)$$

$$P = \frac{[1-\beta(1-q)H]^{\frac{1}{1-q}}}{Z}, \qquad (4.42)$$

$$Z = \int_M [1-\beta(1-q)H]^{\frac{1}{1-q}}\,d\mu. \qquad (4.43)$$
A bit of algebra produces, from the above relations,

$$W = -\int_M [1-\alpha(q-1)(H-\langle U\rangle)]\,q P^{q-2} h^2\,d\mu = -q\int_M \frac{1+\beta(q-1)H}{1+\beta(q-1)\langle U\rangle}\,P^{q-2}\,h^2\,d\mu, \qquad (4.44)$$

and, setting $P^{q-2} = Z^{2-q}\,e_q(-\beta H)^{q-2}$,

$$W = -qZ^{2-q}\int_M \frac{[1+(q-1)\beta H]^{1/(q-1)}}{1+\beta(q-1)\langle U\rangle}\,h^2\,d\mu. \qquad (4.45)$$

One sees that, as in the two preceding instances, the bracket $1+(q-1)\beta H$ must be positive (long tails!) so as to find a constant bound. This entails

$$q \ge 1, \qquad (4.46)$$

and one has

$$W \le -\frac{qZ^{2-q}}{1+\beta(q-1)\langle U\rangle}\,\|h\|^2 \le C\|h\|^2, \qquad (4.47)$$

that is,

$$C = -\frac{qZ^{2-q}}{1+\beta(q-1)\langle U\rangle}, \qquad (4.48)$$

and we obtain the required bound for the entropy to become maximal. Moreover, from (4.47) we get, once again, the here-redundant condition $q > 0$. As in the preceding two subsections, our arguments guarantee an entropic maximum only for long-tail Tsallis distributions. For compact-support distributions the present arguments remain inconclusive: the maximum may or may not exist, and one should investigate further in a case-by-case fashion. The comment made below Eq. (4.33) is also pertinent here.
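The TMP bound (4.47) admits the same kind of grid check. In the sketch below (our illustrative parameters; $q = 3/2$ so that the distribution is heavy-tailed and normalizable), $\langle U\rangle$ is computed from (4.34) with the density (4.42):

```python
import numpy as np

rng = np.random.default_rng(2)
q, beta = 1.5, 1.0
x = np.linspace(-40.0, 40.0, 80001)
dx = x[1] - x[0]
H = x**2

P_un = (1.0 + beta*(q-1.0)*H) ** (1.0/(1.0-q))   # heavy-tail q-exponential, q > 1
Z = float(np.sum(P_un) * dx)                      # Eq. (4.43)
P = P_un / Z                                      # Eq. (4.42)
U = float(np.sum(P**q * H) / np.sum(P**q))        # TMP mean value, Eq. (4.34)

h = rng.normal(size=x.size)
W = -q * float(np.sum((1.0 + beta*(q-1.0)*H) / (1.0 + beta*(q-1.0)*U)
                      * P**(q-2.0) * h**2) * dx)  # Eq. (4.44)
C = -q * Z**(2.0-q) / (1.0 + beta*(q-1.0)*U)      # Eq. (4.48)
print(W <= C * float(np.sum(h**2) * dx))          # True: Eq. (4.47) holds
```

The bound holds node by node, since $[1+\beta(q-1)H]\,P^{q-2} \ge Z^{2-q}$ whenever $q \ge 1$.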
Example: the Harmonic Oscillator (HO)
Consider the simple Hamiltonian (in phase space) H = P 2 + Q 2 . We will reconfirm the second variation functional restrictions encountered above in the concomitant three q-statistics cases.
Linear constraint $\langle U\rangle$
We start by recalling (4.13), which required $q \le 1$ (cf. (4.20)). One has

$$-\int_M q P^{q-2} h^2\,d\mu = -\int_M q Z^{2-q}\,[1+\beta(1-q)(P^2+Q^2)]_+^{\frac{q-2}{q-1}}\,h^2\,d\mu$$
$$\le -\int_M q Z^{2-q}\,[1+\beta(1-q)(P^2+Q^2)]_+^{\frac{1}{1-q}}\,h^2\,d\mu \le -qZ^{2-q}\int_M h^2\,d\mu = -qZ^{2-q}\,\|h\|^2, \qquad (5.1)$$

since

$$[1+\beta(1-q)(P^2+Q^2)]_+^{\frac{1}{1-q}} \ge 1. \qquad (5.2)$$
We reobtain the restriction on heavy-tail Tsallis distributions.
Curado-Tsallis nonlinear constraints

We recall (4.26) and the restriction $q \ge 1$ from (4.32), and deal now with

$$-\int_M q\,[1-\alpha(q-1)(P^2+Q^2)]\,P^{q-2} h^2\,d\mu = -\int_M Z^{2-q} q\,[1+\beta(q-1)(P^2+Q^2)]_+^{\frac{1}{q-1}}\,h^2\,d\mu$$
$$\le -\int_M qZ^{2-q}\,h^2\,d\mu = -qZ^{2-q}\,\|h\|^2, \qquad (5.3)$$

since for $q \ge 1$ we have $[1+\beta(q-1)(P^2+Q^2)]_+^{\frac{1}{q-1}} \ge 1$.
Heavy tails once again!
TMP constraints
Here we must go back to (4.44) and (4.46). The operative restriction is $q \ge 1$. Accordingly,

$$-\int_M [1-\alpha(q-1)(P^2+Q^2-\langle U\rangle)]\,q P^{q-2} h^2\,d\mu = -\frac{qZ^{2-q}}{1+\beta(q-1)\langle U\rangle}\int_M [1+\beta(q-1)(P^2+Q^2)]_+^{\frac{1}{q-1}}\,h^2\,d\mu$$
$$\le -\frac{qZ^{2-q}}{1+\beta(q-1)\langle U\rangle}\,\|h\|^2 \le C\|h\|^2. \qquad (5.4)$$
We need again to appeal to heavy tail distributions.
The compact support instance
We now consider the case of compact support probabilistic distributions in the formulation of Curado-Tsallis (similar arguments can be made for the other two possibilities). In such a case we need to satisfy, for a maximum, the relation
$$-Z^{2-q} q\int_M [1+\beta(q-1)H]^{\frac{1}{q-1}}\,h^2\,d\mu \le C\|h\|^2. \qquad (6.1)$$

A maximum is not guaranteed if $0 < q < 1$. Consider now such a $q$-interval in more detail. $H$ must then be bounded from above in all of phase space. By choosing $\beta$ sufficiently small, the bracket above takes a minimum positive value $\Delta$, and then we have

$$-Z^{2-q} q\,\Delta^{\frac{1}{1-q}}\,\|h\|^2 \le C\|h\|^2. \qquad (6.2)$$

Thus, selecting

$$-Z^{2-q} q\,\Delta^{\frac{1}{1-q}} = C, \qquad (6.3)$$
we conclude that the entropy does exhibit a maximum. Another way of viewing this argument is to let H have a maximum value R and to consider q < 1, more specifically q = 0.5. Then our critical bracket reads
$$[1 - 0.5\beta H], \tag{6.4}$$
entailing, for β = 1/kT,
$$[1 - 0.5\beta R] > 0, \qquad T > \frac{R}{2k}, \tag{6.5}$$
i.e., a minimum temperature is required to guarantee the maximality condition that concerns us here. One might speculate that, at lower temperatures, since no entropic maximum is possible, equilibrium might not be reached.
Let H, for instance, be given by
$$H = (P^2+Q^2)\,\Theta(P_0^2+Q_0^2-P^2-Q^2), \tag{6.6}$$
where Θ is the Heaviside step function. We need [1 + β(q − 1)H] to be positive. Selecting q = 1/2 and β = 1/kT small enough we have:
$$0 \le P^2+Q^2 \le P_0^2+Q_0^2 < \frac{2}{\beta}, \tag{6.7}$$
and thus
$$T > \frac{P_0^2+Q_0^2}{2k}, \tag{6.8}$$
$$\Delta = 1 - \frac{\beta}{2}\left(P_0^2+Q_0^2\right). \tag{6.9}$$
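The arithmetic of Eqs. (6.6)-(6.9) is easy to check with concrete numbers. A sketch with an illustrative choice P0^2 + Q0^2 = 4 and k = 1 (these values are ours, not the text's):

```python
# Worked numbers for the truncated oscillator of Eqs. (6.6)-(6.9), q = 1/2.
k = 1.0                               # Boltzmann constant set to 1
P02Q02 = 4.0                          # illustrative value of P0^2 + Q0^2
T_min = P02Q02 / (2.0 * k)            # Eq. (6.8): T must exceed this

T = 3.0                               # any temperature above the threshold
beta = 1.0 / (k * T)
Delta = 1.0 - 0.5 * beta * P02Q02     # Eq. (6.9): minimum of the bracket

assert T > T_min and Delta > 0.0      # bracket stays positive: maximum exists
print(T_min, Delta)                   # prints 2.0 and ~0.3333
```

Below T_min the bracket would vanish somewhere in the allowed region and the second-variation bound of (6.2)-(6.3) could not be established.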
Conclusions
Our second-variation bounds are given, for the three Tsallis cases, by the C-bounds (4.21), (4.33), and (4.48), respectively. Note that, since Z vanishes at T = 0, we cannot guarantee a finite bound C there, as required by the second-variation protocol. The three bounds yield exactly the same conclusion: entropic maxima are guaranteed only for long-tailed Tsallis distributions. For compact-support distributions the present arguments are inconclusive: the maximum may or may not exist, and one should investigate matters on a case-by-case basis, as we have done for particular instances in Section 6. Summing up, the three Tsallis treatments, which were proved to yield identical predictions for mean values in [22], still give the same results concerning the requirements for entropic maxima. It is almost trivial to show that, for Hamiltonians bounded from below, a quantum n-level treatment yields identical conclusions (see Appendix A). Our present treatment makes it advisable, from a more general MaxEnt standpoint, to always examine the second functional variation.
Appendix A
We consider a quantum system with n discrete levels of positive energies $\epsilon_i$, probabilities $p_i$, and increments $h_i$. The probability vector P and the increment vector h belong to $\ell^2$. Consider, for instance, the orthodox linear choice for mean values, i.e., one evaluates mean values in the customary fashion, linear in the probabilities. The concomitant functional is
$$F_S(P) = -\sum_{i=1}^n p_i^q \ln_q(p_i) + \alpha\left[\sum_{i=1}^n p_i\epsilon_i - \langle U\rangle\right] + \gamma\left[\sum_{i=1}^n p_i - 1\right]. \tag{A.1}$$
For the increment we have
$$F_S(P+h)-F_S(P) = -\sum_{i=1}^n (p_i+h_i)^q \ln_q(p_i+h_i) + \alpha\left[\sum_{i=1}^n (p_i+h_i)\epsilon_i - \langle U\rangle\right] + \gamma\left[\sum_{i=1}^n (p_i+h_i) - 1\right]$$
$$+ \sum_{i=1}^n p_i^q \ln_q(p_i) - \alpha\left[\sum_{i=1}^n p_i\epsilon_i - \langle U\rangle\right] - \gamma\left[\sum_{i=1}^n p_i - 1\right].$$
Proceeding via (A.2)-(A.7), one arrives at the extremizing probabilities
$$p_i = \frac{\left[1+\beta(1-q)\epsilon_i\right]^{\frac{1}{q-1}}}{Z}, \tag{A.8}$$
$$Z = \sum_{i=1}^n \left[1+\beta(1-q)\epsilon_i\right]^{\frac{1}{q-1}}. \tag{A.9}$$
For the bound given by (A.5) we have
$$W = -\sum_{i=1}^n qp_i^{q-2}h_i^2 = -\sum_{i=1}^n qZ^{2-q}\left[1+\beta(1-q)\epsilon_i\right]^{\frac{q-2}{q-1}}h_i^2. \tag{A.10}$$
To obtain a constant bound, independent of the $\epsilon_i$, we must demand $[1+\beta(1-q)\epsilon_i] \ge 0$ (a long-tail distribution!). Accordingly,
$$W \le -\sum_{i=1}^n qZ^{2-q}\left[1+\beta(1-q)\epsilon_i\right]^{\frac{1}{1-q}}h_i^2 \le -qZ^{2-q}\sum_{i=1}^n h_i^2 = -qZ^{2-q}\,\|h\|^2. \tag{A.11}$$
From (A.11) we see that $C = -qZ^{2-q}$ and q > 0. As q ≤ 1, we finally have for q the bounds 0 < q ≤ 1.
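The discrete bound (A.11) can be verified directly. A sketch with illustrative level energies and a random increment (none of these numbers come from the text):

```python
# Discrete n-level check of Appendix A: build p_i from (A.8)-(A.9) and
# verify the second-variation bound W <= -q Z^(2-q) ||h||^2 of (A.11).
import numpy as np

q, beta = 0.8, 0.4                                        # 0 < q <= 1
eps = np.array([0.0, 0.5, 1.0, 2.0, 3.5])                 # illustrative levels
Z = np.sum((1 + beta * (1 - q) * eps) ** (1 / (q - 1)))   # (A.9)
p = (1 + beta * (1 - q) * eps) ** (1 / (q - 1)) / Z       # (A.8)

rng = np.random.default_rng(1)
h = rng.normal(size=eps.size)                             # arbitrary increment
W = -np.sum(q * p ** (q - 2) * h ** 2)                    # (A.10)

assert abs(p.sum() - 1.0) < 1e-12                         # normalization
assert W <= -q * Z ** (2 - q) * np.dot(h, h) + 1e-9       # bound (A.11)
print("bound (A.11) holds; W =", W)
```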
Appendix B
Consider a functional of P, called $F_S(P)$, given by
$$F_S(P) = \frac{1}{q-1} + \int_M \frac{P^q}{1-q}\,d\mu. \tag{B.1}$$
Accordingly,
$$F_S(P+h) = \frac{1}{q-1} + \int_M \frac{P^q + qP^{q-1}h + \frac{q(q-1)}{2}P^{q-2}h^2}{1-q}\,d\mu + O(h^3). \tag{B.3}$$
Subtracting, the increment of the functional is
$$F_S(P+h) - F_S(P) = \int_M \frac{qP^{q-1}h + \frac{q(q-1)}{2}P^{q-2}h^2}{1-q}\,d\mu + O(h^3).$$
References

[1] C. Tsallis, J. Stat. Phys. 52 (1988) 479.
[2] C. Tsallis, Introduction to Nonextensive Statistical Mechanics: Approaching a Complex World (Springer, NY, 2009).
[3] A. Adare et al., Phys. Rev. D 83 (2011) 052004.
[4] G. Wilk, Z. Wlodarczyk, Physica A 305 (2002) 227.
[5] R. M. Pickup, R. Cywinski, C. Pappas, B. Farago, P. Fouquet, Phys. Rev. Lett. 102 (2009) 097202.
[6] E. Lutz, F. Renzoni, Nature Physics 9 (2013) 615.
[7] R. G. DeVoe, Phys. Rev. Lett. 102 (2009) 063001.
[8] Z. Huang, G. Su, A. El Kaabouchi, Q. A. Wang, J. Chen, J. Stat. Mech. (2010) L05001.
[9] J. Prehl, C. Essex, K. H. Hoffman, Entropy 14 (2012) 701.
[10] B. Liu, J. Goree, Phys. Rev. Lett. 100 (2008) 055003.
[11] O. Afsar, U. Tirnakli, EPL 101 (2013) 20003.
[12] U. Tirnakli, C. Tsallis, C. Beck, Phys. Rev. E 79 (2009) 056209.
[13] G. Ruiz, T. Bountis, C. Tsallis, Int. J. Bifurcation Chaos 22 (2012) 1250208.
[14] C. Beck, S. Miah, Phys. Rev. E 87 (2013) 031002.
[15] G. Wilk, Z. Wlodarczyk, Phys. Rev. Lett. 84 (2000) 2770.
[16] J. Rozynek, G. Wilk, J. Phys. G 36 (2009) 125108.
[17] M. Gell-Mann, C. Tsallis (eds.), Nonextensive Entropy: Interdisciplinary Applications (Oxford University Press, New York, 2004).
[18] S. Abe, Astrophys. Space Sci. 305 (2006) 241.
[19] S. Picoli, R. S. Mendes, L. C. Malacarne, R. P. B. Santos, Braz. J. Phys. 39 (2009) 468.
[20] G. Y. Shilov, Mathematical Analysis (Pergamon Press, NY, 1965).
[21] I. M. Gel'fand, G. E. Shilov, Generalized Functions, Vols. 1-2 (Academic Press, NY, 1964).
[22] G. L. Ferri, S. Martinez, A. Plastino, J. Stat. Mech. (2005) P04009.
[23] C. Tsallis, R. S. Mendes, A. R. Plastino, Physica A 261 (1998) 534.
[24] E. M. F. Curado, C. Tsallis, J. Phys. A 24 (1991) L69; Corrigenda: 24 (1991) 3187 and 25 (1992) 1019.
Diamagnetic persistent currents for electrons in ballistic billiards subject to a point flux

Oleksandr Zelyak
Department of Physics and Astronomy, University of Kentucky, Lexington, Kentucky 40506, USA

Ganpathy Murthy
Department of Physics and Astronomy, University of Kentucky, Lexington, Kentucky 40506, USA
Department of Physics, Harvard University, Cambridge, Massachusetts 02138

(Dated: June 5, 2008)
arXiv:0806.0826; DOI: 10.1103/PhysRevB.78.125305
PACS numbers: 73.23.Ra, 73.23.-b, 73.43.Qt, 75.75.+a
Keywords: quantum dot, persistent current, quantum billiard

We study the persistent current of noninteracting electrons subject to a pointlike magnetic flux in the simply connected chaotic Robnik-Berry quantum billiard, and also in an annular analog thereof. For the simply connected billiard we find a large diamagnetic contribution to the persistent current at small flux, which is independent of the flux and is proportional to the number of electrons (or equivalently the density, since we keep the area fixed). The size of this diamagnetic contribution is much larger than the mesoscopic fluctuations of the persistent current in the simply connected billiard, and can ultimately be traced to the response of the angular momentum l = 0 levels (neglected in semiclassical expansions) on the unit disk to a pointlike flux at its center. The same behavior is observed for the annular billiard when the inner radius is much smaller than the outer one, while the usual fluctuating persistent current and Anderson-like localization due to boundary scattering are seen when the annulus tends to a one-dimensional ring. We explore the conditions for the observability of this phenomenon.
I. INTRODUCTION
A resistanceless flow of electrons can occur in mesoscopic systems if the linear size L is less than the phase coherence length L φ . The simplest example of this is a one-dimensional metallic ring threaded by a magnetic flux Φ. The thermodynamic relation
$$I = -\frac{\partial F}{\partial \Phi} \tag{1}$$
defines the persistent current in MKS units. At zero temperature, which we will focus on, the free energy F can be replaced by the total ground state energy E.
Persistent currents were first predicted to occur in superconducting rings 1,2,3 . It was later realized that persistent currents exist in normal metallic rings as well 4,5 . The phenomenon is understood most easily at zero temperature for a ring of noninteracting electrons, where the electronic wavefunction extends coherently over the whole ring. If the ring is threaded by a solenoidal flux, all physical properties are periodic in applied magnetic flux with a period of the flux quantum Φ 0 = h/e. A nonzero flux splits the degeneracy between clockwise and anticlockwise moving electrons. Upon filling the energy states with electrons, one finds ground states which have net orbital angular momentum, and net persistent current. Much experimental work has been carried out on ensembles of rings/quantum dots 6,7 in a flux as well as on single metallic 8,9,10,11,12 or semiconductor quantum dots/rings 13,14,15,16 . The subject has a long theoretical history as well 17,18,19,20,21,22,23,24,25,26 .
In this paper we investigate the persistent current of noninteracting electrons in quantum billiards subject to a point flux. Related semiclassical calculations have been carried out in the past for regular (integrable in the absence of flux) 27,28,29,30,31,32 and chaotic billiards 27,29,30,33 . Numerics have previously been performed on these systems as well 34,35 .
We carry out calculations on the simply connected chaotic Robnik-Berry billiard 36,37,38,39,40, obtained by deforming the boundary of the integrable disk, and on an annular analog which we call the Robnik-Berry annulus. The ratio ξ = r/R of the inner radius r to the outer radius R of the annulus plays an important role in our analysis, and allows us to go continuously between the simply connected chaotic two-dimensional billiard and an (effectively disordered) quasi-one-dimensional ring.
Our main result is that there is a large diamagnetic and flux-independent contribution to the persistent current for |Φ| ≪ Φ0 in the simply connected billiard which is proportional to the number of particles, and overwhelms the mesoscopic fluctuations which have been the focus of previous work 17,18,23,24,25,27,28,29,30,31,32,33,34,35. This arises from angular momentum l = 0 states in the integrable disk, which respond diamagnetically, with energy increasing linearly with the point flux, for small flux. This behavior is robust under the deformation of the boundary which makes the dynamics chaotic. As ξ increases from zero, this contribution to the persistent current persists for typical Φ/Φ0 ≃ 1, but smoothly decreases in magnitude and becomes negligible for ξ → 1. The precise ξ at which the diamagnetic contribution to the persistent current becomes equal to the typical fluctuating paramagnetic contribution depends on the electron density. For nonzero ξ → 0 and very tiny flux, the diamagnetic contribution to the persistent current instead varies linearly with Φ/Φ0 (see below).
This diamagnetic contribution seems to have been missed in previous work, to the best of our knowledge. The reason is that the semiclassical approximation becomes asymptotically exact as the energy tends to infinity, and in this limit, the spectral density of l = 0 states vanishes. Thus, l = 0 states are explicitly disregarded 27,29,30,33 in the semiclassical approach, since they do not enclose flux. It has been noted in the past that diffraction effects necessitate an inclusion of l = 0 states in the sum over periodic orbits on the integrable disk 28 , but the connection to persistent currents was not made.
It should be emphasized that since the total persistent current is a sum over the contributions of all levels, the diamagnetic contribution we uncover exists even at very large energies, where the levels at the Fermi energy are well approximated by semiclassics.
The robustness of the diamagnetic contribution to the persistent current under deformation can be understood as follows: In the chaotic billiard, each state at a particular energy is roughly a linear combination of states of the disk within a Thouless energy (E_T ≃ ℏv_F/L, where L is the linear size of the billiard) of its energy. When the Fermi energy E_F greatly exceeds E_T, the contribution of the occupied states does not change much when the boundary is deformed and chaos is introduced.
This behavior appears similar to, but is different from Landau diamagnetism 41 in a finite system, which is a response to a uniform magnetic field. The primary difference is that the orbital magnetization (proportional to the persistent current) in Landau diamagnetism is proportional to the field itself (because the energy goes quadratically with the field strength), whereas the effect we describe is independent of the flux for small flux in the simply connected Robnik-Berry billiard (because the energy goes linearly with the flux). In the Robnik-Berry annulus with ξ → 0, the energy rises quadratically with the flux for very tiny flux Φ ≪ Φ 0 / log Nξ −2 , but crosses over to the linear behavior characteristic of the simply connected system for larger Φ. Since the flux is pointlike, and in the annular case nonzero only where the electron wavefunctions vanish, the entire effect is due to Aharanov-Bohm quantum interference. Since the effect is primarily caused by levels deep below E F , experimental detection is feasible only through the total magnetization, and not by conductance fluctuations which are sensitive to the levels within the Thouless shell (lying within E T of E F ). Previous samples have been subjected to a uniform field rather than a point flux 8,9,10,11,13,14,15 , and anyway the ring samples have ξ too large for this effect to be seen.
However, we believe that experiments can be designed to observe this effect.
The plan of the paper is as follows: In Section II we describe the method we use to calculate the spectrum, and present analytical expressions and numerical results for persistent current in the disk and simply connected Robnik-Berry billiards. In Section III we generalize the method to the annulus and present our results. Conclusions and implications are presented in Section IV.
II. THE SIMPLY CONNECTED ROBNIK-BERRY BILLIARD
We begin by briefly describing the procedure to obtain the energy levels ǫ k within the billiard, which leads to the persistent current:
$$I = \sum_k I_k, \qquad I_k = -\frac{\partial \epsilon_k}{\partial \Phi}. \tag{2}$$
We work with the Robnik-Berry billiard 37,38, which is obtained from the unit disk by a conformal transformation. The original problem of finding the energy levels of an electron in a domain with a complicated boundary is reduced to a problem where the electron moves in the unit disk in a fictitious potential introduced by the following conformal transformation:
$$w(z) = \frac{z + bz^2 + ce^{i\delta}z^3}{\sqrt{1 + 2b^2 + 3c^2}}, \tag{3}$$
where w = u + iv represents the coordinates in the laboratory frame, and z = x + iy are the conformally transformed coordinates (details are in Appendix A). The parameters b, c, and δ control the shape of the original billiard, and for the values we use the classical dynamics is mixed, but largely chaotic. It is also straightforward to introduce a point flux which penetrates the center of the unit disk 37,38,39,40 (after the conformal transformation). We find 600 energy levels for the regular and chaotic billiards for different values of the parameter α that controls the magnetic flux through the billiard. Only the lowest 200 levels are actually used in further calculations, since the higher levels become increasingly inaccurate 39,40.
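As a sketch of the geometry, the following evaluates the map (3) on the unit circle and checks numerically that the prefactor 1/√(1 + 2b² + 3c²) keeps the billiard area equal to π (the area of the unit disk). The shape parameters b, c, δ below are illustrative choices, not necessarily the ones used in our numerics:

```python
# Boundary of the Robnik-Berry billiard: image of the unit circle under
# Eq. (3), plus a numerical check that the enclosed area equals pi.
import numpy as np

b, c, delta = 0.1, 0.1, 0.6                      # illustrative shape parameters
theta = np.linspace(0.0, 2.0 * np.pi, 20001)     # closed curve (endpoint repeats)
z = np.exp(1j * theta)
w = (z + b * z**2 + c * np.exp(1j * delta) * z**3) / np.sqrt(1 + 2*b**2 + 3*c**2)

# shoelace formula for the area enclosed by the polygonal boundary (u, v)
u, v = w.real, w.imag
area = 0.5 * abs(np.sum(u[:-1] * v[1:] - u[1:] * v[:-1]))

assert abs(area - np.pi) < 1e-3
print("billiard area =", area)                   # ~ pi, independent of b, c, delta
```

The area is preserved because a map w = Σ a_k z^k sends the unit disk to a region of area π Σ k|a_k|², and the denominator in (3) normalizes that sum to π.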
The persistent current is obtained as a numerical derivative of the ground state energy for a given number of electrons.
For the unit disk billiard the ground-state energy E_G has a nonzero slope as α → 0.
Thus there is a persistent current in the system for arbitrarily small magnetic flux (see Fig. 1).
Qualitatively this behavior can be understood as follows. In the absence of a magnetic field, energy levels corresponding to orbital quantum numbers ±l are degenerate. A nonzero Φ lifts the degeneracy, and for small α the two ±l levels have slopes that are equal in magnitude and opposite in sign. Thus, as long as both are occupied, these levels do not contribute to the net persistent current I. The only nonzero contribution comes from levels with l = 0.
For the unit disk the expression for the persistent current can be derived analytically for small values of the magnetic flux. At zero temperature the persistent current due to the kth level is I_k = −∂ǫ_k/∂Φ (ǫ_k is a dimensionless energy, and I_k is the persistent current divided by the energy unit ℏ²/2mR²; see Appendix A for notation). For the unit disk, the energy levels are found from the quantization condition:
$$J_{|l-\alpha|}(\gamma_{|l-\alpha|,n}) = 0, \qquad \epsilon_k = \gamma_n^2(|l-\alpha|). \tag{4}$$
Then from Eq. (2), the persistent current caused by the kth level is:
$$I_k = -\frac{2e}{h}\,\gamma_n(|l_k-\alpha|)\,\frac{\partial \gamma_n(|l_k-\alpha|)}{\partial \alpha}. \tag{5}$$
To find ∂γ/∂α we differentiate Eq. (4):
$$\frac{\partial J_\nu(\gamma)}{\partial \alpha} = \frac{\partial J_\nu(\gamma)}{\partial \nu}\frac{\partial \nu}{\partial \alpha} + \frac{\partial J_\nu(\gamma)}{\partial \gamma}\frac{\partial \gamma}{\partial \alpha} = 0. \tag{6}$$
For l = 0 levels, ν = |l − α| = α. In the α → 0 limit, one gets for the derivatives of the Bessel function:
$$\left.\frac{\partial J_\nu(\gamma)}{\partial \nu}\right|_{\nu=0} = \frac{\pi}{2}N_0(\gamma), \qquad \left.\frac{\partial J_\nu(\gamma)}{\partial \gamma}\right|_{\nu=0} = -J_1(\gamma). \tag{7}$$
Combining Eqs. (6) and (7) and using the relation N₀(γ) ≈ J₁(γ), valid for γ ≫ 1 (this approximation works well already for the first root of Eq. (4)), we find ∂γ/∂α|_{α=0} = π/2, which leads to:
$$I = -\frac{\pi e}{h}\sum_n \gamma_n(0), \tag{8}$$
where summation is over the levels with orbital quantum number l = 0.
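The slope ∂γ/∂α = π/2 used above can be confirmed by tracking a zero of J_α numerically. A sketch using scipy (the choice of the 5th zero of J₀ and the step in α are arbitrary):

```python
# Finite-difference check that a root of J_alpha(gamma) = 0 moves with slope
# d(gamma)/d(alpha) ~ pi/2 for small alpha (text below Eq. (7)).
import numpy as np
from scipy.special import jv, jn_zeros
from scipy.optimize import brentq

g0 = jn_zeros(0, 5)[-1]                        # 5th zero of J_0, ~14.93,
alpha = 1e-4                                   # where N_0 ~ J_1 holds well
g_a = brentq(lambda g: jv(alpha, g), g0 - 0.5, g0 + 0.5)
slope = (g_a - g0) / alpha

assert abs(slope - np.pi / 2) < 0.05
print("d(gamma)/d(alpha) =", slope)            # close to pi/2 = 1.5708
```

The exact slope is (π/2) N₀(γ)/J₁(γ); at the first root of J₀ it is still a few percent below π/2, which is why a higher root is used here.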
For large argument values (which is the same as large energies) the quantization condition (4) for the unit disk becomes cos(γ_n − πα/2 − π/4) = 0, with roots:
$$\gamma_n = \frac{\pi\alpha}{2} + \frac{\pi}{4} + \frac{\pi(2n+1)}{2}. \tag{9}$$
With the energy measured in units of ℏ²/2mR², the Fermi wave vector is k_F = γ_max ≈ πn_max, where n_max denotes the largest l = 0 level. With the disk area equal to π (R = 1), the number of particles in the system is N = (πn_max/2)². This allows us to find the dependence of the persistent current on the number of particles in the system in the α → 0 limit:
$$I = -\frac{e\pi}{h}\sum_n\left(\frac{3\pi}{4} + \pi n\right) \approx -\frac{e\pi^2}{2h}\,n_{max}^2 = -\frac{e}{h}2N, \tag{10}$$
where we neglected a subleading term proportional to n_max. We remind the reader that formula (10) gives the persistent current in units of the energy unit ℏ²/2mR².
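Eqs. (8)-(10) can be cross-checked by summing actual zeros of J₀. A sketch (the truncation at K = 100 roots is an arbitrary illustrative choice; currents are quoted in units of e/h times the energy unit):

```python
# Check of Eq. (10): the l = 0 contribution I = -(pi e/h) sum_n gamma_n(0)
# approaches -(e/h) 2N as the number of occupied l = 0 levels grows.
import numpy as np
from scipy.special import jn_zeros

K = 100
gammas = jn_zeros(0, K)               # roots gamma_n(0) of J_0
I = -np.pi * np.sum(gammas)           # Eq. (8), in units of e/h
N = (gammas[-1] / 2.0) ** 2           # N = (pi n_max/2)^2, pi n_max ~ gamma_max

assert abs(I / (-2.0 * N) - 1.0) < 0.05
print(I, -2 * N)                      # the two agree to O(1/K)
```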
In Fig. 2 we plot the dependence of the persistent current on the number of particles N. To see that levels with l ≠ 0 do not contribute to the persistent current at weak magnetic flux, we simply note that the derivative of γ:
$$\left.\frac{\partial \gamma_n}{\partial \alpha}\right|_{\alpha=0} = -\left.\frac{\dfrac{\partial J_\nu(\gamma_n)}{\partial \nu}\,\dfrac{\partial \nu}{\partial \alpha}}{\dfrac{\partial J_\nu(\gamma_n)}{\partial \gamma_n}}\right|_{\alpha=0} \tag{11}$$
is an odd function of l. In the α → 0 limit, the root γ_n(ν) and the derivatives of J_ν(γ_n) in Eq. (11) are even functions of l, and ∂ν/∂α is odd. As a result, the whole expression is an odd function of l, which proves the cancellation of the ±l levels in Eq. (5).
For the chaotic simply connected Robnik-Berry billiard each eigenstate is a superposition of all l-states of the regular disk (see Eq. (A5)), mostly within a Thouless shell of its energy. Assuming that states with ±l enter this superposition with equal probability over the ensemble, due to the chaotic nature of the motion, one can conclude that the ensemble-averaged contribution of these levels to the net current is zero. However, as seen in Fig. 2b, the mesoscopic fluctuations due to the l ≠ 0 levels are overwhelmed by the diamagnetic contribution linear in N for small α.
III. THE ROBNIK-BERRY ANNULUS
Now we turn our attention to the annular billiard. Here there is an additional parameter ξ, the ratio of the inner radius r to the outer radius R of the regular annulus in the (conformally transformed) z plane. By varying ξ we are able to go smoothly from the simply connected Robnik-Berry billiard to an (effectively) disordered ring in the limit ξ → 1⁻.
First consider the disk limit ξ → 0. We can derive an analytical expression for I when ξ is small enough that ξγ_n ≪ 1 and γ_n ≫ 1. For a regular annulus with R = 1, energy quantization follows from the Dirichlet boundary condition:
$$J_\nu(\gamma_n)N_\nu(\gamma_n\xi) - J_\nu(\gamma_n\xi)N_\nu(\gamma_n) = 0. \tag{12}$$
Here n enumerates the roots at fixed angular momentum ν. We use the large- and small-argument expansions of the Bessel functions to obtain, for the l = 0 levels:
$$\cot\!\left(\gamma_n - \frac{\pi\alpha}{2} - \frac{\pi}{2}\right) = \frac{\dfrac{1}{\Gamma(1+\alpha)}\left(\dfrac{\gamma_n\xi}{2}\right)^{\alpha}}{\dfrac{\cot(\alpha\pi)}{\Gamma(1+\alpha)}\left(\dfrac{\gamma_n\xi}{2}\right)^{\alpha} - \dfrac{\Gamma(\alpha)}{\pi}\left(\dfrac{\gamma_n\xi}{2}\right)^{-\alpha}}. \tag{13}$$
We express the roots for the annulus as a small deviation from the roots for the disk, which we denote $\gamma_n^{(d)}$: $\gamma_n = \gamma_n^{(d)} + \delta\gamma_n$ with $\gamma_n^{(d)} = \alpha\pi/2 + \pi/4 + \pi(2n+1)/2$. Approximating cot(απ) by 1/(απ), for small values of δγ_n we find:
$$\delta\gamma_n = -\frac{\alpha\pi}{2}\left[1 + \coth\!\left(\alpha\ln\frac{\gamma_n\xi}{2}\right)\right]. \tag{14}$$
In Eq. (14), for small α, the γ_n under the logarithm can safely be replaced by its value for the disk, $\gamma_n^{(d)}$. The roots for the annulus are then:
$$\gamma_n = \frac{\pi}{4} + \frac{\pi}{2}(2n+1) - \frac{\pi}{2}\,\alpha\,\coth\!\left(\alpha\ln\frac{\gamma_n^{(d)}\xi}{2}\right). \tag{15}$$
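The asymptotic roots (15) can be compared directly with numerical roots of the full quantization condition (12). A sketch with illustrative values of α, ξ and n (chosen so that ξγ_n ≪ 1 and γ_n ≫ 1):

```python
# Compare the asymptotic annulus root, Eq. (15), with a numerical root of
# the exact cross-product condition, Eq. (12), for an l = 0 level.
import numpy as np
from scipy.special import jv, yv
from scipy.optimize import brentq

alpha, xi, n = 0.2, 0.01, 3                      # illustrative parameters
g_d = np.pi * alpha / 2 + np.pi / 4 + np.pi * (2 * n + 1) / 2   # disk root
g_approx = (np.pi / 4 + np.pi / 2 * (2 * n + 1)
            - np.pi / 2 * alpha / np.tanh(alpha * np.log(g_d * xi / 2)))

# Eq. (12); note coth(x) = 1/tanh(x) was used above
f = lambda g: jv(alpha, g) * yv(alpha, g * xi) - jv(alpha, g * xi) * yv(alpha, g)
g_exact = brentq(f, g_approx - 1.0, g_approx + 1.0)   # root nearest the estimate

assert abs(g_exact - g_approx) < 0.3
print(g_exact, g_approx)
```

The bracketing window of ±1 is safe here because neighboring roots of (12) are separated by roughly π.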
One can now take various limits of Eq. (15). To recover Eq. (9) for the disk roots, we keep the magnetic flux α fixed and take the limit ξ → 0. As one can see from Eq. (15), convergence to the disk limit is slow due to the logarithm, and occurs only for $\alpha \gg 1/\log(\gamma_n^{(d)}\xi/2)$. Another limit of interest is to keep ξ fixed and obtain the behavior of the roots γ_n for small α. For small $\alpha \ll 1/\log(\gamma_n^{(d)}\xi/2)$, the flux-dependent shift of the l = 0 roots is quadratic in α. Expanding the coth function in Eq. (15), we obtain:
$$\gamma_n = \frac{\pi}{4} + \frac{\pi}{2}(2n+1) - \frac{\pi}{2}\left[1 + \frac{\alpha^2}{3}\ln^2\frac{\gamma_n^{(d)}\xi}{2}\right]\ln^{-1}\frac{\gamma_n^{(d)}\xi}{2}, \tag{16}$$
which leads to the persistent current:
$$I \approx \frac{2\pi e\alpha}{3h}\sum_n\left\{\left[\frac{\pi}{4} + \frac{\pi}{2}(2n+1)\right]\ln\frac{\gamma_n^{(d)}\xi}{2} - \frac{\pi}{2}\right\}. \tag{17}$$
A rough estimate of this sum with the help of the Euler-Maclaurin formula gives:
$$I \approx \frac{\pi^2 e\alpha}{3h}\,n_{max}^2\,\ln\frac{n_{max}\xi\pi}{2\sqrt{e}}, \tag{18}$$
where we kept only terms proportional to $n_{max}^2$; the e = 2.71828... inside the logarithm denotes Euler's number and not the electronic charge.
Using the relation N = (πn_max/2)² (for small values of ξ the densities of states of the annulus and the disk are practically the same), the persistent current becomes, for small $\alpha \ll 1/\log(\gamma_n^{(d)}\xi/2)$:
$$I = I^{(d)}\,\frac{\alpha}{3}\,\ln\frac{N\xi^2}{e}, \qquad I^{(d)} = -\frac{e}{h}2N. \tag{19}$$
To approach the limit of a one-dimensional ring, where γ_n ≫ 1 and γ_nξ ≫ 1, we return to the quantization condition (12) and use the following large-argument expansions for the Bessel functions:
$$J_\nu(z) \approx \sqrt{\frac{2}{\pi z}}\left[\cos\!\left(z - \frac{\pi\nu}{2} - \frac{\pi}{4}\right) - \sin\!\left(z - \frac{\pi\nu}{2} - \frac{\pi}{4}\right)\frac{\nu^2 - 1/4}{2z}\right],$$
$$N_\nu(z) \approx \sqrt{\frac{2}{\pi z}}\left[\sin\!\left(z - \frac{\pi\nu}{2} - \frac{\pi}{4}\right) + \cos\!\left(z - \frac{\pi\nu}{2} - \frac{\pi}{4}\right)\frac{\nu^2 - 1/4}{2z}\right]. \tag{20}$$
We use formulas (20) and the quantization condition (12) to get a new equation for the roots:
$$\sin(\gamma_n\sigma) - \cos(\gamma_n\sigma)\,\frac{\nu^2 - 1/4}{2\gamma_n\xi}\,\sigma = 0, \tag{21}$$
where we ignored the term proportional to $1/\gamma_n^2$. The quantity σ = 1 − ξ is assumed to be much less than unity. For sufficiently small σ one can drop the second term in Eq. (21) and get γ_nσ = πn. To find corrections to this expression we set γ_nσ = πn + η with η ≪ 1; plugging this into Eq. (21) gives $\eta = \frac{\nu^2 - 1/4}{2\pi n}\sigma^2$, so that the solutions of the quantization condition (21) are:
$$\gamma_n = \frac{\pi n}{\sigma} + \frac{\nu^2 - 1/4}{2\pi n}\,\sigma, \qquad \nu = |l-\alpha|. \tag{22}$$
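Eq. (22) can likewise be checked against numerical roots of (12) in the thin-annulus regime. A sketch with illustrative parameters σ, l, α:

```python
# Ring-limit check: for sigma = 1 - xi << 1 the roots of the cross-product
# condition (12) approach gamma_n = pi n/sigma + (nu^2 - 1/4) sigma/(2 pi n).
import numpy as np
from scipy.special import jv, yv
from scipy.optimize import brentq

sigma, l, alpha, n = 0.05, 2, 0.3, 1             # illustrative parameters
xi, nu = 1.0 - sigma, abs(l - alpha)
g_approx = np.pi * n / sigma + (nu**2 - 0.25) * sigma / (2 * np.pi * n)

f = lambda g: jv(nu, g) * yv(nu, g * xi) - jv(nu, g * xi) * yv(nu, g)
g_exact = brentq(f, g_approx - 1.0, g_approx + 1.0)

assert abs(g_exact - g_approx) < 0.05
print(g_exact, g_approx)
```

Here adjacent radial roots are separated by π/σ ≫ 1, so the ±1 bracket isolates a single root.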
The energy spectrum for the annulus in this limit is:
$$\epsilon_{n,l} = \gamma_n^2 \approx \left(\frac{\pi n}{\sigma}\right)^2 + \left(\nu^2 - \frac{1}{4}\right). \tag{23}$$
The first term in Eq. (23) is the radial kinetic energy and diverges in the σ → 0 limit. This divergence can be absorbed into the chemical potential for the n = 1 radial state. The difference between energy levels with radial quantum numbers n is of the order n(π/σ)². For σ → 0 (ξ → 1) one can assume that all the levels of interest have the radial quantum number n = 1 and are labelled only by the orbital quantum number l. Since our diamagnetic persistent current arises from a large number ∝ √N of l = 0 levels, it is clear that it vanishes in the limit of a ring.
It is straightforward to show that for a regular annulus the contributions of ±l levels also cancel each other for small values of α. However, levels with l = 0 have zero slope when α → 0. To show this one takes the derivative of quantization condition (12):
$$\dot{J}_\nu(\gamma_n\xi)N_\nu(\gamma_n) + J_\nu(\gamma_n\xi)\dot{N}_\nu(\gamma_n) - \dot{J}_\nu(\gamma_n)N_\nu(\gamma_n\xi) - J_\nu(\gamma_n)\dot{N}_\nu(\gamma_n\xi)$$
$$+ \frac{\partial\gamma_n}{\partial\alpha}\left[\xi J'_\nu(\gamma_n\xi)N_\nu(\gamma_n) + J_\nu(\gamma_n\xi)N'_\nu(\gamma_n) - J'_\nu(\gamma_n)N_\nu(\gamma_n\xi) - \xi J_\nu(\gamma_n)N'_\nu(\gamma_n\xi)\right] = 0, \tag{24}$$
where $\dot{A}_\nu(z) = \partial A_\nu(z)/\partial\nu$ and $A'_\nu(z) = \partial A_\nu(z)/\partial z$.

In our numerics, as ξ is varied we keep the area of the annulus the same, thus keeping the average density of states the same. For a regular annulus (Fig. 3a), for small values of flux, the current is a linear function of α. As ξ gets smaller, the diamagnetic contribution to the persistent current increases. This behavior is consistent with Eq. (19), which shows linear dependence on α and slow growth as ξ → 0.
In the regular annulus, for ξ close to unity the behavior of the persistent current is close, but not identical, to that of a 1D ring. Even for ξ = 0.9 there exist several states with l = 0, which means that our billiard is not purely a 1D ring. The effect of these states on the persistent current is not entirely trivial. For a fixed number of particles in the system, as ξ changes, the number of l = 0 levels also changes. As the next l = 0 level is added (or expelled), the current experiences a jump. The magnitude of this jump is large enough, for small α, to make the current positive. For larger α the current remains diamagnetic.
In the distorted annulus (Fig. 3b) the persistent current is a linear function of α for small α. For larger magnetic flux one observes nonlinear behavior that can be attributed to level repulsion in the chaotic billiard.
The dependence of the persistent current in the annulus on the number of particles N at fixed α is similar to that in the simply connected billiard. At small ξ the persistent current in the regular annulus is a staircase-like function. For the distorted annulus the numerics are scattered around a straight line (see Fig. 4).
For larger ξ the magnitude of the diamagnetic contribution to the persistent current decreases, and the numerics are dominated by mesoscopic fluctuations. When ξ → 1, the persistent current becomes negligible (see Fig. 5a) for low occupations. We believe this is a manifestation of Anderson localization due to the boundary scattering. At high energies, when the localization length exceeds the circumference of the annulus, extended states reappear and can carry persistent current. In Fig. 5 we plot the persistent current for 2 different sets of parameters controlling the shape of the annulus. For a large distortion (Fig. 5a) the current is nonzero only for high energy states beyond N = 110. In Fig. 5b the parameters b, c, and δ are chosen to make the annulus less distorted, and we see that the threshold for extended states moves to lower energy (about N = 40).

IV. CONCLUSIONS

The diamagnetic response comes from the l = 0 levels, whose number is proportional to √N (where N is the number of electrons); the contribution to the persistent current due to these levels is proportional to N and is independent of the flux for small flux in simply connected billiards, and can overwhelm the fluctuating mesoscopic contribution 17,18,19,20,21,22,23,24,25,27,29,30,33 from the states in the Thouless shell (|E − E_F| ≤ E_T). This effect is quite distinct from Landau diamagnetism 41.
The diamagnetic contribution to the persistent current from l = 0 states seems to have been missed in previous work using the semiclassical sum over periodic orbits 27,29,30,33 . This is understandable, since the semiclassical approach becomes exact only as E → ∞, and in this limit, the l = 0 states have vanishing spectral density ρ l=0 (E) ≃ 1/ √ E. However, we emphasize that the total persistent current contains the sum over all levels, and will indeed behave diamagnetically at small flux (in the simply connected billiard) as we have described.
For very tiny ξ, the annular Robnik-Berry billiard behaves much like the simply connected one for most values of the dimensionless flux α = Φ/Φ 0 ≫ 1/ log Nξ −2 , with a diamagnetic contribution to the persistent current which is proportional to the electron density. However, convergence to the ξ = 0 limit is logarithmically slow, and the limits α → 0 and ξ → 0 do not commute. As the aspect ratio ξ increases, and the annulus tends to a one-dimensional ring, this effect diminishes to zero. For ξ close to 1, we also see Anderson localization in the distorted annular billiards, wherein the persistent currents are negligible below a certain energy (presumably because the localization length for these levels is smaller than the circumference), and become nonzero only beyond a threshold energy.
While we can obtain analytical estimates for the limits ξ → 0 and ξ → 1, it is difficult to make analytical progress for generic values of ξ (not close to 0 or 1). However, one can easily verify from the asymptotic expansions that for generic ξ the diamagnetic contribution to the persistent current for Φ ≪ Φ₀ goes as
$$I_{dia} \simeq -\frac{\hbar^2}{\pi m r R}\,\alpha\,\frac{2N(R-r)}{R+r}, \tag{25}$$
where r and R are the inner and outer radii, respectively. This should be compared to the typical fluctuating persistent current for interacting particles 19,20,21,22, which behaves as
$$I_{fluc} \simeq \frac{E_T}{\Phi_0} \simeq \frac{\hbar^2}{mR\Phi_0}\sqrt{\frac{N}{R(R-r)}}. \tag{26}$$
It can be seen that the ratio of the systematic diamagnetic contribution to the fluctuating contribution is roughly
$$\frac{|I_{dia}|}{|I_{fluc}|} \simeq \frac{R-r}{r}\,\frac{\Phi}{\Phi_0}. \tag{27}$$
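The ratio (27) gives a quick observability estimate; the diamagnetic term dominates when (R − r)/r · Φ/Φ₀ ≫ 1. A sketch comparing a thin ring with a fat annulus (the sample geometries are illustrative):

```python
# Observability estimate from Eq. (27): |I_dia| / |I_fluc| for two geometries.
def ratio(R, r, phi_over_phi0):
    """(R - r)/r * Phi/Phi0, the rough dominance ratio of Eq. (27)."""
    return (R - r) / r * phi_over_phi0

# thin ring (R - r << r): diamagnetic term buried in fluctuations
assert ratio(1.0, 0.9, 0.5) < 0.1
# fat annulus (r << R): diamagnetic term dominates
assert ratio(1.0, 0.05, 0.5) > 9.0
print(ratio(1.0, 0.9, 0.5), ratio(1.0, 0.05, 0.5))
```

This is why the ring samples of earlier experiments, with (R − r) ≪ r, are poorly suited to detecting the effect.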
Previous ring samples 8,9,10,11,12,13,14,16 have (R − r) ≪ r. They are also subject to a uniform magnetic field rather than a point flux. Despite this, a systematic diamagnetic contribution at low flux has been detected in recent experiments 12,16. However, the experiments are carried out at finite frequency, and the effects of attractive pair interactions 42,43 (see below) or nonequilibrium noise 44 cannot be ruled out.
In order to detect this effect unambiguously, one must work with a material which has no superconductivity at any temperature, to rule out attractive pair interactions. It is also clear that (R − r)/r needs to be made as large as possible in order to render this effect easily observable. Care must be taken that there is no magnetic flux in the region where the electron wavefunctions are nonzero, in order to maintain the pure Aharonov-Bohm quantum-interference nature of this effect.
Let us now mention some caveats about our work. We have taken only a few (≈ 200) levels into account, whereas most experimental samples have a vastly greater number of levels. However, the physics of the diamagnetic contribution to the persistent current for a particular level concerns only whether that level has l = 0 or not, and is independent of its relative position in the spectrum. We expect our conclusions to hold for arbitrary densities.
We have considered a pointlike flux, which is unachievable in practice. For the annular billiard, all one needs to ensure is that the flux is nonzero only in the central hole of the annulus, and is zero in regions where the electron density is nonzero. By gauge invariance, such a situation will be equivalent to the one we study.
We have also ignored the effect of interelectron interactions (see below). Similarly, though we have concentrated on the zero-temperature behavior, we expect this effect to persist to quite high temperatures, since most of the l = 0 levels involved lie deep within the Fermi sea.
Finally, it would be interesting to investigate the effects of static disorder within a chaotic billiard, which would induce the system to cross over from a ballistic/chaotic to a disordered (diffusive) system. We hope to address this and other issues in future work.
The compound index j = (ν, n) enumerates the levels in ascending order. The normalized function φ_{l,n}(r, θ) is:

φ_{l,n}(r, θ) = J_{|l−α|}(γ_{|l−α|,n} r) e^{ilθ} / [√π J′_{|l−α|}(γ_{|l−α|,n})].    (A6)
The function J_ν(r) is the Bessel function of the first kind, γ_{ν,n} is the nth root of J_ν(r), and l is the orbital quantum number. The coefficients of the expansion in Eq. (A5) are chosen in this way for further convenience.
Plugging the expansion (A5) into Eq. (A4), after simplification one obtains the matrix form of the eigenvalue problem:

M_ij c^{(p)}_j = (1/ε_p) c^{(p)}_i,    (A7)
where the matrix M is:

M_ij = [ δ_ij γ_i γ_j + δ_{l_i, l_j−2} 6c e^{−iδ} I^{(2)}_ij + δ_{l_i, l_j−1} (4b I^{(1)}_ij + 12bc e^{−iδ} I^{(3)}_ij) + δ_{l_i, l_j} (8b² I^{(2)}_ij + 18c² I^{(4)}_ij) + δ_{l_i, l_j+1} (4b I^{(1)}_ij + 12bc e^{iδ} I^{(3)}_ij) + δ_{l_i, l_j+2} 6c e^{iδ} I^{(2)}_ij ] / (1 + 2b² + 3c²).    (A8)
The integrals I^{(h)}_ij have the form:

I^{(h)}_ij = ∫₀¹ dr r^{h+1} J_{ν_i}(γ_i r) J_{ν_j}(γ_j r) / [γ_i γ_j J′_{ν_i}(γ_i) J′_{ν_j}(γ_j)].    (A9)
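The radial integrals I^{(h)}_ij are straightforward to evaluate by quadrature. Below is a minimal sketch (our own helper names, not from the paper), shown for α = 0 so that the orders are integers and SciPy's `jn_zeros` applies; the h = 0 diagonal element can be checked against the Bessel orthonormality identity ∫₀¹ dr r J_ν(γ_n r)² = J′_ν(γ_n)²/2.

```python
from scipy.integrate import quad
from scipy.special import jv, jvp, jn_zeros

def I_h(h, nu_i, gam_i, nu_j, gam_j):
    """I^(h)_ij of Eq. (A9): integral of r^{h+1} J_{nu_i}(gam_i r) J_{nu_j}(gam_j r)
    over [0, 1], normalized by gam_i gam_j J'_{nu_i}(gam_i) J'_{nu_j}(gam_j)."""
    val, _ = quad(lambda r: r ** (h + 1) * jv(nu_i, gam_i * r) * jv(nu_j, gam_j * r),
                  0.0, 1.0)
    return val / (gam_i * gam_j * jvp(nu_i, gam_i) * jvp(nu_j, gam_j))

gam = jn_zeros(0, 2)                  # first two zeros of J_0 (l = 0, alpha = 0)
I1_01 = I_h(1, 0, gam[0], 0, gam[1])  # off-diagonal, h = 1 (enters the b terms of M)
I1_10 = I_h(1, 0, gam[1], 0, gam[0])  # symmetric partner
I0_00 = I_h(0, 0, gam[0], 0, gam[0])  # reduces analytically to 1 / (2 gam[0]^2)
```

The symmetry I^{(h)}_ij = I^{(h)}_ji and the analytic h = 0 diagonal provide quick sanity checks on the quadrature before the full matrix M is assembled and diagonalized.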
Along with the simply connected domain (the irregular disk), we consider an irregular annulus. Using a similar conformal transformation, we map the annulus with irregular boundaries from the (uv) plane onto a regular annulus in the (xy) plane with inner radius ξ and outer radius R = 1. The proper conformal transformation looks as follows:

w(z) = (z + bz² + c e^{iδ} z³) / [1 + 2b²(1 + ξ²) + 3c²(1 + ξ² + ξ⁴)].    (A10)
For this kind of billiard, the expansion of Ψ(r, θ) in Eq. (A5) is in terms of the eigenstates φ_j(r, θ) of the regular annulus:

φ_{l,n}(r, θ) = {J_{|l−α|}(γ_{|l−α|,n} r) − [J_{|l−α|}(γ_{|l−α|,n} ξ)/N_{|l−α|}(γ_{|l−α|,n} ξ)] N_{|l−α|}(γ_{|l−α|,n} r)} e^{ilθ} / √(2π ∫_ξ¹ dr r {J_{|l−α|}(γ_{|l−α|,n} r) − [J_{|l−α|}(γ_{|l−α|,n} ξ)/N_{|l−α|}(γ_{|l−α|,n} ξ)] N_{|l−α|}(γ_{|l−α|,n} r)}²),    (A11)

where N_ν is the Bessel function of the second kind.
The counterpart of the matrix M for the annulus is:

M_ij = [ δ_ij γ_i γ_j + δ_{l_i, l_j−2} 3c e^{−iδ} I^{(2)}_ij + δ_{l_i, l_j−1} (2b I^{(1)}_ij + 6bc e^{−iδ} I^{(3)}_ij) + δ_{l_i, l_j} (4b² I^{(2)}_ij + 9c² I^{(4)}_ij) + δ_{l_i, l_j+1} (2b I^{(1)}_ij + 6bc e^{iδ} I^{(3)}_ij) + δ_{l_i, l_j+2} 3c e^{iδ} I^{(2)}_ij ] / [1 + 2b²(1 + ξ²) + 3c²(1 + ξ² + ξ⁴)],    (A12)
where the integrals I^{(h)}_ij are defined as:

I^{(h)}_ij = ∫_ξ¹ dr r^{h+1} φ̃_i(r) φ̃_j(r) / (γ_i γ_j),  with  φ̃_i(r) = √(2π) e^{−ilθ} φ_i(r, θ).    (A13)
FIG. 1: Ground state energy E_G in units of 10⁴ × ℏ²/(2mR²), and persistent current I in units of eℏ/(4πmR²), as a function of the dimensionless flux for the regular disk (panels a, b) and the simply connected chaotic billiard (panels c, d). The results are for 200 particles.
In Fig. 2 the persistent current I is plotted against the number of particles N for magnetic flux α = 0.01. The behavior of the current is consistent with Eq. (10): for small magnetic flux it is proportional to 2N. For the regular disk (Fig. 2a) the persistent current is a set of consecutive steps. Each step appears when the next l = 0 level is added to the system. The length of the steps is equal to the number of l ≠ 0 levels between two adjacent levels with zero orbital quantum number. As one particle is added to the l = 0 level, it produces a jump in the persistent current. The next l = 0 level has the opposite slope and, once it is occupied, cancels the contribution of the previous l = 0 level to the net persistent current. This explains the noise above each step in Fig. 2a. In addition, each step has a small inclination, which is due to the fact that the l = 0 levels do not cancel each other exactly when Φ ≠ 0. For larger magnetic flux the steps become more inclined.
FIG. 2: Persistent current I vs. number of particles N for the regular disk (a) and the chaotic disk (b), for the value of the reduced magnetic flux α = 0.01.
Here A′_ν(z) = ∂A_ν(z)/∂z. When α → 0, the derivatives of the Bessel functions become J̇_ν(z) = πN_0(z)/2, Ṅ_ν(z) = −πJ_0(z)/2, J′_ν(z) = −J_1(z), and N′_ν(z) = −N_1(z). Then all terms outside the square brackets in Eq. (24) cancel each other. The expression inside the brackets in general has a non-zero value, which means ∂γ_n/∂α ≠ 0. In Fig. 3 the persistent current in the annular billiard is depicted for different values of the aspect ratio ξ. To facilitate the comparison between different values of ξ, we keep the density of states (the billiard area) the same.

FIG. 3: Persistent current I vs. reduced magnetic flux α for several values of ξ, for N = 200 particles. Panel (a) shows the current for the regular annulus, normalized to the same density of states (same area) for different values of ξ. The current for the chaotic annulus is depicted in panel (b).
FIG. 4: Persistent current I in the distorted annulus vs. number of particles N for several values of ξ.

FIG. 5: Persistent current I vs. number of particles N for the distorted annulus. Parameters b, c, and δ control the shape of the billiard.
IV. CONCLUSIONS, CAVEATS, AND OPEN QUESTIONS

We have investigated the behavior of chaotic simply connected and annular billiards penetrated by a pointlike flux. The annular billiards are characterized by a dimensionless aspect ratio ξ = r/R, the ratio of the inner radius (r) to the outer radius (R). Note that in the annular billiards the flux exists in a region where the electrons cannot penetrate, so the effects of the flux on the electrons are purely Aharonov-Bohm quantum-interference effects. Our main result is that there is a systematic diamagnetic contribution to the persistent current which can be traced back to the flux response of the l = 0 levels of a regular unit disk (or annulus). Even though the number of such l = 0 levels is submacroscopic (∝ √N, where N is the number of particles), their net contribution to the persistent current is proportional to N.
For weak repulsive interactions 45,46,47,48, we expect interactions to modify the effect only slightly, because it comes primarily from occupied levels deep within the Fermi sea, which are Pauli-blocked from responding to the interactions. However, for strong repulsive interactions 49,50,51, significant corrections to the persistent current 52 from electrons in the Thouless shell cannot be ruled out. If the interactions are weak but attractive 19,20,21,22, the low-energy fluctuations of Cooper pairs become very important 42,43 and can produce additional large diamagnetic contributions at low fields.
Acknowledgments

The authors would like to thank the National Science Foundation for partial support under DMR-0311761 and DMR-0703992. We are also grateful to Denis Ullmo for helpful discussions, and to Shiro Kawabata for bringing a missed reference to our attention. OZ wishes to thank the College of Arts and Sciences and the Department of Physics at the University of Kentucky for partial support, and GM thanks the Aspen Center for Physics for its hospitality, where part of this work was carried out.

APPENDIX A: NUMERICS FOR ENERGY LEVELS

The idea of this method is as follows 37,38,39,40. In the original (uv) domain one solves the Schrödinger equation, Eq. (A1). To keep the dynamics of the electron unchanged, it is assumed that the magnetic field exists only at the origin of the (uv) plane inside the billiard. This requires that the vector potential satisfies the condition ∇ × A(r) = nΦδ(r), where n is a unit vector perpendicular to the plane of the billiard. The billiard is threaded by a single magnetic flux tube; the strength of the flux is Φ = αΦ_0. If the vector potential has the form of Eq. (A2), then with the help of the conformal transformation, Eq. (A3), the Schrödinger equation in the polar coordinates of the (xy) plane becomes Eq. (A4). Here the energy ε is measured in units of ℏ²/(2mR²), and the distance is in units of R. To find the energy spectrum, one expands the function Ψ(r, θ) in Eq. (A4) in terms of the eigenstates φ_{l,n}(r, θ) of a free electron (w = 0) inside the round billiard (R = 1), Eq. (A5).
1. N. Byers and C. N. Yang, Phys. Rev. Lett. 7, 46 (1961).
2. F. Bloch, Phys. Rev. 137, A787 (1965).
3. L. Gunther and Y. Imry, Solid State Communications 7, 1391 (1969).
4. M. Büttiker, Y. Imry, and R. Landauer, Physics Letters A 96, 365 (1983).
5. M. Büttiker, Phys. Rev. B 32, 1846 (1985).
6. L. P. Lévy, D. H. Reich, L. Pfeiffer, and K. West, Physica B: Condensed Matter 189, 204 (1993).
7. C. M. Marcus, A. J. Rimberg, R. M. Westervelt, P. F. Hopkins, and A. C. Gossard, Phys. Rev. Lett. 69, 506 (1992).
8. B. L. Al'tshuler, A. G. Aronov, B. Z. Spivak, D. Y. Sharvin, and Y. V. Sharvin, JETP Letters 35, 588 (1982).
9. R. A. Webb, S. Washburn, C. P. Umbach, and R. B. Laibowitz, Phys. Rev. Lett. 54, 2696 (1985).
10. V. Chandrasekhar, M. J. Rooks, S. Wind, and D. E. Prober, Phys. Rev. Lett. 55, 1610 (1985).
11. V. Chandrasekhar, R. A. Webb, M. J. Brady, M. B. Ketchen, W. J. Gallagher, and A. Kleinsasser, Phys. Rev. Lett. 67, 3578 (1991).
12. R. Deblock, R. Bel, B. Reulet, H. Bouchiat, and D. Mailly, Phys. Rev. Lett. 89, 206803 (2002).
13. G. Timp, A. M. Chang, J. E. Cunningham, T. Y. Chang, P. Mankiewich, R. Behringer, and R. E. Howard, Phys. Rev. Lett. 58, 2814 (1987).
14. B. J. van Wees, L. P. Kouwenhoven, C. J. P. M. Harmans, J. G. Williamson, C. E. Timmering, M. E. I. Broekaart, C. T. Foxon, and J. J. Harris, Phys. Rev. Lett. 62, 2523 (1989).
15. A. Yacoby, M. Heiblum, D. Mahalu, and H. Shtrikman, Phys. Rev. Lett. 74, 4047 (1995).
16. R. Deblock, Y. Noat, B. Reulet, H. Bouchiat, and D. Mailly, Phys. Rev. B 65, 075301 (2002).
17. H.-F. Cheung, Y. Gefen, E. K. Riedel, and W.-H. Shih, Phys. Rev. B 37, 6050 (1988).
18. H.-F. Cheung, E. K. Riedel, and Y. Gefen, Phys. Rev. Lett. 62, 587 (1989).
19. V. Ambegaokar and U. Eckern, Phys. Rev. Lett. 65, 381 (1990).
20. V. Ambegaokar and U. Eckern, EPL (Europhysics Letters) 13, 733 (1990).
21. U. Eckern, Zeitschrift für Physik B Condensed Matter 82, 393 (1991).
22. V. Ambegaokar and U. Eckern, Phys. Rev. B 44, 10358 (1991).
23. A. Schmid, Phys. Rev. Lett. 66, 80 (1991).
24. F. von Oppen and E. K. Riedel, Phys. Rev. Lett. 66, 84 (1991).
25. B. L. Altshuler, Y. Gefen, and Y. Imry, Phys. Rev. Lett. 66, 88 (1991).
26. V. M. Apel, G. Chiappe, and M. J. Sánchez, Phys. Rev. Lett. 85, 4152 (2000).
27. F. von Oppen and E. K. Riedel, Phys. Rev. B 48, 9170 (1993).
28. S. M. Reimann, M. Brack, A. G. Magner, J. Blaschke, and M. V. N. Murthy, Phys. Rev. A 53, 39 (1996).
29. R. A. Jalabert, K. Richter, and D. Ullmo, Surface Science 361-362, 700 (1996).
30. K. Richter, D. Ullmo, and R. A. Jalabert, Physics Reports 276, 1 (1996).
31. R. Narevich, R. E. Prange, and O. Zaitsev, Phys. Rev. E 62, 2046 (2000).
32. C. J. Howls, Journal of Physics A: Mathematical and General 34, 7811 (2001).
33. S. Kawabata, Phys. Rev. B 59, 12256 (1999).
34. S. Ree and L. E. Reichl, Phys. Rev. B 59, 8163 (1999).
35. R. Narevich, R. E. Prange, and O. Zaitsev, Physica E: Low-dimensional Systems and Nanostructures 9, 578 (2001).
36. M. V. Berry, Journal of Physics A: Mathematical and General 10, 2083 (1977).
37. M. Robnik, Journal of Physics A: Mathematical and General 17, 1049 (1984).
38. M. V. Berry and M. Robnik, J. Phys. A: Math. Gen. 19, 649 (1986).
39. H. Bruus and A. D. Stone, arXiv:cond-mat/9406010v1 (1994).
40. H. Bruus and A. D. Stone, Phys. Rev. B 50, 18275 (1994).
41. K. Nakamura and H. Thomas, Phys. Rev. Lett. 61, 247 (1988).
42. M. Schechter, Y. Oreg, Y. Imry, and Y. Levinson, Phys. Rev. Lett. 90, 026805 (2003).
43. G. Murthy, Phys. Rev. B 70, 153304 (2004).
44. V. E. Kravtsov and B. L. Altshuler, Phys. Rev. Lett. 84, 3394 (2000).
45. A. V. Andreev and A. Kamenev, Phys. Rev. Lett. 81, 3199 (1998).
46. P. W. Brouwer, Y. Oreg, and B. I. Halperin, Phys. Rev. B 60, R13977 (1999).
47. H. U. Baranger, D. Ullmo, and L. I. Glazman, Phys. Rev. B 61, R2425 (2000).
48. I. L. Kurland, I. L. Aleiner, and B. L. Altshuler, Phys. Rev. B 62, 14886 (2000).
49. G. Murthy and H. Mathur, Phys. Rev. Lett. 89, 126804 (2002).
50. G. Murthy and R. Shankar, Phys. Rev. Lett. 90, 066801 (2003).
51. G. Murthy, R. Shankar, D. Herman, and H. Mathur, Phys. Rev. B 69, 075321 (2004).
52. D. Herman, H. Mathur, and G. Murthy, Phys. Rev. B 69, 041301 (2004).
doi: 10.1007/s12036-018-9545-2
Lightning black holes as unidentified TeV sources
Kouichi Hirotani
Hung-Yi Pu
Satoki Matsushita
arXiv:1808.03075v1 [astro-ph.HE] 9 Aug 2018

Keywords: Black hole physics · Gamma-rays · Magnetic fields
Imaging Atmospheric Cherenkov Telescopes have revealed more than 100 TeV sources along the Galactic Plane, around 45% of them remain unidentified. However, radio observations revealed that dense molecular clumps are associated with 67% of 18 unidentified TeV sources. In this paper, we propose that an electron-positron magnetospheric accelerator emits detectable TeV gamma-rays when a rapidly rotating black hole enters a gaseous cloud. Since the general-relativistic effect plays an essential role in this magnetospheric lepton accelerator scenario, the emissions take place in the direct vicinity of the event horizon, resulting in a point-like gamma-ray image. We demonstrate that their gamma-ray spectra have two peaks around 0.1 GeV and 0.1 TeV and that the accelerators become most luminous when the mass accretion rate becomes about 0.01% of the Eddington accretion rate. We compare the results with alternative scenarios such as the cosmic-ray hadron scenario, which predicts an extended morphology of the gamma-ray image with a single power-law photon spectrum from GeV to 100 TeV.
Introduction
The Imaging Atmospheric Cherenkov Telescopes (IACTs) provide a wealth of new data on various energetic astrophysical objects, having increased the number of detected very-high-energy (VHE) gamma-ray sources, typically between 0.01 and 100 TeV, from 7 to more than 200 in this century. Among the three presently operating IACTs, the High Energy Stereoscopic System (HESS) [Wilhelmi (2009)] has so far discovered 42 new VHE sources along the Galactic Plane, 22 of which are still unidentified. The nature of these unidentified VHE sources may be hadronic [Black & Fazio (1973), Issa & Wolfendale (1981)], because protons can be efficiently accelerated to VHE in a supernova remnant and penetrate into adjacent dense molecular clouds, which leads to an extended gamma-ray image. By a systematic comparison between the published HESS data and molecular radio line data, 38 sources are found to be associated with dense molecular clumps out of the 49 Galactic VHE sources covered by 12 mm observations [de Wilt et al. (2017)].
There is, however, an alternative scenario for the VHE emissions from gaseous clouds. In the Milky Way, molecular gas is mostly located in giant molecular clouds, in which massive stars are occasionally formed. If a massive star evolves into a black hole and encounters an adjacent molecular cloud, it accretes gas. It is, therefore, noteworthy that a rapidly rotating, stellar-mass black hole emits copious gamma-rays in 0.001-1 TeV [Hirotani & Pu (2016b)], provided that its dimensionless accretion rate, ṁ ≡ Ṁ/Ṁ_Edd, satisfies 6 × 10^{-5} < ṁ < 2 × 10^{-4}, where Ṁ designates the mass accretion rate, Ṁ_Edd ≡ 1.39 × 10^{19} M_1 g s^{-1} is the Eddington accretion rate, M_1 = M/(10 M_⊙), and M_⊙ denotes the solar mass. The electric currents flowing in such an accreting plasma create the magnetic field threading the event horizon. In this leptonic scenario, migratory electrons and positrons (e±'s) are accelerated to TeV energies by a strong electric field exerted along these magnetic field lines, and cascade into many pairs as a result of collisions between the VHE photons emitted by the gap-accelerated e±'s and the IR photons emitted by the hot e−'s in the equatorial accretion flow. The resulting gamma radiation takes place only near the black hole; thus, its VHE image should have a point-like morphology with a spectral turnover around TeV.
Black hole accretion in a gaseous cloud
When a black hole moves in a gaseous cloud, the particles are captured by the hole's gravity to form an accretion flow. Since the temperature is very low in a molecular cloud, the black hole will move with a supersonic velocity V, forming a bow shock behind it. Under this condition, the gas pressure can be neglected, and the particles within the impact parameter r_B ∼ GM V^{-2} from the black hole will be captured. For a homogeneous gas, the mass accretion rate becomes [Bondi & Hoyle (1944)]

Ṁ_B = 4πλ (GM)² (C_S² + V²)^{-3/2} ρ ≈ 4πλ (GM)² V^{-3} ρ,

where ρ denotes the mass density of the gas, λ a constant of order unity, G the gravitational constant, and C_S the sound speed in the homogeneous gas; the last near-equality follows from the supersonic nature (i.e., V ≫ C_S) of the accretion. For a molecular hydrogen gas, we obtain the dimensionless Bondi accretion rate ṁ_B ≡ Ṁ_B/Ṁ_Edd = 5.4 × 10^{-9} λ n_H2 M_1 (V/10² km s^{-1})^{-3}, where n_H2 denotes the number density of hydrogen molecules per cm³. Representative values of ṁ_B are plotted as the five straight lines in figure 1A.

Fig. 1: (A) Luminosity of black-hole lepton accelerators when a ten-solar-mass, extremely rotating (a = 0.99 r_g) black hole is moving with velocity V in a cloud with molecular hydrogen density n_H2. For an atomic hydrogen gas with density n_HI, put n_H2 = n_HI/2, because the mass is halved. The five straight lines correspond to the Bondi-Hoyle accretion rates 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}, and 10^{-6}, as labeled. In the lower-right white region, the enhanced photon illumination from the equatorial accretion flow results in an efficient pair production, and hence a complete screening of the magnetic-field-aligned electric field; thus, the accelerator vanishes in this region. In the upper-left white region, stationary accelerators cannot be formed (see text). Thus, stationary accelerators arise only in the green-black region. (B) Schematic figure (side view) of a black-hole magnetosphere. The polar funnel is assumed to be bounded by the radiatively inefficient accretion flow (cyan region) at colatitude θ = 60° (dashed line) from the rotation axis (ordinate).
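The quoted numerical coefficient 5.4 × 10^{-9} can be cross-checked directly from the Bondi-Hoyle formula in cgs units (a sketch of our own; constants rounded, λ = 1):

```python
import math

G = 6.674e-8            # gravitational constant, cm^3 g^-1 s^-2
M_sun = 1.989e33        # solar mass, g
m_H2 = 3.35e-24         # mass of an H2 molecule, g
MDOT_EDD = 1.39e19      # Eddington rate for M = 10 M_sun, g s^-1 (as defined in the text)

def mdot_B(lam, n_H2, M1, V_kms):
    """Dimensionless Bondi-Hoyle rate mdot_B = Mdot_B / Mdot_Edd, supersonic limit."""
    M = 10.0 * M1 * M_sun
    V = V_kms * 1.0e5                                # cm s^-1
    rho = m_H2 * n_H2                                # g cm^-3
    Mdot = 4.0 * math.pi * lam * (G * M) ** 2 * rho / V ** 3
    return Mdot / (MDOT_EDD * M1)

exact = mdot_B(lam=1.0, n_H2=1.0e3, M1=1.0, V_kms=100.0)
approx = 5.4e-9 * 1.0e3                              # the paper's fitted coefficient
```

The two values agree at the few-percent level, which is within the rounding of the constants used here.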
Since the accreting gases have little angular momentum as a whole with respect to the black hole, they form an accretion disk only within a radius that is much less than r_B. Thus, we neglect the mass loss as a disk wind between r_B and the innermost region, and evaluate the accretion rate near the black hole, ṁ, with ṁ_B. In what follows, we consider a ten-solar-mass black hole, which is typical of stellar-mass black holes [Tetarenko et al. (2016), Corral-Santana et al. (2016)]. It is reasonable to suppose that such black holes have kick velocities of V < 10² km s^{-1} with respect to the star-forming region. Under this circumstance, a typical velocity dispersion in a molecular cloud, ∆V < 10 km s^{-1}, is an order of magnitude less than V. Accordingly, the net specific angular momentum of the gas at r_B, which is typically r_B ∆V, is much less than the Keplerian value, r_B V = √(GM r_B). At a much smaller radius, r = (∆V/V)² r_B < 0.01 r_B, the specific angular momentum r_B ∆V equals the Keplerian value; therefore, a disk is formed within this radius. Since the accreting gas does not have to lose angular momentum when falling from r = r_B to (∆V/V)² r_B, we neglect the mass loss in this region and, for simplicity, evaluate the accretion rate ṁ near the BH with the Bondi-Hoyle accretion rate ṁ_B in the present paper.
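For orientation, the radii involved can be evaluated for the fiducial numbers above (our own sketch, cgs units): with V = 100 km s^{-1} and ΔV = 10 km s^{-1}, the disk forms at one percent of the Bondi radius.

```python
G, M_sun = 6.674e-8, 1.989e33    # cgs
M = 10.0 * M_sun                 # ten-solar-mass black hole

V = 100.0e5                      # black-hole velocity, cm s^-1 (100 km/s)
dV = 10.0e5                      # cloud velocity dispersion, cm s^-1 (10 km/s)

r_B = G * M / V ** 2             # Bondi capture radius ~ GM / V^2, ~1.3e13 cm
r_disk = (dV / V) ** 2 * r_B     # radius where r_B * dV matches the Keplerian value
```

The hierarchy r_disk = (ΔV/V)² r_B ≪ r_B is what justifies neglecting disk-wind mass loss between r_B and the disk-formation radius.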
Development of charge-starved magnetosphere
When ṁ becomes typically less than 10^{-2}, Coulomb collisions become so inefficient that the thermal energy of the accreting protons cannot be efficiently transferred to the electrons. If the accretion rate decreases to ṁ < 10^{-2.5}, such a radiatively inefficient accretion flow (RIAF) [Ichimaru (1979), Narayan (1994), Mahadevan (1997)] cannot supply enough soft gamma-rays to sustain a force-free magnetosphere [Levinson (2011)]. Accordingly, a charge-starved, nearly vacuum magnetosphere develops in the polar funnel (fig. 1B), because the equatorial accreting plasmas cannot penetrate there due to the centrifugal-force barrier. If the accretion rate further decreases to ṁ < 6 × 10^{-5}, a stationary pair-production cascade cannot be sustained. However, the luminosity of such a non-stationary accelerator becomes less than in the stationary cases, because only a weaker magnetic field can be confined near the black hole for a lower accretion rate. Thus, we consider only the range 6 × 10^{-5} < ṁ < 2 × 10^{-4} (green-black region in fig. 1A) and concentrate on stationary accelerators. It is noteworthy that if a stellar-mass black hole moves slowly (i.e., V ≪ 10² km s^{-1}), its accelerator can be activated with a low gas density (e.g., n_H2 ≪ 10³ cm^{-3}; fig. 1A). Thus, a significant gamma-ray emission is possible when a black hole encounters not only a dense molecular cloud but also a diffuse molecular gas or even an atomic gas.
Lepton accelerator in black hole magnetospheres
In a vacuum magnetosphere, an electric field, E∥, arises along the magnetic field lines. Accordingly, electrons and positrons (red arrows in fig. 1B) are accelerated to ultra-relativistic energies and emit high-energy gamma-rays (wavy line with intermediate wavelength) via the curvature process (a kind of synchrotron process in which the electron's gyro-radius is replaced with the macroscopic curvature radius of the three-dimensional electron motion) and VHE gamma-rays (wavy line with shortest wavelength) via inverse-Compton (IC) scatterings of the soft photons (wavy line with longest wavelength) emitted from the RIAF. A fraction of such VHE photons collide with the soft RIAF photons and materialize as e± pairs, which partially screen the original E∥ when they separate. It is noteworthy that pair annihilation is negligible compared to pair production in BH gaps. To compute the actual strength of E∥, we solve the e± pair-production cascade in a stationary and axisymmetric magnetosphere on the meridional plane (r, θ), where r denotes the Boyer-Lindquist radial coordinate, and θ the colatitude measured from the rotation axis. The black hole's rotational energy is electromagnetically extracted via the Blandford-Znajek process [Blandford & Znajek (1977)] and partially dissipated as particle acceleration and the resultant radiation within the accelerator. It is noteworthy that the electrodynamics of this lepton accelerator is essentially described by the general-relativistic Goldreich-Julian charge density, which is governed by the magnetic-field strength and the frame-dragging effect. Thus, the accelerator solution depends little on the magnetic-field configuration near the event horizon. We therefore assume that the magnetic field is radial in the meridional plane and that the magnetic axis is aligned with the rotation axis.
The magnetic field lines are twisted in the azimuthal direction due to the frame-dragging effect, and their curvature radius is assumed to be r_g in the local reference frame. This assumption modestly affects the curvature spectrum, but does not affect the entire electrodynamics, because the pair-production process, and hence the screening of E∥, is governed by the highest-energy, IC-scattered photons.
Basic equations
Let us quantify the accelerator electrodynamics. In a rotating black-hole magnetosphere, the electron-positron accelerator is formed in the direct vicinity of the event horizon. Thus, we start by describing the background spacetime in a fully general-relativistic way. We adopt the geometrized unit system, putting c = G = 1, where c and G denote the speed of light and the gravitational constant, respectively. Around a rotating BH, the spacetime geometry is described by the Kerr metric [Kerr (1963)]. In the Boyer-Lindquist coordinates, it becomes [Boyer & Lindquist (1967)]
ds² = g_tt dt² + 2g_tϕ dt dϕ + g_ϕϕ dϕ² + g_rr dr² + g_θθ dθ²,    (1)
where
g_tt ≡ −(∆ − a² sin²θ)/Σ,   g_tϕ ≡ −(2Mar sin²θ)/Σ,    (2)

g_ϕϕ ≡ (A sin²θ)/Σ,   g_rr ≡ Σ/∆,   g_θθ ≡ Σ;    (3)
here ∆ ≡ r² − 2Mr + a², Σ ≡ r² + a² cos²θ, and A ≡ (r² + a²)² − ∆a² sin²θ. At the horizon, we obtain ∆ = 0, which gives the horizon radius, r_H ≡ M + √(M² − a²), where M corresponds to the gravitational radius, r_g ≡ GMc^{-2} = M. The spin parameter becomes a = M for a maximally rotating BH and a = 0 for a non-rotating BH. The spacetime-dragging frequency is given by ω(r, θ) = −g_tϕ/g_ϕϕ, which decreases outwards as ω ∝ r^{-3} at r ≫ r_g = M.
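These metric functions can be checked numerically. The sketch below (our own helper, geometrized units with G = c = 1 and M = 1) evaluates the frame-dragging frequency ω = −g_tϕ/g_ϕϕ and verifies that at the horizon it reduces to the hole's angular frequency ω_H = a/(2Mr_H):

```python
import math

def kerr_omega(r, theta, a, M=1.0):
    """Frame-dragging frequency omega = -g_tphi / g_phiphi of the Kerr metric."""
    Delta = r * r - 2.0 * M * r + a * a
    Sigma = r * r + a * a * math.cos(theta) ** 2
    A = (r * r + a * a) ** 2 - Delta * a * a * math.sin(theta) ** 2
    g_tphi = -2.0 * M * a * r * math.sin(theta) ** 2 / Sigma
    g_phiphi = A * math.sin(theta) ** 2 / Sigma
    return -g_tphi / g_phiphi        # = 2 M a r / A

a = 0.99                             # extreme spin used later in the text
rH = 1.0 + math.sqrt(1.0 - a * a)    # horizon radius r_H = M + sqrt(M^2 - a^2)
omega_H = a / (2.0 * rH)             # horizon angular frequency a/(2 M r_H)
w_H_check = kerr_omega(rH, math.pi / 2.0, a)
```

For a = 0.99 this gives r_H ≈ 1.14 r_g, and kerr_omega evaluated at the horizon agrees with ω_H to machine precision, as it must since r_H² + a² = 2Mr_H there.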
We assume that the non-corotational potential Φ depends on t and ϕ only through the form ϕ − Ω F t, and put
F_μt + Ω_F F_μϕ = −∂_μ Φ(r, θ, ϕ − Ω_F t),    (4)
where Ω_F denotes the rotational angular frequency of the magnetic field lines. We refer to such a solution as a 'stationary' solution in the present paper. Gauss's law gives the Poisson equation that describes Φ in a three-dimensional magnetosphere [Hirotani (2006)],

−(1/√−g) ∂_μ [ (√−g/ρ_w²) g^{μν} g_ϕϕ ∂_ν Φ ] = 4π(ρ − ρ_GJ),    (5)
where ρ_w² ≡ g_tϕ² − g_tt g_ϕϕ = ∆ sin²θ, and the general-relativistic Goldreich-Julian (GJ) charge density is defined as [Hirotani (2006)]

ρ_GJ ≡ [1/(4π√−g)] ∂_μ [ (√−g/ρ_w²) g^{μν} g_ϕϕ (Ω_F − ω) F_ϕν ].    (6)
Far away from the horizon, r ≫ M , equation (6) reduces to the ordinary, special-relativistic expression of the GJ charge density [Goldreich & Julian (1969), Mestel (1971)],
ρ_GJ ≡ −(Ω · B)/(2πc) + [(Ω × r) · (∇ × B)]/(4πc).    (7)
Therefore, the corrections due to magnetospheric currents, which are expressed by the second term of eq. (7), are included in equation (6).
If the real charge density ρ deviates from the rotationally induced Goldreich-Julian charge density ρ_GJ in some region, equation (5) shows that Φ changes as a function of position. Thus, an acceleration electric field, E∥ = −∂Φ/∂s, arises along the magnetic field line, where s denotes the distance along the field line. A gap is defined as the spatial region in which E∥ is non-vanishing. At the null-charge surface, ρ_GJ changes sign by definition. Thus, a vacuum gap, in which |ρ| ≪ |ρ_GJ|, appears around the null-charge surface, because ∂E∥/∂s should have opposite signs at the inner and outer boundaries [Cheng et al. (1986a), Chiang & Romani (1992), Romani (1996), Cheng et al. (2001)]. As an extension of the vacuum gap, a non-vacuum gap, in which |ρ| becomes a good fraction of |ρ_GJ|, also appears around the null-charge surface ( § 2.3.2 of HP16), unless the injected current across either the inner or the outer boundary becomes a substantial fraction of the GJ value.
In a previous series of our papers, we assumed ∆ ≪ M² in Equation (5), expanding the left-hand side in a series in ∆/M² and keeping only the leading orders. In the present report, however, we discard this approximation and retain all the terms that arise at ∆ ∼ M² or ∆ ≫ M².
It should be noted that ρ_GJ vanishes, and hence the null surface appears, near the place where Ω_F coincides with the spacetime-dragging angular frequency ω [Beskin et al. (1992)]. The deviation of the null surface from the ω(r, θ) = Ω_F surface is indeed small, as figure 1 of [Hirotani & Okamoto (1998)] indicates. Since ω can match Ω_F only near the horizon, the null surface, and hence the gap, generally appears within one or two gravitational radii above the horizon, irrespective of the BH mass.
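This statement can be checked with a few lines of arithmetic. On the rotation axis (limit θ → 0), ω = 2Mar/(r² + a²)², and solving ω(r) = Ω_F = 0.5 ω_H by bisection for a = 0.99 (the configuration used in the next section; geometrized units with M = r_g = 1; our own sketch) places the null surface at r ≈ 1.73 r_g, in agreement with the value quoted in the caption of fig. 2:

```python
import math

a = 0.99
rH = 1.0 + math.sqrt(1.0 - a * a)   # horizon radius
omega_H = a / (2.0 * rH)            # horizon angular frequency
Omega_F = 0.5 * omega_H             # field-line rotation assumed in the text

def omega_axis(r):
    """Frame-dragging frequency on the rotation axis: omega = 2 M a r / (r^2 + a^2)^2."""
    return 2.0 * a * r / (r * r + a * a) ** 2

# omega_axis decreases monotonically outside the horizon, so bisect on [rH, 10]
lo, hi = rH, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if omega_axis(mid) > Omega_F:
        lo = mid
    else:
        hi = mid
r_null = 0.5 * (lo + hi)            # ~1.73 gravitational radii
```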
Results
We apply the method to a stellar-mass black hole with mass M = 10 M⊙. To consider an efficient emission, we consider an extremely rotating black hole, a = 0.99 r_g, because the accelerator luminosity rapidly increases as a → r_g. Owing to the frame-dragging effects, the Goldreich-Julian charge density decreases outwards around a rotating black hole. As a result, a negative E∥ arises near the null-charge surface, which is located very close to the event horizon (fig. 2).

Fig. 1 The lepton accelerator appears only in the polar funnel, θ < 60°. The dimensionless accretion rate is ṁ = 10⁻⁴. The magnetic field is assumed to be radial on the meridional plane, and to rotate with angular frequency Ω_F = 0.5 ω_H, where ω_H denotes the black hole's spin angular frequency. The null-charge surface is located at radial coordinate r = 1.73 r_g, and its θ dependence is weak.
In fig. 3, we also plot E∥(r, θ) at four discrete colatitudes, θ = 0°, 15°, 30°, and 45°. It follows that E∥ peaks slightly inside the null surface (vertical dashed line), and that it maximizes at θ = 0° (i.e., along the rotation axis). The reason why E∥ maximizes along the rotation axis is that magnetic fluxes concentrate towards the rotation axis as the black hole spin approaches its maximum value (i.e., as a → r_g) [Komissarov & McKinney (2007), Tchekhovskoy et al. (2010)]. Therefore, to consider the greatest gamma-ray flux, we focus on the emission along the rotation axis, θ = 0°. The acceleration electric field, E∥, decreases slowly outside the null surface in the same way as in pulsar outer gaps [Hirotani & Shibata (1999)]. This is because the two-dimensional screening effect of E∥ works when the gap longitudinal (i.e., radial) width becomes non-negligible compared to its trans-field (i.e., meridional) thickness.
The created e±'s are accelerated by E∥ in opposite directions, emitting copious gamma-rays via the curvature process in 0.01-3 GeV and via the IC process in 0.01-1 TeV (fig. 4).

Fig. 3 Distribution of the magnetic-field-aligned electric field, E∥, that is presented in figure 1, at four discrete colatitudes as labeled in the box, where θ = 0° corresponds to the rotation axis. The abscissa denotes the distance along the magnetic field from the null-charge surface, where the general relativistic Goldreich-Julian charge density vanishes due to the spacetime dragging around a rotating black hole. The black hole's mass (M = 10 M⊙), spin (a = 0.99 r_g), and accretion rate (ṁ = 1.00 × 10⁻⁴) are common with figure 1. The vertical dashed line shows the position of the null-charge surface along θ = 0°; its position depends little on θ because we assume Ω_F = 0.5 ω_H (see the main text).

The characteristic photon energy in the curvature process is given by hν_c = (3/2)ħcγ³/ρ_c, where h denotes the Planck constant and ħ ≡ h/2π. At each place in the gap, electrons have Lorentz factors typically in the range 10⁶ < γ < 3 × 10⁶. To evaluate the curvature radius ρ_c, we assume that the horizon-threading magnetic field lines bend in the toroidal direction due to the frame dragging and adopt ρ_c = r_g. Since the pair production is sustained by the TeV photons (emitted via the IC process), the gap electrodynamics is little affected by the actual value of ρ_c, which appears only in the curvature process. Thus, we adopt this representative value, ρ_c = r_g. The IC photon energy is limited by the electron kinetic energy, whose upper bound is about 1.5 TeV. Thus, the IC photons have typical energies between 0.01 TeV and 1 TeV.
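The curvature energies quoted above can be reproduced directly from hν_c = (3/2)ħcγ³/ρ_c with ρ_c = r_g. The following minimal check uses standard CGS constants; the code itself is an independent sketch, not part of the paper.

```python
# CGS constants
hbar = 1.0546e-27   # reduced Planck constant [erg s]
c = 2.998e10        # speed of light [cm/s]
erg_to_eV = 6.242e11

r_g = 1.48e6        # gravitational radius of a 10 M_sun BH [cm]
rho_c = r_g         # curvature radius adopted in the text

def curvature_energy_GeV(gamma):
    """Characteristic curvature photon energy h nu_c = (3/2) hbar c gamma^3 / rho_c, in GeV."""
    return 1.5 * hbar * c * gamma**3 / rho_c * erg_to_eV / 1e9

for gamma in (1.0e6, 3.0e6):  # range of Lorentz factors quoted in the text
    print(f"gamma = {gamma:.0e} -> h nu_c ~ {curvature_energy_GeV(gamma):.2f} GeV")
```

Both values fall inside the 0.01-3 GeV curvature band quoted above: roughly 0.02 GeV at γ = 10⁶ and 0.5 GeV at γ = 3 × 10⁶.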
It also follows from figure 4 that the gamma-ray luminosity increases with decreasing accretion rates. This is because the decreased RIAF soft photon field increases the pair-production mean free path, the accelerator width along the magnetic field lines, and hence the electric potential drop. What is more, the emission becomes detectable with Fermi/LAT² and IACTs such as the CTA³ if the distance is within 1 kpc and if the dimensionless accretion rate resides in the narrow range 6 × 10⁻⁵ < ṁ < 2 × 10⁻⁴. The gamma-ray spectrum exhibits a turnover around TeV, because electron energies are limited below 1.6 TeV by the curvature-radiation drag force. A caution should be made, however, on the assumption of a stationary electron-positron pair cascade. If the cascade takes place in a time-dependent manner, as suggested by Particle-in-Cell simulations [Levinson & Segev (2017)], the spectra might appear different from the present stationary analysis. The present gap luminosity gives an estimate of the maximum possible luminosity of a gap, whose electrodynamic structure may be variable in time. The dependence of the solutions on the BH spin will be discussed in our subsequent paper [Hirotani et al. (2018)].

Fig. 4 Spectrum of a black-hole lepton accelerator. The black hole mass and spin are common with figure 1. The red dotted, blue dashed, black solid, and green dash-dotted curves correspond to dimensionless accretion rates of 10^{-3.50}, 10^{-3.75}, 10^{-4}, and 10^{-4.25}, respectively. The distance is assumed to be 1 kpc. The thin curves on the left denote the input spectra of the advection-dominated accretion flow, a kind of RIAF; such soft photons illuminate the accelerator in the polar funnel. The thick lines denote the spectra of the gamma-rays emitted from the accelerator. The 0.1-10 GeV photons are emitted via the curvature process, while those in 0.01-1 TeV are via the inverse-Compton (IC) process. The detection limits of the Large Area Telescope (LAT) aboard the Fermi space observatory after a ten-year observation are indicated by the thin solid curves. The detection limits of the Cherenkov Telescope Array (CTA) after a 50-hour observation are shown by the thin dashed and dotted curves; (N) denotes the detection limit of the CTA in the northern hemisphere, while (S) denotes that in the southern hemisphere.
Let us analytically examine why the gap luminosity maximizes when ṁ ≈ 10⁻⁴. At radius r, the soft photon number density, n_s, can be estimated as
$$n_s = \frac{L_s/c}{4\pi r^2 h\nu_s} = 1.3\times10^{20}\, M_1^{-2} \left(\frac{10r_g}{r}\right)^{2} \left(\frac{d}{10\,\mathrm{kpc}}\right)^{2} \left(\frac{\mathrm{eV}}{h\nu_s}\right) \left(\frac{\nu_s F_{\nu_s}}{\mathrm{eV\,cm^{-2}\,s^{-1}}}\right) \,\mathrm{cm^{-3}}, \tag{8}$$
where ν_sF_νs denotes the ADAF energy flux, whose value lies around 10⁻¹² TeV cm⁻² s⁻¹ = 1 eV cm⁻² s⁻¹ at distance d = 10 kpc; the ADAF spectrum peaks at hν_s ≈ a few eV (the four thin curves on the left in fig. 4). We evaluate the ADAF luminosity (in near-IR energies) with L_s ≈ 4πd²ν_sF_νs, where d = 10 kpc. To compute n_s, we assume that the photon density is uniform within r = 10 r_g, a typical radius within which the equatorial ADAF is confined vertically by the magnetic pressure [McKinney et al. (2012)].
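The normalization of equation (8) can be checked from its first equality, n_s = (L_s/c)/(4πr²hν_s) with L_s ≈ 4πd²ν_sF_νs. The sketch below uses the fiducial values of the text and standard CGS constants; it is an independent check, not the paper's code.

```python
import math

# CGS constants
c = 2.998e10        # speed of light [cm/s]
kpc = 3.086e21      # cm
eV = 1.602e-12      # erg

# Fiducial values from the text (M_1 = 1, i.e. M = 10 M_sun)
r_g = 1.48e6        # gravitational radius [cm]
d = 10.0 * kpc      # distance to the source
flux = 1.0 * eV     # nu_s F_nu_s = 1 eV cm^-2 s^-1, in erg cm^-2 s^-1
h_nu_s = 1.0 * eV   # soft photon energy ~1 eV, in erg
r = 10.0 * r_g      # radius within which the photon density is taken uniform

L_s = 4.0 * math.pi * d**2 * flux                  # ADAF (near-IR) luminosity
n_s = (L_s / c) / (4.0 * math.pi * r**2 * h_nu_s)  # eq. (8), first equality
print(f"n_s ~ {n_s:.2e} cm^-3")
```

This gives n_s ≈ 1.5 × 10²⁰ cm⁻³, consistent with the 1.3 × 10²⁰ M₁⁻² normalization of eq. (8) to within the rounding of the constants.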
The electrons' Lorentz factors lie above 10⁶ [Hirotani et al. (2018)]. Thus, the Klein-Nishina cross section becomes σ_IC ≈ 0.2σ_T, where σ_T denotes the Thomson cross section, and the mean free path for the IC scatterings, λ_IC = 1/(n_s σ_IC), becomes
$$\frac{\lambda_{\mathrm{IC}}}{r_g} \approx 4.0\times10^{-2}\, M_1 \left(\frac{10r_g}{r}\right)^{-2} \left(\frac{d}{10\,\mathrm{kpc}}\right)^{-2} \left(\frac{\mathrm{eV}}{h\nu_s}\,\frac{\nu_s F_{\nu_s}}{\mathrm{eV\,cm^{-2}\,s^{-1}}}\right)^{-1} \left(\frac{\sigma_{\mathrm{IC}}}{0.2\,\sigma_{\mathrm{T}}}\right)^{-1}. \tag{9}$$
The pair-production cross section becomes slightly below 0.2σ T for the collisions of TeV and eV photons with moderate angles. Thus, the sum of the IC and pair-production mean-free paths becomes
$$\frac{\lambda_{\mathrm{IC}}+\lambda_{\mathrm{pp}}}{r_g} \approx 2\,\frac{\lambda_{\mathrm{IC}}}{r_g} \approx 0.08\, M_1 \left(\frac{\mathrm{eV}}{h\nu_s}\,\frac{\nu_s F_{\nu_s}}{\mathrm{eV\,cm^{-2}\,s^{-1}}}\right)^{-1}. \tag{10}$$
Note that the Klein-Nishina and pair-production cross sections are computed exactly in the numerical analysis, taking account of the photon specific intensity and the particle distribution functions at each point. Electrons are accelerated by E∥ and attain the terminal Lorentz factor, γ ∼ 10⁶, after running the distance
$$\lambda_{\mathrm{acc}} = \frac{\gamma m_e c^2}{e E_\parallel} = 1.7\times10^{5} \left(\frac{|E_\parallel|}{10^4\,\mathrm{statvolt\,cm^{-1}}}\right)^{-1} \left(\frac{\gamma}{10^6}\right)\ \mathrm{cm}, \tag{11}$$
which is less than r_g. We find that the gap becomes most luminous when
$$\lambda_{\mathrm{IC}} + \lambda_{\mathrm{pp}} + \lambda_{\mathrm{acc}} \approx \lambda_{\mathrm{IC}} + \lambda_{\mathrm{pp}} \approx r_g = 1.5\times10^{6}\, M_1\ \mathrm{cm}. \tag{12}$$
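The acceleration length of eq. (11), and its comparison with r_g, can be verified in a few lines. This is a sketch with standard CGS constants; the values of γ and E∥ are those quoted in the text.

```python
# CGS constants
m_e_c2 = 8.187e-7   # electron rest energy [erg]
e = 4.803e-10       # elementary charge [esu]

r_g = 1.5e6         # gravitational radius of a 10 M_sun BH [cm]
gamma = 1.0e6       # terminal Lorentz factor
E_par = 1.0e4       # magnetic-field-aligned electric field [statvolt/cm]

lam_acc = gamma * m_e_c2 / (e * E_par)  # eq. (11) [cm]
print(f"lambda_acc ~ {lam_acc:.2e} cm = {lam_acc / r_g:.2f} r_g")
```

So λ_acc is roughly a tenth of r_g, which justifies dropping it from the sum λ_IC + λ_pp + λ_acc.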
It follows from equation (10) that λ_IC + λ_pp ≈ 0.26 r_g (or ≈ 0.40 r_g) is realized when ṁ ≈ 10⁻⁴ (or ≈ 6 × 10⁻⁵), which gives hν_s ≈ 1 eV and ν_sF_νs ≈ 0.3 (or ≈ 0.2) eV cm⁻² s⁻¹. It might appear that λ_IC + λ_pp ≈ r_g holds if ṁ ≪ 10⁻⁴. However, in this case, the gap width rapidly increases and diverges; that is, there exist no stationary solutions (fig. 8 of [Hirotani & Pu (2016b)]). Because of the simplifications adopted around equations (8)-(12), we could not obtain λ_IC + λ_pp ≈ r_g in this simplistic argument in a manner consistent with the numerical results for E∥, γ, and the gamma-ray spectrum. Nevertheless, we can analytically conclude that the gap longitudinal width becomes comparable to the horizon radius and that its luminosity maximizes when ṁ ≈ 10⁻⁴ for stellar-mass BHs.
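The scaling argument above can be condensed into a one-line function for eq. (10). The mapping from ṁ to the near-IR flux is the one quoted in the text; the function is a sketch rather than the paper's actual numerical pipeline.

```python
def gap_width_over_rg(flux_eV, h_nu_s_eV=1.0, M1=1.0):
    """Eq. (10): (lambda_IC + lambda_pp)/r_g ~ 0.08 M1 [(eV/h nu_s)(nu_s F / eV cm^-2 s^-1)]^-1."""
    return 0.08 * M1 * h_nu_s_eV / flux_eV

# Fluxes the text associates with mdot ~ 1e-4 and ~6e-5 (with h nu_s ~ 1 eV):
for flux in (0.3, 0.2):
    print(f"nu_s F_nu_s = {flux} eV cm^-2 s^-1 -> gap width ~ {gap_width_over_rg(flux):.2f} r_g")
```

This reproduces the ≈0.26 r_g and ≈0.40 r_g widths quoted above; at much smaller fluxes the formula exceeds r_g, the regime where the stationary gap solutions cease to exist.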
To further estimate E∥, the Lorentz factors, the gamma-ray energies, and so on analytically, without relying on the numerical results, we would have to perform computations similar to those described in § 2 of [Hirotani (2013)] for rotation-powered pulsars. In this case, we would have to replace the curvature process with the IC process, the neutron-star surface X-ray field with the ADAF IR field, the neutron-star magnetic field with that created/supported by the ADAF, and the light-cylinder radius with r_g (to compute the spatial gradient of ρ_GJ). For pulsars, the pair-production optical depth, τ_pp, is much less than unity for the out-going curvature GeV photons, which collide tail-on with the neutron-star surface X-rays. However, for BHs, τ_pp ∼ 1 holds for the IC TeV photons, whose collision angles with the ADAF-emitted near-IR photons are typically 0.5-1.0 rad. It is noteworthy that the gap longitudinal width becomes approximately λ_IC + λ_pp + λ_acc, because τ_pp ∼ 1 holds for BH gaps. It is, however, beyond the scope of the present paper to pursue the further details of this analytical method.
Discussion
Let us compare the related gamma-ray emission scenarios. In the protostellar jet scenario [Bosch-Ramon et al. (2010)], electrons and protons are accelerated at the termination shocks when the jets from massive protostars interact with the surrounding dense molecular clouds. Thus, the size of the emission region becomes comparable to the jet transverse thickness at the shock. In the hadronic cosmic-ray scenario [Ginzburg & Syrovatskii (1964), Blandford & Eichler (1987)], protons and helium nuclei are accelerated in supernova shock fronts and propagate into dense molecular clouds, resulting in a single power-law photon spectrum in 0.001-100 TeV through neutral pion decays. The size becomes comparable to the core of a dense molecular cloud. In the leptonic cosmic-ray scenario [Aharonian et al. (1997), van der Swaluw et al. (2001), Hillas et al. (1998)], electrons are accelerated at pulsar wind nebulae or shell-type supernova remnants, and radiate gamma-rays via the IC process and radio/X-rays via the synchrotron process. Since the cosmic microwave background radiation provides the main soft photon field in the interstellar medium, the size may be comparable to the plerions, whose size increases with the pulsar age. In the black-hole lepton accelerator scenario [Beskin et al. (1992), Hirotani & Okamoto (1998), Neronov & Aharonian (2007), Levinson (2011), Globus & Levinson (2014), Broderick & Tchekhovskoy (2015), Hirotani et al. (2017), Levinson & Segev (2017)], the emission size does not exceed 10 r_g. Noting that the angular resolution of the CTA is about five times better than that of the current IACTs, we propose to discriminate the present black-hole lepton accelerator scenario from the other scenarios by comparing the gamma-ray image and spectral properties.
Namely, if a VHE source has a point-like morphology like HESS J1800-2400C in a gaseous cloud (section S1), has two spectral peaks in 0.01-3 GeV and 0.01-1 TeV, but shows a (synchrotron) power-law component in neither radio nor X-ray wavelengths, we consider that the present scenario accounts for its emission mechanism.
Fig. 2 Magnetic-field-aligned electric field, E∥, on the meridional plane. The filled black circle on the bottom left corner shows a black hole rotating along the ordinate. The mass and the spin parameter of the black hole are M = 10 M⊙ and a = 0.99 r_g. Both axes are normalized by the gravitational radius, r_g = GM c⁻².
TeV Catalog (http:www.tevcat.uchicado.edu)
2 LAT Performance (https://www.slac.stanford.edu/exp/glast/groups/canda/lat Performance.htm)
3 CTA Performance (https://portal.cta-observatory.org/CTA Observatory/performance/SitePages/Hom.aspx)
Aharonian et al. (1997). F. A. Aharonian, A. M. Atoyan, T. Kifune, Mon. Not. R. Astron. Soc., 291, 162 (1997).
Beskin et al. (1992). V. S. Beskin, Ya. N. Istomin, V. Parev, Sov. Astron., 36(6), 642 (1992).
Black & Fazio (1973). J. H. Black, G. G. Fazio, ApJ, 185, L7-11 (1973).
Blandford & Znajek (1977). R. D. Blandford, R. L. Znajek, Mon. Not. R. Astron. Soc., 179, 433 (1977).
Blandford & Eichler (1987). R. D. Blandford, D. Eichler, Phys. Rep., 154, 1 (1987).
Bondi & Hoyle (1944). H. Bondi, F. Hoyle, Mon. Not. R. Astron. Soc., 104, 273 (1944).
Broderick & Tchekhovskoy (2015). A. E. Broderick, A. Tchekhovskoy, Astroph. J., 809, 97 (2015).
Bosch-Ramon et al. (2010). V. Bosch-Ramon, G. E. Romero, A. T. Araudo, J. M. Paredes, Astron. Astroph., 511, 8 (2010).
Boyer & Lindquist (1967). R. H. Boyer, R. W. Lindquist, J. Math. Phys., 265, 281 (1967).
Cheng et al. (1986a). K. S. Cheng, C. Ho, M. Ruderman, Astroph. J., 300, 500 (1986a).
Cheng et al. (2001). K. S. Cheng, M. Ruderman, L. Zhang, Astroph. J., 537, 964 (2000).
Chiang & Romani (1992). J. Chiang, R. W. Romani, Astroph. J., 400, 629 (1992).
Corral-Santana et al. (2016). J. M. Corral-Santana, J. Casares, T. Muñoz-Darias, F. E. Bauer, I. G. Martínez-Pais, D. M. Russell, Astron. Astroph., 587 (2016).
de Wilt et al. (2017). P. de Wilt, G. Rowell, A. J. Walsh, M. Burton, J. Rathborne, Y. Fukui, A. Kawamura, F. Aharonian, MNRAS, 468, 2093-2113 (2017).
Ginzburg & Syrovatskii (1964). V. L. Ginzburg, S. I. Syrovatskii, The Origin of Cosmic Rays (New York: Macmillan) (1964).
Globus & Levinson (2014). N. Globus, A. Levinson, Astroph. J., 796, 26 (2014).
Goldreich & Julian (1969). P. Goldreich, W. H. Julian, ApJ, 157, 869 (1969).
Hillas et al. (1998). A. M. Hillas, C. W. Akerlof, S. D. Biller, J. H. Buckley, D. A. Carter-Lewis, M. Catanese, M. F. Cawley, D. J. Fegan, et al., Astroph. J., 503, 744 (1998).
Hirotani & Okamoto (1998). K. Hirotani, I. Okamoto, Astroph. J., 497, 563 (1998).
Hirotani & Shibata (1999). K. Hirotani, S. Shibata, MNRAS, 308, 54 (1999).
Hirotani (2006). K. Hirotani, Mod. Phys. Lett. A (Brief Review), 21, 1319 (2006).
Hirotani (2013). K. Hirotani, Astroph. J., 766, 98 (2013).
Hirotani & Pu (2016b). K. Hirotani, H.-Y. Pu, Astroph. J., 818, 50 (2016).
Hirotani et al. (2016a). K. Hirotani et al., Astroph. J., 833, 142 (2016).
Hirotani et al. (2017). K. Hirotani, H.-Y. Pu, L. C.-C. Lin, A. K. H. Kong, S. Matsushita, K. Asada, H.-K. Chang, P.-H. T. Tam, ApJ, 845, 77 (2017).
Hirotani et al. (2018). K. Hirotani, H.-Y. Pu, S. Outmani, H. Huang, D. Kim, Y. Song, S. Matsushita, A. K. H. Kong, submitted to ApJ (2018).
Ichimaru (1979). S. Ichimaru, Astroph. J., 214, 840 (1979).
Issa & Wolfendale (1981). M. R. Issa, A. W. Wolfendale, Nature, 292, 430-433 (1981).
Kerr (1963). R. P. Kerr, Phys. Rev. Lett., 11, 237 (1963).
Komissarov & McKinney (2007). S. S. Komissarov, J. C. McKinney, Mon. Not. R. Astron. Soc., 377, L49 (2007).
Levinson (2011). A. Levinson, F. Rieger, Astroph. J., 730, 123 (2011).
Levinson & Segev (2017). A. Levinson, N. Segev, PRD, 96, id.123006 (2017).
Mahadevan (1997). R. Mahadevan, Astroph. J., 477, 585 (1997).
Mestel (1971). L. Mestel, Nature, 233, 149 (1971).
Narayan (1994). R. Narayan, I. Yi, Astroph. J., 428, L13 (1994).
McKinney et al. (2012). J. C. McKinney, A. Tchekhovskoy, R. R. Blandford, MNRAS, 423, 3083 (2012).
Neronov & Aharonian (2007). A. Neronov, F. A. Aharonian, Astroph. J., 671, 85 (2007).
Romani (1996). R. Romani, Astroph. J., 470, 469 (1996).
Song et al. (2017). Y. Song, H.-Y. Pu, K. Hirotani, S. Matsushita, A. K. H. Kong, H.-K. Chang, Mon. Not. R. Astron. Soc., 471, L135 (2017).
Tchekhovskoy et al. (2010). A. Tchekhovskoy, R. Narayan, J. C. McKinney, Astroph. J., 711, 50 (2010).
Tetarenko et al. (2016). B. E. Tetarenko, G. R. Sivakoff, C. O. Heinke, J. C. Gladstone, Astroph. J. Suppl., 222, 15 (2016).
van der Swaluw et al. (2001). E. van der Swaluw, A. Achterberg, Y. A. Gallant, G. Toth, Astron. Astroph., 380, 309 (2001).
Wilhelmi (2009). E. De Ona Wilhelmi, AIP Conf. Ser., 1112, p. 16-22 (2009).
|
[] |
[
"Characterization of the low temperature properties of a simplified protein model",
"Characterization of the low temperature properties of a simplified protein model"
] |
[
"Johannes-Geert Hagmann \nLaboratoire de Physique\nEcole Normale Supérieure de Lyon\nCNRS\n46 Allée d'Italie69364LyonFrance\n",
"Naoko Nakagawa \nDepartment of Mathematical Sciences\nIbaraki University\n310-8512MitoIbarakiJapan\n",
"Michel Peyrard \nLaboratoire de Physique\nEcole Normale Supérieure de Lyon\nCNRS\n46 Allé d'Italie69364LyonFrance\n"
] |
[
"Laboratoire de Physique\nEcole Normale Supérieure de Lyon\nCNRS\n46 Allée d'Italie69364LyonFrance",
"Department of Mathematical Sciences\nIbaraki University\n310-8512MitoIbarakiJapan",
"Laboratoire de Physique\nEcole Normale Supérieure de Lyon\nCNRS\n46 Allé d'Italie69364LyonFrance"
] |
[] |
Prompted by results that showed that a simple protein model, the frustrated Gō model, appears to exhibit a transition reminiscent of the protein dynamical transition, we examine the validity of this model to describe the low-temperature properties of proteins. First, we examine equilibrium fluctuations. We calculate its incoherent neutron-scattering structure factor and show that it can be well described by a theory using the one-phonon approximation. By performing an inherent structure analysis, we assess the transitions among energy states at low temperatures. Then, we examine non-equilibrium fluctuations after a sudden cooling of the protein. We investigate the violation of the fluctuation-dissipation theorem in order to analyze the protein glass transition. We find that the effective temperature of the quenched protein deviates from the temperature of the thermostat, however it relaxes towards the actual temperature with an Arrhenius behavior as the waiting time increases. These results of the equilibrium and non-equilibrium studies converge to the conclusion that the apparent dynamical transition of this coarse-grained model cannot be attributed to a glassy behavior.
|
10.1103/physreve.89.012705
|
[
"https://arxiv.org/pdf/1401.5954v1.pdf"
] | 37,370,288 |
1401.5954
|
0158ac58a8731acd457d940eabecf3e46851a3e8
|
Characterization of the low temperature properties of a simplified protein model
Johannes-Geert Hagmann
Laboratoire de Physique
Ecole Normale Supérieure de Lyon
CNRS
46 Allée d'Italie69364LyonFrance
Naoko Nakagawa
Department of Mathematical Sciences
Ibaraki University
310-8512MitoIbarakiJapan
Michel Peyrard
Laboratoire de Physique
Ecole Normale Supérieure de Lyon
CNRS
46 Allé d'Italie69364LyonFrance
Characterization of the low temperature properties of a simplified protein model
(Dated: January 24, 2014) PACS numbers:
Prompted by results that showed that a simple protein model, the frustrated Gō model, appears to exhibit a transition reminiscent of the protein dynamical transition, we examine the validity of this model to describe the low-temperature properties of proteins. First, we examine equilibrium fluctuations. We calculate its incoherent neutron-scattering structure factor and show that it can be well described by a theory using the one-phonon approximation. By performing an inherent structure analysis, we assess the transitions among energy states at low temperatures. Then, we examine non-equilibrium fluctuations after a sudden cooling of the protein. We investigate the violation of the fluctuation-dissipation theorem in order to analyze the protein glass transition. We find that the effective temperature of the quenched protein deviates from the temperature of the thermostat; however, it relaxes towards the actual temperature with an Arrhenius behavior as the waiting time increases. These results of the equilibrium and non-equilibrium studies converge to the conclusion that the apparent dynamical transition of this coarse-grained model cannot be attributed to a glassy behavior.
I. INTRODUCTION
Proteins are fascinating molecules due to their ability to play many roles in biological systems. Their functions often involve complex configurational changes. Therefore the familiar aphorism that "form is function" should rather be replaced by a view of the "dynamic personalities of proteins" [1]. This is also why proteins are intriguing for theoreticians: they provide a variety of yet unsolved questions. Besides the dynamics of protein folding, the rise in the time-averaged mean-square fluctuation ∆r² occurring at temperatures around ≈ 200 K, sometimes called the "protein dynamic transition" [2][3][4], is arguably the most prominent candidate in the search for unifying principles in protein dynamics. Protein studies led to the concept of the energy landscape [5,6]. According to this viewpoint, a protein is a system which explores a complex landscape in a highly multidimensional space, and some of its properties can be related to an incomplete exploration of the phase space. The protein glass transition, in which the protein appears to "freeze" when it is cooled down to about 200 K, is among them. Protein folding, too, can be related to this energy landscape. The famous kinetic limitation known as the Levinthal paradox, associated with the difficulty of finding the native state among a huge number of possible configurations, is partly resolved by the concept of a funneled landscape, which provides a bias towards the native state.
These considerations suggest that the dynamics of the exploration of protein phase space deserves investigation, particularly at low temperature where the dynamic transition occurs. But, in spite of remarkable experimental progress which allows one to "watch protein in action in real time at atomic resolution" [1], experimental studies at this level of detail are nevertheless extremely difficult.
Further understanding from models can help in analyzing the observations and developing new concepts. However, studies involving computer modeling to study the dynamics of protein fluctuations are not trivial either because the range of time scales involved is very large. This is why many meso-scale models, which describe the protein at scales that are larger than the atom, have been proposed. Yet, their validity to adequately describe the qualitative features of a real protein glass remains to be tested.
In this paper we examine a model with an intermediate level of complexity. This frustrated Gō model [7,8] is an off-lattice model showing fluctuations over a large range of time scales. Yet it is simple enough to allow the investigation of time scales up to 10⁹ times larger than the time scales of small-amplitude vibrations at the atomic level. The model, which includes a slight frustration through a dihedral angle potential that does not assume its minimum at the positions of the experimentally determined structure, exhibits a much richer behavior than a standard Gō model. Besides folding, one observes a rise of fluctuations above a specific temperature, analogous to a dynamical transition [9,10], and the coexistence of two folded states. This model has been widely used, and it is therefore important to assess to what extent it can describe the qualitative features of protein dynamics beyond the analysis of folding, for which it was originally designed. This is why we focus our attention on its low-temperature properties, in an attempt to determine whether a fairly simple model can provide some insight into the protein dynamical transition. The purpose of the present article is to clarify the origin of the transition in the computer model, and to determine similarities and differences with respect to experimental observations. Although the calculations are performed with a specific model, the methods are more general and even raise some questions for experiments, especially concerning the non-equilibrium properties.
This article is organized as follows. The numerical findings relating to the "dynamical transition" from previous studies [9,10] are presented in Sec. II. As a very large body of experimental studies of protein dynamics emanates from neutron scattering experiments, it is rational to seek a connection between theory and experiment by studying the most relevant experimental observable for dynamics, the incoherent structure factor (ISF). We calculate the ISF from molecular dynamics simulations of the model in Sec. III. We show that its main features can be well reproduced by a theoretical analysis based on the one-phonon approximation, which indicates that, at low temperature, the dynamics of the protein within this model takes place in a single minimum of the energy landscape. Sec. IV proceeds to an inherent structure analysis to examine how the transitions among energy states start to play a role when temperature increases. As the freezing of the protein dynamics at low temperature is often called a "glass transition", this raises the question of the properties of the model protein in non-equilibrium situations. In Sec. V we examine the violation of the fluctuation-dissipation theorem after a sudden cooling of the protein. We find that the effective temperature of the quenched protein, deduced from the Fluctuation-Dissipation Theorem (FDT), deviates from the temperature of the thermostat; however, it relaxes towards the actual temperature with an Arrhenius behavior as the waiting time increases. This would imply that the dynamics of the protein model is very slow but not actually glassy. This method could be useful to distinguish very slow dynamics from glassy dynamics, in experimental cases as well as in molecular dynamics simulations. Finally Sec. VI summarizes and discusses our results.
II. A DYNAMICAL TRANSITION IN A SIMPLE PROTEIN MODEL?
Following earlier studies [9][10][11], we chose to study a small protein containing the most common types of secondary structure elements (α helix, β sheets and loops): protein G, the B1 domain of the immunoglobulin binding protein [12] (Protein Data Bank code 2GB1). It contains 56 residues, with one α helix and four β strands forming a β sheet. We describe it by an off-lattice Gō model with a slight frustration, which represents its geometry in terms of a single particle per residue, centered at the location of each Cα carbon in the experimentally determined tertiary structure. The interactions between these residues do not distinguish between the types of amino acids. Details on the simulation process and the parametrization of the model are presented in the Appendix. In spite of its simplicity, this model appears to exhibit properties which are reminiscent of the protein dynamical transition. This shows up when one examines the temperature dependence of its mean-squared fluctuations [9] by calculating the variance ∆r² of the residue distances to the center of mass as a function of temperature, defined by
$$\Delta r^2 = \frac{1}{N}\sum_{i=1}^{N}\left(\overline{r_{i0}^2} - \overline{r_{i0}}^{\,2}\right). \tag{1}$$
Here, N denotes the number of residues, and r_{i0} is the distance of residue i from the instantaneous center of mass. The average $\overline{A}$ of an observable A(t) is the time average $\overline{A} = \frac{1}{T}\int_0^T A(t)\,dt$.
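Given a stored trajectory, the estimator of Eq. (1) takes only a few lines. The sketch below, with synthetic data, assumes coordinates stored as an array of shape (frames, N, 3); it is an illustration, not the simulation code used for the figures.

```python
import numpy as np

def msf_variance(traj):
    """Eq. (1): mean over residues of the time variance of the
    distance r_i0(t) to the instantaneous center of mass.

    traj: array of shape (n_frames, N, 3) with residue coordinates.
    """
    com = traj.mean(axis=1, keepdims=True)       # instantaneous center of mass
    r = np.linalg.norm(traj - com, axis=2)       # r_i0(t), shape (n_frames, N)
    return float(np.mean((r**2).mean(axis=0) - r.mean(axis=0)**2))

rng = np.random.default_rng(0)
ref = rng.normal(size=(56, 3))                      # a frozen 56-residue structure
rigid = np.repeat(ref[None, :, :], 200, axis=0)     # identical frames: no fluctuations
noisy = rigid + 0.1 * rng.normal(size=rigid.shape)  # thermal-like jitter

print(msf_variance(rigid), msf_variance(noisy))
```

For the rigid trajectory the variance vanishes, while the jittered one returns a positive ∆r², as expected for thermal fluctuations.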
The variances of 20 trajectories (Langevin dynamics simulations, each 3 × 10⁷ time units long) were averaged for each temperature point (⟨·⟩ denotes the average over independent initial conditions). Figure 1 shows the evolution of ⟨∆r²⟩ as a function of temperature. It exhibits a crossover in the fluctuations in the temperature range T/T_f = [0.4, 0.5], resembling the transition observed for hydrated proteins in neutron scattering and Mössbauer spectroscopy experiments [2][3][4], the so-called dynamical transition. Above T/T_f = 0.4, the fluctuations increase quickly with temperature, whereas a smaller, linear growth is observed below. One may wonder whether the complexity of the protein structure, reflected by the Gō model, is sufficient to lead to a dynamical transition, or whether, notwithstanding the resemblance between the onset of fluctuations in the present model and the experimentally determined transition, different and not necessarily related physical events may contribute to curves which by coincidence look similar. One can already note that, for a folding temperature in the range 330-350 K, the range T/T_f = 0.4-0.5 corresponds to 132-175 K, lower than the experimentally observed transition occurring around 180-200 K.
Had it been confirmed, the observation of a dynamical transition in a fairly simple protein model would have been very useful to shine a new light on this transition, which is still not fully understood. It is generally agreed that it is hydration-dependent [13], but different directions for a microscopic interpretation are still being pursued, suggesting the existence [14] or non-existence [15] of a transition in the solvent coinciding with the dynamical transition temperature. Recently, a completely different mechanism based on percolation theory for the hydration layer has been proposed [16]. The precise nature of the interaction between the solvent and proteins, and the driving factor behind the transition, hence still remain to be understood. The dynamical transition has often been called the protein glass transition due to its similarity with some physical properties of structural glasses at low temperatures. In particular, it was pointed out that, for both glasses and protein solutions, the transition goes along with a crossover towards non-exponential relaxation rates at low temperatures. The comparison is however vague, since the glass transition itself, and notably its mechanism, are ongoing subjects of research and debate. Our goal in this paper is to clarify the origin of the numerically observed transition, which moreover gives hints on the possibilities and limits of protein computer models.
III. ANALYSIS OF THE INCOHERENT STRUCTURE FACTOR
If computer models of proteins are to be useful they must go beyond a simple determination of the dynamics of the atoms, and make the link with experimental observations. This is particularly important for the "dynamical transition" because its nature in a real protein is not known at the level of the atomic trajectories. It is only observed indirectly through the signals provided by experiments. Therefore a valid analysis of the transition observed in the computer model must examine it in the same context, i.e. determine its consequences on the experimental observations. Along with NMR and Mössbauer spectroscopy, neutron scattering methods have been among the most versatile and valuable tools to provide insight on the internal motion of proteins [17,18]. Indeed, the thermal neutron wavelength being of the order of Ångströms and the kinetic energy of the order of meV, neutrons provide an adequate probe matching the length and frequency scales of atomic motion in proteins. An aspect brought forward in the discussion of the dynamical transition in view of the properties of glassy materials is the existence of a boson peak at low frequencies in neutron scattering spectra [19]. Such a broad peak appears to be a characteristic feature of unstructured materials as compared to the spectra of crystals.
A. Incoherent structure factor from molecular dynamics trajectories
In neutron scattering, the vibrational and conformational changes in proteins appear as a quasielastic contribution to the dynamic structure factor S(q, ω), which contains crucial information about the dynamics on the different time and length scales of the system. In scattering experiments one measures the double-differential scattering cross-section d²σ/(dΩ dE), which gives the probability of finding a neutron in the solid-angle element dΩ with an energy exchange dE after scattering. The total cross-section of the experiment is obtained by integration over all angles and energies. Neglecting magnetic interactions and considering only the short-range nuclear forces, the isotropic scattering is characterized by a single parameter b_i, the scattering length of the atomic species i [20], which can be a complex number with a non-vanishing imaginary part accounting for absorption of the neutron. If one defines the average over different spin states $b_{\mathrm{coh}} = \langle b \rangle$ as the coherent scattering length, and the root-mean-square deviation $b_{\mathrm{inc}} = \sqrt{\langle |b|^2 \rangle - |\langle b \rangle|^2}$ as the incoherent scattering length, the double-differential cross-section arising from the scattering of a monochromatic beam of neutrons with incident wave vector k₀ and final wave vector k by N nuclei of the sample can be expressed as [20]
$$\frac{d^2\sigma}{d\Omega\,dE} = N\,\frac{|\mathbf{k}|}{|\mathbf{k}_0|}\,b_{\mathrm{coh}}^2\,S_{\mathrm{coh}}(\mathbf{q},\omega) + N\,\frac{|\mathbf{k}|}{|\mathbf{k}_0|}\,b_{\mathrm{inc}}^2\,S_{\mathrm{inc}}(\mathbf{q},\omega)\,, \tag{2}$$
where q = k − k₀ is the wave-vector transfer in the scattering process. Denoting by r_i(t) the time-dependent positions of the sample nuclei, the coherent and incoherent dynamical structure factors are
$$S_{\mathrm{coh}}(\mathbf{q},\omega) = \frac{1}{2\pi N}\sum_{i,j}\int_{-\infty}^{\infty} dt\; e^{-i\omega t}\,\left\langle e^{-i\mathbf{q}\cdot(\mathbf{r}_i(t)-\mathbf{r}_j(0))}\right\rangle\,, \tag{3}$$
$$S_{\mathrm{inc}}(\mathbf{q},\omega) = \frac{1}{2\pi N}\sum_{i}\int_{-\infty}^{\infty} dt\; e^{-i\omega t}\,\left\langle e^{-i\mathbf{q}\cdot(\mathbf{r}_i(t)-\mathbf{r}_i(0))}\right\rangle\,. \tag{4}$$
The coherent structure factor contains contributions from the position of all nuclei. The interference pattern of S coh (q, ω) contains the average (static) structural information on the sample, whereas the incoherent structure factor S inc (q, ω) monitors the average of atomic motions as it is mathematically equivalent to the Fourier transform in space and time of the particle density autocorrelation function. In experiments on biological samples, incoherent scattering from hydrogens dominates the experimental spectra [18] unless deuteration of the molecule and/or solvent are used.
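A numerical estimate of the incoherent structure factor of eq. (4) can be sketched as follows (an illustrative NumPy implementation using a single time origin, random sampling of the q-sphere and a Gaussian smoothing window; the array layout, function name, and parameter choices are our own and not those of the nMoldyn analysis used in the paper):

```python
import numpy as np

def incoherent_structure_factor(traj, dt, q_mod, n_q=50, sigma_frac=0.05, seed=0):
    """Sketch of S_inc(q, omega) from an aligned trajectory.

    traj: (n_frames, n_particles, 3) coordinates, already aligned on the
    t = 0 reference structure; dt: time between frames; q_mod: modulus of
    the wave vector; n_q: number of q-vectors sampling the sphere;
    sigma_frac: Gaussian window width as a fraction of the trajectory length.
    """
    rng = np.random.default_rng(seed)
    n_frames = traj.shape[0]
    # Random directions uniformly distributed on the sphere of radius q_mod.
    v = rng.normal(size=(n_q, 3))
    q_vecs = q_mod * v / np.linalg.norm(v, axis=1, keepdims=True)
    # Self-intermediate scattering function, averaged over particles and
    # q-directions, with t = 0 as the single time origin.
    phases = np.einsum('fpx,qx->fpq', traj - traj[0], q_vecs)
    f_qt = np.exp(-1j * phases).mean(axis=(1, 2)).real
    # Gaussian window applied before the Fourier transform (omega smoothing).
    t = np.arange(n_frames) * dt
    window = np.exp(-0.5 * (t / (sigma_frac * t[-1])) ** 2)
    spectrum = np.abs(np.fft.rfft(f_qt * window)) * dt / (2 * np.pi)
    omega = 2 * np.pi * np.fft.rfftfreq(n_frames, d=dt)
    return omega, spectrum
```

A production analysis would average over many time origins and use the exact q-grid of the published calculation; this sketch only illustrates the structure of the estimator.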
Since the Gō-model represents a reduced description of the protein and the locations of the individual atoms in the residues are not resolved, we use "effective" incoherent weights of equal value for the effective particles of the model located at the positions of the C_α-atoms. Such a coarse-grained view assumes that the average number of hydrogen atoms and their location in the residues is homogeneous, which is of course a crude approximation, in particular in view of the extension and the motion of the side chains. These approximations are nevertheless acceptable here, as we do not intend to provide a quantitative comparison with experimental results, considering the simplifications and the resulting limitations of the model.
We generated Langevin and Nosé-Hoover dynamics trajectories of length t = 10⁵ time units, i.e. about 1000 periods of the slowest vibrational mode of the protein, after an equilibration of equal length, for protein G at temperatures in the interval T/T_f = [0.0459, 0.9633]. To compute the incoherent structure factor for the Gō-model of protein G, we use nMoldyn [21] to analyze the molecular dynamics trajectories generated at different temperatures. The data are spatially averaged over N_q = 50 wave vectors sampling spheres of fixed moduli |q| = 2, 3, 4 Å⁻¹, and the Fourier transformation is smoothed by a Gaussian window of width σ = 5% of the full length of the trajectory. Prior to the analysis, a root-mean-square displacement alignment of the trajectory onto the reference structure at time t = 0 is performed using Visual Molecular Dynamics (VMD) [22]. Such a procedure is necessary in order to remove the effects of global rotation and translation of the molecule. Figure 2 shows the frequency dependence of the incoherent structure factor S(q, ω) for a fixed wave vector q = 4 Å⁻¹ for a simulation with the Nosé-Hoover thermostat. In Fig. 2-a, the evolution of the low frequency range of the structure factor is shown for a range of temperatures including the supposed dynamical transition region T/T_f = [0.4, 0.5]. At low temperatures, up to T/T_f ≈ 0.51, individual modes are clearly distinguishable and become broadened as temperature increases. The slowest mode, located around 4 cm⁻¹, is also the highest in amplitude. It has a time constant of about τ = 80 in reduced units (≈ 8 ps). These well-defined lines are observed to shift towards lower frequencies with increasing temperature, similar to the phonon frequency shifts that are frequently observed in crystalline solids. As we show in the following section, the location of these lines can be calculated from a harmonic approximation associated to a single potential energy minimum.
Therefore, the shift in frequency and the appearance of additional modes can be seen as a signature of increasingly anharmonic dynamics involving several minima associated to different conformational substates. If, instead of the Nosé-Hoover thermostat, we consider the results obtained with Langevin dynamics and a friction constant γ = 0.01, the stronger coupling to the thermostat leads to low energy modes which are significantly broader than with the Nosé-Hoover thermostat, so that they can hardly be resolved anymore. However, the location of the peaks in the spectra remains the same as the one shown on Fig. 2-b. Besides the larger damping, Langevin calculations pose additional technical difficulties because Langevin dynamics does not preserve the total momentum of the system. The center of mass of the protein diffuses on the time scale of the trajectories. At low temperatures, when fluctuations are small, the alignment procedure can efficiently eliminate contributions from diffusion, as the center of mass is well defined for a rigid structure. At high temperatures, however, it cannot be excluded that the alignment procedure adds spurious contributions to the structure factor calculations, as the fluctuations grow in amplitude and the structure becomes flexible.
The analysis of the incoherent structure factor has shown that the low temperature dynamics of the Gō-model is dominated by harmonic contributions. An increase of temperature leads to a broadening and a shift of these modes until they eventually become continuously distributed. However, for both strong and weak coupling to the heat bath, no distinct change of behavior can be detected within the temperature range T/T_f = [0.4, 0.5] in which Fig. 1 shows an apparent dynamical transition. Instead, the numerical results suggest a continuous increase of anharmonic dynamics, and the absence of a dynamical transition in this model, even though, in the range T/T_f = [0.4, 0.5], the peaks of the structure factor in the Nosé-Hoover simulations broaden significantly. In the lowest temperature range the structure factor does not show any contribution reminiscent of a boson peak.
B. Structure factor from normal mode analysis in the one-phonon approximation

A further analysis can be carried out to determine whether the low temperature behavior of the protein model shows complex glassy behavior or simply the properties of a harmonic network made of multiple bonds. The picture of a rough energy landscape of a protein, with many minima separated by barriers of different heights, does not exclude the possibility that, in the low temperature range, the system behaves as if it were in thermal equilibrium in a single minimum of this multidimensional space. This would be the case if the time scale to cross the energy barrier separating this minimum from its neighboring basins were longer than the observation time (in numerical as well as in real experiments). In this case, it should be possible to describe the low temperature behavior of the protein in terms of a set of normal modes. To determine whether this is true for the Gō model that we study, one can compare the spectrum obtained from thermalized numerical simulations at low temperature (low temperature curves on Fig. 2) with the calculation of the structure factor in terms of phonon modes, in the spirit of the study performed in ref. [23] for the analysis of inelastic neutron scattering data of staphylococcal nuclease at 25 K on an all-atom protein model.
The theoretical basis for a quantitative comparison is an approximate expression of the quantum-mechanical structure factor S(q, ω) in the so-called one-phonon limit, which only accounts for single-quantum processes in the scattering events, assuming harmonic dynamics of the nuclei. In this approximation, the incoherent structure factor can be written as
$$S_{\mathrm{inc}}(\mathbf{q},\omega) = \sum_i \sum_\lambda b_i^2\, e^{\hbar\omega_\lambda\beta/2}\, e^{-2W_i(\mathbf{q})}\, |\mathbf{q}\cdot\mathbf{e}_{\lambda,i}|^2\, \frac{\hbar}{4\, m_i\, \omega_\lambda \sinh(\beta\hbar\omega_\lambda/2)}\, \delta(\omega-\omega_\lambda)\,. \tag{5}$$
Here, the indices i and λ denote the atom and normal modes indices respectively. e λ,i is the subvector relating to the coordinates of particle i of the normal mode vector associated to index λ. W i (q) denotes the Debye-Waller factor, which in the quantum calculation of harmonic motion reads [23]
$$W_i(\mathbf{q}) = \sum_\lambda \frac{|\mathbf{q}\cdot\mathbf{e}_{\lambda,i}|^2}{m_i\,\omega_\lambda}\,\bigl[2n(\omega_\lambda)+1\bigr]\,, \tag{6}$$
n(ω) being the Bose factor associated to the energy level ℏω.
For the calculations of the structure factor in the Gō-model within this approximation, we average S_inc(q, ω) on a shell of q-vectors by transforming the Cartesian coordinate vector (q_x, q_y, q_z) into spherical coordinates q = q·(sin θ cos φ, sin θ sin φ, cos θ), and generate a grid with N_q points for the interval φ = [0, 2π] and N_q points for θ = [0, π]. With this shell of vectors, we can evaluate the isotropic average S_inc(q, ω). In equation (5), ℏ appears as a prefactor to the Debye-Waller factor W_i(q) in the exponentials and in the inverse hyperbolic function. In order to evaluate the structure factor in reduced units of the Gō-model, we therefore need to estimate the order of ℏ in a similar way as we did for the energy scale (see Appendix), by comparing the fractions
$$\frac{\hbar\,\omega}{k_B T_f} = \frac{\hbar'\,\omega'}{(k_B T_f)'}\,, \tag{7}$$
the non-primed variables denoting quantities in reduced units. In the numerical evaluation of equation (5), we discretize the spectrum of frequencies from the smallest to the largest mode into 10000 grid points to evaluate the δ-function. We use N_q = 225 vectors to average on a shell of modulus |q| = 4 Å⁻¹. The summation runs over all eigenvectors except for the six smallest frequencies, which are numerically found to be close to zero and result from the invariance of the potential energy function under overall translation and rotation.
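The spherical averaging and the one-phonon sum of eq. (5) can be sketched as follows (an illustrative implementation in reduced units, with uniform incoherent weights b_i = 1, the Debye-Waller factor set to one as a low-temperature simplification, and the δ-function replaced by a narrow Gaussian on the frequency grid; the function name and grid sizes are our own):

```python
import numpy as np

def one_phonon_sinc(omega_grid, freqs, evecs, masses, q_mod, beta,
                    hbar=1.0, n_theta=15, n_phi=15, width=None):
    """One-phonon incoherent structure factor, spherically averaged.

    freqs: normal-mode angular frequencies omega_lambda (zero modes removed);
    evecs: (n_modes, n_particles, 3) mode eigenvectors e_{lambda,i};
    masses: (n_particles,); beta = 1/(k_B T) in reduced units.
    """
    if width is None:
        width = 2 * (omega_grid[1] - omega_grid[0])   # Gaussian stand-in for delta
    # Spherical grid of q-vectors, as described in the text.
    theta = np.linspace(0, np.pi, n_theta)
    phi = np.linspace(0, 2 * np.pi, n_phi)
    th, ph = np.meshgrid(theta, phi, indexing='ij')
    qs = q_mod * np.stack([np.sin(th) * np.cos(ph),
                           np.sin(th) * np.sin(ph),
                           np.cos(th)], axis=-1).reshape(-1, 3)
    s = np.zeros_like(omega_grid)
    for lam, w_lam in enumerate(freqs):
        # |q . e_{lambda,i}|^2 averaged over the q-shell, per particle.
        proj = np.einsum('qx,ix->qi', qs, evecs[lam]) ** 2
        # Detailed-balance and mode-amplitude factors of eq. (5),
        # with e^{-2W_i(q)} approximated by 1.
        amp = (proj.mean(axis=0) /
               (4 * masses * w_lam * np.sinh(beta * hbar * w_lam / 2))
               ).sum() * np.exp(hbar * w_lam * beta / 2) * hbar
        s += amp * np.exp(-0.5 * ((omega_grid - w_lam) / width) ** 2) \
             / (width * np.sqrt(2 * np.pi))
    return s
```

The spectrum is a sum of lines centered at the mode frequencies, as in Fig. 3-a; a resolution function could be applied by enlarging `width`.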
In a first step, we use the coordinates of the global minimum of the Gō-model for protein G, corresponding to the inherent structure with index α₀, to calculate the Hessian of the potential energy function. The second derivatives are calculated by numerically differentiating the analytical first derivatives at the minimum. As discussed in the Appendix, due to the presence of frustration in the potential, the experimental structure does not correspond to the global minimum of the model. The difference between the minimum and the experimental structure is however small, with a root-mean-square deviation of 0.16 Å and notable changes in position occurring only for a small number of residues located in the second turn.
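The Hessian construction described here, second derivatives from central finite differences of the analytical gradient, can be sketched as follows (a generic NumPy helper; the function name, step size, and gradient interface are our own):

```python
import numpy as np

def hessian_from_gradient(grad, x0, h=1e-5):
    """Hessian by central finite differences of an analytical gradient.

    grad: callable returning dV/dx as a flat array at flat coordinates x;
    x0: minimum-energy coordinates. Mirrors the procedure in the text:
    numerically differentiate the analytical first derivatives at the minimum.
    """
    n = x0.size
    hess = np.empty((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = h
        hess[:, j] = (grad(x0 + dx) - grad(x0 - dx)) / (2 * h)
    # Symmetrise to remove finite-difference noise before diagonalising.
    return 0.5 * (hess + hess.T)
```

The normal-mode frequencies then follow from the eigenvalues of the mass-weighted Hessian, discarding the six near-zero translation/rotation modes.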
To estimate the normal mode frequencies in absolute units, we use the conversion of the time unit (0.1 ps) introduced in the Appendix. The conversion into wave numbers, which is convenient for the comparison to experimental data and to the results from all-atom calculations, is achieved by noting that, from c k = f, we can assign the conversion 1 ps⁻¹ → 33.3 cm⁻¹ and multiply the frequencies by this scaling factor. Figure 3-a shows the results of the calculation of the incoherent structure factor S(q = 4 Å⁻¹, ω) in the one-phonon approximation at the temperature T = 0.0459 T_f. Since in this approximation the normal mode frequencies enter equation (5) with a delta function, there is no line width associated to these modes unless the structure factor is convoluted with an instrumental resolution function or a frictional model [18]. Comparing to the structure factor calculated from a molecular dynamics trajectory at the same temperature (Fig. 3-b), we find a good correspondence of the location of the lines and their relative amplitudes with respect to each other. Therefore the analysis of the incoherent structure factor using a harmonic approximation quantitatively confirms the dominant contribution of harmonic motion at low temperatures. In particular, the motion at very low temperatures occurs in a single energy well associated to one conformational substate. To see how this behavior changes with increasing temperature, in the next section we analyze the distribution of inherent structures with temperature.
IV. INHERENT STRUCTURE ANALYSIS IN THE DYNAMIC TRANSITION REGION
The freezing of the dynamics of a protein at temperatures below the "dynamical transition" is also described as a "glass transition". This leads naturally to consider an energy landscape with many metastable states, also called "inherent states" in the vocabulary of glass transitions. In refs. [9,11] we showed that the thermodynamics of a protein can be well described in terms of its inherent structure landscape, i.e. a reduced energy landscape which does not describe the complete energy surface but only its minima. This picture is valid at all temperatures, including around the folding transition and above. For our present purpose of characterizing the low temperature properties of a protein and probing its possible relation with glassy behavior, it is therefore useful to examine how the protein explores its inherent structure landscape in the vicinity of the dynamical transition. Here, we shall try to find how the number of populated minima changes with temperature around the transition region T/T_f = [0.4, 0.5] for the Gō-model of protein G, and which conformational changes can be associated to these inherent structures.
For three selected temperatures T₁ = 0.275 T_f, T₂ = 0.39 T_f, and T₃ = 0.482 T_f, shown on Fig. 1, we generated 10 trajectories from independent equilibrated initial conditions for 2·10⁷ reduced time units using the Nosé-Hoover thermostat. Along each trajectory, a minimization was performed every 2·10⁴ time units so as to yield N_m = 20000 minima for each temperature point. In the classification of these minima and their graphical representation, we only keep those minima which have been visited at least 2 times within the N_m minima, which led to discarding fewer than 10 events from the total number of counts. Most of the counts are concentrated on a small number of inherent structures. In figure 4, we show the relative populations of the inherent structures on a two-dimensional subspace spanned by the inherent-structure energy and the structural difference with the experimental structure, measured by the dissimilarity factor [10,24]. The radius of the circles centered at the location of the minima on this plane is set proportional to ½ log(w), where w is the absolute number of counts of this minimum along the trajectories. This definition is necessary to allow the graphical representation on the plane; however, it may visually mask the fact that linear differences in the radii translate into exponential differences in the frequency of visits of the minimum. As an example, the minima α₀, α₁, α₂, α₃ have the occupation probabilities p(α₀) = w(α₀)/N_m ≈ 92%, p(α₁) ≈ 8%, p(α₂) ≈ 0.1% and p(α₃) ≈ 0.02% at T = T₁.
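The bookkeeping of inherent-structure populations described above can be sketched as follows (a minimal helper; the label scheme, e.g. rounded inherent-structure energies or structure indices, and the function name are our own):

```python
from collections import Counter

def inherent_structure_populations(minima_labels, min_visits=2):
    """Occupation probabilities of inherent structures along trajectories.

    minima_labels: sequence of hashable identifiers of the minimum reached
    at each quench; minima visited fewer than `min_visits` times are
    discarded, as in the text. Returns p(alpha) = w(alpha)/N_m, sorted by
    decreasing population.
    """
    counts = Counter(minima_labels)
    kept = {m: w for m, w in counts.items() if w >= min_visits}
    n_m = len(minima_labels)
    return {m: w / n_m for m, w in
            sorted(kept.items(), key=lambda kv: -kv[1])}
```

Applied to the N_m = 20000 quenches per temperature, this yields the weights w used for the circle radii of figure 4.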
From figure 4, we notice that already at T₁ more than one minimum is populated, though the global minimum α₀ is dominant. In these figures, lines are drawn between minima that are connected along the trajectory, i.e. that form a sequence of events. It should however be noted that, since the sampling frequency is low, it cannot be excluded that an intermediate corresponding to an additional connection line is skipped. Connections between all minima may therefore exist even though they did not appear in the sequences observed in this study. Moving to higher temperatures T₂ and T₃, a larger number of minima which are both higher in energy and in structural dissimilarity appear. As the temperature rises, their population numbers become more important, as can be seen on figure 4. To examine the conformational changes associated with these minima, it is useful to align their coordinates onto the coordinates of the global minimum. The results of such an alignment are shown in figure 5. In this figure, the coordinates of the effective Gō-model particles located at the positions of the C_α-atoms for each amino-acid are drawn in red. One notices that the conformational changes associated to α₁-α₃ are small. It is interesting to notice that these small changes already appear in the range of temperatures where the rise in fluctuations still seems to grow linearly with the temperature. The next higher minima involve in particular a reorientation of a turn within the β-sheets of the protein. The temperature range at which these minima start to be populated coincides with the transition region revealed by the mean-distance displacement ⟨Δr²⟩, suggesting that the anharmonic motion required to make transitions between the basins of these minima is at its origin.
We again observe that the dynamic transition region does not exhibit any particular change of behavior that could deserve the name of "transition", but rather a gradual evolution which gets noticeable in the range T /T f = [0.4, 0.5]. In the next section we use a nonequilibrium approach to reveal whether the dynamics below the transition range can be characterized as "glassy" or not.
V. TEST OF THE FLUCTUATION-DISSIPATION THEOREM (FDT) -A NON-EQUILIBRIUM APPROACH
An alternative approach to study the low temperature transition, for which equilibrium simulations require a significant amount of computer time, consists in testing the response of the protein to external perturbations. Rather than waiting a long time to see rare fluctuations dominating the average fluctuation at low temperatures, the system is deliberately driven out of equilibrium, either to observe the relaxation back to equilibrium and its associated structural changes, or to compare the response to a continuous perturbation with the fluctuations at equilibrium.
The fluctuation-dissipation theorem (FDT) relates the response to small perturbations and the correlations of fluctuations at thermal equilibrium for a given system. In the past years, the theorem and its extensions have become a useful tool to characterize glassy dynamics in a large variety of complex systems [25]. For glasses below the glass transition temperature, the equilibrium relaxation time scales are very large, so that thermal equilibrium is out of reach [26]. Consequently, the FDT cannot be expected to hold in these situations, and the response functions and correlation functions in principle provide distinct information. In this section, we test the FDT for the Gō-model of protein G at various temperatures to see whether a signature of glassy dynamics is present in the system. To this aim, we first recall the basic definitions and notations for the theorem.
In our studies we start from a given initial condition and put the system in contact with a thermostat during a waiting time t_w. The end of the waiting time is selected as the origin of time (t = 0) for our investigation. If t_w is large enough (strictly speaking t_w → ∞) the system is in equilibrium at t = 0. We denote the Hamiltonian of the unperturbed system H₀, which under a small linear perturbation of order ε(t) acting on an observable B(t) becomes
$$H = H_0 - \varepsilon(t)\,B(t)\,, \tag{8}$$
where for ε = 0 we recover the unperturbed system. For any observable A(t), we accordingly define the two ensemble averages $\langle A(t)\rangle_0^{t_w}$ and $\langle A(t)\rangle_\varepsilon^{t_w}$, where the index indicates the average with respect to the unperturbed/perturbed system respectively and the exponent t_w indicates how long the system was equilibrated before the start of the investigation. The correlation function in the unperturbed system relating the observables A(t), B(t') at two instants of time t, t' is defined by
$$C_{AB}(t,t') = \langle A(t)B(t')\rangle_0^{t_w} - \langle A(t)\rangle_0^{t_w}\,\langle B(t')\rangle_0^{t_w}\,. \tag{9}$$
The susceptibility χ_AB(t), which measures the time-integrated response of the observable A(t) at the instant t to the perturbation ε(t') at the instant t', reads
$$\chi_{AB}(t) = \int_{t_0}^{t} dt'\,\frac{\delta \langle A(t)\rangle_\varepsilon^{t_w}}{\delta \varepsilon(t')}\,. \tag{10}$$
The index B in the susceptibility indicates that the response is measured with respect to the perturbation arising from the application of B(t), and the lower bound t 0 of the integral indicates the instant of time at which the perturbation has been switched on. The integrated form of the FDT states that the correlations and the integrated response are proportional and related by the system temperature at equilibrium
$$\chi_{AB}(t) = \frac{1}{k_B T}\,\Delta C \quad\text{with}\quad \Delta C = C_{AB}(t,t) - C_{AB}(t,0)\,. \tag{11}$$
In the linear response regime, for a sufficiently small and constant field ε, the susceptibility can be approximated as
$$\chi_{AB}(t) \approx \frac{\langle A(t)\rangle_\varepsilon^{t_w} - \langle A(t)\rangle_0^{t_w}}{\varepsilon}\,, \tag{12}$$
such that, in practice, verifying the FDT amounts to comparing observables on both perturbed and unperturbed trajectories.
The basic steps for a numerical experiment aiming to verify the FDT can be summarized as follows:
• Initialize two identical systems 1 and 2; 1 to be simulated with and 2 without perturbation.
• Equilibrate both systems without perturbation during t w .
• At time t 0 (in practice t 0 = 0, i.e. immediately after the end of the equilibration period) switch on the perturbation for system 1 and acquire data for both systems for a finite time t FDT .
• Repeat the calculation over a large number of initial conditions to yield the ensemble averages $\langle\cdot\rangle_0^{t_w}$ and $\langle\cdot\rangle_\varepsilon^{t_w}$; combine the data according to equation (11).
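The steps above can be sketched as follows (a schematic NumPy driver; `simulate` and `init_state` stand for a hypothetical propagator and initial-condition generator, not the actual simulation code of this work):

```python
import numpy as np

def fdt_check(simulate, init_state, n_runs, t_w, t_fdt, eps, seed=0):
    """Combine twin perturbed/unperturbed runs according to eqs. (11)-(12).

    simulate(state, steps, eps, rng) -> (state, obs) is assumed to evolve
    `state` for `steps` steps under a perturbation of amplitude eps and
    return the recorded observable A(t) = B(t) as an array of length steps.
    Returns chi(t) and Delta C(t).
    """
    rng = np.random.default_rng(seed)
    sum_pert = np.zeros(t_fdt)
    sum_unp = np.zeros(t_fdt)
    sum_sq = np.zeros(t_fdt)
    sum_a0t = np.zeros(t_fdt)
    sum_a0 = 0.0
    for _ in range(n_runs):
        state = init_state(rng)
        state, _ = simulate(state, t_w, 0.0, rng)   # equilibrate without field
        _, a1 = simulate(state, t_fdt, eps, rng)    # system 1: perturbed
        _, a0 = simulate(state, t_fdt, 0.0, rng)    # system 2: unperturbed
        sum_pert += a1
        sum_unp += a0
        sum_sq += a0 * a0
        sum_a0t += a0 * a0[0]
        sum_a0 += a0[0]
    m = sum_unp / n_runs
    c_tt = sum_sq / n_runs - m * m                   # C_AB(t, t)
    c_t0 = sum_a0t / n_runs - m * (sum_a0 / n_runs)  # C_AB(t, 0)
    chi = (sum_pert - sum_unp) / (n_runs * eps)      # eq. (12)
    return chi, c_tt - c_t0                          # chi(t) and Delta C(t)
```

Plotting `chi` against `c_tt - c_t0` and comparing the slope with 1/(k_B T) is then the graphical FDT test of Figs. 6 and 8.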
The protocol may be modified to include an external perturbation which breaks translational invariance in time. For instance, the initial state can result from a quench from a high to a low temperature. Then the system is only equilibrated for a short time t_w before the perturbation in the Hamiltonian is switched on. In this case the distribution of the realizations of the initial conditions is not the equilibrium distribution, so that the correlation function defined by Eq. (9) depends on the two times t and t' and not only on their difference.
A. Simulation at constant temperature
This case corresponds to t_w → ∞. In our calculations we start from an initial condition which has been thermalized for at least 5000 time units. The first step is to make an appropriate choice for the perturbative potential ε(t)B(t). An earlier application of the FDT to a protein model [27] has used the perturbative term
$$\varepsilon(t)B(t) = \varepsilon \sum_{i=1}^{N} \cos(k\,y_i)\,, \tag{13}$$
where k is a scalar, y_i the y coordinate of amino-acid i, and ε a nonzero constant for t > t₀. This perturbation is invariant neither under a translation of the system nor under its rotation. Although this does not invalidate the FDT, this choice poses some problems for the accuracy of the calculations because, even in the absence of internal dynamics of the protein, the perturbation varies as the molecule diffuses in space or rotates. To avoid this difficulty we selected the perturbation
$$W := -\varepsilon B(t) = -\varepsilon \sum_{i=1,\, i\neq 28}^{N} \cos(k\, r_{i,28})\,, \tag{14}$$
where r_{i,28} is the distance between amino-acid i and amino-acid 28, which has been chosen as a reference point within the protein because it is located near the middle of the amino-acid chain. Such a potential only depends on the internal state of the molecule, while it remains unaffected by its position in space. To test the FDT for the Gō-model of protein G using Eqs. (11) and (12), we add this potential W to the potential energy V of the model and we select A(t) = B(t). The thermal fluctuations are described with the same Langevin dynamics as previously. We switch on the perturbation for the equilibrated protein model and record 50000 to 400000 trajectories. Figure 6 shows the evolution of the relation between the susceptibility and the variation of the correlation function ΔC = C_AB(t,t) − C_AB(t,0). The straight lines represent the slopes expected from the FDT. One notices that, for ε = 0.05, at T = 0.826 T_f, in the long term the value of χ/ΔC stabilizes around a value which is away from the expected value 1/k_B T. At first glance, this result is reminiscent of the properties of a glass driven out of equilibrium. In this context, the deviation from the slope expected from the FDT is interpreted as the existence of an "effective temperature" for non-equilibrium systems. For the case studied here, finding an effective temperature would be surprising, as the results are obtained from measurements on a thermalized protein model, i.e. a system in a state of thermal equilibrium. How is it then possible to explain the apparent deviation from the FDT? The calculations performed with ε = 0.005 give the clue: they show that the deviation appeared because the perturbation was too large and outside the linear response regime assumed to calculate the susceptibility, since for this lower value of ε the deviation has vanished.
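The internal-coordinate perturbation of eq. (14) can be sketched as follows (an illustrative NumPy helper; the function name is our own, and the reference index is passed 0-based):

```python
import numpy as np

def perturbation_energy(coords, eps, k, ref):
    """W = -eps * sum_{i != ref} cos(k * r_{i,ref}), eq. (14).

    coords: (N, 3) positions of the effective particles; `ref` is the
    0-based index of the reference amino-acid (residue 28 in the text).
    Depends only on internal distances, hence invariant under global
    translations and rotations of the molecule.
    """
    d = np.linalg.norm(coords - coords[ref], axis=1)  # distances to reference
    mask = np.arange(len(coords)) != ref              # exclude the reference itself
    return -eps * np.cos(k * d[mask]).sum()
```

The invariance under rigid-body motion is the design point: contrary to eq. (13), a diffusing or rotating but internally frozen molecule leaves W unchanged.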
If one computes the average value of the perturbation energy ⟨W⟩ and compares it to the protein average energy ⟨E(T)⟩, for the case shown on Fig. 6 with ε = 0.05 one finds ⟨W⟩/⟨E(T)⟩ = 1.3·10⁻². This is small but, at temperatures approaching T_f, the protein is a highly deformable object and even a small perturbation can bring it out of the linear response regime. This shows up as a rise of χ versus time for ε = 0.05. At low temperatures the protein is more rigid and therefore more resilient to perturbations. The insets on Fig. 6 show that the fluctuation-dissipation relation at T = 0.275 T_f is almost perfectly verified for ε = 0.05, although a very small deviation can still be detected for this value of ε. Therefore a careful choice of parameters is necessary to test the FDT under controlled conditions. In particular, the perturbation needs to be carefully chosen to only probe the internal dynamics and not to dominate them.
B. Simulation of quenching
A typical signature of a glassy system is its aging after a perturbation. In the context of the protein "glass transition", one can therefore expect to detect a slow evolution of the system as a function of the time after which it has been brought to the glassy state. This is usually tested in quenching experiments, which can be investigated by a sharp temperature drop in the numerical simulations. Our calculations start from an equilibrium state at high temperature T = 1.40 T_f, which is abruptly cooled to a temperature T_q below the temperature of the dynamical transition studied in the previous sections. The model protein is then maintained at this temperature T_q by a Langevin thermostat. After a waiting time t_w we start recording the properties of the system over a time interval t_FDT = 25000 t.u. to probe the fluctuation-dissipation relation. In order to avoid nonlinear effects we use a small value of ε = 0.005. For such a weak perturbation, the response is weak compared to thermal fluctuations and a large number of realizations (50000 or more) is necessary to achieve reliable statistical averages. To properly probe the phase space of the model, these averages must be made over different starting configurations before quenching. This is achieved by starting the simulations from a given initial condition properly thermalized at T = 1.40 T_f in a preliminary calculation. Then we run a short simulation at this initial temperature, during which the unfolded conformations change widely from one run to another with different random forces, because at high temperatures the fluctuations of the protein are very large. The conformations reached after this short high-T thermalization are the conformations which are then quenched to T_q for the FDT analysis. Typical results are shown on Figs. 7 and 8 for two values of t_w. The time evolution of the energy shows that, after the waiting time t_w, even for the largest value t_w = 50000 t.u.
the model protein is still very far from equilibrium, because its energy is well above the ground-state energy (chosen as the reference energy 0). This non-equilibrium situation sometimes leads to rapid energy drops, generally accompanied by a decrease of the dissimilarity with the native state, which are superimposed on the random fluctuations that have to be expected for this system in contact with a thermal bath. As expected, the sharp variations of the conformations are more noticeable for the shortest waiting time. Figure 8 shows that, while for small values of ΔC = C_AB(t,t) − C_AB(t,0), which also correspond to shorter times after we start to collect the data for the FDT test, the variation of χ_AB(t) versus ΔC follows the curve given by the FDT relation, at larger ΔC the curve shows a significant deviation from the slope 1/T, which defines an effective temperature T_eff > T. The effective temperature is larger for short waiting times after the quench and decreases when t_w increases. This should be expected because, in the limit t_w → ∞, we should have T_eff → T when the system reaches equilibrium.
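The extraction of T_eff from the late-time slope of χ versus ΔC can be sketched as follows (an illustrative helper; the function name, tail fraction, and no-intercept fit are our own choices, not the exact fitting procedure of the paper):

```python
import numpy as np

def effective_temperature(chi, delta_c, k_b=1.0, tail=0.5):
    """Effective temperature from the late-time slope of chi versus Delta C.

    Fits chi = Delta C / (k_B T_eff) over the largest `tail` fraction of
    Delta C values, where a quenched system deviates from the equilibrium
    FDT slope 1/(k_B T).
    """
    order = np.argsort(delta_c)
    n_tail = max(2, int(tail * len(chi)))
    dc = delta_c[order][-n_tail:]
    ch = chi[order][-n_tail:]
    # Least-squares slope through the origin over the selected points.
    slope = np.dot(dc, ch) / np.dot(dc, dc)
    return 1.0 / (k_b * slope)
```

Applied at each waiting time t_w, this gives the T_eff(t_w; T) values whose relaxation is analyzed next.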
It is not surprising to find a deviation from the FDT behavior after a strong quench of the protein model, because we put the system very far from equilibrium. Therefore the observation of an effective temperature that differs from the actual temperature of the system is not a sufficient indication to conclude on the existence of a glassy state of the protein model. What is important is the timescale on which the system tends to equilibrium and how it depends on temperature. To study this we have performed a systematic study of the variation of T_eff(t_w; T) as a function of the waiting time t_w and temperature T, at the temperatures T = 0.1875 T_f, T = 0.2752 T_f, T = 0.3670 T_f, T = 0.4128 T_f and T = 0.4817 T_f. The temperature domain that we can study numerically is limited both from below and from above. At the lowest temperatures the relaxation of the system is very slow, so that t_w must be strongly increased. Moreover the speed at which the protein model explores its phase space by moving from one inherent structure to another becomes very low, and statistically significant data cannot be obtained without a large increase of t_FDT. Running enough calculations to get a good average over the realizations becomes impractical. As discussed above, at high temperatures the protein becomes "soft", so that one quickly leaves the linearity domain of the FDT unless the applied perturbation becomes very small. But then the large thermal fluctuations reduce the signal-to-noise ratio. Therefore the advantage of faster relaxation times at high temperature is wiped out by the need to make statistical averages over a much larger number of realizations. However, the temperature range over which one can get statistically significant results overlaps the temperature T ≈ 0.45 T_f above which the fluctuations of the model appear to grow faster (Fig. 1), so that one could expect to observe a change in the properties of the system at this temperature, if it existed.
At a given temperature T we have defined a relaxation time τ(T) by assuming that the effective temperature relaxes exponentially towards the actual temperature according to

$$\frac{T_{\rm eff}}{T} - 1 \propto \exp\left[-\frac{t_w}{\tau(T)}\right]. \qquad (15)$$

Figure 9 shows that this assumption is well verified by the numerical calculations. It should however be noticed that, for the longest waiting times, we may observe a large deviation from the exponential decay, as shown in Fig. 9 for the results at T = 0.4128 T_f. We attribute this to the limitations of our observations because, when the system has sufficiently relaxed so that its effective temperature approaches the actual temperature, all subsequent relaxations become extremely slow and may exceed the observation time t_FDT = 25000 t.u., so that the test of the FDT no longer properly probes the phase space. Increasing t_FDT by an order of magnitude might allow us to observe the relaxation further, but it is beyond our computing possibilities as we have to study at least 50000 realizations or more to achieve a reasonable accuracy. A fit of the values of (T_eff/T − 1) versus t_w, which takes into account the statistical weight of each point according to its standard deviation obtained from the uncertainties on T_eff, determines the relaxation time τ(T) and its corresponding standard deviation. The temperature dependence of τ(T) follows the Arrhenius relation

$$\tau(T) = \tau_0 \exp\left(\frac{E_a}{T}\right) \qquad (16)$$
with an activation energy E_a/(k_B T_f) = 0.5668 ± 0.1799. At the lowest temperature, the relaxation time estimated from the Arrhenius law is τ(T = 0.1875 T_f) ≈ 100000 t.u., so that calculations with t_w ≥ 100000 as well as t_FDT ≳ 100000 would be necessary, which is impractical. However the observed deviation from the Arrhenius law at low T cannot be attributed to a low-temperature glass transition, because such a transition would lead to a relaxation time larger than predicted by the Arrhenius relation, while we observe the opposite. In any case the Arrhenius relation is well verified for a temperature range which overlaps the temperature T ≈ 0.45 T_f, i.e. T_f/T ≈ 2.22, above which dynamical simulations suggested a possible increase of the fluctuations (Fig. 1). Therefore the relaxation of the protein model after a quench appears to follow a standard activated process, with an activation energy of the order of 0.57 k_B T_f, without any sign of a glassy behavior.
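Both fits reduce to linear regressions in logarithmic variables: Eq. (15) makes ln(T_eff/T − 1) linear in t_w, and Eq. (16) makes ln τ linear in 1/T. A self-consistency sketch of the second step — τ_0 and the temperature grid are assumed for illustration, not the paper's data:

```python
import numpy as np

# tau(T) is generated from the Arrhenius law (16) with the fitted
# activation energy; tau0 = 50 t.u. is an illustrative assumption.
E_a, tau0 = 0.5668, 50.0                        # E_a in units of k_B*T_f
T = np.array([0.2752, 0.3670, 0.4128, 0.4817])  # in units of T_f
tau = tau0 * np.exp(E_a / T)

# ln(tau) versus 1/T is linear; the slope is the activation energy
slope, intercept = np.polyfit(1.0 / T, np.log(tau), 1)
print(slope, np.exp(intercept))  # recovers E_a and tau0
```

On the real data the fit is weighted by the standard deviations of the individual τ(T) values, as described in the caption of Fig. 10.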
These observations can be compared with other studies of the fluctuations of the same Gō model of protein G [9,28]. Paper [9] investigated the fluctuations in equilibrium below the folding temperature T_f. In these conditions, the numerical simulations of a protein which is near its native state detect small up-and-down jumps of the dissimilarity factor, which, in any case, stays very low (d ≈ 0.06 for the equilibrium temperature T = 0.55 T_f) but switches between values that differ by ≈ 0.01. In its equilibrium state the protein may jump from one inherent structure to another, but these fluctuations are much slower than the ones that we observed shortly after a temperature quench, because they occur on a time scale of the order of 10^7 t.u. Their activation energy had been found to be E_B = 6.2 T_f, i.e. much higher than the activation energy E_a that characterizes the relaxation of the effective temperature that we measured. Those results are not in contradiction because they correspond to fundamentally different phenomena. The non-equilibrium fluctuations that we discuss in the present paper appear because the potential energy surface has minima on the side of the "funnel" that leads to the native state. Such minima, corresponding to protein structures which are not fully folded, can temporarily trap the protein in intermediate states. However the lifetime of these high-energy minima is only of the order of 10^5 t.u., i.e. they are short-lived compared to the residence time of the protein in an inherent structure close to the native state. When the protein escapes from one of these high-energy minima we observe an energy drop, as shown in Fig. 7.
The study that we presented here is neither a study of the protein near equilibrium, nor an investigation of the full folding process, which also occurs on much longer time scales (typically 10^7-10^8 t.u.) as observed in Refs. [9] and [28]. Therefore the activation energy E_a is also different from the energy barrier for folding. For the same reason the effective temperature after the quench, T_eff, should not be confused with the configurational temperature T_cnf defined in Ref. [28], which relates the entropy and energy of the inherent structures during folding. T_cnf gives a global view of the phase space explored during folding, and it evolves with a characteristic time of 10^7-10^8 t.u., as the folding itself. Compared to these scales, the FDT analysis that we presented here appears as a snapshot of the strongly non-equilibrium state created after a fast quench. It offers a new view of the time evolution of the protein model, which complements those presented earlier.
VI. DISCUSSION
The starting point of our study has been the numerical observation of a low temperature transition in a simplified protein model resembling the experimentally observed dynamical transition in hydrated protein samples. This suggested that, in spite of its simplicity, the frustrated Gō model could be used not only to study the folding of proteins but also their dynamics in the low temperature range, opening the way for an exploration of the glassy behavior of proteins.
We have therefore used different approaches to further characterize the properties of the protein model in the low-temperature range. Thermalized molecular dynamics simulations have been used to calculate the incoherent structure factor that one could expect to observe for the protein. It shows peaks that broaden as temperature increases, suggesting that the dynamics of the protein model is dominated by harmonic or weakly anharmonic vibrations. This has been confirmed by the calculation of the structure factor in the one-phonon approximation. All the main features of the structure factor obtained by simulations, such as the peak positions and even the power-law decay of the amplitude of the modes with frequency, are well reproduced. In the low-temperature range, the dynamics of the protein appears to occur in a single energy well of its highly multidimensional energy landscape. Of course this is no longer true when the temperature increases.
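The incoherent structure factor referred to here is the time Fourier transform of the self-intermediate scattering function. A one-dimensional toy sketch — all names and parameters are illustrative, not the paper's analysis code — in which a harmonic vibration at ω_0 should produce a spectral peak at ω_0, as for the weakly anharmonic peaks of Figs. 2 and 3:

```python
import numpy as np

def incoherent_structure_factor(traj, q, dt):
    """S_inc(q, w) from a single-particle trajectory traj[t] (1D toy).

    I(q, t) = < exp(i q (x(t0+t) - x(t0))) > averaged over t0;
    S_inc(q, w) is the magnitude of its time Fourier transform.
    """
    n = len(traj) // 2
    I = np.array([np.mean(np.exp(1j * q * (traj[lag:lag + n] - traj[:n])))
                  for lag in range(n)])
    S = np.abs(np.fft.rfft(I.real))
    w = 2 * np.pi * np.fft.rfftfreq(n, d=dt)
    return w, S

# toy check: a harmonic vibration at w0 produces a peak near w0
dt, w0 = 0.1, 2.0
t = np.arange(8192) * dt
x = 0.3 * np.cos(w0 * t)
w, S = incoherent_structure_factor(x, q=4.0, dt=dt)
print(w[np.argmax(S[1:]) + 1])  # close to w0 = 2.0
```

The zero-frequency (elastic) component is excluded when locating the peak, since the long-time plateau of I dominates the spectrum at ω = 0.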
By analyzing the population of the inherent structures, we have shown that, in the temperature range of the dynamical transition, there is a continuous increase of the number of states which are visited by the protein. The transition seems to be continuous, and it is likely that numerical observations suggesting a sudden increase have their origin in the limited statistics due to finite-time observation. Indeed, as shown for the example of protein G, the conformational transitions become extremely slow at low temperatures, such that the waiting time between conformational jumps may exceed the numerical (or the experimental) observation time. It is this breakdown of the ergodic hypothesis, together with the observation of non-exponential relaxation rates, which may have led to the emergence of the terminology "protein glass transition", in analogy with the phenomenology of glasses.
Non-equilibrium studies allowed us to systematically probe a possible glassy behavior by searching for violations of the fluctuation-dissipation theorem. First we have shown that these calculations must be carried out with care, because apparent violations are possible even when the system is in equilibrium, due to nonlinearities in the response. Except at very low temperatures, they can be observed even for perturbations as low as 1% of the potential energy of the system. Once this artifact is eliminated by the choice of a sufficiently small perturbative potential, we have shown that, after a sudden quench from an unfolded state to a very low temperature T_q, one does observe a violation of the FDT in the protein model, analogous to what is found in glasses. The quenched protein is characterized by an effective temperature T_eff > T_q. But the relaxation of the model towards equilibrium, deduced from the evolution of the effective temperature T_eff as a function of the waiting time after the quench, follows a standard Arrhenius behavior, even when the temperature crosses the value T ≈ 0.45 T_f at which dynamical simulations appeared to show a change in the amplitude of the fluctuations.
Although one cannot formally exclude that the results could be different for other protein structures or other simplified protein models, this work concludes that a coarse-grained model such as the Gō model is too simple to describe the complex behavior of protein G, and particularly its glass transition. Indeed such a model does not include a real solvent, which plays an important role in experiments. The thermostat used in the molecular dynamics simulations only partially models the effect of the surroundings of the protein. The apparent numerical transition previously observed for protein G may simply be related to finite-time observations of the activation of structural transitions, which occur on particularly long time scales for proteins. This is an obvious limitation of molecular dynamics calculations, but it could also serve as a warning to experimentalists. Indeed experiments can access much longer time scales. But they also deal with real systems which are much more complex than the Gō model. In these systems relaxations may become very long, so that the experimental observation of a transition could actually face the same limitations as the numerical simulations. Such a "time window" interpretation has also been brought forward for the experimentally observed dynamical transition, suggesting that the transition may in fact depend on the energy- and thus on the time-resolution of the spectrometer [29]. In this respect, as shown by our non-equilibrium studies to test the validity of the FDT, such measurements, if they could be performed for a protein, should tell us a lot about the true nature of the "glass transition" of proteins.
Appendix: Simulation and units
The forcefield and the parametrization of the simplified Gō-model are presented in [9,10]. In a standard Gō model the potential energy is written in such a way that the experimental structure is the minimal-energy state. We use here a weakly frustrated Gō model for which the dihedral-angle potential does not assume a minimum in the reference position defined by the experimentally resolved structure: it favors angles close to π/4 and 3π/4 irrespective of the secondary-structure element (helix, sheet, turn) the amino acid belongs to. This source of additional "frustration" affects the dynamics and thermodynamics of the model, leading to a more realistic representation [10]. This feature introduces additional complexity in the model because, besides its ground state corresponding to the experimental structure, the frustrated model exhibits another funnel for folding, which leads to a second structure which is almost a mirror image of the ground state but has a significantly higher energy (Fig. 11).
To control temperature in the molecular dynamics simulations, several types of numerical thermostats were used. Most of the calculations use underdamped Langevin simulations [30,31] with a time step dt = 0.1 t.u. and friction constants in the range γ = 0.01-0.025. The mass of all the residues is assumed to be equal to m = 10. Some calculations were also performed with the multi-thermostat Nosé-Hoover algorithm using the specifications defined in [32]. For simulation purposes, the variables in the Gō-model are chosen dimensionless (reduced units). Lengths are expressed in units of $\tilde{l} = 1$ representing Ångströms, and the average mass of an amino acid, 135 Da, has been expressed as 10 units of mass of the model. As the empirical potentials are defined at the mesoscopic scale of the amino acids, values for the interaction constants in the effective potentials cannot be easily estimated in absolute units. It is possible to estimate the energy scale of the model by comparing the reduced folding temperature of the Gō-model with a realistic order of magnitude of the folding temperature $\tilde{T}_f$ in units of K and setting
$$\tilde{k}_B \tilde{T}_f = \tilde{\varepsilon}\, k_B T_f, \qquad \mathrm{(A.1)}$$
where the variables on the left-hand side are given in SI units ($\tilde{T}_f$ is the estimate of the folding temperature), the untilded variables are written in reduced units, and $\tilde{\varepsilon}$ is the required energy scale in units of J to match between both. In our calculations k_B = 1 (meaning that reduced temperatures are expressed in reduced energy units) and T_f = 0.218 is deduced from equilibrium simulations. Then a simple dimensional analysis gives the time unit of the model as $\tilde{t} = \sqrt{m \tilde{l}^2 / \tilde{\varepsilon}}$. One arrives at an estimated time unit of $\tilde{t} \approx 0.1$ ps. In the paper we refer to $\tilde{t} \approx 0.1$ ps as the unit of time (t.u.) for the simulations of the Gō-model, keeping in mind that this can merely be seen as an order of magnitude in view of the approximations that lead to this number.
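The calibration chain of this appendix — Eq. (A.1) followed by the dimensional analysis for $\tilde{t}$ — can be checked numerically. In this sketch the absolute folding temperature is an assumed order-of-magnitude input, not a value from the paper:

```python
import math

# Calibration of the reduced units via Eq. (A.1); Tf_SI is an assumed
# order-of-magnitude estimate of the experimental folding temperature.
kB = 1.380649e-23              # J/K
Tf_reduced = 0.218             # folding temperature in model units
Tf_SI = 330.0                  # K, assumed experimental estimate
eps = kB * Tf_SI / Tf_reduced  # energy scale (eps tilde) in J

l_unit = 1.0e-10               # m: the model length unit, 1 Angstrom
m_unit = 13.5 * 1.66054e-27    # kg: 135 Da spread over 10 model mass units
t_unit = math.sqrt(m_unit * l_unit**2 / eps)
print(t_unit / 1e-12)          # time unit in ps, of order 0.1
```

With these inputs the time unit comes out close to 0.1 ps, consistent with the order of magnitude quoted above.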
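For concreteness, an underdamped Langevin thermostat of the kind referred to above can be sketched with a BAOAB splitting. This is a generic illustration, not the authors' integrator; the demo deliberately uses a larger friction than the production values γ = 0.01-0.025 so that a short run equilibrates:

```python
import numpy as np

def baoab(force, x, v, m, gamma, T, dt, steps, rng):
    """Underdamped Langevin dynamics with the BAOAB splitting (k_B = 1)."""
    c1 = np.exp(-gamma * dt)
    c2 = np.sqrt(T / m * (1.0 - c1 * c1))
    xs = np.empty(steps)
    for i in range(steps):
        v += 0.5 * dt * force(x) / m             # B: half kick
        x += 0.5 * dt * v                        # A: half drift
        v = c1 * v + c2 * rng.standard_normal()  # O: Ornstein-Uhlenbeck
        x += 0.5 * dt * v                        # A: half drift
        v += 0.5 * dt * force(x) / m             # B: half kick
        xs[i] = x
    return xs

# sanity check on a harmonic well: <x^2> should approach T / k_spring
rng = np.random.default_rng(0)
k_spring, m, T = 1.0, 10.0, 0.2
xs = baoab(lambda x: -k_spring * x, 0.0, 0.0, m, gamma=0.5, T=T,
           dt=0.1, steps=120000, rng=rng)
print(xs[20000:].var())  # close to T / k_spring = 0.2
```

The first 20000 steps are discarded as burn-in; the remaining samples should reproduce the canonical position variance to within a few percent.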
FIG. 1: Average mean-squared distance fluctuations Δr² as a function of temperature for protein G. Data adapted from [9]. The temperatures marked T1, T2, T3 are the temperatures studied in Sec. IV.
FIG. 2: (Color online) Incoherent structure factor S_inc(q = 4 Å^{-1}, ω) as a function of temperature for protein G (Nosé-Hoover thermostat). The unit of time has been converted to absolute units using the approximate conversion factor 1 t.u. = 0.1 ps. Panel a) shows a magnification of the structure factor in the low-frequency range. The structure factors on this panel have been shifted with an offset to avoid the overlap of curves at different temperatures. The different curves, from bottom to top, correspond to the temperatures T/T_f listed on the side of each panel.
FIG. 3: a) Frequency dependence of the incoherent structure factor S(q = 4 Å^{-1}, ω) (T = 0.0459 T_f) calculated from normal modes in the one-phonon approximation. This figure only shows the lower-frequency part of the spectrum. b) Incoherent structure factor S(q = 4 Å^{-1}, ω) calculated from Nosé-Hoover constant-temperature molecular dynamics at the same temperature.
FIG. 4: Inherent structure population of the Gō-model for protein G at temperatures T1, T2, T3 (from top to bottom) and their associated structural dissimilarity. Lines are drawn between states that are connected within a MD trajectory. The width of the circles is proportional to 1/2 log(w), where w is the total number of occurrences of a given minimum.
FIG. 5: (Color online) Shapes of inherent structures α1, α2, α6, α7. The reference coordinates of the global minimum α0 are shown by red balls surrounded by a thick black line. The coordinates of the global minimum are invisible for residues which overlap with the inherent-structure coordinates.
(depending on the value of ε) of duration 2000 time units for temperatures in the range T/T_f = [0.275, 0.826], covering both the low-temperature domain and the approach of the folding transition of the protein. The perturbation prefactors chosen in this first set of simulations were ε = 0.05 and ε = 0.005, and the wave number of the cosine term was k = 2π/10.

FIG. 6: (Color online) Variation of χ versus C for ε = 0.05 and ε = 0.005 at the equilibrium temperature T = 0.826 T_f. The insets show χ versus ΔC at T = 0.275 T_f. The oblique (red) lines show the slope 1/k_B T that would be expected according to the fluctuation-dissipation theorem. The results presented on this figure have been obtained from 400000 realizations.
FIG. 7: (Color online) Time evolution of the energy above the ground state, in units of k_B T_f, after a temperature jump from T = 1.4 T_f to T = 0.367 T_f for two different waiting times t_w, indicated in the title of each panel. The figure shows the evolution of the energy for 15 realizations (corresponding to the different colors) for the time t_FDT = 25000 t.u.
FIG. 8: (Color online) Test of the FDT for the temperature jump from T = 1.4 T_f to T = 0.367 T_f and different waiting times t_w, indicated in the title of each panel. The figures show the response function χ versus the variation ΔC of the correlation function. The full straight lines (red) show the result of the FDT. The results for t_w = 10000 t.u. have been obtained by averaging over 115000 independent realizations; those for t_w = 50000 t.u. have been obtained with 92000 realizations. The dotted (green) lines show the fit of the data for the domain ΔC > C_m, where C_m is the value above which the curve deviates significantly from the line of slope 1/T. The inverse of the slope of these fits defines an effective temperature T_eff. As indicated in the legend of each figure, the value obtained for T_eff slightly depends on the choice of C_m. By varying C_m we can therefore estimate the standard deviation on the value of T_eff.
FIG. 9: Variation of the effective temperature T_eff as a function of the waiting time t_w at two temperatures, T = 0.2572 T_f (open circles) and T = 0.4128 T_f (closed squares). The figure shows ln(T_eff/T − 1) versus t_w. The lines show a linear fit that takes into account the error bars on the determination of T_eff, determined as explained in the caption of Fig. 8. Such a fit determines τ(T) and its standard deviation. For T = 0.4128 T_f the point corresponding to the largest value of t_w is not included in the fit (see the discussion in the text).
FIG. 10: Variation of the relaxation time τ(T) versus temperature. We plot ln(τ) versus T_f/T. The line is a fit of the values which takes into account the uncertainties on the values of τ(T).
Figure 10 shows the variation of τ(T) with temperature, in logarithmic scale, versus T_f/T. It shows that, except for the value at the lowest temperature T = 0.1875 T_f, i.e. T_f/T = 5.45, within the numerical errors evaluated at each stage of our calculation, the relaxation time obeys the standard Arrhenius relation (16).
Acknowledgments. M.P. and J.G.H. would like to thank Ibaraki University, where part of this work was carried out, for support. This work was supported by KAKENHI No. 23540435 and No. 25103002. Part of the numerical calculations have been performed with the facilities of the Pôle Scientifique de Modélisation Numérique (PSMN) of ENS Lyon.
FIG. 11: Stable and metastable conformations of the protein G model obtained by a non-equilibrium cooling protocol followed by energy minimization (the various colors show minima obtained with different speeds of cooling). The horizontal axis shows the energy of each structure with reference to the global minimum, and the vertical axis indicates its dissimilarity with the ground state [10,24], lower values indicating more structural similarity between two conformations.
[1] K. Henzler-Wildman and D. Kern, Nature 450, 964-972 (2007).
[2] W. Doster, S. Cusack, and W. Petry, Nature 337, 754 (1989).
[3] F. Parak, E. N. Frolov, R. L. Mossbauer, and V. I. Goldanskii, J. Mol. Biol. 145, 825 (1981).
[4] H. Frauenfelder, G. A. Petsko, and D. Tsernoglou, Nature 280, 558 (1979).
[5] H. Frauenfelder, S. G. Sligar, and P. G. Wolynes, Science 254, 1598 (1991).
[6] J. D. Bryngelson, J. N. Onuchic, N. D. Socci, and P. G. Wolynes, Proteins: Struct. Funct. Genetics 21, 167 (1995).
[7] C. Clementi, H. Nymeyer, and J. N. Onuchic, J. Mol. Biol. 298, 937 (2000).
[8] J. Karanicolas and C. L. Brooks, J. Mol. Biol. 334, 309 (2003).
[9] N. Nakagawa and M. Peyrard, Proc. Natl. Acad. Sci. USA 103, 5279 (2006).
[10] N. Nakagawa and M. Peyrard, Phys. Rev. E 74, 041916 (2006).
[11] J.-G. Hagmann, N. Nakagawa, and M. Peyrard, Phys. Rev. E 80, 061907 (2009).
[12] A. M. Gronenborn, D. R. Filpula, N. Z. Essig, A. Achari, M. Whitlow, P. T. Wingfield, and G. M. Clore, Science 253, 657 (1991).
[13] P. W. Fenimore, H. Frauenfelder, B. H. McMahon, and R. D. Young, Proc. Natl. Acad. Sci. USA 101, 14408 (2004).
[14] S.-H. Chen, L. Liu, E. Fratini, P. Baglioni, A. Faraone, and E. Mamontov, Proc. Natl. Acad. Sci. USA 103, 9012 (2006).
[15] S. Pawlus, S. Khodadadi, and A. P. Sokolov, Phys. Rev. Lett. 100, 108103 (2008).
[16] H. Nakagawa and M. Kataoka, J. Phys. Soc. Jpn. 79, 083801 (2010).
[17] F. Gabel et al., Quart. Rev. Biophys. 35, 4 (2002).
[18] J. C. Smith, Quart. Rev. Biophys. 24, 3 (1991).
[19] H. Leyser, W. Doster, and M. Diehl, Phys. Rev. Lett. 82, 2987 (1999).
[20] M. Bée, Quasielastic Neutron Scattering: Principles and Applications in Solid State Chemistry, Biology and Materials Science, IOP Publishing, Bristol (1988).
[21] G. R. Kneller, V. Keiner, M. Kneller, and M. Schiller, Comp. Phys. Commun. 91, 191 (1995).
[22] W. Humphrey, A. Dalke, and K. Schulten, J. Mol. Graphics 14, 33 (1996).
[23] A. V. Goupil-Lamy, J. C. Smith, J. Yunoki, S. E. Parker, and M. Kataoka, J. Am. Chem. Soc. 119, 9268 (1997).
[24] The dissimilarity factor is a weighted distance map between two conformations. Let A and B be two conformations of the protein (for our purpose B will be the native structure) and a_ij, b_ij the distances between residues i and j in these conformations. The dissimilarity d(A, B) between the two conformations is defined in terms of the differences between the a_ij and b_ij, with an integer p which determines how much residues which are far apart contribute. We use here p = 2. In this article the reference conformation A is chosen as the NMR structure; see [10] for details.
[25] A. Crisanti and F. Ritort, J. Phys. A: Math. Gen. 36, R181 (2003).
[26] L. Berthier and G. Biroli, Glasses and Aging: A Statistical Mechanics Perspective, in Meyers (ed.), Encyclopedia of Complexity and Systems Science, Springer, Berlin (2009).
[27] K. Hayashi and M. Takano, Biophys. J. 93, 895 (2007).
[28] N. Nakagawa, Phys. Rev. Lett. 98, 128104 (2007).
[29] T. Becker, J. A. Hayward, J. L. Finney, R. M. Daniel, and J. C. Smith, Biophys. J. 87, 1436 (2004).
[30] A. Brunger, C. L. Brooks III, and M. Karplus, Chem. Phys. Lett. 105, 495 (1984).
[31] J. D. Honeycutt and D. Thirumalai, Biopolymers 32, 695 (1992).
[32] G. Martyna, M. E. Tuckerman, D. J. Tobias, and M. L. Klein, Mol. Phys. 87, 1117 (1996).
Signatures of very high energy physics in the squeezed limit of the bispectrum

Diego Chialva (diego.chialva@umons.ac.be)
Service de Mécanique et gravitation, Université de Mons, Place du Parc 20, 7000 Mons, Belgium

4 Oct 2012. arXiv:1108.4203; DOI: 10.1088/1475-7516/2012/10/037.

We investigate the signatures in the squeezed limit of the primordial scalar bispectrum due to modifications of the standard theory at high energy. In particular, we consider the cases of modified dispersion relations and/or modified initial quantum state (both in the Boundary Effective Field Theory and in the New Physics Hyper-Surface formulations). Using the in-in formalism we study in detail the squeezed limit of the contributions to the bispectrum from all possible cubic couplings in the effective theory of single-field inflation. We find general features such as enhancements and/or non-local shape of the non-Gaussianities, which are relevant, for example, for measurements of the halo bias and which distinguish these scenarios from the standard one (with Bunch-Davies vacuum as initial state and standard kinetic terms). We find that the signatures change according to the magnitude of the scale of new physics, and therefore several pieces of information regarding high energy physics could be obtained in case of detection of these signals, especially bounds on the scales of new physics.
Introduction
The non-Gaussian features of the primordial cosmological perturbations are going to become one of the most important experimental and theoretical tools for investigating the early history of the Universe [1]. The scalar bispectrum, coming from the three-point correlator of the comoving curvature perturbation ζ, is the leading contribution to the scalar non-Gaussianities in the perturbative theory.
In recent times, several reasons of interest have emerged for investigating its so-called squeezed limit, when one of the three momenta it depends on in Fourier space is much smaller than the other two: k_1 ≪ k_S, k_{2,3} ≃ k_S. In fact, large-scale probes of the primordial non-Gaussianities like the halo bias are expected to soon compete with or even surpass in accuracy those of the CMBR [2-9], and this prompts for a better understanding of their sensitivity to the details of the bispectrum and the theoretical models. Furthermore, studying the squeezed limit provides powerful results capable of distinguishing (hopefully falsifying) classes of models. This is particularly true for single-field models (or, more generally, for models where there are only adiabatic perturbations). All single-field models (adiabatic perturbations) with Bunch-Davies vacuum and standard kinetic terms (we call this the standard scenario from now on) predict the so-called local form of the bispectrum and a very low level of non-Gaussianities in the squeezed limit.^{1,2}

^1 As in [20], our scenario differs from that considered in [21,22], where the only modification to the field equations was a change in the speed of sound (the modifications at the level of the Lagrangian depended only on first-order derivatives of the fields). In particular, we include also derivatives of higher order, and the effect on the field equations is more profound. In many scenarios with time-varying speed of sound the Maldacena condition [10,11] on the squeezed limit of the bispectrum holds, see [12,23-25]. Similarly, see [12], in the adiabatic ekpyrosis scenario of [26,27]. Some contrived models (non-attractor background) [28] do not satisfy the relation [12].

^2 The halo bias and the squeezed limit for this case have been studied in the literature mostly by using the template of [15] and not the actual field-theoretical results for the bispectrum. We will show that this appreciably changes the outcomes (this was noted also in [29], which however still employed the template and did not study in detail the field-theoretical results). After the appearance of this article (August 2011), we were signalled reference [30], which also makes remarks, focusing on the presence of enhancements, about the squeezed limit in a particular formalism (density-matrix approach) with a specific initial condition for the fields (akin to the one called BEFT in section 3.3, which explicitly breaks scale invariance) and dealing only with one specific cubic coupling (the interaction (18)). See also [31], which analyses the same scenario. Also in these articles the study of the general field-theoretical results was not undertaken and part of the signatures were missed (see our sections 5 and 6).
We then present the technical computation of the bispectrum in the squeezed limit in section 5. For the reader's convenience, we proceed in three steps. First, in sections 5.1 and 5.2, we discuss in full detail two examples of contributions from two well-known cubic couplings (with and without higher derivatives). These examples are meant to help the reader quickly grasp the features of the more general computations and results later on.
Second, in section 5.3, we move to the general analysis and study the contributions to the bispectrum in the squeezed limit from all allowed couplings in the effective theory (single-field).
As a third step, armed with these general results, in section 6 we analyse the features of the various contributions, individuating the leading ones and providing the predictions for the bispectrum in the squeezed limit for the modified scenarios we have considered.
Finally, we briefly comment on the consequences of our results for the halo bias in section 7, and conclude in section 8. Some useful equations and material, which would have weighed down the main text, have been moved to the appendices.
Formalism and notation
Let us first introduce the basic notation and elements of the formalism of cosmological perturbation theory, for more details see [10]. Scalar perturbations in single-field slow-roll inflationary models are efficiently parametrized by the comoving curvature perturbation 3
$$\zeta(\eta, \vec{x}) = \int \frac{d^3k}{(2\pi)^{3/2}}\, \zeta_{\vec{k}}(\eta)\, e^{i\vec{k}\cdot\vec{x}} \,. \qquad (1)$$
The slow-roll parameters are defined as $\epsilon = \frac{\dot\phi^2}{2H^2 M^2_{\rm Planck}}$, $\eta_{sl} = -\frac{\ddot\phi}{\dot\phi H} + \frac{\dot\phi^2}{2H^2 M^2_{\rm Planck}}$, where $H$ is the Hubble rate, $\phi$ is the inflaton (background), and $a(\eta) \sim -\frac{1}{H\eta}$ ($\eta < 0$) is the scale factor at zeroth order in slow-roll. Throughout the paper, dots (primes) indicate derivatives with respect to cosmic time $t$ (conformal time $\eta$).
The two-point function is defined as
$$\langle \zeta(\vec{k}_1)\zeta(\vec{k}_2)\rangle = (2\pi)^3 \delta^{(3)}(\vec{k}_1 + \vec{k}_2)\, P(k_1)\,, \qquad k_1 = |\vec{k}_1| \,, \qquad (2)$$
and, following [56], we define the bispectrum $B$ as
$$\langle \zeta_{\vec{k}_1}(\eta)\zeta_{\vec{k}_2}(\eta)\zeta_{\vec{k}_3}(\eta)\rangle = (2\pi)^3 \delta^{(3)}\Big(\sum_i \vec{k}_i\Big)\, B(\vec{k}_1, \vec{k}_2, \vec{k}_3, \eta) \,, \qquad (3)$$
where, at leading order in the perturbative expansion,
$$\langle \zeta(\eta,\vec{x}_1)\zeta(\eta,\vec{x}_2)\zeta(\eta,\vec{x}_3)\rangle = -2\,\mathrm{Re}\left[\int_{\eta_{in}}^{\eta} d\eta'\; i\, \langle \psi_{in}|\zeta(\eta,\vec{x}_1)\zeta(\eta,\vec{x}_2)\zeta(\eta,\vec{x}_3)\, H^{(I)}(\eta')|\psi_{in}\rangle \right]. \qquad (4)$$
Here, $\eta_{in}$, $|\psi_{in}\rangle$ are the initial time and state, and $H^{(I)}$ is the interaction Hamiltonian.
A key ingredient in the computation of correlators is the Wightman function
$$(2\pi)^3\delta^{(3)}(\vec{k}_1+\vec{k}_2)\, G_k(\eta,\eta') \equiv \langle \zeta_{\vec{k}_1}(\eta)\zeta_{\vec{k}_2}(\eta')\rangle = (2\pi)^3\delta^{(3)}(\vec{k}_1+\vec{k}_2)\, \frac{H^2}{\dot\phi^2}\, \frac{f_k(\eta)}{a(\eta)}\, \frac{f^*_k(\eta')}{a(\eta')} \qquad (5)$$
where $f_k(\eta)$, $f^*_k(\eta)$ are two linearly independent solutions of the field equation
$$f''_k + \left[\omega(\eta,k)^2 - \frac{z''}{z}\right] f_k = 0 \,, \qquad z = \frac{a\dot\phi}{H} \,, \qquad (6)$$
such that, once quantized,
$$\hat\zeta_{\vec{k}}(\eta) = \frac{f_k(\eta)}{z}\, \hat{a}^\dagger_{\vec{k}} + \frac{f^*_k(\eta)}{z}\, \hat{a}_{\vec{k}} \,. \qquad (7)$$
Imposing the standard commutation relation on the operators $\hat{a}^\dagger_{\vec{k}}$, $\hat{a}_{\vec{k}}$ entails a certain normalization for the Wronskian of $f^*_k(\eta)$, $f_k(\eta)$. Different choices for $f^*_k(\eta)$, $f_k(\eta)$ correspond to different choices of initial state [10]. In equation (6), $\omega$ is the (comoving) frequency as read from the effective action: we will limit ourselves to the isotropic case, where $\omega$ depends only on $k \equiv |\vec{k}|$, dropping the arrow symbol.
Brief review of the scenarios under consideration
To ease the discussion of the results for the three-point function and make the paper self-contained, let us recap here the main features of the single-field scenarios we will deal with. For more details see [15][16][17][18][19][20][32][33][34][35][36][37][38].
The standard scenario
In the standard scenario, as defined in the introduction, the kinetic terms are standard, the dispersion relation is the usual Lorentzian one and the mode functions $f_k(\eta)$ are determined by choosing the Bunch-Davies vacuum, so that
$$\omega(\eta,k)^2 = k^2 \,, \qquad f_k(\eta) = \frac{\sqrt{\pi}}{2}\, e^{i\frac{\pi}{2}\nu + i\frac{\pi}{4}}\, \sqrt{-\eta}\, H^{(1)}_\nu(-k\eta) \,, \qquad (8)$$
where $H^{(1)}_\nu$ is the Hankel function of the first kind, with $\nu = \frac{3}{2} + \frac{1-n_s}{2}$, and the equations and solutions are extrapolated up to $\eta_{in} = -\infty$, where the initial state has been chosen as the empty adiabatic vacuum (Bunch-Davies). At leading order in slow-roll the spectral index reads $n_s = 1 - 6\epsilon + 2\eta_{sl}$ and the late-time behaviour of the two-point function is
$$P_{st}(k) \underset{\eta\to 0}{\sim} \frac{H^2}{4 M^2_{\rm Planck}\, \epsilon\, k^3} \qquad (9)$$
where the suffix st indicates that we are considering the standard case.
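For $\nu = 3/2$ (exact de Sitter, $n_s = 1$), the mode in (8) reduces to the familiar closed form $f_k(\eta) = \frac{1}{\sqrt{2k}}\, e^{-ik\eta}\big(1 - \frac{i}{k\eta}\big)$, whose Wronskian $f_k\,\partial_\eta f^*_k - f^*_k\,\partial_\eta f_k = i$ is $\eta$-independent (the overall sign depends on the ordering convention). A quick numerical sketch of both statements (illustrative, not from the paper):

```python
import numpy as np
from scipy.special import hankel1

def f_hankel(k, eta, nu=1.5):
    """Bunch-Davies mode of eq. (8): (sqrt(pi)/2) e^{i pi nu/2 + i pi/4} sqrt(-eta) H^(1)_nu(-k eta)."""
    return 0.5 * np.sqrt(np.pi) * np.exp(1j * np.pi * (nu / 2 + 0.25)) \
           * np.sqrt(-eta) * hankel1(nu, -k * eta)

def f_closed(k, eta):
    """Closed form for nu = 3/2: e^{-ik eta} (1 - i/(k eta)) / sqrt(2k)."""
    return np.exp(-1j * k * eta) * (1 - 1j / (k * eta)) / np.sqrt(2 * k)

def f_closed_prime(k, eta):
    """Analytic d/deta of the closed-form mode."""
    return np.exp(-1j * k * eta) / np.sqrt(2 * k) * (-1j * k - 1 / eta + 1j / (k * eta**2))

k = 2.0
assert abs(f_hankel(k, -0.7) - f_closed(k, -0.7)) < 1e-10

# Wronskian f f*' - f' f* = i, constant in eta (eq. (6) has no first-derivative term)
for eta in (-5.0, -0.3):
    W = f_closed(k, eta) * np.conj(f_closed_prime(k, eta)) \
        - f_closed_prime(k, eta) * np.conj(f_closed(k, eta))
    assert abs(W - 1j) < 1e-10
```

The constancy of the Wronskian is what fixes the normalization of $f_k$ once the commutation relations below (7) are imposed.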
Modified dispersion relations
Higher-derivative corrections to the action are deeply rooted in the well-established effective field theory framework. In fact, one expects that at certain times at least some of the higher-derivative corrections to the action cannot be treated perturbatively because, since the metric scale factor $a(\eta)$ is rapidly decreasing back in time, the suppressing factors, given by powers of ratios such as $\frac{p}{\Lambda} = \frac{k}{a(\eta)\Lambda}$, where $\Lambda$ is the relevant high-energy scale, are not small any more. These corrections, no longer small, can lead to modified dispersion relations. As recalled in the introduction, such a possibility appears in a variety of scenarios, see for example those in [46][47][48][49][50][51][52][53][54][55].
As we said, we adopt a phenomenological approach and consider generic modifications to the dispersion relation due to higher-derivative corrections to the effective action, encoding them in a function $F(-\frac{p}{\Lambda})$ by writing the comoving frequency as
$$\omega(\eta,k) = a(\eta)\,\omega_{\rm phys}(p) = a(\eta)\, p\, F\!\left(-\frac{p}{\Lambda}\right) = k\, F\!\left(\frac{H}{\Lambda}\, k\eta\right), \qquad F(x\to 0) \to 1 \,, \qquad \frac{H}{\Lambda} \ll 1 \,. \qquad (10)$$
Figure 1: Generic example of dispersion relation with violation of WKB at early times (regions I, II, III, IV are marked on the plot).

In [20,57], it has been shown that if the modified dispersion relations always satisfy the WKB conditions (adiabaticity) at early times (that is, WKB is violated only when the dispersion is in the standard linear regime $\omega \simeq k$, as usual), the bispectrum is similar to that of the standard scenario, and so will be its squeezed limit. This is easily understood, since particle production is practically absent in those cases.
Instead, [20] has shown that the bispectrum is particularly sensitive to modifications that lead to a violation of the WKB conditions (adiabaticity) at early times, where particle production is more substantial. We will therefore focus on those scenarios.
The generic shape of a dispersion relation with WKB violation once at early times is depicted in figure 1. We stress that we are not proposing any specific model, which would be a strong assumption on the high-energy physics: we consider the most generic function $F(-\frac{p}{\Lambda})$ in equation (10), with the only constraint that adiabaticity is violated once at early times (it is easy to extend our analysis to multiple violations). We are in fact interested in the general consequences of this scenario on the bispectrum. A similar phenomenological approach and generic dispersion relations have been considered and discussed, studying the spectrum, for example in the references [16][17][18][19], see also [52,53].
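As a concrete toy illustration of such a dispersion relation (our own example, not a model from the paper), take a Corley-Jacobson-type profile $F(x)^2 = 1 - 2x^2 + 1.0025\,x^4$ with a deep dip near $x = p/\Lambda \simeq 1$. Since $\omega = k\,F(x)$ and, for fixed $k$, $x \propto -\eta$ with $dx/d\eta \propto kH/\Lambda$, the standard adiabaticity measures $|\omega'|/\omega^2 = (H/\Lambda)\,|F'|/F^2$ and $|\omega''|/\omega^3 = (H/\Lambda)^2\,|F''|/F^3$ carry powers of $H/\Lambda \ll 1$ and exceed unity only in a narrow interval around the dip, as for region II of figure 1:

```python
import numpy as np

H_over_L = 0.01                      # H/Lambda << 1 (illustrative value)
x = np.linspace(0.5, 1.4, 20001)     # x = p/Lambda; for fixed k, x grows toward early times
F2 = 1 - 2 * x**2 + 1.0025 * x**4    # toy F(x)^2 with a deep dip near x ~ 1 (our assumption)
F = np.sqrt(F2)
Fp = np.gradient(F, x)
Fpp = np.gradient(Fp, x)

# adiabaticity (WKB) measures |w'|/w^2 and |w''|/w^3 for w = k F(x):
eps_wkb = H_over_L * np.abs(Fp) / F**2 + H_over_L**2 * np.abs(Fpp) / F**3

violated = x[eps_wkb > 1]            # WKB-violating interval (region II of figure 1)
assert violated.size > 0
x_lo, x_hi = violated.min(), violated.max()
Delta = (x_hi - x_lo) / x_hi         # maps onto (eta_I - eta_II)/eta_I of eq. (12) below
print(f"WKB violated for p/Lambda in [{x_lo:.3f}, {x_hi:.3f}], Delta = {Delta:.2f}")
assert x_lo < 1.0 < x_hi and Delta < 0.3
```

Making the dip shallower, or decreasing $H/\Lambda$, shrinks the violating interval; this is the mechanism behind the backreaction constraint (12) quoted below.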
A general treatment of the field equation (6) in these cases yields the following solution [20]:
$$f_k(\eta) = \begin{cases} \varsigma_k\, u_1(\eta,k) & \text{I}: \ \eta < \eta^{(k)}_I \\ B_1 U_1(\eta,k) + B_2 U_2(\eta,k) & \text{II}: \ \eta^{(k)}_I < \eta < \eta^{(k)}_{II} \\ \alpha^{mdr}_k u_1(\eta,k) + \beta^{mdr}_k u_2(\eta,k) & \text{III}: \ \eta^{(k)}_{II} < \eta < \eta^{(k)}_{III} \\ D_1 V_1(\eta,k) + D_2 V_2(\eta,k) & \text{IV}: \ \eta^{(k)}_{III} < \eta \end{cases} \qquad (11)$$
where $u_{1,2}(\eta,k)$ can be very well approximated using the WKB method, while $V_{1,2}(\eta,k)$ and $U_{1,2}(\eta,k)$ need to be found by other means, as the WKB conditions are not satisfied 4 . As is known, given equation (6), the times $\eta^{(k)}_{I,II,III}$ where the WKB approximation first fails are in the proximity of the turning points of $V(\eta,k)^2 \equiv \omega(\eta,k)^2 - \frac{z''}{z}$, see [58,59]. They depend on $k$. When clear from the context we will omit the label $(k)$. Looking at figure 1, these times will be in the proximity of $\eta_{1,2,3}$ [16][17][18][19][20]. The coefficients $D_{1,2}$, $B_{1,2}$, $\alpha^{mdr}_k$, $\beta^{mdr}_k$ are obtained by requiring continuity of the function and its first derivative, and by imposing the Wronskian condition $W\{f, f^*\} = -i$ in order to have the standard commutation relations in the quantum theory. The label "mdr" indicates that we are considering the scenario of modified dispersion relations.
The error made by using these approximations can be made arbitrarily small by going to higher order in the approximation, see for example [20] or [59]. The techniques are standard in cosmology and allow controlled global approximations of the solutions of the equations and/or the Green functions.
The details of the solution can be found in [20], and are briefly reviewed in the appendix A.1. Some of the salient features needed here are that:
• backreaction constrains the interval $[\eta_I, \eta_{II}]$ of WKB violation to be very small, that is
$$\Delta = \frac{\eta_I - \eta_{II}}{\eta_I} \ll 1 \,, \qquad (12)$$
• expanding for small $\Delta$, we obtain generically (i.e., independently of the detailed form of $\omega(\eta,k)$)
$$\alpha^{mdr}_k = \left[1 - i\, \frac{V(\eta_I,k)\,\eta_I}{2}\, \frac{Q}{V^2}\Big|_{\eta_I}\, \Delta + O(\Delta^2)\right] \varsigma_k \,, \qquad \beta^{mdr}_k = i\, \frac{V(\eta_I,k)\,\eta_I}{2}\, \frac{Q}{V^2}\Big|_{\eta_I}\, \Delta\, e^{-\frac{2i\Lambda}{H}\Omega_F|_{\eta_I}}\, \varsigma_k + O(\Delta^2) \,, \qquad (13)$$
where $V(\eta,k)^2 \equiv \omega(\eta,k)^2 - \frac{z''}{z}$. Here, $\frac{Q}{V^2}\big|_{\eta_I}$ is the order-1 factor signalling the violation of the WKB conditions at $\eta_I$, and $Q$ is reported in equation (105) in appendix A.1. The magnitude of $|\beta^{mdr}_k|$ is constrained by backreaction. We will discuss more of the $k$-dependence and the magnitude of these coefficients in section 3.4,
• we choose $\varsigma_k = 1$, picking the usual adiabatic vacuum.
At late times the two-point function does not differ much from the standard one, since the Bogoliubov coefficient is quite constrained by backreaction (see section 3.4). This is good for the agreement with the observations of the (late time) spectrum. At leading order in $\epsilon$, $\beta^{mdr}_k$, $\Delta$,
$$P_{mdr}(k) \underset{\eta\to 0}{\sim} \frac{H^2}{4 M^2_{\rm Planck}\, \epsilon\, k^3} \left[1 + 2\,\mathrm{Re}\big(\beta^{mdr}_k\big)\right]. \qquad (14)$$
In the case of the bispectrum, the behaviour of the Wightman function at earlier times, where the modifications are more relevant, will be important; see appendix A.2.
Modified initial state (BEFT and NPHS scenarios)
Let us review the two implementations of the modified initial state approach that have been first proposed in [32][33][34][35][36][37], reflecting the effects of the physics at higher energies/earlier times than inflation in setting the initial/boundary condition for the perturbation fields.
• BEFT approach: one follows an effective theory approach by fixing the boundary conditions for the fields at the finite time η c (beginning of inflation) independently of the modes k, through a Boundary Effective Field Theory (BEFT), which accounts for the high energy physics via renormalization [35][36][37]. This kind of boundary condition strongly breaks scale invariance, being imposed for all modes k at the same boundary time. The (cutoff) scale Λ, proper of the effective action formulation, is not related to the boundary time η c [35][36][37].
• NPHS approach: consider the effective theory valid up to a certain energy scale $\Lambda$, and choose the boundary conditions for the solution to the field equation at the New Physics Hyper-Surface (NPHS) corresponding to when the physical momentum reaches that cutoff scale [32][33][34]. In this case, the cutoff is imposed in a scale-invariant way via the condition $\frac{k}{a(\eta_c)} = \Lambda$, so that the time $\eta_c$ when the initial state is picked is $k$- and $\Lambda$-dependent. At zeroth slow-roll order, $\eta_c = -\frac{\Lambda}{kH}$. To spare notation, we use the symbol $\eta_c$ for both the NPHS and BEFT cases and make clear from the context which approach we are following. In both cases, we stress that the initial condition is fixed when the modes $k$ are well within the horizon, that is, for $\eta_c$'s such that $|k\eta_c| \gg 1$.
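In the NPHS case the condition $k/a(\eta_c) = \Lambda$, with $a(\eta) \simeq -1/(H\eta)$, gives $\eta_c = -\Lambda/(kH)$ and hence $|k\eta_c| = \Lambda/H \gg 1$ automatically for every mode: the initial condition is always set well inside the horizon. A trivial check with illustrative numbers (our choices, not the paper's):

```python
import numpy as np

H, Lam = 1.0e-5, 1.0e-3            # illustrative Hubble rate and cutoff (Planck units)

def a(eta):
    """Zeroth-order slow-roll scale factor, eta < 0."""
    return -1.0 / (H * eta)

for k in (1.0, 10.0, 250.0):
    eta_c = -Lam / (k * H)         # NPHS time: physical momentum hits the cutoff
    assert np.isclose(k / a(eta_c), Lam)        # k/a(eta_c) = Lambda by construction
    assert np.isclose(abs(k * eta_c), Lam / H)  # |k eta_c| = Lambda/H, independent of k
assert Lam / H > 1                              # modes are well inside the horizon
```

The same check makes the BEFT difference explicit: there $\eta_c$ is one fixed number for all $k$, so $|k\eta_c|$ is not $k$-independent and scale invariance is broken.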
The physical motivation why such modified initial states can be chosen is the fact that inflation and cosmological perturbation theory are effective theories valid below a certain energy scale (if the theory was valid at all scales, the only sensible choice would be the so-called Bunch-Davies vacuum) [40,41]. Many studies have proven that these initial states and the perturbation theory defined upon them are well-defined also in the formal sense (for example, see [40,41]). As in [15,38], when studying these scenarios we adopt a phenomenological approach.
A modified initial condition may present both a Gaussian part and an intrinsic non-Gaussian one. As we will discuss in section 4, using the result of [60,61] -see also the comment in [15] -one finds that the leading corrections to the bispectrum in the squeezed limit due to intrinsic non-Gaussianities of the initial condition, if present, would be in line with the standard result, with a local form and a suppressed amplitude, because of backreaction and lack of cumulation with time.
The Gaussian part of the initial condition, which modifies the Wightman functions of the perturbation theory, will instead turn out to be responsible for new and more dominant features of the squeezed limit. As for this part, both in the BEFT and NPHS cases the solution of the field equation is of the Bogoliubov form [15,[32][33][34][35][36][37][38]:
$$f_k(\eta) = \alpha^{mis}_k\, \sqrt{-\eta}\, H^{(1)}_\nu(-k\eta) + \beta^{mis}_k\, \sqrt{-\eta}\, H^{(2)}_\nu(-k\eta) \,, \qquad (15)$$
where $\alpha^{mis}_k$, $\beta^{mis}_k$ are determined by the specific boundary conditions imposed on the solution at the time $\eta_c$, and by the Wronskian condition, which translates into $|\alpha^{mis}_k|^2 - |\beta^{mis}_k|^2 = 1$. The Bogoliubov coefficients depend on the mode $k$, the cutoff scale and the time $\eta_c$ (these latter are related in the NPHS case, but not in the BEFT case, as we said). The label "mis" indicates that we are dealing with the case of modified initial state.
At late times the two-point function does not differ much from the standard one, because of the smallness of $\beta^{mis}_k$ due to backreaction constraints (see section 3.4). At leading order in slow-roll and $\beta^{mis}_k$,
$$P_{mis}(k) \underset{\eta\to 0}{\sim} \frac{H^2}{4 M^2_{\rm Planck}\, \epsilon\, k^3} \left[1 + 2\,\mathrm{Re}\big(\beta^{mis}_k\, e^{-i\,\mathrm{Arg}(\alpha^{mis}_k)}\big)\right]. \qquad (16)$$
Once again, for the bispectrum the behaviour of the Wightman function at earlier times, where the differences are more relevant, will be important; see appendix A.2.
General constraints on Bogoliubov coefficients
The magnitude and scale ($k$) dependence of the coefficients $\beta^{mdr}_k$, $\beta^{mis}_k$ entering respectively equations (14) and (16) are determined in full detail by the specific model or boundary condition giving rise to the modified dispersion relation or modified initial state. However, to be in accordance with observations and to avoid backreaction stopping the slow-roll inflationary evolution, these coefficients are subject to a series of phenomenological constraints, see for example [20,38]:
• the observations on the spectral index show that $-k\,\frac{d\log(k^3 P(k))}{dk}$ is small, so that $\beta^{mdr}_k$, $\beta^{mis}_k$ must be slowly varying with $k$ at the observed scales;
• backreaction imposes two further constraints:
the total energy density must be finite. In general this demands that |β mdr k |, |β mis k | decay faster than k −2 at large k;
preserving the slow-roll inflationary evolution enforces the constraint [20,38]
$$|\beta^{mdr}_k|,\ |\beta^{mis}_k| \le \frac{\epsilon\, |\mu|\, H M_{\rm Planck}}{\Lambda^2} \,, \qquad \mu \equiv \eta_{sl} - \epsilon \,. \qquad (17)$$
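To get a feeling for the numbers in (17): with illustrative fiducial values $\epsilon = |\mu| = 10^{-2}$, $H = 10^{-5}\,M_{\rm Planck}$ and $\Lambda = 10^{-3}\,M_{\rm Planck}$ (our choices, not the paper's), the bound allows at most $|\beta_k| \sim 10^{-3}$:

```python
eps, mu = 1e-2, 1e-2        # slow-roll parameters (illustrative values)
H, Lam = 1e-5, 1e-3         # Hubble rate and cutoff in units of M_Planck (illustrative)
M_Pl = 1.0

beta_max = eps * abs(mu) * H * M_Pl / Lam**2   # the bound of eq. (17)
print(f"|beta_k| <~ {beta_max:.0e}")
assert abs(beta_max - 1e-3) < 1e-12
```

Such a small $|\beta_k|$ is why the late-time spectra (14), (16) are barely modified, while, as argued below, the bispectrum can still be appreciably affected.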
In the following we will keep indicating the scale dependence of these coefficients with the label k . We will also continue distinguishing the modified initial state scenario from that with modified dispersion relations using the labels "mis" and "mdr".
Bispectrum in the squeezed limit: general arguments
We will now discuss the three-point function. Before presenting the detailed analysis, it is useful to outline the points that make the difference with the standard scenario in the cases we consider.
To make the section self-contained, let us first recall briefly that the scalar bispectrum is a three-point correlator of the scalar perturbation field evaluated at a late time, after horizon exit of the observed modes. It is calculated on an initial state defined at an initial (conformal) time $\eta_{in}$, which is then evolved up to the time at which the bispectrum is evaluated, and back to $\eta_{in}$ (in-in formalism; we are using the interaction picture). Its leading perturbative formula has been presented in equation (4).
The standard result for the squeezed limit $k_1 \ll k_{2,3} \sim k_S$ follows from very simple arguments [10][11][12]: a) for an initial Bunch-Davies vacuum state and a standard comoving Lorentzian dispersion relation, the non-Gaussianities are essentially generated at horizon exit, and thus the most relevant part of the time evolution is from horizon exit of the modes until the time at which the bispectrum is evaluated; b) since $k_1 \ll k_{2,3}$ in the squeezed limit, the horizon-exit time for the perturbations depending on $k_{2,3}$ occurs much later than the one for the perturbation depending on $k_1$. The latter then acts as a background for the other perturbations, shifting their horizon-exit time. This leads to a certain dependence of the result on $k_1$ (dubbed "local") and on the spectral index (which is related to the shift of the horizon-exit time). This is Maldacena's consistency relation [10][11][12].
These pieces of physical information are encoded in the form of the Wightman functions (the two-point functions, see section 2) of the standard scenario. Every higher-order correlator is written in terms of them because of Wick's theorem. In particular, their form is such that the time integral from $\eta_{in}$ to $\eta \sim 0$ coming from the time evolution in the bispectrum formula (4) is dominated by the upper limit of integration (see sections 5, 5.1.1). This is how condition a) is encoded mathematically. It is therefore natural to expect that if the Wightman functions are modified, in particular in a way such that the time integral in the bispectrum formula picks up other important contributions violating conditions a) and/or b), then the squeezed limit will be different from the standard one. This is what occurs in the scenarios of modified dispersion relation and of modified initial state/condition. At this point one could think that if the scenarios allow modifications to the Wightman functions, one would completely lose general predictivity on the squeezed limit. However, we will show that this is not the case: new general physical arguments, based on the concepts of "particle creation/content" 5 , interference and accumulation in time, take the place of those operating in the standard scenario.
'Particles' arise in different ways in the two scenarios we consider, see section 3. In the case of modified initial state, the initial condition for the field mode functions is fixed at $\eta_{in}$ when the modes are well within the horizon (adiabaticity is satisfied), but the physics at times/scales preceding $\eta_{in}$ can be parametrized and interpreted in terms of what we will call the 'particle content' of the state. The idea is that the physics at scales higher than inflation has generated an initial state that is not the adiabatic vacuum, but an excited one [42][43][44][45], hence its nonzero energy and particle content. In the case of modified dispersion relations, instead, the time evolution of the dispersion relation can lead to 'particle production' even if the initial state is the standard empty adiabatic vacuum [20,57]. 5 As is well known, the concept of particle is not well-defined on a time-evolving background. Approximate concepts with respect to comoving observers have a standard use in connection with adiabaticity, see [62][63][64][65][66][67]. Recall that adiabaticity in this context concerns the time evolution of certain quantities (usually the effective frequency/mass, from the quadratic part of the action [62][63][64][65][66][67]) and is independent of and conceptually different from non-Gaussianity.
It is the presence of this early energy and 'particle' content that generates the additional non-Gaussianities. Of course, this content is severely constrained by backreaction on inflation, as we will review.
The effects of particle content/creation will be stronger at early times (before dilution by the cosmic expansion). The fact that the time-evolution integral extends to such early times explains why the bispectrum (and in principle all higher-order correlators) is particularly sensitive to these modifications of the Wightman functions. Indeed, the spectrum is affected by them as well, but much less so, because it does not involve an integration over time and, especially for observational reasons, it is computed at late times, after the horizon exit of the perturbation, where the modifications are negligible.
The modifications of the Wightman functions will affect all correlators and lead to different results, compared to the standard case, even for the simplest couplings. The new features in the squeezed limit will concern the scale dependence as well as the magnitude (enhancements). Indeed, although particle creation is certainly strongly constrained by backreaction, it can lead to interference and phase cancellation in the integrand of the time integral in the bispectrum, giving rise to enhancements.
As we will see, the greatest effect occurs when the largest contributions to the oscillating phase of the integrand cancel out so that the suppression due to oscillations is strongly reduced. This happens when the early particle content for the perturbations depending on the largest modes k 2,3 is relevant and interference occurs among them. At those times, the perturbation depending on k 1 ≪ k S , initially subhorizon, may or may not be superhorizon yet. As we will see, this condition will depend on the magnitude of k 1 , k S and the scale of the new physics. There can then occur two different cases.
If the perturbation depending on $k_1$ is not already superhorizon at those times, we do not expect (and indeed do not obtain) the same $k_1$-dependence (local shape) as in the standard result, because that is entirely determined by the superhorizon condition.
If instead the k 1 -perturbation was already superhorizon, we obtain a local shape and an effect due to the shift of the horizon crossing time for the other perturbations as in the standard scenario. However, also in this case we have a new result, because the overall amplitude of the bispectrum does not match the standard one, and can also be enhanced. This is a consequence of the additional non-Gaussianities generated by the particle content/creation at early times for the perturbations depending on k 2,3 , see sections 5 and 6.
In the case of modified initial conditions, there can also be non-Gaussianities intrinsic to the initial condition. However, their effects are subdominant with respect to those due to the modifications of the Whightman functions. In fact, in the squeezed limit the contribution of intrinsic non-Gaussianities is in line with the standard one: local form and very suppressed amplitude (because of backreaction).
This can be seen in various ways. First of all, the general results for the leading contributions to the bispectrum from intrinsic non-Gaussianities of the initial conditions were calculated in [60,61], using the BEFT formalism (see also the comment in [15]). By taking the squeezed limit of those results one obtains an outcome in line with the standard one. One can also argue from general arguments that the contribution of non-Gaussianities intrinsic to the initial condition is negligible. Indeed, those non-Gaussianities, already strongly constrained by backreaction, are nonzero only at the initial time. Thus, their contributions lack the integration over time. This makes them subdominant with respect to the contributions due to the modified Wightman functions (Gaussian part), where interference and time accumulation occur and enhance the result, as we will see.
Generally predictable features of the squeezed limit in the modified scenarios appear precisely because the most important new effects are produced by the modifications of the Wightman functions and not by specific couplings or peculiar initial intrinsic non-Gaussianities. The results are then dominated (we will see in what measure) by the general features of particle content/creation, interference, accumulation and sub-/superhorizon evolution. This makes it possible to constrain and possibly falsify entire classes of models, as the differences between the specific single-field slow-roll models enter the subleading corrections.
We are now going to investigate these features of the squeezed limit of the bispectrum by performing the analysis at the rigorous level of the field theory description, using the in-in formalism. We perform a thorough analysis from this point of view, providing general results for all single-field models of inflation within the scenarios of section 3, and studying all cubic couplings that arise in an effective theory formalism à la Weinberg [39].
In the case of modified initial state, we will also show that the field theoretic result for the squeezed limit is very different from those obtained in previous studies, see [13,14], which used the folded template proposed for CMBR analysis in [15]. This disagreement could have been anticipated, since the standard evaluators (cosine and fudge factor) for the matching between the template [15] and the theoretical prediction [38] indicate that the two depart more and more for large $k_L\eta_c$, where $k_L$ is the largest momentum and $\eta_c$ is the time when the boundary condition picking the initial state is imposed, see [15]. Even more importantly, the template does not depend on the scale $\eta_c$, and therefore taking the squeezed limit in the full result is different from taking it in the template, because of the presence of distinct scales.
Bispectrum in the squeezed limit: technical analysis
We present here the most technical part of the paper, where we calculate the contributions to the squeezed limit of the bispectrum. We subdivide our presentation into three parts.
• The first two parts (sections 5.1 and 5.2) consist of two detailed examples (for cubic couplings with and without higher derivatives), to illustrate in detail the differences between the scenarios we discuss and the standard one.
Let us stress that at this point we are not setting apart the interactions in these examples as special or dominant compared to all the other possible ones. We choose them simply because they are two well-known and well-studied cubic interactions 6 , see [10,15,20,38,68], and thus the discussion should be easier to follow for the reader. Only after the general analysis has been performed in section 5.3 will we come back to the question of whether these, or other, interactions play a predominant role (see section 6).
• In section 5.3 we then deal with the full general analysis of the squeezed limit of the bispectrum, considering all possible cubic interactions in the effective action for the inflaton.
The results we obtain will then be fully analysed and discussed in section 6. We will find out what the leading features of the bispectrum in the squeezed limit are for the modified scenarios, and what the predictions are for each specific modified scenario.
Example 1: Minimal coupling cubic interaction
We begin by studying the example of the cubic interaction [10]:
$$H^{(I)} = -\int d^3x\; a^3 \left(\frac{\dot\phi}{H}\right)^4 \frac{H}{M^2_{\rm Planck}}\; \zeta'^2_c\, \partial^{-2}\zeta'_c \,. \qquad (18)$$
We call this interaction the "minimal coupling cubic interaction" because it is already present in the simplest case of a scalar field (inflaton) minimally coupled to gravity [10].
We have followed the practice of [10], writing this interaction in terms of the field redefinition 7
$$\zeta = \zeta_c + \frac{1}{8}\frac{\dot\phi^2}{H^2 M^2_{\rm Planck}}\,\zeta_c^2 + \frac{1}{4}\frac{\dot\phi^2}{H^2 M^2_{\rm Planck}}\,\partial^{-2}(\zeta_c\,\partial^2\zeta_c) + \frac{1}{2}\frac{\ddot\phi}{\dot\phi H}\,\zeta_c^2 \,. \qquad (19)$$
The two-point function and the quadratic part of the action are the same for ζ and ζ c [10].
Using the definitions of the slow-roll parameters, the bispectrum of $\zeta$, which is the relevant one for observations, is related to the three-point function of $\zeta_c$ as
$$\langle \zeta_{\vec k_1}(\eta)\zeta_{\vec k_2}(\eta)\zeta_{\vec k_3}(\eta)\rangle = \langle \zeta_{c,\vec k_1}(\eta)\zeta_{c,\vec k_2}(\eta)\zeta_{c,\vec k_3}(\eta)\rangle + (2\pi)^3\delta^{(3)}\Big(\sum_i \vec k_i\Big) \sum_{i=1 \bmod 3}^{3} \left[\epsilon\left(\frac{3}{2} + \frac{k^2_{i+1} + k^2_{i+2}}{2k^2_i}\right) - \eta_{sl}\right] P(k_{i+1})\, P(k_{i+2}) \,, \qquad (20)$$
where
$$\langle \zeta_c(\eta,\vec x_1)\zeta_c(\eta,\vec x_2)\zeta_c(\eta,\vec x_3)\rangle = -2\,\mathrm{Re}\left[\int_{\eta_{in}}^{\eta} d\eta'\; i\, \langle\psi_{in}|\zeta_c(\eta,\vec x_1)\zeta_c(\eta,\vec x_2)\zeta_c(\eta,\vec x_3)\, H^{(I)}(\eta')|\psi_{in}\rangle\right], \qquad (21)$$
and where $\eta \sim 0$ is a late time when all modes $k_i$ are outside the horizon. As we know from the review in sections 3.2, 3.3, the modifications affecting the second line of (20) via the spectra $P(k_i)$ in the modified scenarios at late times are very suppressed. Hence, we focus on the connected contribution (21). From (18), (21),
$$\langle \zeta_{c,\vec k_1}(\eta)\zeta_{c,\vec k_2}(\eta)\zeta_{c,\vec k_3}(\eta)\rangle = 2\,\mathrm{Re}\left[-i\,(2\pi)^3\delta^{(3)}\Big(\sum_i \vec k_i\Big) \left(\frac{\dot\phi}{H}\right)^4 \frac{H}{M^2_{\rm Planck}} \int_{\eta_{in}}^{\eta} d\eta'\, \frac{a(\eta')^3}{k_3^2}\, \prod_{i=1}^{3}\partial_{\eta'} G_{k_i}(\eta,\eta')\right] + \text{permutations} \qquad (22)$$
We will now present the results for the standard and modified scenarios. Our notation is as follows.
We will write the result for the three-point function in powers of $|\beta_{k_i}|$. Up to linear order,
$$\langle \zeta_{c,\vec k_1}\zeta_{c,\vec k_2}\zeta_{c,\vec k_3}\rangle = \delta_0\langle \zeta_{c,\vec k_1}\zeta_{c,\vec k_2}\zeta_{c,\vec k_3}\rangle + \delta_\beta\langle \zeta_{c,\vec k_1}\zeta_{c,\vec k_2}\zeta_{c,\vec k_3}\rangle \,. \qquad (23)$$
We then write
$$\delta_0\langle \zeta_{c,\vec k_1}\zeta_{c,\vec k_2}\zeta_{c,\vec k_3}\rangle = A(k_1,k_2,k_3)\,\delta F_0(k_1,k_2,k_3) \,, \qquad \delta_\beta\langle \zeta_{c,\vec k_1}\zeta_{c,\vec k_2}\zeta_{c,\vec k_3}\rangle = A(k_1,k_2,k_3)\,\delta F_1(k_1,k_2,k_3) \,, \qquad (24)$$
where for the standard scenario
$$\delta F^{\rm standard}_0 = 1 \,, \qquad \delta F^{\rm standard}_1 = 0 \,, \qquad (25)$$
so that $A(k_1,k_2,k_3)$ is indeed the standard three-point function of $\zeta_c$. This representation of the three-point function is useful because one can read off from $\delta F_0$ and $\delta F_1$ the corrections to the standard result, respectively of order $|\beta_{k_i}|^0$ and $|\beta_{k_i}|^1$, in the modified scenarios, due to the new high-energy physics.
Example 1: standard scenario (review)
In the standard scenario $\eta_{in} = -\infty$, and the Wightman functions are obtained from (5), (8). We list them in appendix A.2. Inserting them in equation (22), one finds the integral over time 8
$$\int_{\eta_{in}}^{\eta\sim 0} d\eta'\; e^{i(k_1+k_2+k_3)\eta'}\,(k_1+k_2+k_3) = -i \,, \qquad (26)$$
where we have inserted a factor of $k_t \equiv k_1+k_2+k_3$ for later comparison with the modified scenarios, see equations (29), (39), and to make the integral dimensionless. The result from (22) is then [10]
$$A(k_1,k_2,k_3) \equiv \langle \zeta_{c,\vec k_1}\zeta_{c,\vec k_2}\zeta_{c,\vec k_3}\rangle_{st} = 4\,(2\pi)^3\delta^{(3)}\Big(\sum_i \vec k_i\Big)\, \frac{H^6}{\dot\phi^2 M^2_{\rm Planck}}\, \frac{k_1^2 k_2^2 k_3^2}{\prod_{i=1}^3 (2k_i^3)}\, \sum_l \frac{1}{k_t\, k_l^2} \,, \qquad k_t = \sum_{i=1}^3 k_i \,. \qquad (27)$$
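The convergence of (26) at its lower limit relies on the usual $i\epsilon$ prescription, $e^{ik_t\eta'} \to e^{ik_t\eta' + \epsilon\eta'}$ with $\epsilon \to 0^+$, which damps the oscillations as $\eta' \to -\infty$; the regularized value is $k_t/(ik_t+\epsilon) \to -i$. A numerical sketch (illustrative, with $k_t = 1$):

```python
import numpy as np

def kt_integral(kt, eps, eta_min=-2000.0, n=2_000_001):
    """k_t * int_{eta_min}^{0} e^{(i k_t + eps) eta} d eta via the trapezoidal rule.

    The factor e^{eps eta} (eps > 0, eta < 0) is the i-epsilon damping."""
    eta = np.linspace(eta_min, 0.0, n)
    f = np.exp((1j * kt + eps) * eta)
    d = eta[1] - eta[0]
    return kt * (f.sum() - 0.5 * (f[0] + f[-1])) * d

kt = 1.0
I1, I2 = kt_integral(kt, 0.05), kt_integral(kt, 0.01)
# exact value at finite eps is kt/(i kt + eps); it tends to -i as eps -> 0+
assert abs(I2 - kt / (1j * kt + 0.01)) < 1e-6
assert abs(I2 + 1j) < abs(I1 + 1j) < 0.1
```

The domination by the upper (late-time) limit claimed in the text is visible here: the damped oscillations at early times contribute nothing in the $\epsilon \to 0^+$ limit, which is exactly what the modified scenarios below will change.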
Adding the disconnected contribution in equation (20) to obtain the bispectrum of $\zeta$, and taking the squeezed limit $k_1 \ll k_S$, $k_{2,3} \sim k_S$, one finally obtains [10]
$$\langle \zeta_{\vec k_1}\zeta_{\vec k_2}\zeta_{\vec k_3}\rangle_{st} \underset{k_1\ll k_S}{=} (2\pi)^3\delta^{(3)}\Big(\sum_i \vec k_i\Big)\,(1-n_s)\, P_{st}(k_1)\, P_{st}(k_S) \qquad (28)$$
at leading order in $\frac{k_1}{k_S} \ll 1$ [10][11][12]. The result (28) shows Maldacena's consistency condition. The bispectrum in the form (28) is in the so-called local form [10][11][12].
Example 1 with modified initial state
As the computations are simpler in this case, we present it first. We have already discussed that possible intrinsic non-Gaussianities of the initial condition lead to suppressed and standard-looking results (see section 4 and [15,60,61]). We focus here on the Gaussian part of the initial condition/state, which is responsible for the form of the Wightman functions, and which will turn out to give rise to much more important and dominant modifications to the squeezed limit of the bispectrum.
It is straightforward to calculate the contribution to the bispectrum from (18) for generic $k_1$, $k_2$, $k_3$ in this scenario. The result was first presented in [38]. Let us quickly review it. We insert the relevant Wightman functions, see appendix A.2, in (22) and write the result in the form (23), (24) using (27). We also define $k_t \equiv k_1+k_2+k_3$, obtaining [38]
$$\delta F_0 \sim 1 \,, \qquad \delta F_1 = -k_t \sum_{j} \mathrm{Re}\!\left[\beta^{mis\,*}_{k_j}\, \frac{1 - e^{i(\sum_{h\neq j}k_h - k_j)\eta_c}}{\sum_{h\neq j}k_h - k_j}\right]. \qquad (29)$$
While $\delta F_0$ matches the standard result, $\delta F_1$ leads to very different outcomes. The presence of a nonzero $\delta F_1$ part is a consequence of the negative-frequency component of the Wightman functions. Note the finite-time lower limit of integration at $\eta_{in} = \eta_c$ for the integral 9 .
Starting from this result, we move now to the novel part of the analysis and study the squeezed limit ($k_1 \ll k_S$, $k_{2,3} \sim k_S$) for this example of interaction. It appears from (29) that $\delta F_1$ is the sum of three contributions. By taking the squeezed limit, we find that the contribution proportional to $\beta_{k_1}$ is very small: at leading order
$$\delta F^{(1)}_1 \underset{k_1\ll k_S}{=} -\mathrm{Re}\!\left[\beta^{mis\,*}_{k_1}\big(1 - e^{i2k_S\eta_c}\big)\right] \ll 1 \,. \qquad (30)$$
We then consider the contributions proportional to $\beta_{k_2}$, $\beta_{k_3}$ in (29), where the perturbations depending on the large momenta in the squeezed limit are in opposition of phase (that is, $k_j = k_2$ or $k_3$). These contributions will be much larger than (30). Indeed, expressing $k_{h\notin\{1,j\}}$ in terms of $k_1$ and $k_j$ as
$$k_{h\notin\{1,j\}} = \big(k_1^2 + k_j^2 + 2k_1 k_j\cos\theta_j\big)^{\frac{1}{2}} \qquad (31)$$
using momentum conservation, and expanding in $\frac{k_1}{k_j} \sim \frac{k_1}{k_S} \ll 1$ (so that $k_h \simeq k_j + k_1\cos\theta_j + O(k_1^2/k_j)$), one obtains from (29), for $j = 2$ or $3$,
$$\sum_{h\neq j} k_h - k_j \underset{k_1\ll k_{2,3}}{\simeq} k_1\,(1 + \cos\theta_j) \equiv k_1\, v_{\theta_j} \,. \qquad (32)$$

9 The initial conditions for the perturbations $\zeta_{k_1}$, $\zeta_{k_2}$, $\zeta_{k_3}$ are fixed respectively at $\eta^{(k_1)}_c$, $\eta^{(k_2)}_c$, $\eta^{(k_3)}_c$, when the modes are well within the horizon ($|k_i\eta^{(k_i)}_c| \gg 1$, $i = 1, 2, 3$). In the BEFT case the initial time is the same for all modes ($\eta_c$ is independent of $k$, that is, $\eta^{(k_1)}_c = \eta^{(k_2)}_c = \eta^{(k_3)}_c = \eta_c$). In the NPHS case, instead, the initial times can be different as they depend on the different wavenumbers; therefore the overlap of the perturbations (and so their interaction) is nonzero only after the latest of the initial times. In the squeezed limit, this time is $\eta^{(k_2)}_c \sim \eta^{(k_3)}_c \sim \eta^{(k_S)}_c$. These points are also explained in section 6.3. We will often neglect the $(k_S)$ label to avoid cluttering of formulas.
so that, at leading order,

δF_1^{(j=2,3)} ≃ − [ (k_1 + k_2 + k_3) / (k_1 v_{θ_j}) ] Re[ β^{mis*}_{k_j} ( 1 − e^{i k_1 η_c v_{θ_j}} ) ] . (33)
It can easily be checked (for example by plotting) that (32) is an almost perfect approximation for all values of θ_j already for k_1/k_j ∼ k_1/k_S ∼ 0.1. The form of the correction δF_1^{(j=2,3)}, then, depends on the interplay between k_1, η_c and v_{θ_j}. Recalling that we consider the realistic limit, where k_1 is small but not zero, there are two asymptotic regimes where the result can be evaluated most explicitly. From (33), at leading order,
1) |k_1 η_c v_{θ_j}| ≫ 1 ⇒ δF_1^{(j=2,3)} ≃ −2 (k_S/k_1) v_{θ_j}^{−1} Re[ β^{mis*}_{k_S} ] , (34)

2) |k_1 η_c v_{θ_j}| ≪ 1 ⇒ δF_1^{(j=2,3)} ≃ −2 k_S η_c Im[ β^{mis*}_{k_S} ] , (35)
where in case 1) we have taken into account that in standard observables such as the halo bias, see section 7, the final result involves an integration/average over quantities such as θ_j, k_S, so the terms with large oscillations average to zero. We see that (34), (35) dominate over (30) because of the large factors k_S/k_1 and k_S η_c. This result is due to the interference leading to phase cancellation in the integrand of (29), and to the cumulative effect of the time integration. Comparing to (26), we see that the latter is instead ineffective in the standard scenario, where the result is dominated by the upper limit of integration (late time). It is also clear why intrinsic non-Gaussianities of the initial condition would have a different form and be subdominant: in that case there is no time integration, as intrinsic non-Gaussianities are nonzero at the initial time only.
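Both the accuracy of the expansion (32) and the small-phase limit (35) of a single j-term of (33) can be checked with a few lines of numerics; the values of β, k_S, η_c below are hypothetical and serve only as a sanity check:

```python
import cmath
import math

# exact combination sum_{h != j} k_h - k_j, with k_h from momentum conservation (31)
def lhs_exact(k1, kj, theta):
    kh = math.sqrt(k1**2 + kj**2 + 2 * k1 * kj * math.cos(theta))
    return k1 + kh - kj

# leading-order squeezed expansion (32): k_1 (1 + cos theta) = k_1 v_theta
def lhs_squeezed(k1, theta):
    return k1 * (1 + math.cos(theta))

# 1) accuracy of (32) for k1/kj ~ 0.1, scanning the angle
kj, k1 = 1.0, 0.1
worst = max(abs(lhs_exact(k1, kj, t) - lhs_squeezed(k1, t))
            for t in [i * math.pi / 50 for i in range(51)])
print(worst)  # stays of order (k1/kj)^2 * kj ~ 0.005

# 2) small-phase limit: a single j-term of (33) vs the asymptote (35)
beta = 0.02 + 0.03j             # hypothetical Bogoliubov coefficient
kS, eta_c, v = 1.0, -1e-4, 1.5  # chosen so that |k1 eta_c v| << 1
kt = 2.0 * kS                   # k_t ~ 2 k_S in the squeezed limit
exact = -(kt / (k1 * v)) * (beta.conjugate()
                            * (1 - cmath.exp(1j * k1 * eta_c * v))).real
asympt = -kt * eta_c * beta.conjugate().imag  # -2 k_S eta_c Im[beta*]
print(exact, asympt)  # the two agree at leading order
```

The expansion error in case 1) is largest near θ_j = π/2, where it is of order (k_1/k_j)^2 k_j/2, consistent with the "almost perfect" claim above.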
Thus, the leading correction to the squeezed limit of the three-point function for nonzero β_k in the case of the "minimal cubic coupling" (18), neglecting subleading corrections in k_1/k_S ≪ 1, reads
δ_β ⟨ζ_{c,k_1} ζ_{c,k_2} ζ_{c,k_3}⟩^{mis}_{min cub} ≡ A(k_1, k_2, k_3) δF_1(k_1, k_2, k_3) ≃ (2π)^3 δ^{(3)}(Σ_i k_i) B^{mis}_{min cub} P_st(k_1) P_st(k_S) , (36)

B^{mis}_{min cub} = −4 ǫ (k_S/k_1) v_{θ_j}^{−1} Re[ β^{mis*}_{k_S} ]  if |k_1 η_c v_{θ_j}| ≫ 1 ;
B^{mis}_{min cub} = −4 ǫ k_S η_c Im[ β^{mis*}_{k_S} ]  if |k_1 η_c v_{θ_j}| ≪ 1 . (37)
For observational purposes, for example concerning the halo bias, one is interested in the magnitude of these corrections and in their dependence on the probed large scale k_1^{−1}. The k_1-dependence of the leading contributions (36), (37) is fully determined, since β^{mis}_{k_S} depends on k_S and not on k_1. The magnitude will have to be estimated and bounded using the phenomenological constraints reviewed in section 3.4.
In section 6 we will discuss in detail the features of this result (and of the more general results we will obtain later on), and see whether the different models of modified initial conditions favour case 1) or 2) in (34), (35). For the moment, the most evident differences compared to the standard scenario are as follows. For |k_1 η_c v_{θ_j}| ≪ 1 the result has the same dependence on k_1 and 1 − n_s ∼ ǫ as the standard one, but with a new amplitude factor |k_S η_c| ≫ 1. Instead, for |k_1 η_c v_{θ_j}| ≫ 1, the k_1-dependence of this result and of the standard one are different.
Example 1 with modified dispersion relations
The contribution to the bispectrum in this case was first computed in [20] for generic k_1, k_2, k_3. Before discussing the squeezed limit, we briefly review the computation, referring the reader to [20] for more details.
The three-point function (22) in this scenario is computed using the relevant Wightman functions listed in appendix A.2. The latter are obtained by using the field solution (11) in the general definition (5). Let us also recall that the error induced by the approximations in solving the equation for the mode functions is calculable and can be made arbitrarily small by going to higher order in the approximations. The results for the bispectrum are robust against that (small) error, see [20].
The time integral in the bispectrum (22) can then be divided into the different intervals of validity of the piecewise solutions (11). In [20] it was shown that the leading contribution to the bispectrum occurs when the Wightman functions have support in the intervals IV and III, see figure 1. Other contributions are indeed suppressed by ∆, defined in equation (12), or by a rapidly decaying integrand 10 , see [20]. We concentrate, therefore, only on the leading contributions from intervals IV and III.
It is convenient to write the result in terms of the variables
y ≡ − p_max(η)/Λ = (H/Λ) k_max η ,   x_i ≡ k_i/k_max . (38)
In particular, when taking the squeezed limit it will be k_max ≈ k_{2,3} ≈ k_S, so in these variables the limit reads x_1 ≪ 1, x_{2,3} ≃ x_S ≃ 1.
Since y is a rescaled time coordinate, we will sometimes simply call it "time" in this section. Inserting in equation (22) the relevant Wightman functions, listed in appendix A.2, and writing the result in the form (23), (24) using (27), the corrections δF_{m={0,1}} read [20]

δF_{m={0,1}}(x_1, x_2, x_3, y) = Σ_{j=1}^{3} Re[ (β^{mdr*}_{k_j})^m (Λ/H) ∫_{y_II}^{y} dy′ x_t g({x_{h≠j}}, x_j, y′, m) e^{i (Λ/H) S_0({x_{h≠j}}, x_j, y′, m)} ] , (39)

where x_t ≡ Σ_{i=1}^{3} x_i, and g({x_{h≠j}}, x_j, y′, m) is reported in equation (111) of appendix A.3. S_0 is given by 11

S_0({x_{h≠j}}, x_j, y′, m) = ∫^{y′} dy″ [ Σ_{h≠j} ω(x_h, y″) + (−1)^m ω(x_j, y″) ] ,   h, j ∈ {1, 2, 3} , (40)
The limits of integration for the variable y′ in (39) have been discussed in [20], to which we refer the reader for the details, while reporting here the results. The upper limit is y ≈ 0, because the bispectrum is evaluated at late time η ≈ 0, while the lower limit y_II is determined by the smallest among the times η_II^{(k_i)}, i = 1, 2, 3, which bound region III for the three momenta k_{1,2,3}, see equation (11). As shown in [20], it is

y_II = (H/Λ) k_max η_II^{(k_max)} ≈ −1 12 .

Observe also that, in the squeezed limit, |x_1 y| ≤ |x_1 y_II| ≪ 1 for all |y| ≤ |y_II|, so effectively F((H/Λ) k_1 η) = F(x_1 y) ≃ 1 and ω(x_1, y) ≃ x_1.
10 Recall that the prescription in footnote 8 must be followed also here. 11 Observe that ω(x, y) ≡ x F(xy) is dimensionless. 12 This is indeed straightforward: the corrections to the linear dispersion relation at the time η_II^{(k)} that separates regions III and II must be quite important in order to drive the frequency close to the turning point (see figure 1). As these corrections grow as powers of −p(η)/Λ, see (10), it must then be |p(η_II^{(k)}) Λ^{−1}| ∼ 1. Furthermore, since p(η_II^{(k)}) = −k η_II^{(k)} H, the relevant boundary time η_II^{(k)} (the smallest, as we recalled) is the one for the largest k, which is indeed k_max, and thus, from the definition of y, it follows that |y_II| = |p_max(η_II^{(k_max)}) Λ^{−1}| ∼ 1.
We move now to the novel part of the analysis and study the squeezed limit. The integral in (39) can be conveniently written as the sum of two parts, according to whether the Wightman functions depending on k_{2,3} have support in their regions IV or III.
We begin by studying δF_{m=0}. In region IV, ζ_{k_2} and ζ_{k_3} are superhorizon, and the contribution to the bispectrum is straightforward to calculate using the Wightman functions listed in appendix A.2. In region III, instead, the integral in equation (39), which is a typical Fourier integral, is dominated by the contributions of points where the strong suppression (Λ/H ≫ 1) from the oscillating phase of the integrand is reduced. This happens at stationary points of S_0 and at boundary points [20,69].
But in the case of δF_{m=0} there are no stationary points. Indeed, from (40) we see that for m = 0 the first derivative of S_0(x_{1,2,3}, y′, m = 0) is the sum of the positive-definite frequencies and thus is always positive and never zero [20]. In this case the Fourier integrals can then be approximated for Λ/H ≫ 1 by integrating by parts (see [69] and section 4.1.2 in [20]).
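The absence of stationary points makes the integral boundary-dominated, falling off as the inverse of the large phase parameter. This can be illustrated with a toy Fourier integral (the amplitude and phase profiles below are hypothetical stand-ins for g and S_0, with S′ > 0 everywhere, and λ playing the role of Λ/H):

```python
import cmath

def fourier_integral(lam, n=100001):
    # I(lam) = \int_0^1 g(y) exp(i lam S(y)) dy via composite Simpson's rule;
    # S'(y) = 1 + y^2 never vanishes, so there are no stationary points
    h = 1.0 / (n - 1)
    total = 0 + 0j
    for i in range(n):
        y = i * h
        g = 1.0 - y                 # smooth amplitude, vanishing at y = 1
        S = y + y**3 / 3.0
        w = 1 if i in (0, n - 1) else (4 if i % 2 == 1 else 2)
        total += w * g * cmath.exp(1j * lam * S)
    return total * h / 3.0

# integration by parts predicts |I(lam)| ~ g(0)/(lam S'(0)) = 1/lam at large lam
r = abs(fourier_integral(200.0)) / abs(fourier_integral(400.0))
print(r)  # close to 2: doubling lam halves the integral
```

This 1/λ falloff is exactly the suppression that the stationary points (absent here, but present for m = 1) would partially evade.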
Putting together all the contributions and using the information in appendix A.3, we find that in the squeezed limit, neglecting subleading corrections,
δF_0 ≃ 1 − (1/ω(1, −1)) Im[ g(x_1, x_2, x_3, −1, 0) e^{i (Λ/H) S_0(x_1, x_2, x_3, −1, 0)} ] |_{x_1 ≃ 0, x_{2,3} ≃ x_S ≃ 1} ≃ 1 , (41)
where in the last passage, as before, we have taken into account the averaging to zero of the rapid oscillations in observables such as the halo bias. As we see, comparing to the first of (25), this contribution does not lead to very significant corrections to the standard local form of the bispectrum. We expect a more interesting behaviour for δF_{m=1}. We start by discussing the contribution from region IV of ζ_{k_{2,3}} (the region where ζ_{k_{2,3}} are superhorizon). In the squeezed limit, that contribution is very much subdominant. This is because those modes become superhorizon only at η_S ≃ −k_{2,3}^{−1} ≃ −k_max^{−1}, that is, using (38), for y_S = −H/Λ ∼ 0. This time is then very close to the late time y ∼ 0 at which the bispectrum is evaluated. Computing then the contribution of region IV from (39), we find that it is of order β, hence very small.
We turn now to the contribution from region III of ζ_{k_{2,3}}. From (39) and (40) for m = 1, we see that in the squeezed limit x_1 ≪ 1 the dominant contributions are those where the mode function in the negative-energy branch is the one depending on k_2 or k_3. The reason is that in those cases the suppression from the oscillations of the integrand is much reduced, because the perturbations depending on the large momenta are in phase opposition and the overall phase of the integrand becomes very small. Indeed, for j = 2 or 3:
S_0({x_{h≠j}}, x_j, y′, 1) = ∫^{y′} dy″ [ Σ_{h≠j} ω(x_h, y″) − ω(x_j, y″) ] ≃ x_1 ∫^{y′} dy″ [ 1 + ∂_x ω(x, y″)|_{x_S} cos θ_j ] + O(x_1^2) = x_1 y′ ( 1 + F(y′) cos θ_j ) + O(x_1^2) ≡ x_1 ṽ_{θ_j}(y′) , (42)

using equations (10), (31) and x_S ≃ 1 (O is Landau's big-O symbol).
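The last equality in (42) uses ω(x, y) = x F(xy), so that ∂_x ω|_{x_S=1} = F(y) + y F′(y) = d[y F(y)]/dy, which integrates to y′ F(y′). A quick numerical check of that step with a hypothetical profile F:

```python
# verify: \int_0^{y'} [F(y) + y F'(y)] dy = y' F(y'), the step used in (42)

def F(y):                 # hypothetical profile with F(0) = 1 (linear dispersion limit)
    return 1.0 + 0.3 * y**2

def dF(y):
    return 0.6 * y

def integral(yp, n=20000):
    # midpoint rule from 0 to yp (yp may be negative, as conformal time is)
    h = yp / n
    return sum((F((i + 0.5) * h) + (i + 0.5) * h * dF((i + 0.5) * h)) * h
               for i in range(n))

yp = -0.8
print(integral(yp), yp * F(yp))  # the two agree
```

Note that for F → 1 (linear dispersion) the phase reduces to x_1 y′ (1 + cos θ_j) = x_1 y′ v_{θ_j}, matching the modified-initial-state case.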
Inserting (42) back into equation (39), the integral then has two asymptotic regimes [69], depending on the magnitude of the phase factor (Λ/H) x_1 of the integrand. Using the information listed in appendix A.3, table 1, we find at leading order
δF 1 = 3 j=2 -2 x 1 Re β mdr* k S 1+cos(θ j ) if v θ j > H Λ 1 x1 Λ H 1-1 κ+1 2Γ( 1 κ+1 ) x 1 κ+1 1 Im β mdr* k S e i π 2 sign(F (κ) ) κ+1 (κ+1)|F (κ) | 1 κ+1 if v θ j < H Λ 1 x1 − 2 x 1 Im β mdr* k S g(-1)e i Λ H x 1ṽ θ j (-1) ṽ (1) θ j (-1) + {y * } Λ H 1- µ * ν * 1 x µ * ν * 1 Re β mdr* k S I (µ * ,ν * ) θ j (y * )e i Λ H x 1ṽ θ j (y * ) if Λ H x 1 ≫ 1 2 Λ H Im β mdr* k S (O(1) + O(x 1 )) if Λ H x 1 ≪ 1 (43) where we have defined I (µ * ,ν * ) θ j (y * ) = Γ( µ * ν * ) ν * |ṽ (ν * ) θ j (y * )| µ * ν * g (µ * -1) (y * )( 1 l=0 (-1) l(µ * -1) e (±1) l i π 2 µ * ν * sign(ṽ (ν * ) θ j (y * )) ),(44)
and the O(1) coefficient in the last line of (43) is reported in detail in equation (114), appendix A.3. Equation (43) is the complete leading-asymptotic result of the integral. It is a complicated formula, which we write for illustrative purposes; the final result will be simpler and neater. We have introduced some new notation, which we explain below as well as in appendix A.3.
First of all, let us discuss the contribution from the possible presence of stationary points, which appear in the second line of (43), case (Λ/H) x_1 ≫ 1. We have indicated those points by {y*}, and their orders of stationarity for the function (42) are given by the numbers {ν*}. In general terms, they are also zeros of orders {µ* − 1} for the function g in (39) 13 .
The presence of stationary points is obviously model-dependent. However, their contributions will be suppressed in final observables such as the halo bias, because of the averaging to zero of the large oscillations due to their phase, visible in (43). Thus, in the end we can neglect them, and they do not spoil the generality of the final results we will obtain.
We are left then with the other contributions in (43), which are always present [69]. Note that in the case of the contribution from the (nearly) folded configurations (the one for v_{θ_j} ≡ 1 + cos(θ_j) < (H/Λ)(1/x_1) in (43)), one finds that the scaling is determined by the lowest-order correction to the dispersion relation (see appendix A.3, table 1). That is, F^{(κ)} ∼ O(1) and κ in (43) enter the expansion (10)
ω_phys(p) ∼ p ( 1 + (p/Λ)^κ F^{(κ)} + · · · ) , (45)
thus they capture the leading correction to the dispersion relation of the specific models. Finally, from (23), using (24), (27), (41), (43), the leading correction to the standard consistency relation due to a nonzero β_k in the case of the "minimal cubic coupling" (18) reads
δ_β ⟨ζ_{c,k_1} ζ_{c,k_2} ζ_{c,k_3}⟩^{mdr}_{min cub} ≃ (2π)^3 δ^{(3)}(Σ_i k_i) B^{mdr}_{min cub} P_st(k_1) P_st(k_S) , (46)
13 The coefficients of the leading behaviour of the functions ṽ_{θ_j}(y), g({x}, y) in the squeezed limit in proximity of a stationary point y* have been written as ṽ^{(ν*)}_{θ_j}(y*), g^{(µ*−1)}(y*), see table 1 in appendix A.3 and (44). In particular, if ν* is an integer, then ṽ^{(n)}_{θ_j} = (n!)^{−1} ∂_y^n ṽ_{θ_j} (similarly for µ* and g^{(µ*−1)}). The sign ± in the phases in (44) depends on ṽ_{θ_j}(y) − ṽ_{θ_j}(y*) being even or odd under (y − y*) → −(y − y*), for y ∼ y*.
B mdr min cub = 3 j=2 −4 ǫ k S k 1 1 1+cos(θ j ) Re β mdr* k S if 1+cos(θ j ) > H Λ 1 x1 4 ǫ Λ H 1− 1 κ+1 k S k 1 1 κ+1 Γ( 1 κ+1 ) Im β mdr* k S e i π 2 sign(F (κ) ) κ+1 (κ+1)|F (κ) | 1 κ+1 if 1+cos(θ j ) < H Λ 1 x1 if Λ H x 1 ≫ 1 4 ǫ Λ H Im β mdr* k S O(1) if Λ H x 1 ≪ 1 ,(47)
where we have accounted for the averaging to zero of the rapid oscillations in (43) in observables such as the halo bias. Recall that the O(1) coefficient in the last line is specified in (114), appendix A.3.
Deferring comments to section 6, for the moment we just note that, once again, at leading order the dependence of the result on k_1 is fully specified, and there appear enhancement factors such as Λ/H and k_S/k_1, thanks to interference and accumulation in time. Finally, the only relevant sensitivity to the specific models, apart from β^{mdr*}_{k_S}, is due to F^{(κ)} ∼ O(1) and κ, which capture the leading correction to the dispersion relation of the specific models and are thus interesting for detection.
The final result (46), (47) is very similar to the one obtained in the modified initial state case. This points to the fact that, indeed, at leading order the squeezed limit is dominated by the generic features common to both scenarios (such as particle content/creation and interference effects). The only difference is that in the modified initial state case the folded configurations (given by Σ_{h≠j} k_h − k_j = 0, that is, v_{θ_j} = 0) yield a contribution corresponding to κ → ∞ in (47), since the phase of the integrand and all its derivatives are identically zero, see the second line in (37). In the case of modified dispersion relations, instead, the true analogue of the folded configuration (which would be Σ_{h≠j} ω(x_h, y′) − ω(x_j, y′) = 0 for some x's but for all y′'s) does not occur 14 and κ is finite.
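Since κ and F^{(κ)} are the quantities that encode the model dependence in (45) and (47), it may help to see how they would be extracted from a concrete dispersion relation; here we use a hypothetical ω_phys(p) = p √(1 + (p/Λ)²), for which the expansion (45) gives κ = 2 and F^{(κ)} = 1/2:

```python
import math

LAM = 10.0  # hypothetical scale of new physics

def omega(p):
    # hypothetical modified dispersion relation, linear for p << LAM
    return p * math.sqrt(1.0 + (p / LAM)**2)

# extract kappa from the log-log slope of the relative correction omega/p - 1,
# then F^(kappa) from its normalization, as in the expansion (45)
p1, p2 = 0.01 * LAM, 0.02 * LAM
c1, c2 = omega(p1) / p1 - 1.0, omega(p2) / p2 - 1.0
kappa = math.log(c2 / c1) / math.log(p2 / p1)
F_kappa = c1 / (p1 / LAM)**kappa
print(kappa, F_kappa)  # close to 2 and 0.5
```

Any other model would be treated in the same way: the leading power and coefficient of the correction at p ≪ Λ are what enter (43), (47).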
Example 2: Quartic derivative interaction
In this section, we present our second detailed example before passing to the more general discussion in section 5.3. We consider the higher-derivative cubic interaction coming from the correction L_HDI = √(−det G) (g/(8Λ^4)) ((∇Φ)^2)^2 to the effective action of the inflaton Φ, where G is the metric and g the coupling. Expanding in perturbations and converting in terms of ζ, as explained in detail in [20,38], from L_HDI one obtains the cubic term
H^{(I)} = − ∫ d^3x a (g φ̇^4 / (2H^3 Λ^4)) ζ′ [ ζ′^2 − (∂_i ζ)^2 ] . (48)
Because it originates from the term L_HDI, which contains the quartic power of derivatives, we call this cubic interaction the "quartic derivative interaction". These interactions may be treated perturbatively even when the physical momenta p = a^{−1} k are such that p/Λ is no longer small, because, compared to the operators changing the dispersion relation, they are further suppressed by additional powers of the perturbation fields.
In the standard scenario, the contributions to the bispectrum from higher-derivative interactions are suppressed in the squeezed limit by powers of k_1/k_S [12]. It is interesting, then, to study whether one finds the same suppressed contributions and the same overall scale dependence in the scenarios of modified initial state and modified dispersion relations. In the standard single-field slow-roll scenario, reference [68] argued that (48) is the lowest-dimensional operator in the class of operators most important for non-Gaussianities. Nonetheless, in the standard scenario its contribution is very suppressed in the squeezed limit, as we have said.
Using equations (4), the bispectrum for the coupling (48) reads
⟨ζ_{k_1}(η) ζ_{k_2}(η) ζ_{k_3}(η)⟩ = 2 Re[ −i (2π)^3 δ^{(3)}(Σ_i k_i) (g φ̇^4 / (2H^3 Λ^4)) ∫_{η_in}^{η} dη′ a ( ∂_η G_{k_1}(η, η′) ∂_η G_{k_2}(η, η′) ∂_η G_{k_3}(η, η′) + k_1 · k_2 G_{k_1}(η, η′) G_{k_2}(η, η′) ∂_η G_{k_3}(η, η′) + permutations ) ] , (49)
where, again, η ∼ 0 is a very late time when all modes k i are outside the horizon.
Example 2 with modified initial state
In this scenario the leading contribution to the bispectrum from (49) for generic external momenta has been computed in [15,38]. Writing (49) in the same form as (23), (24), using (27), the main modifications compared to the standard result come from δF 1 , which reads
δF 1 (k 1 , k 2 , k 3 , η ≃ 0) = Re iC k 3 S k 2 1 k 2 2 k 2 3 3 j=1 β mis* k j 0 ηc dη ′ e i( h =j k h −k j )η ′ P j (k 1 , k 2 , k 3 , η ′ ) + c.c. , (50) with C ≡ x t 3 l=1 x 2 l r>v x 2 r x 2 v gH 2 M 2 Planck 4Λ 4 ,(51)P j (k 1 , k 2 , k 3 , η ′ ) = −k t 3 i=1 ( h =i k h − k i ) − iη ′ (k 2 j+1 − k 2 j − k 2 j+2 )k 2 j+1 (k j+2 − k j ) + (k 2 j+2 − k 2 j − k 2 j+1 ) k 2 j+2 (k j+1 −k j )+(k 2 j −k 2 j+1 −k 2 j+2 )k 2 j (k j+1 +k j+2 ) +(−η ′ ) 2 ( h =j k h −k j )( 3 i=1 k i )(k 2 t −4k j+1 k j+2 ) ,(52)
where j is defined modulo 3. The factor C has been written in terms of the variables x_i defined in equation (38). Now we study the squeezed limit. As in section 5.1.2, we see from (50) that in that limit the dominant contributions to the integral occur for j = 2, 3 in equation (50), when the perturbations depending on the large wavenumbers are in phase opposition, so that the overall phase of the integrand is small and the suppression due to the oscillations is reduced. It is then straightforward to see that the largest contributions come from the terms of order η′ and η′^2 in P_j. Indeed, their integration yields
∫_{η_c}^{0} dη′ e^{i(Σ_{h≠j} k_h − k_j) η′} η′ = [ 1 + e^{i k_1 η_c v_{θ_j}} ( −1 + i k_1 η_c v_{θ_j} ) ] / (k_1^2 v_{θ_j}^2) ≡ I_1^{(j)} / (k_1^2 v_{θ_j}^2) ,   j = 2, 3 , (53)

∫_{η_c}^{0} dη′ e^{i(Σ_{h≠j} k_h − k_j) η′} η′^2 = i [ 2 + e^{i k_1 η_c v_{θ_j}} ( −2 + 2 i k_1 η_c v_{θ_j} + k_1^2 η_c^2 v_{θ_j}^2 ) ] / (k_1^3 v_{θ_j}^3) ≡ I_2^{(j)} / (k_1^3 v_{θ_j}^3) ,   j = 2, 3 , (54)
so that, at leading order,

δF_1 ≃ Σ_{j=2}^{3} (2 g H^2 M_Planck^2 / Λ^4) (k_S/k_1) v_{θ_j}^{−2} { −Im[ β^{mis*}_{k_j} I_2^{(j)} ] − cos(θ_j) Re[ β^{mis*}_{k_j} I_1^{(j)} ] } . (55)
Looking at (53), (54), we find again two asymptotic behaviours for the correction to the squeezed limit of the three-point function depending on whether |k 1 η c v θ j | ≫ 1 or |k 1 η c v θ j | ≪ 1. Therefore, at leading order we obtain, for the "quartic derivative" coupling (48),
δ_β ⟨ζ_{k_1} ζ_{k_2} ζ_{k_3}⟩^{mis}_{quart der} ≃ (2π)^3 δ^{(3)}(Σ_i k_i) B^{mis}_{quart der} P_st(k_1) P_st(k_S) , (56)
with
B mis quart der = 3 j=2 −4 ǫ gH 2 M 2 Planck Λ 4 k S k 1 v −2 θ j (2 + cos(θ j ))Re β mis* k S if |k 1 η c v θ j | ≫ 1 2 ǫ gH 2 M 2 Planck Λ 4 k 1 k S (k S η c ) 2 2 3 k 1 k S (k S η c )v θ j Im β mis k S +cos(θ j ) Re β mis k S if |k 1 η c v θ j | ≪ 1 .(57)
In the first line of (57), we have again taken into account the averaging to zero of large oscillations in observables such as the halo bias. Recall that |k_S η_c| ≫ 1 (in the NPHS scenario, in particular, |k_S η_c| = Λ/H). We comment on the actual magnitude and scale dependence of these corrections to the bispectrum in section 6.
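The closed forms (53), (54) are elementary but easy to get wrong; they can be checked against direct numerical quadrature (the values of a ≡ k_1 v_{θ_j} and η_c below are hypothetical):

```python
import cmath

def I1_closed(a, eta_c):
    # (53): \int_{eta_c}^0 d(eta) eta e^{i a eta} = [1 + e^{i a eta_c}(-1 + i a eta_c)]/a^2
    x = a * eta_c
    return (1 + cmath.exp(1j * x) * (-1 + 1j * x)) / a**2

def I2_closed(a, eta_c):
    # (54): \int_{eta_c}^0 d(eta) eta^2 e^{i a eta}
    #     = i [2 + e^{i a eta_c}(-2 + 2 i a eta_c + a^2 eta_c^2)]/a^3
    x = a * eta_c
    return 1j * (2 + cmath.exp(1j * x) * (-2 + 2j * x + x**2)) / a**3

def quad(f, eta_c, n=20000):
    # midpoint rule from eta_c up to 0
    h = -eta_c / n
    return sum(f(eta_c + (i + 0.5) * h) * h for i in range(n))

a, eta_c = 3.0, -5.0  # a plays the role of k_1 v_theta_j
n1 = quad(lambda e: e * cmath.exp(1j * a * e), eta_c)
n2 = quad(lambda e: e**2 * cmath.exp(1j * a * e), eta_c)
print(abs(n1 - I1_closed(a, eta_c)), abs(n2 - I2_closed(a, eta_c)))  # both tiny
```

Expanding the same closed forms for |a η_c| ≪ 1 or |a η_c| ≫ 1 reproduces the two asymptotic behaviours quoted in (57).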
Example 2 with modified dispersion relations
In this scenario the three-point scalar correlator from (49) for generic k_1, k_2, k_3 was computed in [20]. We briefly review that computation. The dominant contribution occurs when the Wightman functions have support in regions IV/III, see (11), as shown in [20] and reviewed in section 5.1.3. Inserting in equation (49) the relevant Wightman functions listed in appendix A.2, and writing the result as in (23), (24) using (27), with the variables defined in (38), one obtains that the leading modification compared to the standard result is [20]
δF_1 = − Σ_j Re[ C (Λ/H)^3 β^{mdr*}_{k_j} ∫_{y_II}^{y} dy′ y′^2 e^{i (Λ/H) S_0({x_h}, x_j, y′, 1)} q({x_h}, x_j, y′) ] ,   h ≠ j , (58)
where the limits of integration are those presented in section 5.1.3. The function q is reported in equation (115) of appendix A.3, while S_0 and C are defined in equations (40) and (51), respectively. Now we study the squeezed limit. As we see, the correction (58) is again given by a Fourier integral. Once again, the suppression from the oscillations of the integrand is reduced when the perturbations depending on k_2, k_3 are in phase opposition. Then S_0({x_{1,2(3)}}, x_{3(2)}, y′, 1) behaves as in equation (42). The integral can be solved by asymptotic techniques, and we obtain at leading order
δ_β ⟨ζ_{k_1} ζ_{k_2} ζ_{k_3}⟩^{mdr}_{quart der} ≃ (2π)^3 δ^{(3)}(Σ_i k_i) B^{mdr}_{quart der} P_st(k_1) P_st(k_S)
with (recalling appendix A.3 and table 1)
B mdr quart der = −4ǫ gH 2 M 2 Planck Λ 4 (59) × 3 j=2 k S k 1 Γ(3)+cos(θ j ) (1+cos(θ j )) 2 Re β mdr* k S if v θ j > H Λ 1 x1 Λ H Λ H k 1 k S 1− 2 κ+1 Γ( κ+3 κ+1 ) (κ+1) 2 Im β mdr* k S F (κ) e i π(κ+3) 2(κ+1) sign(F (κ) ) |F (κ) | κ+3 κ+1 if v θ j < H Λ 1 x1 if Λ H x 1 ≫ 1 1 2 k 1 k S Λ H 2 k 1 k S Λ H O(1) + 2 3 cos(θ j ) Im β mdr* k S + cos(θ j )Re β mdr* k S if Λ H x 1 ≪ 1 ,
where we have again taken into account the averaging to zero of the strongly oscillating terms in final observables such as the halo bias, and the O(1) term in the last line is specified in (117), appendix A.3. Equation (59) matches (57): to see this, note that in the modified initial state scenario, as we said, the contribution from the folded configuration corresponds to κ → ∞, and that the dominant term in the second line of (57) is the one proportional to Re[β^{mis}_{k_S}], since |k_1 η_c v_{θ_j}| ≪ 1.
General analysis for all cubic couplings in the effective action
So far we have presented two examples to illustrate in detail the kind of modifications that would occur in the squeezed limit of the bispectrum in the modified scenarios we consider. We now want to analyse the squeezed limit for all possible cubic couplings arising in a generic effective action for the inflaton. Let us first summarize the key points concerning the violation of the standard result for the bispectrum in the squeezed limit, which have emerged from the detailed examples discussed above:

• the most important modifications to the standard result are due to the differences in the Wightman functions between the standard and the modified scenarios
• non-Gaussianities can be enhanced 15 in the squeezed limit by two effects:

- interference, which reduces the suppression due to the oscillating phase of the integrand in the time integral in the bispectrum formula, and is strongest when the perturbations depending on the large modes k_{2,3} ∼ k_S are in phase opposition,

- accumulation in time, which leads to a larger enhancement from the time integration when the interactions scale with higher powers of 1/a(η′) ∼ −η′ 16 .
• if the squeezed mode and the time/scale of new physics are such that |k_1 η_c^{(k_{2,3})}| ≪ 1 17 , or (k_1/k_S)(Λ/H) ≪ 1, the perturbation depending on k_1 is already outside the horizon when the effects of new physics are relevant for the perturbations depending on k_{2,3} (responsible for the maximum interference and enhancement, see previous point). Then the result is of the local form, but with enhanced non-Gaussianities (because of the particle content/creation, interference, and accumulation in time for ζ_{k_2}, ζ_{k_3})
• if instead |k_1 η_c^{(k_{2,3})}| ≫ 1 or (k_1/k_S)(Λ/H) ≫ 1 (in a realistic squeezed limit k_1 ≠ 0), the perturbation depending on k_1 is inside the horizon when the effects of the new physics on the k_{2,3}-perturbations are relevant, and the bispectrum in the squeezed limit is not of the local form.

We now want to generalize our analysis beyond the two examples of interactions discussed above. It is convenient to adopt a different gauge than the one we have used so far: in the new gauge ζ = 0, and the perturbations are accounted for by a "matter field" perturbation ϕ. This gauge is fully equivalent, of course, to the one used before, and it is useful because it leads to a technically simpler expansion in perturbations [10]. It also makes clear from the outset the actual order in slow-roll parameters of the cubic couplings, see [10]. In particular, we can choose ϕ to be the inflaton perturbation, expanding the inflaton in background plus perturbations

Φ(x, t) = φ(t) + ϕ(x, t) . (60)

15 The actual magnitude of the enhancements depends on the precise value of |β_{k_S}| and the scales of "new" physics, and thus can only be estimated, as we will show in section 6, in our phenomenological approach. 16 This is quite intuitive, due to the importance of early times (large |η′|) around η_c for the modified initial state scenario, or around the time of WKB violation/particle creation η_II ∼ −Λ k_S^{−1} H^{−1} for the modified dispersion one. Indeed, if |k_1 η_c v_{θ_j}| ≪ 1 or (k_1/k_S)(Λ/H) ≪ 1, the integral for the bispectrum is approximately of the form ∫_{η_{c,II}}^{0} dη′ η′^n ∼ η_{c,II}^{n+1}, and one easily sees the growth with n. We will soon show a similar growth for |k_1 η_c| ≫ 1, (k_1/k_S)(Λ/H) ≫ 1. 17 We recall that the initial conditions for each mode of the perturbations are set when it is subhorizon, so that it is always |k_1 η_c^{(k_1)}| ≫ 1, see section 6.
The results in terms of the variable ζ are then obtained at the end by performing a gauge transformation as described in [10]:
ζ = −(H/φ̇) ϕ + O( ǫ^2 , ϕ^2 ) , (61)
so that
⟨ζζζ⟩ = − (H^3/φ̇^3) ⟨ϕϕϕ⟩ + O( ǫ^2 , ⟨ϕ^2⟩⟨ϕϕ⟩ ) + permutations . (62)
In particular, it is enough to consider the gauge transformation at leading (linear) order, because the higher-order corrections do not yield time-integrated contributions to the bispectrum at tree level, and, as appears from the previous sections, it is the time-integrated contribution that leads to the leading modified results (see also [15,20,38]). The most generic effective action for the inflaton Φ in single-field slow-roll inflation has been presented in [39] 18 . In a Lorentz-invariant theory, the terms in the action will be functions of the Lorentz-invariant objects Φ, g^{µν}∂_µΦ∂_νΦ, R^{µν}∂_µΦ∂_νΦ, □Φ, R. In particular, those that are functions of Φ only, and not of the objects involving derivatives, constitute the scalar potential V(Φ). Recall also that all second-order time derivatives (and first-order derivatives of auxiliary fields) in higher-order terms in the action (more than quadratic in the field) have to be eliminated using the equations of motion, see [39]. This applies for example to □Φ and R^{µν}. In a Lorentz-broken theory (as is the case with modified dispersion relations), there are also additional terms that are not built up only from the Lorentz-invariant operators listed above. These terms involve higher derivatives of the fields that can be written as purely spatial derivatives in a convenient system of coordinates 19 .
By expanding the effective action in the inflaton perturbation (60), given the fundamental building blocks listed above (including the higher-derivative Lorentz-breaking ones), and using conformal time, the cubic couplings for the perturbation ϕ have the most general and schematic form
∫ dη d^3x a^4 (λ_{nms}/Λ^{n+m+s−1}) (∂^n ϕ/a^n)(∂^m ϕ/a^m)(∂^s ϕ/a^s) . (63)
Thus, a cubic coupling is identified by the integers n, m, s and by the coupling constant λ_{nms}. In equation (63) we have loosely indicated both time and space derivatives (in comoving coordinates) with the same symbol ∂, for notational simplicity and because this simplifies the counting of powers of momenta in the correlators. However, we will distinguish the two kinds of derivatives more carefully in the detailed calculations we are about to perform. We recall again, however, that in the case of time derivatives their order is at most 1 (as we said, higher orders must be eliminated by using the equations of motion, see [39]). Finally, the dimensionless coefficients λ_{nms} can depend on φ̇ and, in general, on the slow-roll parameters, and are constrained by backreaction. Let us comment on the possible values of n, m, s for the cubic couplings (63). In a Lorentz-invariant theory, given the Lorentz-invariant objects listed above, n, m, s can be at most equal to 1 20 , while in a Lorentz-broken theory their values can be higher (for the higher spatial derivative couplings, in the convenient frame adapted to the Lorentz breaking).
Furthermore, observe that in a gauge-fixed theory, when solving for the gravitational constraints (for example those related to the lapse N and the shift vector N^i in the ADM formalism), non-local operators will also appear. In the gauge we are using, they arise because of the solution 21

N^i ≃ ǫ^{1/2} ∂_i (∂⃗)^{−2} ϕ̇ + O(ǫ^{3/2}) ,

see [10]. Thus, when expanding in perturbations, n, m, s can assume the value −1 because of the combination (∂⃗)^{−2} ϕ̇ in the leading-order contribution from N^i, relevant for the bispectrum. However, in the solution for N^i this combination is acted upon by ∂_i, which increases by one the value of n + m + s and thus "compensates" the −1.

18 Another way of writing the most generic effective action for a "matter scalar" takes advantage of the so-called Stueckelberg trick, using the Goldstone mode of the broken time diffeomorphisms in the unitary gauge, see [70]. 19 Lorentz breaking always implies a privileged frame; the convenient system of coordinates is precisely the one adapted to this frame. 20 Obviously, integrating by parts, derivatives can be "moved around", in which case the more appropriate condition is that max(n + m + s) = 3. What we will say in the following is valid, of course, also for the terms obtained by integration by parts. 21 The standard notation (∂⃗)^{−2} for the inverse of Σ_j ∂_j ∂_j is formal. It becomes clear after a Fourier transform.
Moreover, N^i enters the action only via the metric g^{µν} or the extrinsic curvature. In the first case, the spatial index of N^i will always be contracted with a spatial derivative, as can be seen from g^{µν}∂_µΦ∂_νΦ, considering that the shift vector enters the components g^{0i} = N^{−2} N^i and g^{ij} = N^{−2} N^i N^j. Also when coming from the extrinsic curvature, a component N^i will always be accompanied by a spatial derivative, because the extrinsic curvature is defined as K_{ij} = (2N)^{−1}(ḣ_{ij} − 2 ^{(3)}∇_{(i} N_{j)}) = (2N)^{−1}(ḣ_{ij} − h_{ik} ∂_j N^k − h_{jk} ∂_i N^k − N^k ∂_k h_{ij}), where h_{ij} and ^{(3)}∇ are the ADM three-metric and its covariant derivative. These properties imply that in a coupling where originally N^i was present, the combination ∂_i (∂⃗)^{−2} ϕ̇ will multiply at least one other factor with a spatial derivative, which further increases the value of n + m + s by one.
Couplings with n = m = s = 0 arise instead from the expansion of terms such as

V(Φ) → (V′′′(φ)/3!) ϕ^3 ,   or   (∂^r Φ/a^r)^u Φ^m → (∂_t^r φ)^u φ^{m−3} ϕ^3 , (64)
and thus are higher order in slow-roll and would give a negligible contribution to the bispectrum, see [10,39,70]. We will neglect them in the following. Moreover, couplings of the form (λ_{011}/Λ) ϕ (∂⃗ϕ)^2, where ∂⃗ denotes spatial derivatives, are reabsorbed (eliminated) by field redefinitions such as ϕ → ϕ + (λ_{011}/(2Λ)) ϕ^2. Finally, couplings with a single time or spatial derivative are absent because of isotropy.
The two detailed examples of cubic couplings that we have analysed in sections 5.1, 5.2 fall into the scheme we are describing. Indeed, once written as in (63)
From equations (63), (61) and (4), the contribution to the bispectrum from a generic cubic coupling (63) reads

⟨ζ_{k₁}ζ_{k₂}ζ_{k₃}⟩_λ ∝ (φ̇³/H³) (λ_{nms}/Λ^{n+m+s−1}) ∫_{η_i}^{0} dη′ a(η′)^{4−n−m−s} (∂^nG)_{k₁}(∂^mG)_{k₂}(∂^sG)_{k₃} + permutations + complex conjugate, (67)

where we have neglected the momenta-conserving delta function, 2π factors and overall minus signs. We have used equation (61) to cast the result in terms of the perturbation ζ (the Wightman functions in (67) are those of the ζ-variable). (∂^rG)_k indicates the Fourier transform of ∂^rG.
We will now study the dominant contributions to (67) in the modified scenarios. We recall that, as has emerged from the detailed examples, such a contribution is due to the modification of the Wightman functions and occurs i) when the interference effects are strongest, that is, for ζ_{k₂}, ζ_{k₃} in phase opposition, and ii) for the terms in the integrand with higher powers of η′ ∼ −1/a and, obviously, lower powers of k₁. In the following we will not pay attention to numerical factors of order O(1) in the formulas, as they are irrelevant for our considerations.
In section 6, equipped with the results we are about to obtain, we will finally discuss which contributions, among all the possible couplings, determine the leading features of the bispectrum.
Modified initial state
The relevant Wightman functions go as (see appendix A.2)
(∂_{η′}G^±)_k(0, η′) = ± (H⁴/φ̇²) k²η′ e^{±ikη′}/(2k³), (∂_i^{r≥0}G^±)_k(0, η′) = ± (H⁴/φ̇²) (±ik_i)^r (1 ∓ ikη′) e^{±ikη′}/(2k³), (68)
so that when they are inserted in (67) we find the leading contribution linear in β_{k_i}
δ_β⟨ζ_{k₁}ζ_{k₂}ζ_{k₃}⟩^{mis}_λ ∝ (λ_{nms}/Λ^{n+m+s−1}) ∑_{j=2}^{3} β*_{k_j} (H^{n+m+s+5}/φ̇³) (2³k₁³k₂³k₃³)^{−1} k_a^{n+1}k_b^{m+1}k_c^{s+1} ∫_{η_c}^{≃0} dη′ (−η′)^{−1+n+m+s} [1+(−ik₁η′)]^{n_t^{(1)}−1} e^{ik₁(1+cosθ_j)η′} / 2^{n_t^{(1)}} + hermitian, (69)
where we have used equations (31), (32), and a, b, c run over 1, 2, 3 accounting for the permutations in (67), from which we take the dominant ones²². Finally, n_t^{(1)} = {0, 1} accounts for the presence or not of time derivatives acting on G(k₁, η′).
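The structure of the Wightman functions in (68) traces back to the Bunch-Davies mode function, proportional to (1 + ikη)e^{−ikη}: a conformal-time derivative trades the factor (1 + ikη) for k²η. A quick symbolic check of this identity (normalization and the H⁴/φ̇² prefactor omitted):

```python
import sympy as sp

k, eta = sp.symbols('k eta', real=True)

# Bunch-Davies mode function (up to normalization): u_k ∝ (1 + i k η) e^{-i k η}
u = (1 + sp.I * k * eta) * sp.exp(-sp.I * k * eta)

# d u / d eta = k^2 * eta * e^{-i k eta}: the (1 ∓ i k η) factor of the
# spatial-derivative Wightman functions becomes the k^2 η factor in (68).
du = sp.diff(u, eta)
print(sp.simplify(du - k**2 * eta * sp.exp(-sp.I * k * eta)))  # -> 0
```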
We concentrate now on the time integral, which can be computed in closed form. Writing v ≡ n + m + s (+ n_t^{(1)} − 1), the integral is well defined for the values attainable by n, m, s, discussed in section 5.3, and gives (j = 2, 3)
∫_{η_c}^{0} dη′ η′^{v−1} e^{ik₁(1+cosθ_j)η′} = ((−1)^v/(ik₁)^v) [ ∑_{r=0}^{v−1} \binom{v−1}{r} r! ((−ik₁η_c)^{v−1−r}/(1+cosθ_j)^{r+1}) e^{ik₁η_c(1+cosθ_j)} − (v−1)!/(1+cosθ_j)^v ] ≈ { −η_c^v/v if |k₁η_c(1+cosθ_j)| ≪ 1 ; ((−1)^{v−1}/(ik₁)^v) (v−1)!/(1+cosθ_j)^v if |k₁η_c(1+cosθ_j)| ≫ 1 }, (70)
where in the last passage we have kept the leading contribution, also taking into account that large oscillations average to zero in final observables such as the halo bias. Inserting this result in equation (69), we obtain the leading contribution (reinstating the delta function from momentum conservation)
δ_β⟨ζ_{k₁}ζ_{k₂}ζ_{k₃}⟩^{mis}_λ = (2π)³ δ⁽³⁾(∑_i k_i) B^{mis}_{λnms} P_st(k₁)P_st(k_S), (71)
where, having substituted back n + m + s in place of v,
B^{mis}_{λnms} = ǫ (β*_{k_S}/2^{n_t^{(1)}}) (λ_{nms}/√(2ǫ)) (ΛM_Planck/H²) (k₁/k_S)^{1+min(n,m,s)} ∑_{j=2}^{3} { ((H/Λ)|k_Sη_c|)^{n+m+s} [ c_{nms}(1) + c_{nms}(n_t^{(1)})/(−i|k₁η_c|)^{1−n_t^{(1)}} ] if |k₁η_c v_{θ_j}| ≪ 1 ; −i ((H/Λ)(k_S/k₁))^{n+m+s} [ d_{nms}(1)/v_{θ_j}^{n+m+s} + d_{nms}(n_t^{(1)})/v_{θ_j}^{n+m+s+n_t^{(1)}−1} ] if |k₁η_c v_{θ_j}| ≫ 1 } + complex conjugate, (72)

with

v_{θ_j} ≡ 1 + cosθ_j, c_{nms}(x) ≡ c_λ/(n+m+s−1+x), d_{nms}(x) ≡ (−1)^{x−1}(n+m+s−2+x)! d_λ, (73)

and c_λ, d_λ are numerical factors of order one, possibly depending on θ_j. We observe that the generic contributions present both enhancing factors (such as |k_Sη_c| ≫ 1, or k_S/k₁ ≫ 1) and suppressing ones (such as (H/Λ)^{n+m+s−1} and λ_{nms}). We also observe that for |k₁η_c v_{θ_j}| ≫ 1 the contribution will in general not be of the local form. However, such contributions could be suppressed and thus subleading. We will discuss these points in detail in section 6. As a check, note that when specialized to the couplings (65), (66) that we studied in detail previously, equation (72) leads to the results (37), (57) found before, with the proper c_λ, d_λ.
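The small-argument regime of the time integral (70) is easy to verify numerically; a minimal sketch (integer v only, since η′^{v−1} with η′ < 0 and fractional v is ill-defined on the real axis):

```python
import numpy as np
from scipy.integrate import quad

def time_integral(v, k1, cos_theta, eta_c):
    """J = integral from eta_c to 0 of (eta')^(v-1) exp[i k1 (1+cos theta_j) eta'] d eta',
    with eta_c < 0 and integer v >= 1 (quad handles real/imaginary parts separately)."""
    a = k1 * (1.0 + cos_theta)
    f = lambda e: e ** (v - 1) * np.exp(1j * a * e)
    re = quad(lambda e: f(e).real, eta_c, 0.0)[0]
    im = quad(lambda e: f(e).imag, eta_c, 0.0)[0]
    return re + 1j * im

# Regime |k1 eta_c (1+cos theta_j)| << 1: the phase is irrelevant and J ~ -eta_c^v / v.
v, k1, ct, eta_c = 2, 1.0, 0.0, -0.01
print(time_integral(v, k1, ct, eta_c), -eta_c ** v / v)
```

For |k₁η_c(1+cosθ_j)| ≫ 1 the boundary terms oscillate rapidly and only average out in observables, so the second asymptotic branch of (70) is not reproduced by a pointwise numerical comparison.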
Modified dispersion relations
As discussed in section 5.1.3, the leading contribution to the bispectrum arises when the Wightman functions have support mostly in the interval of times in regions IV/III, given the solution (11). They go as (see appendix A.2)
(∂_{η′}G^±)_k(0, η′) = ∓i (H³/φ̇²) k (γ^{(*)}(k, η′)/a(η′)) e^{±i(Λ/H)Ω(k,η′)}/(2k²), (∂_i^{r≥0}G^±)_k(0, η′) = (H³/φ̇²) (±ik_i)^r (χ^{(*)}(k, η′)/a(η′)) e^{±i(Λ/H)Ω(k,η′)}/(2k²), (74)
where χ^{(*)}(k, η′) is defined in equation (108) and γ^{(*)}(k, η′) in (109). Once again, it is convenient to write the contribution to the bispectrum in terms of the variables defined in equation (38). Inserting the Wightman functions in (67), the leading contribution linear in β_{k_i} reads
δ_β⟨ζ_{k₁}ζ_{k₂}ζ_{k₃}⟩^{mdr}_λ ∝ (λ_{nms}/Λ^{n+m+s−1}) ∑_{j=2}^{3} β*_{k_j} (H^{n+m+s+5}/φ̇³) (2³k₁³k₂³k₃³)^{−1} (Λ/H)^{n+m+s} (k_a^{n+1}k_b^{m+1}k_c^{s+1}/k_S^{n+m+s}) ∫_{y_{II}∼−1}^{y} dy′ (−y′)^{n+m+s−1} ∏_{h={a,b,c}} (iγ^{(*)}(x_h, y′))^{n_t^{(h)}} (χ^{(*)}(x₁, y′))^{1−n_t^{(1)}} e^{i(Λ/H)x₁ṽ^{(n)}_{θ_j}(y′)} + hermitian, (75)
where we have used equation (42); n_t^{(a,b,c)} = {0, 1} accounts for the presence or not of time derivatives in the coupling acting on ζ_{k_{a,b,c}}, and a, b, c run over 1, 2, 3 taking into account the permutations, from which we take the dominant ones according to footnote 22. Depending on the specific Lorentz-breaking model, n, m, s can assume values larger than 1, such that n + m + s can be larger than 3²³.
The solution of the integral can again be obtained by asymptotic techniques such as those employed in sections 5.1.3, 5.2.2, so that the leading contribution reads
δ_β⟨ζ_{k₁}ζ_{k₂}ζ_{k₃}⟩^{mdr}_λ = (2π)³ δ⁽³⁾(∑_i k_i) B^{mdr}_{λnms} P_st(k₁)P_st(k_S), (76)
where, having written n + m + s = v and n + m + s + n_t^{(1)} − 1 = v_t, and used table 1 in appendix A.3,

B^{mdr}_{λnms} = ǫ β*_{k_S} (λ_{nms}/√(2ǫ)) (ΛM_Planck/H²) (k₁/k_S)^{1+min(n,m,s)} i^{n_t^{(1)}}
× ∑_{j=2}^{3} { (1/2)^{n_t^{(1)}} [ O(1) + ((Λ/H)(k₁/k_S))^{n_t^{(1)}−1} O(1) ] if (Λ/H)(k₁/k_S) ≪ 1 ;
{ ((H/Λ)(k_S/k₁))^v (−i)^v Γ(v)/(1+cosθ_j)^v [1 + C^{(0)}_{θ_j}((H/Λ)(k_S/k₁))] + (−i)^{v_t} Γ(v_t, iv_{θ_j})/(2^{n_t^{(1)}}(1+cosθ_j)^{v_t}) if v_{θ_j} > (H/Λ)(k_S/k₁) ;
((H/Λ)(k_S/k₁))^{v/(κ+1)} Γ(v/(κ+1)) e^{iπv·sign(F^{(κ)})/(2(κ+1))} / ((κ+1)|F^{(κ)}|^{v/(κ+1)}) [1 + C^{(κ)}_{θ_j∼π}((H/Λ)(k_S/k₁))] if v_{θ_j} < (H/Λ)(k_S/k₁) } if (Λ/H)(k₁/k_S) ≫ 1 }
+ complex conjugate, (77)

with

C^{(ρ)}_{θ_j}((H/Λ)(k_S/k₁)) ≡ ∑_{r,n=0}^{∞} c^{θ_j}_{r,n} ((H/Λ)(k_S/k₁))^{(κ+r+n)/(ρ+1)} Γ((v+κ+r+n)/(ρ+1)) e^{iπ(κ+r+n)·sign(F^{(ρ)})/(2(ρ+1))} / (Γ(v/(ρ+1)) |F^{(ρ+n)}|^{(κ+r+n)/(ρ+1)}), (78)
containing all the subleading asymptotic contributions (the c^{θ_j}_{r,n} are calculable order-one coefficients, whose precise form is irrelevant for us, obtained from the expansion of the integrand in (75)). Finally, v_{θ_j} has been defined in (32), Γ(v_t, iv_{θ_j}) is the lower incomplete gamma function, and the O(1) factors in the second line of (77) are written in detail in (118), appendix A.3.
This result is very similar to that obtained in the modified initial state scenario, except for two differences. One is that now n + m + s can also assume values larger than 3, in the case of couplings deriving from the Lorentz-breaking terms in the action. The other is that for (nearly) folded configurations κ is finite and determined by the first correction to the standard dispersion relation (see (45)), whereas in the modified initial state scenario κ → ∞ for the folded configuration. Apart from this, the similar structure of the two results can again be seen as showing that the squeezed limit is indeed dominated, at leading order, by the generic features common to both scenarios (such as particle content/creation, interference and time accumulation).
As a check, one finds that when equation (77) is specialized to the couplings (65), (66) that we studied previously as detailed examples, it leads to the results we found before, see (47), (59). Note, in particular, that since the coupling in equation (48) is the sum of two elementary cubic couplings (one with only time derivatives, the other with one time and two space derivatives) such that for folded configurations the leading contribution cancels out, for those configurations the result (59) is given by the first subleading correction in equation (77).
Signatures of very high energy physics in the squeezed limit
In the previous sections we have studied the contributions to the bispectrum in the squeezed limit in scenarios with modified initial state or modified dispersion relations at high energies. This has led to the formulas (71)-(72) and (76)-(77) for the leading corrections to the standard result for all possible cubic couplings, see (63), in the effective Lagrangian.
Armed with these results, we are now going to
• discuss the leading features of the contributions to the bispectrum in the squeezed limit, also identifying which cubic couplings yield the largest contributions;
• obtain the specific predictions for the modified scenarios reviewed in section 3.
Concerning the analysis of the leading features of the bispectrum in the squeezed limit, we will focus on the dependence on the probed large scale k₁⁻¹ and on the magnitude of the non-Gaussianities. Indeed, those are the most interesting features for observational purposes (for example regarding the halo bias) [13,14].
The results (71)-(72) and (76)-(77) show that the bispectrum exhibits different kinds of behaviour depending on the interplay between the squeezed momentum scale k₁ and the scales/times Λ, η_c of new physics. In particular, the different behaviours occur depending on whether |k₁η_c^{(k_S)}| or (k₁/k_S)(Λ/H) are larger or smaller than 1. We will study the various cases separately.
Enhancements
We investigate here the possibility of enhancements. As we have adopted a phenomenological approach, we do not have fully detailed knowledge of the magnitude of the Bogoliubov coefficient β_{k_S}, on which the leading contributions to the bispectrum depend, see (71)-(72) and (76)-(77). However, we know the constraints that it has to satisfy for phenomenological reasons and for self-consistency of the theory (see section 3.4). Therefore we can estimate the largest possible enhancements.
Also, it is noteworthy that the leading contributions depend only on β k S , which is independent of k 1 , see (72), (77). This means that our ignorance of the specific form of β k S does not affect our knowledge of the k 1 -dependence of the leading contributions.
We turn now to the detailed analysis.
p_S(η_II^{(k_S)}) associated with k_S at the particle creation/WKB breaking time η_II^{(k_S)}, which indeed has a magnitude of the order of Λ, see (82). This very simple exercise of rewriting is not useless, as it is another indication of the universality of the features of the squeezed limit in the modified scenarios.
Coming back to the question of whether there can be actual overall enhancements, potentially interesting for observations, one needs to take into account all the factors in equations (81), (82) for B^{mis, mdr}_λ: both the enhancing ones (M²_Planck/(HΛ)) and the suppressing ones (λ_{nms} ǫ|µ| (k₁/k_S)^{1+min(n,m,s)}). As for the latter, it is immediately evident that terms with a higher number of derivatives are more suppressed, since min(n, m, s) will be higher. This tells us that the contributions from the higher-derivative Lorentz-breaking couplings in the scenarios with modified dispersion relations, where n, m, s can assume values much larger than one, will be suppressed.
Considering then the possible values of n, m, s, see section 5.3, it turns out, as one might expect, that the cubic couplings (18) and (48) that we have discussed as detailed examples yield (among) the least suppressed contributions. Indeed, they lead to suppressions of order (k₁/k_S)⁰ and (k₁/k_S)², respectively, see (65), (66). In particular, the minimal cubic coupling (65) is not actually suppressed by factors of k₁/k_S. In fact, that coupling is leading also in the standard scenario, see [12]. Having obtained these field theory results, one should study the implications for actual experiments. This goes beyond the scope of this work, and we leave it for future research. However, we observe that, according to the analysis of [12], values such as k₁/k_S ∼ 10⁻² are indeed within the reach of LSS investigations, such as for example EUCLID, which in the most optimistic estimates can probe down to k₁/k_S < 10⁻³ [12].
|B^{mis, mdr}_{λnms}| ∼ ǫ |β_{k_S}| (λ_{nms}/√(2ǫ)) (ΛM_Planck/H²) (k₁/k_S)^{1+min(n,m,s)} E^{(n,m,s)}((H/Λ)(k_S/k₁)) (85)
≤ ǫ |µ| (λ_{nms}/√2) (M²_Planck/(HΛ)) (k₁/k_S)^{1+min(n,m,s)} E^{(n,m,s)}((H/Λ)(k_S/k₁)), (86)

where in the last inequality we have used the constraint (17) on |β_{k_S}|. Here,

E^{(n,m,s)}((H/Λ)(k_S/k₁)) = ∑_{j=2}^{3} { ((H/Λ)(k_S/k₁))^{n+m+s} [ O(1) + ∑_{r=0}^{∞} O(((H/Λ)(k_S/k₁))^{κ+r}) ] [θ_j] if 1+cosθ_j > (H/Λ)(k_S/k₁) ;
(H/(DΛ))^{n+m+s} [ 1 + (k₁D/k_S)^{n_t^{(1)}−1} O(1) ] [θ_j ∼ π]_{mis} ;
((H/Λ)(k_S/k₁))^{(n+m+s)/(κ+1)} [ 1 + ∑_{r=0}^{∞} O(((H/Λ)(k_S/k₁))^{(κ+r)/(κ+1)}) ] [θ_j ∼ π]_{mdr}, if 1+cosθ_j < (H/Λ)(k_S/k₁), (87)
where we have emphasized only the orders of magnitude, as we are interested in the enhancements (for the detailed expressions see (72), (79)), although less so in the scenario of modified dispersion relations. It follows, as we see from (87), that the result has a non-trivial θ_j dependence (different enhancements for different values of θ_j; recall also that we consider a realistic squeezed limit where k₁ is small but nonzero).
Finally, we see that the higher the number of derivatives, the more the suppression, because of the factors in (86), (87). In this respect, considering the possible values of n, m, s, the cubic couplings (18) and (48) again yield the least suppressed contributions. In the case of the minimal coupling (18), (65), we find that |B^{mis, mdr}_{min cub}| is bounded by

∑_{j=2}^{3} { 10⁻¹ (k_S/k₁)(1−n_s) if 1+cosθ_j > (H/Λ)(1/x₁) ; 10(1−n_s) [mis], 10^{(κ−1)/(κ+1)} (k_S/k₁)^{1/(κ+1)} [mdr], if 1+cosθ_j < (H/Λ)(1/x₁). (88)
We see that for k₁/k_S ≃ 10⁻² the folded configurations, both in the modified initial state and in the modified dispersion relations scenarios, could be enhanced tenfold with respect to the standard result, whereas the other configurations would present only a moderate enhancement compared to the result in the standard scenario.
In the case of the quartic-derivative interaction (48), (66), for a coupling g ∼ 10⁻², we find from (72), (77), taking into account the comment at the end of section 5.3.2, or directly from (57), (59), that

|B^{mis, mdr}_{quart der}| ≲ ∑_{j=2}^{3} { 10⁻¹ (k_S/k₁)(1−n_s) if 1+cosθ_j > (H/Λ)(1/x₁) ; 10³ (k₁/k_S)(1−n_s) [mis], 10^{3−4/(κ+1)} (k_S/k₁)^{2/(κ+1)−1} (1−n_s) [mdr], if 1+cosθ_j < (H/Λ)(1/x₁). (89)
Note that for k₁/k_S ≃ 10⁻², (88) and (89) are of the same magnitude, which again shows that in the modified scenarios the different a⁻¹ scaling of the higher-derivative interactions can compensate for the stronger suppression by higher powers of k₁/k_S and make the squeezed limit sensitive to higher-derivative interactions.
Dependence on k₁
We discuss now the dependence on the squeezed wavenumber k₁. Recall that the leading contributions depend only on β_{k_S}, which is independent of k₁, see (71)-(72) and (76)-(77). Thus, the k₁-dependence of the leading corrections to the bispectrum in the squeezed limit is not affected by our ignorance of the specific form of β_{k_S}.
In the following, we rewrite equations (71)-(72) and (76)-(77) highlighting the dependence on k₁. We will neglect the momenta-conserving delta function to avoid cluttering the formulas. Again, we find that the field theoretical results for the modified scenarios exhibit two different kinds of behaviour, depending on the magnitude of the scale of new physics as compared to the sensitivity of the observations (the smallest k₁ that can be probed). We will stress the differences with respect to the standard scenario, where the bispectrum grows in the squeezed limit as ∼ k₁⁻³ [10-12], and also compare with the result obtained with the approximate template proposed in [15] for scenarios with modified initial state, which leads to a bispectrum growing as ∼ k₁⁻², see [13,14].
Except for the case of folded configurations in modified initial state scenarios, it appears from (92) that higher-derivative couplings (larger values of n, m, s) would lead to a more pronounced growth for small k₁, since the exponent will be more negative. However, these contributions are very suppressed because of the factors (H/Λ)^{n+m+s}, (H/Λ)^{(n+m+s)/(κ+1)}. The most interesting cases are thus those that are least suppressed. Considering the possible values for n, m, s, see section 5.3, this happens for instance for the minimal cubic coupling that we studied as a detailed example, where n + m + s = 1 and min(n, m, s) = −1, see (65), in which case (roughly, see (36)-(37), (46)-(47) for the details)
δ_β⟨ζ_{k₁}(η)ζ_{k₂}(η)ζ_{k₃}(η)⟩_{min cub} = ǫ |β_{k_S}| ∑_{j=2}^{3} { (k_S/k₁) if 1+cosθ_j > (H/Λ)(1/x₁) ; Λ/H [mis], (Λ/H)^{κ/(κ+1)} (k_S/k₁)^{1/(κ+1)} [mdr], if 1+cosθ_j < (H/Λ)(1/x₁) } P_st(k₁)P_st(k_S). (93)

We see from (93) that in this case the form of the bispectrum in the squeezed limit has i) a non-local shape, with a scaling as k₁⁻⁴ and with an amplitude not too severely suppressed for non-folded configurations, ii) a non-local shape and enhanced amplitude (see also section 6.1) for (nearly) folded configurations in the case of modified dispersion relations, and iii) a local shape and greater enhancement for folded configurations in the modified initial state case. Interestingly, the scaling with k₁ of the contribution from (nearly) folded configurations in the case of modified dispersion relations captures the power κ of the first momentum correction to the standard dispersion relation, see (92).
Predictions for the different modified scenarios
We have seen that, most interestingly, in the modified scenarios the leading features of the bispectrum in the squeezed limit depend on very general aspects of the theory (horizon exit, particle production, interference and accumulation with time), due to the modification of the Wightman functions in these scenarios. These aspects cannot determine the bispectrum in the squeezed limit in all details, but, even adopting a wide-range phenomenological approach, we have been able to determine both the leading dependence on the squeezed scale k₁ and the presence of enhancements of the non-Gaussianities, together with their bounds.
We have seen that these features depend on the values (smaller or larger than 1) of the quantities |k₁η_c^{(k_S)}| and (k₁/k_S)(Λ/H). Note that k₁/k_S represents the sensitivity of the observation (the smallest ratio of scales that can be probed). We would now like to ask which kinds of conditions or predictions the different theoretical scenarios impose or make about the values of |k₁η_c^{(k_S)}| and (k₁/k_S)(Λ/H), and thus which features of the bispectrum would be realized in the squeezed limit.
• BEFT modified initial state. In the Boundary Effective Field Theory approach, the initial condition for the fields is fixed at a time η_c independently of the modes k ("beginning of inflation"). Thus η_c^{(k_S)} = η_c^{(k₁)} = η_c, and since initial conditions are always fixed when modes were subhorizon,

|k₁ η_c^{(k₁)}| ≫ 1 ⇒ |k₁ η_c^{(k_S)}| ≫ 1.
Looking therefore at the analysis in sections 6.1, 6.2, we are in the cases of equations (85), (92), and the leading possible contribution is given by (93), see also (88). This means that if the BEFT is the correct physical approach to model the effects of very high energy physics from a low-energy point of view, then the largest possible contribution to the bispectrum for non-folded configurations would grow at large scales as k₁⁻⁴, but with only moderately enhanced non-Gaussianities, whereas the contribution from folded configurations would be more enhanced, but would grow only as k₁⁻³.
• NPHS modified initial state. In the NPHS approach the boundary condition is mode-dependent, and imposed at the time given by |k η_c^{(k)}| = Λ/H ≫ 1. The picture is that different modes are created at
as [13]²⁶
Δb_h(k₁, R)/b_h = (δ_c/(D(z)M_R(k₁))) (1/(8π²σ_R²)) ∫₀^∞ dk₂ k₂² M_R(k₂) ∫₋₁¹ dξ M_R(√(k₁²+k₂²+2k₁k₂ξ)) B(k₂, √(k₁²+k₂²+2k₁k₂ξ), k₁)/P(k₁), (94)
where σ_R² is the variance of the dark matter density perturbations smoothed on a scale of Lagrangian radius R, δ_c is the critical threshold for the collapse of a spherical object (for a matter-dominated universe δ_c = 1.686), D(z) is the linear growth factor normalized to D(z) = (1+z)⁻¹ during matter domination, P and B are the two-point correlator²⁷ and the bispectrum of ζ, see equations (2)-(3), and M_R is the linear relation between the dark matter density perturbations smoothed on a scale R and the primordial curvature perturbation:
M_R(k) ≡ 2k² T(k) W_R(k)/(5Ω_m H₀²), (95)
where Ω m is the present time fractional density of matter. Finally, H 0 is the present Hubble rate, T is the transfer function normalized to one on large scales and W R is the filter function with characteristic scale R.
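Equation (95) is straightforward to implement numerically. The sketch below uses a spherical top-hat window and the BBKS fitting formula as an illustrative stand-in for the transfer function; the shape-parameter convention and the cosmological values are assumptions for illustration, not taken from the text:

```python
import numpy as np

def W_tophat(k, R):
    """Fourier transform of a spherical top-hat of Lagrangian radius R; -> 1 as kR -> 0."""
    x = k * R
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x ** 3

def T_bbks(k, omega_m=0.3, h=0.7):
    """BBKS fitting formula for the transfer function, normalized to 1 on large scales."""
    q = k / (omega_m * h ** 2)          # k in h/Mpc; shape-parameter convention assumed
    return (np.log(1.0 + 2.34 * q) / (2.34 * q)
            * (1.0 + 3.89 * q + (16.1 * q) ** 2
               + (5.46 * q) ** 3 + (6.71 * q) ** 4) ** -0.25)

def M_R(k, R, omega_m=0.3, H0=70.0):
    """Eq. (95): M_R(k) = 2 k^2 T(k) W_R(k) / (5 Omega_m H_0^2)."""
    return 2.0 * k ** 2 * T_bbks(k, omega_m) * W_tophat(k, R) / (5.0 * omega_m * H0 ** 2)

# For k -> 0 both T and W_R tend to 1, so M_R(k) grows as k^2 at large scales.
print(W_tophat(1e-4, 1.0), T_bbks(1e-6))
```

The k² growth of M_R at small k is what converts a given squeezed-limit scaling of the bispectrum into a steeper scale dependence of the bias correction in (94).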
The halo bias depends both on the primordial bispectrum and on the subsequent gravitational processing, which generates additional non-Gaussianities. These General Relativity corrections have been studied for example in [13], and it has been shown that they give a typical contribution to the halo bias, which could make it possible to distinguish them. We will therefore comment only on the primordial contributions.
The behaviour of the halo bias at large scales has been computed for the local, equilateral and folded templates for the primordial non-Gaussianities in [13] (see also [6-9,14]), finding that the bias goes approximately as k₁⁻² with the local bispectrum template, as a constant in k₁ with the equilateral, and as k₁⁻¹ with the folded. The latter result is the one that can be compared with those we have found for the modified initial state scenario, since that template was indeed proposed as an approximation to model the bispectrum in that case [15]. In fact, we have shown that it leads to the wrong result for the squeezed limit: the folded bispectrum template goes as k₁⁻², which is very different from the true bispectrum behaviour found in sections 5 and 6. The reason for this is that the template does not depend on the additional scale(s) η_c, Λ, and thus its squeezed limit is different from the one that actually occurs in the rigorous field theory computation.

We have indeed found in sections 5 and 6 that in the squeezed limit the leading contribution to the bispectrum grows as k₁⁻⁴ or k₁⁻³, depending on the relative magnitudes of the scale of new physics and the sensitivity of the observation (the smallest observable squeezed momentum scale). Therefore, from (94) and the results in sections 5 and 6, the halo bias for modified initial states would approximately grow either as k₁⁻³ or k₁⁻² for small k₁, and not as k₁⁻¹ as predicted by the template. Similarly, in the scenario of modified dispersion relations violating WKB at early times, equation (94) and the results in sections 5 and 6 would also lead to a bias that approximately grows either as k₁⁻³ or k₁⁻² for small k₁ for non-folded configurations, again depending on the relative magnitudes of the high-energy scales and the sensitivity of the observations. (Nearly) folded configurations would lead to different, subleading scalings with k₁, determined by the exponent of the first correction to the standard dispersion relation for small momenta.

²⁶ The formula (94), obtained in [13], is calculated in a local-Lagrangian-biasing scheme. Reference [14] has a similar but not identical formula calculated using a peak-background splitting method. As discussed in [14], the formulas agree on large scales, but differ on intermediate scales, where the peak-background splitting method involves more important approximations. It seems however that formula (94) is the one in better agreement with the simulations [8], see also [12].
²⁷ In the literature on the halo bias, the two-point correlator is often called the spectrum [13,14], not to be confused with the quantity P = (k³/2π²) P, also called the spectrum, usual in the CMBR literature.
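The k₁-scalings of the bias quoted here follow from (94) by a simple power count; a sketch, using only that T(k₁), W_R(k₁) → 1 as k₁ → 0 and parametrizing the squeezed bispectrum by an exponent p:

```latex
% Parametrize the squeezed bispectrum as  B \simeq A\,(k_S/k_1)^p\,P(k_1)P(k_S).
% The only k_1-dependent factors in (94) are M_R(k_1)^{-1} and B/P(k_1):
\frac{\Delta b_h}{b_h}\;\propto\;\frac{1}{M_R(k_1)}\,\frac{B(k_2,k_3,k_1)}{P(k_1)}
\;\propto\; k_1^{-2}\times k_1^{-p}\;=\;k_1^{-(p+2)},
\qquad M_R(k_1)\;\xrightarrow{\;k_1\to 0\;}\;\frac{2k_1^2}{5\,\Omega_m H_0^2}\,.
```

So a bispectrum growing as k₁⁻⁴ (p = 1) yields a bias growing as k₁⁻³, and one growing as k₁⁻³ (p = 0) yields k₁⁻², while the folded template (bispectrum ∼ k₁⁻², i.e. p = −1) gives the k₁⁻¹ behaviour found in [13].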
As for the magnitude of the halo bias, a feature relevant for observations is that, as we have seen, the high-energy modifications could enhance the single-field non-Gaussianities in the squeezed limit, compared to the practically undetectable level present in the standard scenario of single-field models with Bunch-Davies vacuum and standard kinetic terms. However, the final magnitude of the non-Gaussianities also depends on the specific model, which determines the precise form and magnitude of the coefficients β_{k_S} entering the equations.
Final discussion and conclusion
In this article we have analysed the squeezed limit of the bispectrum in single-field models of inflation when modifications of the theory at very high energy are present. Given the large number of inflationary models available, it is useful to look for general features that can be obtained when these modifications are rather generic and simple.
In particular, we have considered 1) the presence of terms with higher derivatives, which are no longer suppressed beyond a certain momentum scale and modify the dispersion relations of the fields, and 2) the possibility of initial conditions for the solutions of the field equations that parametrize in a simple and generic way our ignorance of the physics at very high energies. In case 1) the most significant results occur for dispersion relations that violate WKB (for a short time) at early times; in case 2) we have considered both the NPHS and BEFT approaches, where the initial condition is imposed respectively in a scale-invariant and in a non-scale-invariant way.
The leading contribution to the squeezed limit appears dominated by very general features of the scenarios (particle content/creation at early times, interference and accumulation with time), and thus it is possible to obtain general results and bounds. From a more mathematical point of view, the reason for the leading differences with respect to the standard result is the modification of the Wightman functions of the theory.
The results and bounds that we have found are interesting in two respects: on the one hand, they show that non-Gaussianities could be enhanced by high-energy modifications of the theory, which would improve the prospects of detecting those phenomena. On the other hand, the behaviour of the bispectrum at large scales in these scenarios is distinctive, and differs from the one found in the standard scenario, with Bunch-Davies vacuum and standard dispersion relation. It is also different from that found in previous works using the approximate template proposed in [15] for the modified initial state approach.
Our results show that it could be possible to obtain various pieces of information about the features of the high-energy theory from large-scale observations such as those of the halo bias, if the signatures we have found were detected. In fact, the enhancements and the dependence of the bispectrum on the small wavenumber k₁ in the realistic (observable) squeezed limit change in relation to the magnitude of the scale of new physics. Hence, upon detecting certain behaviours of the bispectrum, one can place bounds on the new scale of physics.
Of course, other sources could give rise to sizable non-Gaussianities, for example in multi-field models. An interesting question for future research would then be whether the effects due to the high-energy modifications of the theory that we have considered can be distinguished from those. For example, the dependence of the halo bias on the small mode k₁ could be different, or the enhancement factors could be absent or of different magnitude. We leave this as an interesting outlook for future research.
By requiring continuity of the function and its first derivative, and imposing the Wronskian condition W{f, f*} = −i to have the standard commutation relations in the quantum theory, we obtain in full generality
D₁ = (√π/2) e^{iπν/2 + iπ/4} α_k, D₂ = (√π/2) e^{−iπν/2 − iπ/4} β_k, (102)
B₁ = W{ς_k u₁, U₂}/W{U₁, U₂}|_{η_I}, B₂ = −W{ς_k u₁, U₁}/W{U₁, U₂}|_{η_I}, (103)
α_k = W{B₁U₁ + B₂U₂, u₂}/W{u₁, u₂}|_{η_II}, β_k = −W{B₁U₁ + B₂U₂, u₁}/W{u₁, u₂}|_{η_II}, (104)
where W is the Wronskian. We choose ς_k = 1, picking the usual adiabatic vacuum. The Wronskian condition also imposes |α_k|² − |β_k|² = 1. By expanding for small ∆ around η_I, we obtain equation (13) in the text. Finally, the parameter signalling the WKB violation is [20]
Q = −( −ω″/(4ω²) + 3ω′²/(8ω³) − 1/(ωη²) )² + ω″/(2ω) − 3ω′²/(4ω²) − U″/(2U) + 3U′²/(4U²). (105)
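The matching scheme (102)-(104) can be illustrated on the simplest possible example, a sudden frequency jump ω₁ → ω₂ at t = 0 (a toy stand-in for the matchings at η_I, η_II, with ς_k = 1). Computing α_k, β_k via the same Wronskian formulas reproduces the standard sudden-approximation coefficients and preserves the normalization |α_k|² − |β_k|² = 1:

```python
import numpy as np

def wronskian(f, g, t, h=1e-6):
    """W{f,g} = f g' - f' g, with central-difference derivatives."""
    fp = (f(t + h) - f(t - h)) / (2 * h)
    gp = (g(t + h) - g(t - h)) / (2 * h)
    return f(t) * gp - fp * g(t)

w1, w2, t0 = 1.0, 3.0, 0.0
f_in = lambda t: np.exp(-1j * w1 * t) / np.sqrt(2 * w1)   # incoming positive-frequency mode
u1   = lambda t: np.exp(-1j * w2 * t) / np.sqrt(2 * w2)   # out-region mode
u2   = lambda t: np.conj(u1(t))

W12   = wronskian(u1, u2, t0)
alpha =  wronskian(f_in, u2, t0) / W12    # cf. (104), with B1 U1 + B2 U2 -> f_in
beta  = -wronskian(f_in, u1, t0) / W12

print(alpha.real, beta.real)              # (w2+w1)/(2 sqrt(w1 w2)), (w2-w1)/(2 sqrt(w1 w2))
print(abs(alpha) ** 2 - abs(beta) ** 2)   # -> 1: normalization preserved
```

A nonzero β signals particle creation by the background; the Wronskian construction guarantees the Bogoliubov normalization automatically, which is the same mechanism at work in the matchings at η_I and η_II.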
A.3 Time integral in bispectrum computations with modified dispersion relations
When computing the bispectrum, we have encountered various Laplace integrals of the form
J_j^{(w)} = ∫_{y_II}^{y} dy′ y′^{w−1} e^{i(Λ/H)x₁ṽ_{θ_j}(y′)} h(x₁, x_S, θ_j, y′), (110)
see equations (39), (58), (75), where w and h(x₁, x_S, θ_j, y′) change in the different integrals and the variables are defined in (38). In particular, in the case of the minimal coupling interaction (18), the integral present in the correction of order β_k to the bispectrum, see equation (39), has the form of (110) with w = 1 and h({x}, θ_j, y′) → g({x}, θ_j, y′), where [20]
g({x_{h≠j}}, x_j, y′, m) ≡ ∏_{h≠j} γ*(x_h, y′) γ(x_j, y′) e^{i(H/Λ)y′ [∑_{h≠j} S₂(x_h) + (−1)^m S₂(x_j)]}, (111)
with γ defined in (109) and
S₂(x_i, y) = −∂_y²ω(x_i y)/(4ω(x_i y)²) + 3(∂_yω(x_i y))²/(8ω(x_i y)³) − 1/(ω(x_i y) y²), where in (111) γ(x, y′) stands for γ*(x, y′) if m = 0 and for γ(x, y′) if m = 1. (112)
In the squeezed limit, for m = 1 and j = 2, 3, looking at (98) and (109), we obtain

g(y) ≡ g(x₁ ≪ 1, x_{j=2,3} ≃ x_S ≃ 1, y, 1) ≃ −i |γ(x_S, y)|² e^{i(H/Λ)O(x₁, H/Λ)} ≃ −i |γ(1, y)|². (113)
In particular, the O(1) coefficient appearing in the result for the contribution to the bispectrum for (Λ/H)x₁ ≪ 1, in the last line of (43) and of (47), is a purely numerical factor, which reads in detail:
In the case, instead, of the higher-derivative interaction in equation (48), the integral appearing in equation (58) has the form of (110) with w = 3 and h({x}, θ_j, y′) → q({x}, θ_j, y′), where [20]

q({x_h}, x_j, y) = [ 6 ∏_{h≠j} γ*(x_h) γ(x_j) + 2 ((x_{j+1}·x_{j+2})/(x_{j+1}x_{j+2})) χ*(x_{j+1})χ*(x_{j+2})γ(x_j) + 2 ((x_{j+1}·x_j)/(x_{j+1}x_j)) χ*(x_{j+1})χ(x_j)γ*(x_{j+2}) + 2 ((x_{j+2}·x_j)/(x_{j+2}x_j)) χ*(x_{j+2})χ(x_j)γ*(x_{j+1}) ] e^{i(H/Λ)y [∑_{h≠j} S₂(x_h) − S₂(x_j)]}, (115)
and S₂ has been defined in equation (112), while χ is defined in (108). We write explicitly only the dependence on x for γ, χ to avoid cluttering the formula. The labels of the x's are here defined modulo 3.
In the squeezed limit, for j = 2, 3, looking at (98) and (109), the integrand of the leading contribution has

q(θ_j, x₁, y) ≡ q(x₁ ≪ 1, x_{j=2,3} ≃ x_S ≃ 1, y) ≃ −6i|γ*(1, y)|² + 2i|χ(1, y)|² − 4iχ*(x₁, y)cos(θ_j), (116)

so that, again, the O(1) coefficient appearing in the result for the contribution to the bispectrum for (Λ/H)x₁ ≪ 1 is a purely numerical factor.

Table 1: Leading behaviour of the integrands in (39), (58), (75) close to the boundary points (limits of integration) y_II ∼ −1 and y ∼ −H/Λ (so |y| ≪ 1), and close to the possible stationary point(s) y* in the squeezed limit x₁ ≪ 1, x_S ≃ 1.

                  |y| ≪ 1                                       y = y_II                                              y ≃ y*
F(x_S, y)         1 + (−y)^κ F^{(κ)}                            F^{(−1)}                                              F* + F*^{(ν*)} |y − y*|^{ν*}
ṽ_{θ_j}(y)        v_{θ_j} − (−y)^{κ+1} F^{(κ)} cosθ_j           −1 − F^{(−1)} cosθ_j                                  ṽ_{θ_j}(y*) + ṽ^{(ν*)}_{θ_j}(y*)|y−y*|^{ν*}, or ṽ_{θ_j}(y*) + ṽ^{(ν*)}_{θ_j}(y*)|y−y*|^{ν*} sign(y−y*)
g(y)              −i(1 + |y|^κ F^{(κ)})                         −i F^{(−1)}                                           (y − y*)^{µ*−1} g^{(µ*−1)}(y*)
q(θ_j, x₁, y)     −4i(v_{θ_j} + 2|y|^κ F^{(κ)}) − 4i(χ*(x₁,y) − 1)cosθ_j     −6i|γ*(1,−1)|² + 2i|χ(1,−1)|² − 4i cos(θ_j) χ*(x₁,−1)     (y − y*)^{τ*} q^{(τ*)}(θ_j, y*)
Similarly, we find that the O(1) coefficients appearing in the result for the contribution (75) to the bispectrum for (Λ/H)x₁ ≪ 1, see the last line of (77), are given by
∫_{−1}^{0} dy′ (y′)^u ∏_{h≠j} γ*(1, y′)^{n_t^{(h)}} γ*(1, y′)^{n_t^{(h+1)}} ∼ O(1),   (118)
where n_t^{(h)} has been defined below (75), and u equals v (respectively v_t), defined above (75), in the first (respectively second) coefficient.
For Λ H x 1 ≫ 1, the integrals like (110) are dominated by the contribution of boundary and stationary points, where the oscillations due to the large phase of the integrand are reduced. It is therefore important to study the behavior at leading order of the integrands in (39), (58), (75) close to the boundary points (limits of integration) y II ∼ −1 and y ∼ − H Λ ≪ −1 and to the possible stationary point(s) {y * }. In table 1, we report the leading and next-to-leading behaviour.
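As a numerical aside (not from the paper), the dominance of boundary and stationary points for rapidly oscillating integrals can be checked directly: for a monotonic phase the integral decays as 1/λ (boundary contributions only), while an interior stationary point slows the decay to 1/√λ. The phases below are toy choices, not the paper's integrands.

```python
import numpy as np

def osc_integral(phase, lam, a=0.0, b=1.0, n=200001):
    # Riemann-sum quadrature of I(lam) = \int_a^b exp(i * lam * phase(y)) dy.
    y = np.linspace(a, b, n)
    return np.mean(np.exp(1j * lam * phase(y))) * (b - a)

lams = [10.0, 40.0, 160.0]
# Monotonic phase: only the boundary points contribute, |I| ~ 1/lam.
mono = [abs(osc_integral(lambda y: y, lam)) for lam in lams]
# Interior stationary point at y* = 0.5: decay slows to |I| ~ 1/sqrt(lam).
stat = [abs(osc_integral(lambda y: (y - 0.5) ** 2, lam)) for lam in lams]

for lam, m, s in zip(lams, mono, stat):
    print(f"lam={lam:6.0f}  monotonic={m:.4f}  stationary={s:.4f}")
```

As λ grows by a factor of 16, the monotonic-phase integral shrinks roughly sixteenfold while the stationary-point one shrinks only about fourfold, which is why the stationary region ends up dominating.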
In the table, we include also the function F (x, y ′ ), defined in (96), and now written in terms of the variables (38). From its definition (96), it appears that its expansion in powers of p Λ = −y for |y| ≪ 1 is in fact the expansion for the effective frequency from the effective action. In particular, κ+1 is thus the power of the first correction to the standard physical dispersion relation (model dependent):
ω_phys(p) ∼ p (1 + (p/Λ)^κ F^{(κ)} + · · ·) .   (119)
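A minimal numerical sketch (toy values for Λ, κ, and F^{(κ)}, assumed here for illustration and not taken from the text) makes the statement concrete: the relative deviation of ω_phys from the standard relation scales as (p/Λ)^κ, so the first correction enters at power κ+1 in p.

```python
# Toy modified dispersion relation: all numbers (Lam, kappa, F_kappa) are
# illustrative assumptions, not values from the text.
def omega_phys(p, Lam=100.0, kappa=2.0, F_kappa=0.5):
    # omega_phys(p) = p * (1 + (p/Lam)^kappa * F^(kappa) + ...)
    return p * (1.0 + F_kappa * (p / Lam) ** kappa)

# The relative deviation from the standard relation omega = p scales as
# (p/Lam)^kappa: for kappa = 2 it quadruples each time p doubles.
for p in (1.0, 2.0, 4.0):
    print(p, omega_phys(p) / p - 1.0)
```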
In expanding g(y), q(θ_j, x_1, y) - see (113), (116) - we have taken into account the WKB conditions. The powers ν*, µ*, τ* are model dependent. We wish to be generic, so κ, ν*, µ*, τ* can be fractional or integer 28. Then, since F = ω_phys/p is a real function, its expansion must clearly be in powers of |y|, |y − y*|, with real coefficients, for generic ν*, κ. Similarly, it must be so for ṽ_{θ_j}(x, y) Λ = −(p + ω_phys(p)), which is a real function as well (in table 1 we have been very generic concerning its expansion around y*).
using the relation between ζ and ϕ, they are of the type (see also the comment at the end of section 5.3.2) minimal cubic: {n, m, s} = {1, 1,

23 However, time derivatives are at most of order 1, see section 5.3 and [39].
If we concentrate on these dominant (less suppressed) couplings, which we had studied in detail as examples in sections 5.1 and 5.2, we find that there can indeed be actual overall enhancement of the non-Gaussianities with respect to the standard scenario for certain values of the scale of new physics. Let us discuss a concrete example: take ǫ ∼ |µ| ∼ 10⁻², H ∼ 10⁻⁵ M_Planck, compatible with the results of WMAP, and Λ ∼ 10⁻³ M_Planck, which corresponds to the supersymmetric GUT scale. Then, from (82), if we can probe down to k_1/k_S ∼ 10⁻², the largest possible enhanced amplitude that we could observe would be roughly of the order of

|B^{NPHS, mdr}_{min cub}| ∼ 10 ǫ ∼ 10 (1 − n_s)   (83)

in the case of the minimal coupling (65), while it will be

|B^{NPHS, mdr}_{quart der}| ∼ 10³ g (1 − n_s) ∼ 10 (1 − n_s)   (84)

in the case of the quartic derivative interaction (66) with g ∼ 10⁻². It is noteworthy that for the higher (quartic)-derivative interaction the additional factors (essentially due to the power of the a⁻¹ scaling) compensate each other to give a result of the same magnitude as that of the minimal cubic coupling. This result is quite different from the one in the standard scenario, where the a⁻¹ scaling of the interactions does not have any important effect, since the time integral in the bispectrum is dominated by late times.
case the correction (71)-(72), (76)-(77) to the bispectrum for a generic cubic coupling contributes a parameter f_NL ∼ B (the θ_j dependence is indicated by the label [θ_j]). A few features are evident at first sight from (86), (87). First of all, the powers of (H/Λ)(k_S/k_1) in equations (86), (87) are suppressing factors, because we are considering the case (k_1/k_S)(Λ/H) ≫ 1. Second, the (nearly) folded configurations (singled out by the condition 1 + cos(θ_j) < (H/Λ)(k_S/k_1) ≪ 1) are enhanced with respect to the other configurations (D has been defined below equation
(see their schematic form in (65), (66)), which we have discussed as detailed examples, turn out again to be the most dominant ones. Equation (86) also includes enhancement factors such as M²_Planck/(HΛ), therefore the presence of actual overall enhancements depends once more on the values of the scales and quantities at play. To give some pointers about the possibilities, let us again consider the same example as before and the couplings (18) and (48), summarized in (65), (66). Recall that we take ǫ ∼ |µ| ∼ 10⁻², H ∼ 10⁻⁵ M_Planck, compatible with the results of WMAP, and Λ ∼ 10⁻³ M_Planck (supersymmetric GUT scale). Then, if we can only probe values down to k_1/k_S ∼ 10⁻², we are in the case k_1
∫ dy′ |γ(1, y′)|² ∼ O(1),   ∫ dy′ (3|γ(1, y′)|² − |χ(1, y′)|²) ∼ O(1),

with B^{mis}_{min cub} = Σ_{j=2}^{3} B^{mis, min cub}_{(j)}, B^{mis, min cub}_{(j)}
Table 1: Behaviour of functions entering the integrands in equations (39), (58), (75) close to the boundary points (limits of integration) y_II ∼ −1 and y ∼ −Λ/H ≪ −1, and close to the possible stationary point(s) y* in the squeezed limit x_1 ≪ 1, x_S ≃ 1.
We follow the conventions in [10, 15, 20, 38]; other authors call this perturbation R to distinguish it from the uniform energy-density curvature perturbation.
Some different general approximations are possible in this case, see [20] and appendix A.1. With particular choices of time coordinate, also the WKB approximation can still be applied sometimes, see [58].
However, the squeezed limit of their contributions to the bispectrum has not been studied in the modified scenarios we consider.
In [38], ζ_c is simply written as ζ, see their equation (3.11). The same is done in many other papers, such as [15].

8 The path of integration in time must be chosen such that the oscillating piece of the integrand becomes exponentially decreasing for η → −∞. This corresponds to taking the vacuum of the interacting theory [10].
Σ_{j=1}^{3} δF_1^{(j)} = −Σ_{j=1}^{3} Re[ i β^{mis*} k_j k_t ∫_{η_c}^{η∼0} dη′ e^{i(Σ_{h≠j} k_h − k_j) η′} ] = −Σ_{j=1}^{3} k_t Re[ β^{mis*}
It can be easily verified at least if ω is given by elementary functions and/or power series.
Recall that, since we are considering only scalar perturbations, after the expansion the spatial-derivative indices are contracted by the background metric, leading thus to contributions proportional, by permutation, to k_1 · k_2, k_1 · k_3, k_2 · k_3. But then, in the squeezed limit, the contributions proportional to k_1 · k_2 and k_1 · k_3 give an overall contribution proportional to k_1 · (k_2 + k_3) ∼ k_1² [12], which is subdominant with respect to the contribution proportional to k_2 · k_3, which goes as k_2 · k_3 → −k_2 k_3 = −k_S² in the limit. The latter is thus the leading one that we consider here and in the following.
As before, 'mis' indicates the modified initial state scenarios, while 'mdr' those with modified dispersion relations.

25 However, in the NPHS approach |k_S η_c^{(k_S)}| = Λ/H.
Acknowledgments

The author is supported by a Postdoctoral F.R.S.-F.N.R.S. research fellowship via the Ulysses Incentive Grant for the Mobility in Science and is "chercheur de recherche" F.R.S.-F.N.R.S.

and where D = |k_S η_c^{(k_S)}| in the modified initial state case, while D = Λ/H in the scenario with modified dispersion relations 25. For the final inequality (80) we have used the constraint (17) on |β_{k_S}|.

Writing the above expressions using a common symbol D for both kinds of modified scenarios is useful to make evident the similarity of the results in the two different cases. As we remarked, this similarity is an indication that the squeezed limit is dominated by the generic features common to both of them (particle content/creation, interference effects).

We now analyse in detail the features of (80), paying particular attention to the possibility of enhancements (we will focus on the scale dependence in section 6.2). If we consider first D = |k_S η_c^{(k_S)}| ... is the physical momentum associated with the wavenumber k_S at the boundary time η_c^{(k_S)}. When instead D = Λ/H, such as in the cases of modified dispersion relations or NPHS initial state, the expression (80) simplifies and we obtain |B^{NPHS, mdr}| ...

From (81), (82), we can understand various characteristics of the amplitude of the non-Gaussianities. First of all, we see that (81) is maximized for p_S(η_c) = Λ, that is, when the physical momentum associated with k_S at η_c^{(k_S)} is really at the largest scale at which we can trust our effective theory: the scale of new physics Λ. This is precisely what occurs in the NPHS scenario of initial state modification: the initial condition at η_c^{(k_S)} is fixed by construction when the physical momentum is at the cutoff scale Λ, and we get (82). In the case of modified dispersion relations, equation (82) can be re-written in a way that makes the similarity with the modified initial state case even more evident.
It is straightforward to obtain an equation which is analogous to (81), but where the place of p_S(η_c) is taken by the physical momentum ..., respectively for modified initial state or modified dispersion relations.

The corrections (90) scale as k_1^{−3+min(n,m,s)+n_t^{(1)}}. We thus find that higher-derivative couplings (larger values for n, m, s) grow more slowly for small k_1. The contribution that grows the fastest is of the form ..., that is, it has a local shape k_1^{−3}, and can be found for couplings such as the minimal cubic coupling, where min(n, m, s) = −1 (recall that this value occurs due to the combination (∂)^{−2} φ̇, therefore it is always accompanied by n_t^{(1)} = 1). Although it has a local form, the result is still different from the one in the standard scenario, because non-Gaussianities are/can be enhanced by the particle content/creation concerning the perturbations depending on k_{2,3}, as we have discussed in section 6.1. In this case, equations (71)-(72) and (76)-(77) yield, at leading order, ...

... different times in a certain condition that fixes the initial state of the perturbations in the effective theory at scales lower than Λ. When computing the three-point function, therefore, we cannot trace the interaction of the modes backward in time for a time earlier than the "creation time" for the largest of the considered modes: the prior evolution (and our ignorance about it) is encoded in the initial state. Thus, in the case of the squeezed limit, since η^{(k_S)} H is larger or smaller than 1 (super- or sub-horizon condition) despite the fact that |k_1 η_c^{(k_1)}| ≫ 1, depending on the values of the scales at play in the specific model, the bispectrum will behave in different ways for the various possibilities.
Looking at the analysis in sections 6.1, 6.2, we see that the leading correction to the bispectrum would grow at large scales as k_1^{−4} for non-folded configurations if the scale of new physics is such that Λ > (k_S/k_1) H, where k_1/k_S is the smallest ratio of scales that can be probed, see (92), (93). On the other hand, for folded configurations, or when Λ < (k_S/k_1) H (superhorizon condition), the contribution would be of the local type, growing at most as k_1^{−3} as in the standard case, but will be enhanced, see (90), (91), (92), (93), (83).

An interesting outcome of this result is that, since we have different signatures depending on whether (k_1/k_S)(Λ/H) is larger or smaller than 1, in case we could detect one or the other, say in the halo bias, and separate them from other kinds of non-Gaussianities (such as the secondary ones), we would obtain information on the magnitude of Λ given our knowledge of the observation sensitivity k_1/k_S.

• Modified dispersion relations. In the scenarios with modified dispersion relations violating WKB at early times, (k_1/k_S)(Λ/H) can be either smaller or greater than 1, depending on the value of the Lorentz-breaking scale Λ and the sensitivity k_1/k_S of the observation (smallest ratio that can be probed). In the case when Λ > (k_S/k_1) H, the least-suppressed possible contributions (which could actually be enhanced) show at leading order a growth as k_1^{−4} for non-folded configurations, and as slower powers k_1^{−r}, with r fractional, for the (nearly) folded ones, see (92), (93). Remarkably, in the latter case, the exponent of k_1 is determined by the momentum power of the lowest high-energy correction to the standard dispersion relation, see (93), (45), and it would thus be of great observational interest. On the other hand, in the case Λ < (k_S/k_1) H, the leading contribution would be of the local type growing at most as k_1^{−3}, see (91), but it could be enhanced, see (83).
Once again, if we could observe one or the other behaviour of the bispectrum, we could infer whether (k_1/k_S)(Λ/H) is either larger or smaller than 1, and thus obtain information on the magnitude of the Lorentz-breaking scale from our knowledge of the probed k_1.

Comments on the halo bias

We will briefly comment here on the implications of our results for the halo bias, leaving a more detailed analysis for future research. The halo bias depends on the bispectrum and the spectrum ...

A Appendices

A.1 Solution of the field equation in the case of modified dispersion relations

We parametrize the modifications to the dispersion relation via a generic function F((H/Λ)kη): ... with the only requirement that the WKB conditions be violated for a short period of time at early times. The generic shape of such a dispersion relation is in figure 1. The approximate solution of the equation of motion (6) in these cases is given in equation (11), where, see [20]:

• in regions I and III, in terms of the convenient variable (H/Λ)kη ≡ y_k: ... with ...

• in region IV, where ω(η, k) ∼ k and kη 1, ...

• in region II, the partial solution U_{1,2}(η, k) can be found either by solving in detail the specific equation, if one has a favourite model, or, more generally, by expanding the frequency around the minimum ω_0 at η = η_min as ... so that ... where W(a, b, z) is the Whittaker function. As explained in [20], this approximation is well-justified also because backreaction constrains the interval [η_I, η_II] to be very small (∆ < 1), so that η ∼ η_min for η, η_min ∈ [η_I, η_II].
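As an illustrative cross-check (with a toy slowly varying frequency, not the paper's F), the WKB behaviour invoked in regions I and III can be verified numerically: integrating f'' + ω(η)² f = 0 from WKB initial data reproduces the WKB amplitude |f| ∼ 1/√(2ω).

```python
import numpy as np

def omega(eta):
    # Slowly varying toy frequency (illustrative, not the paper's omega).
    return 10.0 + 0.1 * eta

def integrate(eta0, eta1, n=100000):
    # Integrate f'' + omega(eta)^2 f = 0 starting from WKB initial data
    # f = 1/sqrt(2*omega), f' = -i*omega*f, with a simple explicit stepper.
    etas = np.linspace(eta0, eta1, n)
    d = etas[1] - etas[0]
    f = 1.0 / np.sqrt(2.0 * omega(eta0)) + 0.0j
    fp = -1j * omega(eta0) * f
    for e in etas[:-1]:
        fpp = -omega(e) ** 2 * f
        f += fp * d + 0.5 * fpp * d * d
        fp += fpp * d
    return f

f_end = integrate(0.0, 5.0)
wkb_amp = 1.0 / np.sqrt(2.0 * omega(5.0))
print(abs(f_end), wkb_amp)  # the two amplitudes agree to within a few percent
```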
In any case, it actually turns out that the details of the solution in region II are not important for what concerns the leading contribution to the bispectrum [20].

A.2 Formulas for Wightman functions and their time derivatives

The bispectrum is computed at a time η when all momenta have exited the horizon, therefore the Wightman functions have the general form ... We list here the explicit formulas obtained by substituting in the above equations the field mode functions f_k for ζ in the various scenarios we consider. Due to the presence of positive- and negative-energy solutions, the Wightman function will also have positive- and negative-energy parts, which we indicate with G_k^±.

• Standard scenario and modified initial state. From equations (8), (15): ...

• Modified dispersion relations. From equation (11), we find that in region IV the Wightman function is the same as the one in equation (107), while in region III it is obtained from (97). Thus, ... where Ω(k, η′) is equal to Ω_F(k, η′) in region III and to kη′ in region IV. The derivative of the Wightman function is ... in region IV (109), where the suffix (*) means that the positive-energy solution is associated with γ*, χ*, while the negative-energy one with γ, χ. Finally, H is the conformal Hubble scale.
28 Given the WKB conditions, it is µ* ≥ 1, while τ* ≥ 0.

References

[1] E. Komatsu et al., arXiv:0902.4759 [astro-ph.CO].
[2] V. Desjacques and U. Seljak, Class. Quant. Grav. 27 (2010) 124011 [arXiv:1003.5020 [astro-ph.CO]].
[3] N. Dalal, O. Dore, D. Huterer and A. Shirokov, Phys. Rev. D 77 (2008) 123514 [arXiv:0710.4560 [astro-ph]].
[4] S. Matarrese and L. Verde, Astrophys. J. 677 (2008) L77 [arXiv:0801.4826 [astro-ph]].
[5] A. Slosar, C. Hirata, U. Seljak, S. Ho and N. Padmanabhan, JCAP 0808 (2008) 031 [arXiv:0805.3580 [astro-ph]].
[6] S. Shandera, N. Dalal and D. Huterer, JCAP 1103 (2011) 017 [arXiv:1010.3722 [astro-ph.CO]].
[7] V. Desjacques and U. Seljak, Phys. Rev. D 81 (2010) 023006 [arXiv:0907.2257 [astro-ph.CO]].
[8] C. Wagner, L. Verde and L. Boubekeur, JCAP 1010 (2010) 022 [arXiv:1006.5793 [astro-ph.CO]].
[9] K. M. Smith and M. LoVerde, JCAP 1111 (2011) 009 [arXiv:1010.0055 [astro-ph.CO]].
[10] J. M. Maldacena, JHEP 0305 (2003) 013 [arXiv:astro-ph/0210603].
[11] P. Creminelli and M. Zaldarriaga, JCAP 0410 (2004) 006 [arXiv:astro-ph/0407059].
[12] P. Creminelli, G. D'Amico, M. Musso and J. Norena, arXiv:1106.1462 [astro-ph.CO].
[13] L. Verde and S. Matarrese, Astrophys. J. 706 (2009) L91 [arXiv:0909.3224 [astro-ph.CO]].
[14] F. Schmidt and M. Kamionkowski, Phys. Rev. D 82 (2010) 103002 [arXiv:1008.0638 [astro-ph.CO]].
[15] P. D. Meerburg, J. P. van der Schaar and P. S. Corasaniti, JCAP 0905 (2009) 018 [arXiv:0901.4044 [hep-th]].
[16] J. Martin and R. H. Brandenberger, Phys. Rev. D 63 (2001) 123501 [arXiv:hep-th/0005209].
[17] J. Martin and R. H. Brandenberger, Phys. Rev. D 65 (2002) 103514 [arXiv:hep-th/0201189].
[18] J. Martin and R. Brandenberger, Phys. Rev. D 68 (2003) 063513 [arXiv:hep-th/0305161].
[19] M. Lemoine, M. Lubo, J. Martin and J. P. Uzan, Phys. Rev. D 65 (2002) 023510 [arXiv:hep-th/0109128].
[20] D. Chialva, JCAP 1201 (2012) 037 [arXiv:1106.0040 [hep-th]].
[21] D. Seery and J. E. Lidsey, JCAP 0506 (2005) 003 [arXiv:astro-ph/0503692].
[22] X. Chen, M.-x. Huang, S. Kachru and G. Shiu, JCAP 0701 (2007) 002 [arXiv:hep-th/0605045].
[23] S. Renaux-Petel, JCAP 1010 (2010) 020 [arXiv:1008.0260 [astro-ph.CO]].
[24] C. Burrage, R. H. Ribeiro and D. Seery, JCAP 1107 (2011) 032 [arXiv:1103.4126 [astro-ph.CO]].
[25] J. Khoury and F. Piazza, JCAP 0907 (2009) 026 [arXiv:0811.3633 [hep-th]].
[26] J. Khoury and P. J. Steinhardt, Phys. Rev. Lett. 104 (2010) 091301 [arXiv:0910.2230 [hep-th]].
[27] J. Khoury and P. J. Steinhardt, Phys. Rev. D 83 (2011) 123502 [arXiv:1101.3548 [hep-th]].
[28] D. Baumann, L. Senatore and M. Zaldarriaga, JCAP 1105 (2011) 004 [arXiv:1101.3320 [hep-th]].
[29] C. Wagner and L. Verde, arXiv:1102.3229 [astro-ph.CO].
[30] I. Agullo and L. Parker, Phys. Rev. D 83 (2011) 063526 [arXiv:1010.5766 [astro-ph.CO]].
[31] J. Ganc, Phys. Rev. D 84 (2011) 063514 [arXiv:1104.0244 [astro-ph.CO]].
[32] U. H. Danielsson, Phys. Rev. D 66 (2002) 023511 [arXiv:hep-th/0203198].
[33] U. H. Danielsson, JHEP 0207 (2002) 040 [arXiv:hep-th/0205227].
[34] R. Easther, B. R. Greene, W. H. Kinney and G. Shiu, Phys. Rev. D 66 (2002) 023518 [arXiv:hep-th/0204129].
[35] K. Schalm, G. Shiu and J. P. van der Schaar, JHEP 0404 (2004) 076 [arXiv:hep-th/0401164].
[36] K. Schalm, G. Shiu and J. P. van der Schaar, AIP Conf. Proc. 743 (2005) 362 [arXiv:hep-th/0412288].
[37] F. Nitti, M. Porrati and J.-W. Rombouts, Phys. Rev. D 72 (2005) 063503 [arXiv:hep-th/0503247].
[38] R. Holman and A. J. Tolley, JCAP 0805 (2008) 001 [arXiv:0710.1302 [hep-th]].
[39] S. Weinberg, Phys. Rev. D 77 (2008) 123541 [arXiv:0804.4291 [hep-th]].
[40] U. H. Danielsson, JHEP 0212 (2002) 025 [arXiv:hep-th/0210058].
[41] M. B. Einhorn and F. Larsen, Phys. Rev. D 68 (2003) 064002 [arXiv:hep-th/0305056].
[42] A. Vilenkin and L. H. Ford, Phys. Rev. D 26 (1982) 1231.
[43] A. Dey and S. Paban, JCAP 1204 (2012) 039 [arXiv:1106.5840 [hep-th]].
[44] G. Shiu and J. Xu, Phys. Rev. D 84 (2011) 103509 [arXiv:1108.0981 [hep-th]].
[45] M. G. Jackson and K. Schalm, Phys. Rev. Lett. 108 (2012) 111301 [arXiv:1007.0185 [hep-th]].
[46] P. Horava, Phys. Rev. D 79 (2009) 084008 [arXiv:0901.3775 [hep-th]].
[47] D. J. H. Chung and K. Freese, Phys. Rev. D 61 (2000) 023511 [arXiv:hep-ph/9906542].
[48] D. J. H. Chung and K. Freese, Phys. Rev. D 62 (2000) 063513 [arXiv:hep-ph/9910235].
[49] D. J. H. Chung, E. W. Kolb and A. Riotto, Phys. Rev. D 65 (2002) 083516 [arXiv:hep-ph/0008126].
[50] C. Csaki, J. Erlich and C. Grojean, Nucl. Phys. B 604 (2001) 312 [arXiv:hep-th/0012143].
[51] S. L. Dubovsky, JHEP 0201 (2002) 012 [arXiv:hep-th/0103205].
[52] D. Mattingly, Living Rev. Rel. 8 (2005) 5 [arXiv:gr-qc/0502097].
[53] T. Jacobson, S. Liberati and D. Mattingly, Annals Phys. 321 (2006) 150 [arXiv:astro-ph/0505267].
[54] D. Baumann and D. Green, arXiv:1102.5343 [hep-th].
[55] A. J. Tolley and M. Wyman, Phys. Rev. D 81 (2010) 043502 [arXiv:0910.1853 [hep-th]].
[56] D. Babich, P. Creminelli and M. Zaldarriaga, JCAP 0408 (2004) 009 [arXiv:astro-ph/0405356].
[57] A. Ashoorioon, D. Chialva and U. Danielsson, JCAP 1106 (2011) 034 [arXiv:1104.2338 [hep-th]].
[58] J. Martin and D. J. Schwarz, Phys. Rev. D 67 (2003) 083512 [arXiv:astro-ph/0210090].
[59] C. M. Bender and S. A. Orszag, "Advanced Mathematical Methods for Scientists and Engineers", McGraw-Hill, 1978 (Springer, 1999).
[60] M. Porrati, Proceedings of the 13th International Seminar on High-Energy Physics: Quarks 2004, Pushkinskie Gory, Russia, 24-30 May 2004 [arXiv:hep-th/0409210].
[61] M. Porrati, Phys. Lett. B 596 (2004) 306 [arXiv:hep-th/0402038].
[62] N. D. Birrell and P. C. W. Davies, "Quantum Fields in Curved Space", Cambridge University Press, 1984.
[63] E. Mottola, Phys. Rev. D 31 (1985) 754.
[64] L. Parker, Phys. Rev. Lett. 21 (1968) 562.
[65] L. Parker, Phys. Rev. 183 (1969) 1057.
[66] L. Parker, Phys. Rev. D 3 (1971) 346 [Erratum-ibid. D 3 (1971) 2546].
[67] S. A. Fulling, "Aspects of Quantum Field Theory in Curved Space-Time", Cambridge University Press, 1989.
[68] P. Creminelli, JCAP 0310 (2003) 003 [arXiv:astro-ph/0306122].
[69] A. Erdélyi, "Asymptotic Expansions", Dover, New York, 1956.
[70] C. Cheung, A. L. Fitzpatrick, J. Kaplan and L. Senatore, JCAP 0802 (2008) 021 [arXiv:0709.0295 [hep-th]].
Interactive Launch of 16,000 Microsoft Windows Instances on a Supercomputer
Michael Jones, Jeremy Kepner, Bradley Orchard, Albert Reuther, William Arcand, David Bestor, Bill Bergeron, Chansup Byun, Vijay Gadepally, Michael Houle, Matthew Hubbell, Anna Klein, Lauren Milechin, Julia Mullen, Andrew Prout, Antonio Rosa, Siddharth Samsi, Charles Yee, Peter Michaleas
MIT Lincoln Laboratory, Lexington, MA, U.S.A.
Abstract: Simulation, machine learning, and data analysis require a wide range of software which can be dependent upon specific operating systems, such as Microsoft Windows. Running this software interactively on massively parallel supercomputers can present many challenges. Traditional methods of scaling Microsoft Windows applications to run on thousands of processors have typically relied on heavyweight virtual machines that can be inefficient and slow to launch on modern manycore processors. This paper describes a unique approach using the Lincoln Laboratory LLMapReduce technology in combination with the Wine Windows compatibility layer to rapidly and simultaneously launch and run Microsoft Windows applications on thousands of cores on a supercomputer. Specifically, this work demonstrates launching 16,000 Microsoft Windows applications in 5 minutes running on 16,000 processor cores. This capability significantly broadens the range of applications that can be run at large scale on a supercomputer.

DOI: 10.1109/HPEC.2018.8547782 | arXiv:1808.04345
Keywords: High Performance Computing; Manycore; Microsoft Windows; Wine; Windows Emulation; Knight's Landing
I. INTRODUCTION
With the slowing down of Moore's Law [1], [2], parallel processing has become a primary technique for increasing application performance. Physical simulation, machine learning, and data analysis are rapidly growing applications that are utilizing parallel processing to achieve their performance goals. These applications require a wide range of software which can be dependent upon specific operating systems, such as Microsoft Windows. Parallel computing directly on the Windows platform has a long history [3]-[7]. The largest supercomputers currently available almost exclusively run the Linux operating system [8], [9]. Using these Linux-powered supercomputers, it is possible to rapidly launch interactive applications on thousands of processors in a matter of seconds [10].
A common way to launch multiple Microsoft Windows applications on Linux computers is to use virtual machines (VMs) [11]. Windows VMs replicate the complete operating system and its virtual memory environment for each instance of the Windows application that is running, which imposes a great deal of overhead on the applications. Launching many Windows VMs on a large supercomputer can often take several seconds per VM [12]-[14]. While this performance is adequate for interactive applications that may require a handful of processors, scaling up such applications to the thousands of processors typically found in a modern supercomputer is prohibitive.
This paper describes a unique approach using the Lincoln Laboratory LLMapReduce technology in combination with the Wine Windows compatibility layer to rapidly launch and run Microsoft Windows applications on thousands of cores on a supercomputer. Specifically, this work demonstrates launching 16,000 Microsoft Windows applications in 5 minutes running on 16,000 processor cores. This capability significantly broadens the range of applications that can be run at large scale on a supercomputer.
The primary goal of this paper is to illustrate the feasibility and provide a set of baseline measurements showing the kind of performance gain that can be realized by scaling a Windows application in a pleasingly parallel manner on a modern supercomputer. The organization of the rest of this paper is as follows. Section II goes into more detail on the various technologies for running Windows in a Linux environment. Section III describes the LLMapReduce technology used to launch thousands of simultaneous Windows instances. Section IV provides details on the hardware and software environment used to perform the launch time measurements. Section V presents the performance results and an overview of the findings. Section VI summarizes the work and the benefits gained by this approach, and describes future directions.
II. VIRTUALIZATION, CONTAINERIZATION, AND WINE
As of November 2017, 100% of the world's Top 500 supercomputers are running on the Linux operating system [15]. As a result, the vast majority of the modeling and simulation codes commonly used in science and engineering either natively run in Linux, or have a dedicated compute server component that can be separately deployed on a large number of Linux computers to render tractable the enormous computational complexity of modern models and simulations.
While the commoditization of the x86 platform and its rapidly expanding hardware capabilities have led to exponential growth in the use of virtualization as a means of maximizing the efficient use of system resources and allowing different operating systems to coexist on a single physical machine, the overhead of full hardware virtualization (running an entire, completely separate operating system kernel and a full complement of system libraries subordinate to a hypervisor) is significant. In addition to these resource overhead concerns, a recent study measured the launch times of a stripped-down Ubuntu Linux image on three of the most popular virtual machine provisioning systems and found that various overheads could account for up to 120 seconds of additional processing time on a modern hardware platform [14]. Operating system-level virtualization methods such as chroot [16], FreeBSD jail [17], OpenVZ [18], User Mode Linux [19], and Linux Containers as made popular by Docker [20] are designed to be an improvement in this regard, sacrificing the security and safety of complete operating system kernel separation for a greatly diminished footprint, much quicker launch times, and lower management overhead [21].
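To put these overheads in perspective, the following back-of-the-envelope arithmetic uses the up-to-120-second provisioning overhead from [14] and the 16,384-instance scale targeted later in this paper; the 256-node parallelism figure is illustrative, not a claim about any particular provisioning system.

```python
instances = 16384        # instance count targeted in this paper
vm_overhead_s = 120      # worst-case per-VM provisioning overhead [14]

# Fully serialized VM provisioning at worst-case overhead:
serial_hours = instances * vm_overhead_s / 3600.0   # ~546 hours

# Even provisioning in parallel across 256 nodes (64 VMs per node,
# launched one after another on each node):
parallel_minutes = instances / 256 * vm_overhead_s / 60.0   # 128 minutes
```

Either way, per-instance VM provisioning is far from the interactive, minutes-scale launch this paper demonstrates with Wine.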
The large-scale deployment of 'containerized' applications on a supercomputer is quite feasible, but for one notable caveat: the guest application is still using a limited subset of the interfaces exposed by the host operating system's kernel, and thus running a containerized Windows application in this manner would require a Windows host.
Luckily, there exists a third option when it comes to launching a Windows environment on a Linux-based supercomputer. The Wine project [22], [23] is an open source software compatibility layer designed to translate Windows semantics into their POSIX equivalents; from system calls to library APIs, and from file paths and named pipes to network and UNIX sockets, Wine seamlessly enables a myriad of Microsoft Windows applications to run on Linux and FreeBSD systems, in many cases with near-native performance and equivalent functionality. Additionally, because Wine is merely translating one software interface to another, there is very little environmental setup to perform prior to launching an application when compared to an equivalent task in a hardware virtual machine or with OS-assisted virtualization.
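As a sketch of how little setup a Wine launch needs, the snippet below builds the environment and command line for one Wine instance with an isolated prefix (a private directory holding that instance's virtual Windows environment). The executable name and prefix path are hypothetical; `WINEPREFIX`, `WINEDEBUG`, and the `wine` command itself are standard Wine conventions.

```python
import os

def wine_launch_spec(exe_path, prefix_dir):
    """Build (env, argv) for running a Windows binary under Wine.

    WINEPREFIX points Wine at a private directory containing the
    virtual Windows environment, so many instances can run side by
    side on one node without sharing state.
    """
    env = dict(os.environ)
    env["WINEPREFIX"] = prefix_dir   # isolated per-instance prefix
    env["WINEDEBUG"] = "-all"        # suppress Wine diagnostic output
    argv = ["wine", exe_path]
    return env, argv

env, argv = wine_launch_spec("app.exe", "/tmp/wineprefix-0")
```

The resulting `(env, argv)` pair could be handed to any process launcher; no image, hypervisor, or container runtime is involved.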
The goal of this approach is to present an unmodified Windows application with all of the appropriate software interfaces and a runtime environment that should be virtually indistinguishable from a real Windows operating system. A simplified depiction of Wine's layered architecture is presented in Figure 1.
III. LLMAPREDUCE: MULTI-LEVEL MAP-REDUCE
The Wine environment provides a potentially efficient means for running Windows applications on a Linux-based supercomputer. Interactively launching many simultaneous Wine environments requires an effective means of coordinating the launch of thousands of these environments. Recent experiments have shown that the naive, serial job submission performance of a modern job scheduler can significantly slow down processing for jobs with a very large number of tasks [24]. To achieve maximal job launch performance for large HPC
(high performance computing) or HPDA (high performance data analysis) jobs requires employing a technique known as multilevel scheduling, which involves modifying our analysis code slightly to be able to process multiple datasets or files with a single job launch [25]. The map-reduce parallel programming model [26] has become extremely popular in the big data community. The output of many workloads can increase greatly when they are run in a pleasingly parallel manner on a supercomputer. The LLMapReduce tool developed as part of the MIT SuperCloud software stack provides access to this familiar map-reduce programming model via a dramatically simplified interface and efficiently launches large, multi-level array jobs onto a cluster, often reducing complex parallel scheduling, job submission, and dependency resolution tasks to a single line of code while simultaneously maximizing job launch performance by reducing per-task latency [27]. Most importantly, LLMapReduce is not bound to a particular language and works with any executable, which makes it ideal for launching many simultaneous Wine instances. A notional illustration depicting the life cycle of the various components constituting a scheduler array job as generated by the LLMapReduce tool is presented in Figure 2.
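The scan-and-map behavior described above can be sketched as follows. This is a simplification, not the actual LLMapReduce implementation: the real tool generates scheduler array-job scripts, while this sketch reduces each task to a plain command line; the directory layout and `wine` program name are illustrative.

```python
import pathlib

def map_tasks(input_dir, output_dir, program):
    """Emulate LLMapReduce's map step: scan an input directory and
    generate one command line per input file, each corresponding to
    one task in a scheduler array job."""
    tasks = []
    for f in sorted(pathlib.Path(input_dir).iterdir()):
        out = pathlib.Path(output_dir) / (f.name + ".out")
        tasks.append([program, str(f), str(out)])
    return tasks
```

Because every file becomes an independent task, the scheduler can fan all of them out across the cluster in a single submission, and a "reduce" step can run once they all complete.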
Supercomputing systems require efficient mechanisms for managing the operational tasks involved in a program's life cycle on a large shared computing infrastructure: rapidly identifying available computing resources, allocating those resources to programs, scheduling the execution of those programs on their allocated resources, launching them, monitoring their execution and performing epilog clean-up tasks upon the program's termination (see Figure 3). The open source SLURM software provides these services and is independent of programming language (C, Fortran, Java, Matlab, etc.) or parallel programming model (message passing, distributed arrays, threads, map/reduce, etc.), which makes it ideal for launching Wine instances.
SLURM is an extremely scalable, full-featured Linux job scheduler with a modern, multi-threaded core scheduling engine and a very high-performance plug-in module architecture [28]. The combined feature set and serial launch latency of the SLURM scheduler compare favorably with other HPC resource managers [25], and it is well suited to managing a heterogeneous environment like the MIT SuperCloud.
IV. EXPERIMENTAL ENVIRONMENT
The MIT Lincoln Laboratory Supercomputing Center (LLSC) provides a high-performance computing platform to over 1000 users at MIT, and is heavily focused on highly iterative interactive supercomputing and rapid prototyping workloads [29], [30]. Part of the LLSC mission is to deliver new and innovative technologies and methods enabling scientists and engineers to quickly ramp up the pace of their research. By leveraging supercomputing and big data storage assets, the LLSC has built the MIT SuperCloud, a coherent fusion of the four largest computing ecosystems: supercomputing, enterprise computing, big data, and traditional databases. The MIT SuperCloud has spurred the development of a number of cross-ecosystem innovations in high performance databases [31], [32], database management [33], data protection [34], database federation [35], [36], data analytics [37], and system monitoring [38].
All the experiments described in this paper were performed on the LLSC TX-Green supercomputer using the MIT SuperCloud environment. The TX-Green supercomputer is a petascale system that consists of a heterogeneous mix of AMD Opteron, Intel Xeon, Nvidia, and Intel Xeon Phi processors connected to a single, non-blocking 10 Gigabit Ethernet Arista DCS-7508 core switch. All of the compute nodes used to launch the Windows instances were Intel Xeon Phi 7210 (Knights Landing) processors with 64 cores, 192 GB of system RAM, 16 GB of on-package MCDRAM configured in 'flat' mode, and 4 TB of local storage. The Lustre [39] central storage system uses a 10 petabyte Seagate ClusterStor CS9000 storage array that is directly connected to the core switch, as is each individual cluster node. This architecture provides high bandwidth from all the nodes to the central storage, and is depicted in Figure 4.
V. PERFORMANCE RESULTS
Rapid launching of Windows instances is a prerequisite for running Windows applications interactively on a supercomputer. The launch times of the Wine environment on the MIT SuperCloud were obtained by running on a supercomputer consisting of 648 compute nodes, each with at least 64 Xeon Phi processing cores, for a total of 41,472 processing cores. In all cases, a single Windows instance was run on 1, 2, 4, . . . , and 256 compute nodes, followed by running 2, 4, . . . , and 64 Windows instances on each of the 256 compute nodes to achieve a maximum of 16,384 simultaneous instances. An essential step for enabling rapid interactive launch is copying the Windows executable, typically several megabytes, and its supporting environment (e.g., libraries and configuration files) from the user's home directory on the central storage to the local storage on each node. The copy time for this operation is shown in Figure 5 and is small compared to the launch time. This short copy time is achievable because the central file system is capable of a large amount of parallel I/O to the compute nodes. This parallel I/O rate is attainable when the copy is initiated from each of the target compute nodes and thus requires coordination within the overall parallel execution.
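The node-initiated parallel copy can be sketched as a single SLURM step that runs one copy process per node, so every node pulls the executable from central storage simultaneously. The source and destination paths are hypothetical; `srun` with `--nodes` and `--ntasks-per-node` is standard SLURM usage.

```python
def stage_copy_command(nodes, src, dest):
    """One cp process per node, launched in parallel by SLURM, so the
    central file system serves all nodes at once instead of serially."""
    return ["srun", "--nodes=%d" % nodes, "--ntasks-per-node=1",
            "cp", src, dest]

cmd = stage_copy_command(256, "/home/user/app.exe", "/tmp/app.exe")
```

Running the copy from the compute nodes themselves, rather than pushing from a single login node, is what lets the central storage's aggregate parallel I/O bandwidth be exploited.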
The launch times and launch rates of the Windows instances are shown in Figures 5, 6, and 7, along with data taken from the literature for launching Windows instances on Azure [12] and Linux VM instances using Eucalyptus [14]. These results show that high Windows instance launch rates are achievable using Wine with LLMapReduce on the MIT SuperCloud. Launching over 16,000 instances in approximately 5 minutes directly enables interactive simulation, machine learning, and data analysis applications that require Windows executables.
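From the headline numbers above (16,000 instances launched in roughly 5 minutes), the aggregate launch rate works out to about 53 launches per second:

```python
instances = 16000
launch_time_s = 5 * 60   # 5 minutes, as reported above

rate = instances / launch_time_s   # aggregate launches per second
# rate is approximately 53.3 launches/second
```

Compare this against the several seconds to minutes of provisioning time per VM reported for Azure [12] and Eucalyptus [14] in Figures 6 and 7.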
VI. SUMMARY AND FUTURE WORK
Traditional methods of scaling Microsoft Windows applications to run on thousands of processors have typically relied on heavyweight platform virtualization software that can be inefficient and slow to launch on modern manycore processors. Simulation, machine learning, and data analysis require a wide range of software which often depends upon specific operating systems, such as Microsoft Windows. Running this software interactively on massively parallel supercomputers can present many challenges. This paper describes a unique approach using the Lincoln Laboratory LLMapReduce technology in combination with the Wine Windows compatibility layer to rapidly launch and run Microsoft Windows applications on thousands of cores on a supercomputer. Specifically, this work demonstrates launching 16,000 Microsoft Windows applications in 5 minutes running on 16,000 processor cores. This capability significantly broadens the range of applications that can be run at large scale on a supercomputer. Future work will focus on extending this capability to larger numbers of cores running more diverse applications.

Fig. 6: Launch time of Wine on the MIT SuperCloud versus the number of simultaneous instances launched. Launch times for Windows instances on Azure [12] and Linux VM instances using Eucalyptus [14] are also shown.
This material is based upon work supported by the Assistant Secretary of Defense for Research and Engineering under Air Force Contract No. FA8721-05-C-0002 and/or FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Assistant Secretary of Defense for Research and Engineering.
Fig. 2: Schematic depicting the life cycle of tasks launched using LLMapReduce (adapted from [27]). LLMapReduce scans an input directory and, for each file contained within, generates a job submission script within a scheduler array job and submits the aggregate batch job to the scheduler. Upon successful termination of all tasks, a "reduce" job is launched which can perform post-processing or epilog cleanup tasks.
1. Scalable System Scheduling for HPC and Big Data, Reuther et al., Journal of Parallel and Distributed Computing, January 2018.
Fig. 3: Schematic depicting key components of a canonical cluster scheduler, including job lifecycle management, resource management, task scheduling, and job execution (adapted from [25]). The SLURM scheduler used on the MIT SuperCloud systems behaves according to this model.
Fig. 4: Architecture of the MIT SuperCloud system. Users connect to the system over either a local area network or a wide area network. At the time of connection, their system joins the MIT SuperCloud and can act as a compute node in order to run parallel programs interactively. The centerpiece of the MIT SuperCloud is several file systems (Seagate, DDN, Dell, Hadoop, and Amazon S3) running on several different network fabrics (10 GigE, InfiniBand, OmniPath).
Fig. 5: Copy time of the Windows application from central storage to the local storage on each compute node versus the number of simultaneous instances launched.
Fig. 7: Launch rate of Wine on the MIT SuperCloud versus the number of simultaneous instances launched. Launch rates for Windows instances on Azure [12] and Linux VM instances using Eucalyptus [14] are also shown.
Fig. 1: Schematic depicting the architecture of a Windows program (APPLICATION.EXE) run on a UNIX-like system through the Wine emulator; the diagram is rendered from an ASCII version located in the Wine developer's manual. At the bottom of the figure are the native UNIX kernel, device drivers, and system libraries of the host operating system. From there, the Wine UNIX binaries and user-space libraries are loaded, creating the virtual Windows environment, which is then able to load Wine's implementation of the basic Win32 functionality provided by NTDLL.DLL, KERNEL.DLL, GDI.DLL, and USER.DLL. This emulated Windows environment then loads the unmodified native Windows application used to invoke Wine and any Windows user-space libraries it requires. This environment is supported by the Wine server, depicted on the right, which provides inter-process communication, synchronization, and process management.
ACKNOWLEDGMENTS

The authors wish to acknowledge the following individuals for their contributions: Bob Bond, Alan Edelman, Jack Fleischman, Charles Leiserson, Dave Martinez, and Paul Monticciolo.
REFERENCES

[1] G. Yeric, "Moore's law at 50: Are we planning for retirement?," in Electron Devices Meeting (IEDM), 2015 IEEE International, pp. 1-1, IEEE, 2015.
[2] T. N. Theis and H.-S. P. Wong, "The end of Moore's law: A new beginning for information technology," Computing in Science & Engineering, vol. 19, no. 2, pp. 41-50, 2017.
[3] D. Nicole, K. Takeda, and I. Wolton, "HPC on DEC Alphas and Windows NT," in High-Performance Computing, pp. 551-557, Springer, 1999.
[4] D. A. Lifka, "High performance computing with Microsoft Windows 2000," in Proceedings of the 3rd IEEE International Conference on Cluster Computing, p. 47, IEEE Computer Society, 2001.
[5] Y. Xia, X. Shi, L. Kuang, and J. Xuan, "Parallel geospatial analysis on Windows HPC platform," in Environmental Science and Information Application Technology (ESIAT), 2010 International Conference on, vol. 1, pp. 210-213, IEEE, 2010.
[6] M. Humphrey, Z. Hill, C. van Ingen, K. Jackson, and Y. Ryu, "Assessing the value of cloudbursting: A case study of satellite image processing on Windows Azure," in E-Science (e-Science), 2011 IEEE 7th International Conference on, pp. 126-133, IEEE, 2011.
[7] M. Husejko, I. Agtzidis, P. Baehler, T. Dul, J. Evans, N. Himyr, and H. Meinhard, "HPC in a HEP lab: lessons learned from setting up cost-effective HPC clusters," in Journal of Physics: Conference Series, vol. 664, IOP Publishing, 2015.
[8] L. Torvalds, "Linux: a portable operating system," Master's thesis, University of Helsinki, Dept. of Computing Science, 1997.
[9] A. Geist and D. A. Reed, "A survey of high-performance computing scaling challenges," The International Journal of High Performance Computing Applications, vol. 31, no. 1, pp. 104-113, 2017.
[10] A. Reuther, C. Byun, S. Samsi, W. Arcand, D. Bestor, B. Bergeron, V. Gadepally, M. Houle, M. Hubbell, M. Jones, A. Klein, P. Michaleas, L. Milechin, J. Mullen, A. Prout, A. Rosa, C. Yee, and J. Kepner, "Interactive supercomputing on 40,000 cores for machine learning and data analysis," in High Performance Extreme Computing Conference (HPEC), IEEE, 2018.
[11] C. A. Waldspurger, "Memory resource management in VMware ESX Server," ACM SIGOPS Operating Systems Review, vol. 36, no. SI, pp. 181-194, 2002.
[12] M. Mao and M. Humphrey, "A performance study on the VM startup time in the cloud," in Cloud Computing (CLOUD), 2012 IEEE 5th International Conference on, pp. 423-430, IEEE, 2012.
[13] A. Reuther, J. Kepner, W. Arcand, D. Bestor, B. Bergeron, C. Byun, M. Hubbell, P. Michaleas, J. Mullen, A. Prout, and A. Rosa, "LLSupercloud: Sharing HPC systems for diverse rapid prototyping," in High Performance Extreme Computing Conference (HPEC), IEEE, 2013.
[14] M. Jones, B. Arcand, B. Bergeron, D. Bestor, C. Byun, L. Milechin, V. Gadepally, M. Hubbell, J. Kepner, P. Michaleas, J. Mullen, A. Prout, T. Rosa, S. Samsi, C. Yee, and A. Reuther, "Scalability of VM provisioning systems," in High Performance Extreme Computing Conference (HPEC), IEEE, 2016.
[15] TOP500 Supercomputer Sites, "Operating System Family / Linux - Top 500 Supercomputer Sites." https://www.top500.org/statistics/details/osfam/1, 2017. [Online; accessed 01-May-2018].
[16] Linux man-pages, "Linux Programmer's Manual - CHROOT(2) - Change Root Directory." http://man7.org/linux/man-pages/man2/chroot.2.html, 2017. [Online; accessed 01-May-2018].
[17] P.-H. Kamp and R. N. Watson, "Jails: Confining the omnipotent root," in Proceedings of the 2nd International SANE Conference, vol. 43, p. 116, 2000.
[18] K. Kolyshkin, "Virtualization in Linux," White paper, OpenVZ, vol. 3, p. 39, 2006.
[19] J. Dike, "User-mode Linux," in Annual Linux Showcase & Conference, 2001.
[20] D. Merkel, "Docker: Lightweight Linux containers for consistent development and deployment," Linux Journal, vol. 2014, no. 239, p. 2, 2014.
[21] W. Felter, A. Ferreira, R. Rajamony, and J. Rubio, "An updated performance comparison of virtual machines and Linux containers," in Performance Analysis of Systems and Software (ISPASS), 2015 IEEE International Symposium On, pp. 171-172, IEEE, 2015.
[22] B. Amstadt and M. K. Johnson, "Wine," Linux Journal, vol. 1994, no. 4es, p. 3, 1994.
[23] The Wine Project, "WineHQ - Run Windows Applications on Linux, BSD, Solaris and macOS." https://www.winehq.org/, 2017. [Online; accessed 01-May-2018].
[24] A. Reuther, C. Byun, W. Arcand, D. Bestor, B. Bergeron, M. Hubbell, M. Jones, P. Michaleas, A. Prout, A. Rosa, and J. Kepner, "Scheduler technologies in support of high performance data analysis," in High Performance Extreme Computing Conference (HPEC), IEEE, 2016.
[25] A. Reuther, C. Byun, W. Arcand, D. Bestor, B. Bergeron, M. Hubbell, M. Jones, P. Michaleas, A. Prout, A. Rosa, and J. Kepner, "Scalable system scheduling for HPC and big data," Journal of Parallel and Distributed Computing, vol. 111, pp. 76-92, 2018.
[26] J. Dean and S. Ghemawat, "MapReduce: Simplified data processing on large clusters," Communications of the ACM, vol. 51, no. 1, pp. 107-113, 2008.
[27] C. Byun, J. Kepner, W. Arcand, D. Bestor, B. Bergeron, V. Gadepally, M. Hubbell, P. Michaleas, J. Mullen, A. Prout, A. Rosa, C. Yee, and A. Reuther, "LLMapReduce: Multi-level map-reduce for high performance data analysis," in High Performance Extreme Computing Conference (HPEC), IEEE, 2016.
[28] A. B. Yoo, M. A. Jette, and M. Grondona, "SLURM: Simple Linux Utility for Resource Management," in Workshop on Job Scheduling Strategies for Parallel Processing, pp. 44-60, Springer, 2003.
[29] A. Reuther, T. Currie, J. Kepner, H. G. Kim, A. McCabe, M. P. Moore, and N. Travinin, "LLGrid: Enabling on-demand grid computing with gridMatlab and pMatlab," tech. rep., MIT Lincoln Laboratory, 2004.
[30] N. T. Bliss, R. Bond, J. Kepner, H. Kim, and A. Reuther, "Interactive grid computing at Lincoln Laboratory," Lincoln Laboratory Journal, vol. 16, no. 1, p. 165, 2006.
[31] C. Byun, W. Arcand, D. Bestor, B. Bergeron, M. Hubbell, J. Kepner, A. McCabe, P. Michaleas, J. Mullen, D. O'Gwynn, A. Prout, A. Reuther, A. Rosa, and C. Yee, "Driving big data with big compute," in High Performance Extreme Computing Conference (HPEC), IEEE, 2012.
[32] J. Kepner, W. Arcand, D. Bestor, B. Bergeron, C. Byun, V. Gadepally, M. Hubbell, P. Michaleas, J. Mullen, A. Prout, A. Reuther, A. Rosa, and C. Yee, "Achieving 100,000,000 database inserts per second using Accumulo and D4M," in High Performance Extreme Computing Conference (HPEC), IEEE, 2014.
[33] A. Prout, J. Kepner, P. Michaleas, W. Arcand, D. Bestor, B. Bergeron, C. Byun, L. Edwards, V. Gadepally, M. Hubbell, J. Mullen, A. Rosa, C. Yee, and A. Reuther, "Enabling on-demand database computing with MIT SuperCloud database management system," in High Performance Extreme Computing Conference (HPEC), IEEE, 2015.
[34] J. Kepner, V. Gadepally, P. Michaleas, N. Schear, M. Varia, A. Yerukhimovich, and R. K. Cunningham, "Computing on masked data: a high performance method for improving big data veracity," in High Performance Extreme Computing Conference (HPEC), IEEE, 2014.
[35] J. Kepner, C. Anderson, W. Arcand, D. Bestor, B. Bergeron, C. Byun, M. Hubbell, P. Michaleas, J. Mullen, D. O'Gwynn, A. Prout, A. Reuther, A. Rosa, and C. Yee, "D4M 2.0 schema: A general purpose high performance schema for the Accumulo database," in High Performance Extreme Computing Conference (HPEC), IEEE, 2013.
[36] V. Gadepally, J. Kepner, W. Arcand, D. Bestor, B. Bergeron, C. Byun, L. Edwards, M. Hubbell, P. Michaleas, J. Mullen, A. Prout, A. Rosa, C. Yee, and A. Reuther, "D4M: Bringing associative arrays to database engines," in High Performance Extreme Computing Conference (HPEC), IEEE, 2015.
[37] J. Kepner, W. Arcand, W. Bergeron, N. Bliss, R. Bond, C. Byun, G. Condon, K. Gregson, M. Hubbell, J. Kurz, A. McCabe, P. Michaleas, A. Prout, A. Reuther, A. Rosa, and C. Yee, "Dynamic Distributed Dimensional Data Model (D4M) database and computation system," in 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5349-5352, IEEE, 2012.
[38] M. Hubbell, A. Moran, W. Arcand, D. Bestor, B. Bergeron, C. Byun, V. Gadepally, P. Michaleas, J. Mullen, A. Prout, A. Reuther, A. Rosa, C. Yee, and J. Kepner, "Big data strategies for data center infrastructure management using a 3D gaming platform," in High Performance Extreme Computing Conference (HPEC), IEEE, 2015.
[39] P. J. Braam, The Lustre Storage Architecture. Cluster File Systems, Inc., October 2003.
|
[] |
[
"arXiv:cond-mat/9505029v1 8 May 1995 Electrons, pseudoparticles, and quasiparticles in one-dimensional interacting electronic systems",
"arXiv:cond-mat/9505029v1 8 May 1995 Electrons, pseudoparticles, and quasiparticles in one-dimensional interacting electronic systems"
] |
[
"J M P Carmelo \nDepartment of Physics\nUniversity ofÉvora, Apartado 94P-7001ÉvoraCodexPortugal\n",
"A H Castro Neto \nInstitute of Theoretical Physics\nUniversity of California\n93106-4030Santa BarbaraCA\n",
"N M R Peres \nDepartment of Physics\nUniversity ofÉvora, Apartado 94P-7001ÉvoraCodexPortugal\n"
] |
[
"Department of Physics\nUniversity ofÉvora, Apartado 94P-7001ÉvoraCodexPortugal",
"Institute of Theoretical Physics\nUniversity of California\n93106-4030Santa BarbaraCA",
"Department of Physics\nUniversity ofÉvora, Apartado 94P-7001ÉvoraCodexPortugal"
] |
[] |
We find the singular transformation between the electron operator and the pseudoparticle operators for the Hubbard chain. We generalize the concept of quasiparticle to one-dimensional electronic systems which in 1D refers to many-pseudoparticle objects. We obtain explicit results for the electron renormalization factor, self energy, and vertex functions at the Fermi points. We also discuss the possible connection of our results to higher dimensions and explore the possibilities of instabilities in the interacting problem such as the formation of Cooper pairs.
| null |
[
"https://export.arxiv.org/pdf/cond-mat/9505029v1.pdf"
] | 2,770,228 |
cond-mat/9505029
|
93203d67d2decb230acf4de10ac736dfe0b55773
|
arXiv:cond-mat/9505029v1 8 May 1995 Electrons, pseudoparticles, and quasiparticles in one-dimensional interacting electronic systems
J M P Carmelo
Department of Physics
University of Évora, Apartado 94, P-7001 Évora Codex, Portugal
A H Castro Neto
Institute of Theoretical Physics
University of California, Santa Barbara, CA 93106-4030
N M R Peres
Department of Physics
University of Évora, Apartado 94, P-7001 Évora Codex, Portugal
(Received 24 February 1995) PACS numbers: 72.15.Nj, 74.20.-z, 75.10.Lp, 67.40.Db. Typeset using REVTEX.
We find the singular transformation between the electron operator and the pseudoparticle operators for the Hubbard chain. We generalize the concept of quasiparticle to one-dimensional electronic systems which in 1D refers to many-pseudoparticle objects. We obtain explicit results for the electron renormalization factor, self energy, and vertex functions at the Fermi points. We also discuss the possible connection of our results to higher dimensions and explore the possibilities of instabilities in the interacting problem such as the formation of Cooper pairs.
The unconventional electronic properties of novel materials such as the superconducting copper oxides and synthetic quasi-one-dimensional conductors have attracted much attention to the many-electron problem in spatial dimensions 1 ≤ D ≤ 3. Although quantum liquids in dimensions 1 < D < 3 are, probably, neither Fermi liquids (3D) nor Luttinger liquids (1D) but have instead an intermediate physics, the complexity of the problem requires a good understanding of both the different and common properties of these two limiting cases.
While their different properties were the motivation for the introduction of the concept of Luttinger liquid in 1D [1], the characterization of their common properties is also of great interest because the latter are expected to be present in dimensions 1 < D < 3 as well.
One example is the Landau-liquid character common to Fermi liquids and some Luttinger liquids which consists in the generation of the low-energy excitations in terms of different momentum-occupation configurations of quantum objects (quasiparticles or pseudoparticles) whose forward-scattering interactions determine the low-energy properties of the quantum liquid. This generalized Landau-liquid theory was introduced in Ref. [2], which refers to contact-interaction soluble problems (shortly after the same kind of ideas were applied to 1/r 2 -interaction integrable models [3]).
The nature of interacting electronic quantum liquids in dimensions 1 <D< 3, including the existence or non existence of quasiparticles and Fermi surfaces, remains an open question of crucial importance for the clarification of the microscopic mechanisms behind the unconventional properties of the novel materials. Inspired by the results of Landau's theory of the Fermi liquid in 3D, it is one of the aims of this Letter to introduce the operator which creates a well defined elementary excitation of the Fermi system, which we call quasiparticle.
This excitation is a transition between two exact ground states of the interacting electronic problem differing in the number of electrons by one. When one electron is added to the electronic system the number of these excitations also increases by one. Naturally, its relation to the electron will depend on the overlap between the states associated with this and the quasiparticle and how close we are in energy from the starting interacting ground state. Therefore, in order to define the quasiparticle we need to understand the properties of the actual ground state of the problem as, for instance, is given by its exact solution via the Bethe ansatz (BA). The vanishing of the one-electron renormalization factor in 1D does not necessarily implies the non existence of the above quasiparticles and clarifying the problem in the 1D limit is also important for understanding how dimensionality changes the physics of interacting electronic systems in dimensions 1 ≤D≤ 3.
We consider here the Hubbard model [4][5][6][7] in one dimension with a finite chemical potential µ and in the presence of a magnetic field H,
$$H = -t\sum_{j,\sigma}\left[c^{\dagger}_{j,\sigma}c_{j+1,\sigma} + c^{\dagger}_{j+1,\sigma}c_{j,\sigma}\right] + U\sum_{j}\left[c^{\dagger}_{j,\uparrow}c_{j,\uparrow}-\tfrac{1}{2}\right]\left[c^{\dagger}_{j,\downarrow}c_{j,\downarrow}-\tfrac{1}{2}\right] - \mu\sum_{\sigma}\hat{N}_{\sigma} - 2\mu_{0}H\hat{S}_{z}, \tag{1}$$
where $c^{\dagger}_{j,\sigma}$ and $c_{j,\sigma}$ are the creation and annihilation operators, respectively, for electrons at the site j with spin projection σ = ↑, ↓. In what follows $k_{F\sigma} = \pi n_\sigma$ and $k_F = [k_{F\uparrow} + k_{F\downarrow}]/2 = \pi n/2$, where $n_\sigma = N_\sigma/N_a$ and $n = N/N_a$, and $N_\sigma$ and $N_a$ are the number of σ electrons and lattice sites, respectively ($N = \sum_\sigma N_\sigma$). The results refer to all finite values of U, electron densities 0 < n < 1, and spin densities 0 < m < n. This problem can be diagonalized using the BA [4]. This solution refers to a pseudoparticle operator basis, as was well established in Refs. [6,7] (the pseudoparticle Hamiltonian parameters and phase shifts we refer to below are studied in detail in these papers). At constant values of the electron numbers this description of the problem is very similar to Fermi-liquid theory, except for two main differences: (i) the ↑ and ↓ quasiparticles are replaced by the c and s pseudoparticles and (ii) the discrete pseudoparticle momentum (pseudomomentum) is of the usual form $q_j = \frac{2\pi}{N_a} I^{\alpha}_j$ but the numbers $I^{\alpha}_j$ are not always integers. They are integers or half-integers depending on whether the number of particles in the system is even or odd. This plays a central role in the present problem. The actual ground state of (1) is described in terms of the above c and s pseudoparticles which are created and annihilated by fermionic operators $b^{\dagger}_{q,\alpha}$ and $b_{q,\alpha}$, respectively (where α = c, s). The ground state, $|0; N_c = N_\uparrow + N_\downarrow, N_s = N_\downarrow\rangle$, and low-energy Hamiltonian eigenstates can be completely described in terms of the occupations of these excitations. (Below we often use the alternative notation for the ground state, $|0; N_\sigma, N_{-\sigma}\rangle$.)
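For readers who want a concrete handle on the notation of Eq. (1), the Hamiltonian can be checked numerically by exact diagonalization on a toy two-site open chain. This is our own illustrative sketch, not part of the original analysis; the Jordan-Wigner mode ordering, the two-site truncation, and all parameter values are assumptions made here.

```python
import numpy as np

L = 2                      # lattice sites (toy chain, open boundary)
NMODES = 2 * L             # spin-up modes 0..L-1, spin-down modes L..2L-1
DIM = 2 ** NMODES

def parity(state, p):
    # (-1)^(number of occupied modes with index < p): Jordan-Wigner sign
    return (-1) ** bin(state & ((1 << p) - 1)).count("1")

def apply_cdag_c(state, p, q):
    # return (sign, new_state) for c†_p c_q |state>, or None if it annihilates
    if not (state >> q) & 1:
        return None
    s1 = parity(state, q)
    state ^= 1 << q
    if (state >> p) & 1:
        return None
    s2 = parity(state, p)
    return s1 * s2, state ^ (1 << p)

def hubbard_hamiltonian(t, U, mu=0.0, h=0.0, mu0=1.0):
    H = np.zeros((DIM, DIM))
    for s in range(DIM):
        # diagonal part: interaction, chemical potential, Zeeman term
        diag = 0.0
        for j in range(L):
            nu = (s >> j) & 1          # n_{j,up}
            nd = (s >> (L + j)) & 1    # n_{j,down}
            diag += U * (nu - 0.5) * (nd - 0.5)
            diag += -mu * (nu + nd)
            diag += -2 * mu0 * h * 0.5 * (nu - nd)   # S_z = (N_up - N_down)/2
        H[s, s] = diag
        # hopping: -t (c†_{j,σ} c_{j+1,σ} + h.c.), open chain
        for j in range(L - 1):
            for spin in (0, 1):
                a, b = spin * L + j, spin * L + j + 1
                for (p, q) in ((a, b), (b, a)):
                    res = apply_cdag_c(s, p, q)
                    if res is not None:
                        sign, s2 = res
                        H[s2, s] += -t * sign
    return H
```

With t = 1, U = 0 (and μ = H = 0) the global ground-state energy is −2t, both spin species filling the bonding orbital; with the particle-hole symmetric interaction of Eq. (1) at U = 4 the half-filled singlet gives −2√2.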
The c and s pseudoparticles are non-interacting at the small-momentum and low-energy fixed point and the spectrum is described in terms of bands in a pseudo-Brillouin zone which goes between $q^{(-)}_{c} \approx -\pi$ and $q^{(+)}_{c} \approx \pi$ for the c pseudoparticles and $q^{(-)}_{s} \approx -k_{F\uparrow}$ and $q^{(+)}_{s} \approx k_{F\uparrow}$ for the s pseudoparticles. In the ground state these are occupied for $q^{(-)}_{F\alpha} \le q \le q^{(+)}_{F\alpha}$, where the pseudo-Fermi points are such that $q^{(\pm)}_{Fc} \approx \pm 2k_F$ and $q^{(\pm)}_{Fs} \approx \pm k_{F\downarrow}$.
Often, it is useful to introduce the quantum number $\iota = \pm 1$ which defines the right ($\iota = 1$) and left ($\iota = -1$) pseudoparticles of momentum $\tilde q = q - q^{(\pm)}_{F\alpha}$. At higher energies and (or) large momenta the pseudoparticles start to interact (this is the price paid for the choice of a fermionic statistics!) via zero-momentum transfer forward-scattering processes. As in a Fermi liquid, these are associated with f functions whose values at the pseudo-Fermi points define the Landau parameters,

$$F^{j}_{\alpha\alpha'} = \frac{1}{2\pi}\sum_{\iota=\pm1}(\iota)^{j}\, f_{\alpha\alpha'}(q^{(\pm)}_{F\alpha}, \iota q^{(\pm)}_{F\alpha'}), \qquad j = 0, 1.$$

Their expressions involve the pseudoparticle group velocities $v_\alpha = \pm v_\alpha(q^{(\pm)}_{F\alpha})$ and the parameters

$$\xi^{j}_{\alpha\alpha'} = \delta_{\alpha,\alpha'} + \Phi_{\alpha\alpha'}(q^{(+)}_{F\alpha}, q^{(+)}_{F\alpha'}) + (-1)^{j}\,\Phi_{\alpha\alpha'}(q^{(+)}_{F\alpha}, q^{(-)}_{F\alpha'}), \qquad j = 0, 1,$$

where $\Phi_{\alpha\alpha'}(q, q')$ is a two-pseudoparticle phase shift.
Our task is finding the relationship between the electronic operators $c^{\dagger}_{k,\sigma}$ in momentum space and the pseudoparticle operators $b^{\dagger}_{q,\alpha}$. In this Letter we solve the problem at the relative momenta of ground-state pairs differing in $N_\sigma$ by one. Notice that the electron excitation is not an eigenstate of the interacting problem and, therefore, when the electronic operator acts onto the ground state it produces a multiparticle process in terms of the pseudoparticles. The study of the above ground-state pairs of the interacting problem (1) reveals that their relative momentum equals precisely the U = 0 Fermi points, $\pm k_{F\sigma}$. We consider the case when the electron has spin projection, say, ↑ and momentum $k_{F\uparrow}$ (the construction is the same for the case of spin projection ↓). We define the quasiparticle operator, $\tilde c^{\dagger}_{k_{F\uparrow},\uparrow}$, which creates one quasiparticle with spin projection ↑ and momentum $k_{F\uparrow}$, as

$$\tilde c^{\dagger}_{k_{F\uparrow},\uparrow}\,|0; N_c = N_\uparrow + N_\downarrow, N_s = N_\downarrow\rangle = |0; N_c = N_\uparrow + 1 + N_\downarrow, N_s = N_\downarrow\rangle. \tag{2}$$
The quasiparticle operator defines a one-to-one correspondence between the addition of one electron to the system and the creation of one quasiparticle, exactly as we expect from the Landau theory in 3D: the electronic excitation, $c^{\dagger}_{k_{F\uparrow},\uparrow}|0; N_c = N_\uparrow + N_\downarrow, N_s = N_\downarrow\rangle$, defined at the Fermi momentum but arbitrary energy, contains a single quasiparticle, as we show below. We will study this excitation as we take the energy to be zero, where the problem is equivalent to Landau's. Since we are discussing the problem of addition or removal of one particle the boundary conditions play a crucial role. When we add or remove one electron from the many-body system we have to consider the transitions between states with integer and half-integer quantum numbers $I^{\alpha}_j$. The transition between two ground states differing in the number of electrons by one is then associated with two different processes: a backflow in the Hilbert space of the pseudoparticles with a shift of all the pseudomomenta by $\pm\frac{\pi}{N_a}$ and the creation of one or a pair of pseudoparticles at the pseudo-Fermi points. We find that the backflow is described in terms of a unitary operator,
$$U_\alpha(\delta q) = \exp\left\{-\delta q \sum_{q'}\left[\frac{\partial}{\partial q'}\, b^{\dagger}_{q',\alpha}\right] b_{q',\alpha}\right\}. \tag{3}$$
From the expressions of the ground-state pseudoparticle generators the following relation between the quasiparticle and the pseudoparticles follows,
$$\tilde c^{\dagger}_{\pm k_{F\uparrow},\uparrow} = b^{\dagger}_{q^{(\pm)}_{Fc},c}\; U^{\pm 1}_{s}, \qquad \tilde c^{\dagger}_{\pm k_{F\downarrow},\downarrow} = b^{\dagger}_{q^{(\pm)}_{Fc},c}\; b^{\dagger}_{q^{(\pm)}_{Fs},s}\; U^{\pm 1}_{c}, \tag{4}$$
where $U^{\pm 1}_{\alpha} = U_\alpha(\mp\frac{\pi}{N_a})$. According to Eq. (4) the σ quasiparticles are many-pseudoparticle objects which recombine the pseudoparticle colors c and s (charge and spin in the limit $m = n_\uparrow - n_\downarrow \to 0$ [7]) giving rise to spin projections ↑ and ↓ and have Fermi surfaces at $\pm k_{F\sigma}$. However, note that two-quasiparticle objects can be of two-pseudoparticle character because the product of the two corresponding many-pseudoparticle operators is such that $U^{+1}_{\alpha} U^{-1}_{\alpha} = \mathbb{1}$, as for the triplet pair $\tilde c^{\dagger}_{+k_{F\uparrow},\uparrow}\,\tilde c^{\dagger}_{-k_{F\uparrow},\uparrow} = b^{\dagger}_{q^{(+)}_{Fc},c}\, b^{\dagger}_{q^{(-)}_{Fc},c}$.
In order to relate the quasiparticle operators $\tilde c^{\dagger}_{\pm k_{F\sigma},\sigma}$ to the electronic operators $c^{\dagger}_{\pm k_{F\sigma},\sigma}$ we have combined a pseudoparticle generator analysis and a suitable Lehmann representation with conformal-field theory [5,7]. We measure the energy ω from the initial chemical potential $\mu(N_\sigma, N_{-\sigma})$ (i.e., we consider $\mu(N_\sigma, N_{-\sigma}) = 0$). As in a Fermi liquid, we find that the one-electron renormalization factor $Z_\sigma(\omega)$ has a crucial role in the above relation. This factor is given by the small-ω leading-order term of $|1 - \partial\,\mathrm{Re}\,\Sigma_\sigma(\pm k_{F\sigma},\omega)/\partial\omega|^{-1}$, where $\Sigma_\sigma(k,\omega)$ is the σ self energy. Remarkably, we found the following low-ω expression,

$$\mathrm{Re}\,\Sigma_\sigma(\pm k_{F\sigma}, \omega) = \omega\Big[1 - \omega^{-1-\varsigma_\sigma}\Big(a^{\sigma}_{0} + \sum_{j=1,2,3,\dots} a^{\sigma}_{j}\,\omega^{4j}\Big)\Big],$$

where $a^{\sigma}_{j}$ are constants and

$$\varsigma_\uparrow = -2 + \sum_\alpha \tfrac{1}{2}\big[(\xi^{1}_{\alpha c} - \xi^{1}_{\alpha s})^{2} + (\xi^{0}_{\alpha c})^{2}\big], \qquad \varsigma_\downarrow = -2 + \sum_\alpha \tfrac{1}{2}\big[(\xi^{1}_{\alpha s})^{2} + (\xi^{0}_{\alpha c} + \xi^{0}_{\alpha s})^{2}\big]$$

are U, n, and m dependent exponents which for U > 0 are negative and such that $-1 < \varsigma_\sigma < -1/2$. Therefore, both the real part of the Green function $\mathrm{Re}\,G_\sigma(\pm k_{F\sigma},\omega)$ and the lifetime $\tau_\sigma = 1/\mathrm{Im}\,\Sigma_\sigma(\pm k_{F\sigma},\omega)$ diverge when ω → 0 as $\omega^{\varsigma_\sigma}$, yet $Z_\sigma(\omega) = \frac{a^{\sigma}_{0}}{|\varsigma_\sigma|}\,\omega^{1+\varsigma_\sigma}$ vanishes in that limit and there is no overlap between the quasiparticle and the electron, in contrast to a Fermi liquid. In the three limits U → 0, m → 0, and m → n the exponents $\varsigma_\uparrow$ and $\varsigma_\downarrow$ are equal and given by $-1$, $-2 + \tfrac{1}{2}\big[\tfrac{\xi_0}{2} + \tfrac{1}{\xi_0}\big]^{2}$, and $-\tfrac{1}{2} - \eta_0\big[1 - \tfrac{\eta_0}{2}\big]$, respectively. Our method allows the identification of which particular Hamiltonian eigenstates contribute to the above $a^{\sigma}_{j}\,\omega^{4j}$ corrections and leads to the following relation between the electron and quasiparticle operators

$$c^{\dagger}_{\pm k_{F\sigma},\sigma} = \sqrt{Z_\sigma(\omega)\,|\varsigma_\sigma|}\;\Big[1 + \omega^{2}\!\!\sum_{\alpha,\alpha',\iota=\pm1}\!\! C^{\iota}_{\alpha,\alpha'}\,\tilde\rho_{\alpha,\iota}\big(\iota\tfrac{2\pi}{N_a}\big)\,\tilde\rho_{\alpha',-\iota}\big(-\iota\tfrac{2\pi}{N_a}\big) + O(\omega^{4})\Big]\,\tilde c^{\dagger}_{\pm k_{F\sigma},\sigma}, \tag{5}$$
where $C^{\iota}_{\alpha,\alpha'}$ are constants and $\tilde\rho_{\alpha,\iota}(k) = \sum_q b^{\dagger}_{q+k,\alpha,\iota}\, b_{q,\alpha,\iota}$ is a pseudoparticle-pseudohole operator. When ω → 0 this relation refers to a singular transformation because $Z_\sigma(\omega)$ vanishes in that limit. Combining Eqs. (4) and (5) gives the electron operator in the pseudoparticle basis. The singular nature of the transformation (5), which maps the zero-renormalization-factor electron onto the unit-renormalization-factor quasiparticle, explains the perturbative character of the pseudoparticle-operator basis [7]. It is this perturbative character that determines the form of expression (5) which, except for the non-classical exponent in the $Z_\sigma(\omega) = \frac{a^{\sigma}_{0}}{|\varsigma_\sigma|}\,\omega^{1+\varsigma_\sigma}$ factor (absorbed by the electron-quasiparticle transformation), includes only classical exponents, as in a Fermi liquid. Combining the relation $\tilde c^{\dagger}_{k_{F\sigma},\sigma}|0; N_\sigma, N_{-\sigma}\rangle = |0; N_\sigma + 1, N_{-\sigma}\rangle$ with Eq. (5) we find that $\sqrt{Z_\sigma(\omega)\,|\varsigma_\sigma|} = |\langle 0; N_\sigma + 1, N_{-\sigma}|\, c^{\dagger}_{k_{F\sigma},\sigma}\, |0; N_\sigma, N_{-\sigma}\rangle| \propto \omega^{\frac{1+\varsigma_\sigma}{2}}$. In the present thermodynamic limit this result is only valid in the limit ω → 0 and confirms that this amplitude vanishes, the way it goes to zero with the excitation energy being relevant for comparison with other vanishing matrix elements: the higher-order contributions to expression (5) are associated with low-energy excited Hamiltonian eigenstates orthogonal to the ground states and whose matrix-element amplitudes vanish as $\omega^{\frac{1+\varsigma_\sigma+4j}{2}}$ (with 2j the number of pseudoparticle-pseudohole processes relative to $|0; N_\sigma + 1, N_{-\sigma}\rangle$ and j = 1, 2, 3, ...). Therefore, the leading-order term of (5) and the exponent $\varsigma_\sigma$ fully control the low-energy overlap between the $\pm k_{F\sigma}$ quasiparticles and electrons and determine the expressions of all $k = \pm k_{F\sigma}$ one-electron low-energy quantities. For instance, we find for the σ spectral function $A_\sigma(\pm k_{F\sigma}, \omega) \propto \omega^{\varsigma_\sigma}$. (For a numerical study see Ref. [8].) Furthermore, $A_\sigma(k,\omega)$ (and $\mathrm{Re}\,G_\sigma(k,\omega)$) vanishes when ω → 0 for all momentum values except at the non-interacting Fermi points $k = \pm k_{F\sigma}$. It follows that for ω → 0 the density of states, $D_\sigma(\omega) = \sum_k A_\sigma(k,\omega)$, results, exclusively, from contributions of the peaks centered at $k = \pm k_{F\sigma}$ and is such that $D_\sigma(\omega) \propto \omega\, A_\sigma(\pm k_{F\sigma}, \omega)$.
It is known that $D_\sigma(\omega) \propto \omega^{\nu_\sigma}$, where $\nu_\sigma$ is the exponent of the equal-time momentum distribution expression, $N_\sigma(k) \propto |k \mp k_{F\sigma}|^{\nu_\sigma}$ [4]. We find the relation $\varsigma_\sigma = \nu_\sigma - 1$, in agreement with the above analysis. However, this simple relation does not imply that the equal-time expressions provide full information on the small-energy instabilities. For instance, in addition to the momentum values $k = \pm k_{F\sigma}$ and in contrast to the spectral function, $N_\sigma(k)$ shows singularities at $k = \pm[k_{F\sigma} + 2k_{F-\sigma}]$ [5]. Therefore, only the direct low-energy study reveals all the true instabilities of the quantum liquid. (In some Luttinger liquids $N(k) \propto |k \mp k_F|^{\nu}$ with ν > 1 [11]. Then, $A(\pm k_F, \omega) \propto \omega^{\nu-1}$ does not diverge.) The electron-quasiparticle low-energy overlap also determines the behavior of the two-electron vertex function at the Fermi momenta and small energy. As usual [11], we choose the energy variables in such a way that the combinations $\omega_1 + \omega_2$, $\omega_1 - \omega_3$, and $\omega_1 - \omega_4$ are all equal to ω. We find that the vertex function diverges as $\Gamma^{\iota}_{\sigma\sigma'}(k_{F\sigma}, \iota k_{F\sigma'}; \omega) \propto \omega^{-2-\varsigma_\sigma-\varsigma_{\sigma'}}$, where $\iota = \pm 1$. Further, we could evaluate the following closed-form expression
$$\Gamma^{\iota}_{\sigma\sigma'}(k_{F\sigma}, \iota k_{F\sigma'}; \omega) = \frac{1}{|\varsigma_\sigma \varsigma_{\sigma'}|\, Z_\sigma(\omega)\, Z_{\sigma'}(\omega)}\left\{\sum_{\iota'=\pm1} (\iota')^{\frac{1-\iota}{2}}\big[v^{\iota'}_{\rho} + (\delta_{\sigma,\sigma'} - \delta_{\sigma,-\sigma'})\, v^{\iota'}_{\sigma_z}\big] - \delta_{\sigma,\sigma'}\,\tilde v_\sigma\right\},$$

where $v^{\iota}_{\rho}$ and $v^{\iota}_{\sigma_z}$ are given in Table I and involve only the pseudoparticle velocities and Landau parameters referred to above. (We have not derived the expression for the velocity $\tilde v_\sigma$. Note, however, that the relevant quantity for the low-energy physics is $\tilde v_\sigma$-independent and reads $\delta_{\sigma,\sigma'}\,\tilde v_\sigma + |\varsigma_\sigma \varsigma_{\sigma'}|\, Z_\sigma(\omega)\, Z_{\sigma'}(\omega)\,\Gamma^{\iota}_{\sigma\sigma'}(k_{F\sigma}, \iota k_{F\sigma'}; \omega)$.)
Let us now consider the excitation energies $\Delta E_{sp} = \omega^{0}_{\sigma,-\sigma} - \omega^{0}_{\sigma} - \omega^{0}_{-\sigma}$ and $\Delta E_{tp\sigma} = \omega^{0}_{\sigma,\sigma} - 2\omega^{0}_{\sigma}$, where $\omega^{0}_{\sigma} = E_0(N_\sigma + 1, N_{-\sigma}) - E_0(N_\sigma, N_{-\sigma})$, $\omega^{0}_{\sigma,-\sigma} = E_0(N_\sigma + 1, N_{-\sigma} + 1) - E_0(N_\sigma, N_{-\sigma})$, and $\omega^{0}_{\sigma,\sigma} = E_0(N_\sigma + 2, N_{-\sigma}) - E_0(N_\sigma, N_{-\sigma})$ are ground-state excitation energies. We have evaluated the exact expressions of these vanishing energies which to first order in $\frac{1}{N_a}$ read

$$\Delta E_{sp} = \frac{\pi}{N_a}\big[v_c + F^{0}_{cc} + F^{0}_{cs} - (v_s + F^{1}_{ss}) + F^{1}_{cs}\big],$$
$$\Delta E_{tp\uparrow} = \frac{\pi}{N_a}\big[v_c + F^{0}_{cc} - (v_c + F^{1}_{cc}) - (v_s + F^{1}_{ss}) + 2F^{1}_{cs}\big],$$
$$\Delta E_{tp\downarrow} = \frac{\pi}{N_a}\big[v_c + F^{0}_{cc} + v_s + F^{0}_{ss} + 2F^{0}_{cs} - (v_s + F^{1}_{ss})\big].$$

Therefore, for $0 < \delta < \delta_c$ there is attraction between quasiparticles in the singlet Cooper-pair channel. In a Fermi liquid this would imply a singlet-superconductivity instability for hole concentrations $0 < \delta < \delta_c$. However, since in the limit of vanishing energy the electrons and quasiparticles have no overlap, the quasiparticle attraction does not necessarily imply the occurrence of such a superconductivity instability. We evaluated the corresponding response functions with the results $\mathrm{Re}\,\chi_{sc}(\pm 2k_F, \omega)$ and $\mathrm{Im}\,\chi_{sc}(\pm 2k_F, \omega) \propto \omega^{\varsigma_{sc}}$, and $\mathrm{Re}\,\chi_{tc\sigma}(0, \omega)$ and $\mathrm{Im}\,\chi_{tc\sigma}(0, \omega) \propto \omega^{\varsigma_{tc\sigma}}$, where for U > 0 the exponents are positive and such that $0 < \varsigma_{sc} < 1$, $0 < \varsigma_{tc\uparrow} < 1$, and $0 < \varsigma_{tc\downarrow} < 2$. Therefore, in the 1D Hubbard model the low-energy electron-quasiparticle overlap is not strong enough for the quasiparticle-quasiparticle attraction in the singlet channel to give rise to a superconductivity instability.
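For bookkeeping, the first-order pairing-energy expressions above can be transcribed into a small helper. This is purely an illustrative sketch: the function name, the dictionary layout for the Landau parameters $F^j_{\alpha\alpha'}$, and the sample parameter values below are our own assumptions, not values from the paper.

```python
import math

def pairing_energies(v_c, v_s, F, N_a):
    """Singlet and triplet quasiparticle pairing energies to first order in 1/N_a.

    F maps (channel, j) -> Landau parameter F^j, e.g. F[("cc", 0)] is F^0_{cc}.
    """
    pref = math.pi / N_a
    dE_sp = pref * (v_c + F["cc", 0] + F["cs", 0] - (v_s + F["ss", 1]) + F["cs", 1])
    dE_tp_up = pref * (v_c + F["cc", 0] - (v_c + F["cc", 1]) - (v_s + F["ss", 1]) + 2 * F["cs", 1])
    dE_tp_down = pref * (v_c + F["cc", 0] + v_s + F["ss", 0] + 2 * F["cs", 0] - (v_s + F["ss", 1]))
    return dE_sp, dE_tp_up, dE_tp_down
```

With all Landau parameters switched off the three energies reduce to $\pi(v_c - v_s)/N_a$, $-\pi v_s/N_a$, and $\pi v_c/N_a$, which makes explicit how the interaction terms can flip the sign of $\Delta E_{sp}$.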
One of the goals of this Letter was, in spite of the differences between the Luttinger-liquid Hubbard chain and 3D Fermi liquids, detecting common features in these two limiting problems which we expect to be present in electronic quantum liquids in spatial dimensions 1 < D < 3. As in 3D Fermi liquids, we find that there are Fermi-surface quasiparticles in the Hubbard chain which connect ground states differing in the number of electrons by one and whose low-energy overlap with electrons determines the one-electron ω → 0 divergences.
For instance, in spite of the vanishing electron density of states and renormalization factor, we find that the spectral function vanishes at all momentum values except at the Fermi surface, where it diverges (as a Luttinger-liquid power law). While low-energy excitations are described by c and s pseudoparticle-pseudohole excitations which determine the c and s separation [7], the quasiparticles describe only ground-state - ground-state transitions and recombine c and s (charge and spin in the m → 0 limit) giving rise to the spin projections σ (see Eq. (4)). Importantly, we have written the electron operator at the Fermi surface in the pseudoparticle basis. The vanishing of the electron renormalization factor implies a singular character for the transformation (5) which leads to the above quasiparticles with renormalization factor 1. Our exact results have confirmed that the electron renormalization factor can be related to a single matrix-element amplitude [9,10]. Although in the Luttinger-liquid case the form of the vertex function at the Fermi surface depends on the way that the energies go to zero [11], our study reveals that the Landau parameters which control the quasiparticle interaction energies $\Delta E_{sp}$ and $\Delta E_{tp\sigma}$ are determined by the finite renormalized vertex $\lim_{\omega\to0}[Z_\sigma(\omega)\, Z_{\sigma'}(\omega)\,\Gamma^{\iota}_{\sigma\sigma'}(k_{F\sigma}, \iota k_{F\sigma'}; \omega)]$, as in a Fermi liquid. This justifies the finite forward-scattering functions $f_{\alpha\alpha'}(q, q')$ and the perturbative character of the Hamiltonian (1) in the pseudoparticle basis [7]. Finally, from the existence of Fermi-surface quasiparticles both in the 1D and 3D limits, our results suggest their existence for quantum liquids in dimensions 1 < D < 3 and we predict the main role of increasing dimensionality being the strengthening of the electron-quasiparticle overlap [12]. For instance, if this leads to a
finite (yet small) vanishing-energy overlap, the pre-existing 1D singlet quasiparticle pairing induced by electronic correlations could lead to a real superconductivity instability.
We thank D. K. Campbell, F. Guinea, and K. Maki for illuminating discussions. This research was supported by the NSF under the Grant No. PHY89-04035.
For $\iota = -1$: $v^{\iota}_{\rho} = v_c + F^{1}_{cc}$ and $v^{\iota}_{\sigma_z} = v_c + F^{1}_{cc} + 4(v_s + F^{1}_{ss} - F^{1}_{cs})$.
For $\iota = 1$: $v^{\iota}_{\rho} = (v_s + F^{0}_{ss})/L_0$ and $v^{\iota}_{\sigma_z} = (v_s + F^{0}_{ss} + 4[v_c + F^{0}_{cc} + F^{0}_{cs}])/L_0$,
where $L_0 = (v_c + F^{0}_{cc})(v_s + F^{0}_{ss}) - (F^{0}_{cs})^{2}$.
Here the m → 0 parameter $\xi_0$ changes from $\xi_0 = \sqrt{2}$ at U = 0 to $\xi_0 = 1$ as U → ∞, and $\eta_0 = \frac{2}{\pi}\tan^{-1}\!\left[\frac{4t\sin(\pi n)}{U}\right]$.
Again, these expressions involve exclusively the velocities and Landau parameters. Defining the hole concentration δ = 1 − n: while the above ground-state triplet pairing energies are always positive, the singlet pairing energy $\Delta E_{sp}$ is also positive except in a small concentration domain, $0 < \delta < \delta_c$, where it is negative. This domain is larger for zero magnetization m = 0. In this case the U-dependent critical concentration $\delta_c$ vanishes both in the limits U → 0 and U → ∞ and is maximal for an intermediate but relatively large value of U. As in a Fermi liquid, we find $\tilde c^{\dagger}_{k_{F\sigma},\sigma}\,\tilde c^{\dagger}_{k_{F-\sigma},-\sigma}|0; N_\sigma, N_{-\sigma}\rangle = |0; N_\sigma + 1, N_{-\sigma} + 1\rangle$ and $\tilde c^{\dagger}_{k_{F\sigma},\sigma}\,\tilde c^{\dagger}_{-k_{F\sigma},\sigma}|0; N_\sigma, N_{-\sigma}\rangle = |0; N_\sigma + 2, N_{-\sigma}\rangle$. This implies that $\Delta E_{sp}$ and $\Delta E_{tp\sigma}$ are quasiparticle pairing energies.
Table I - Expressions of the parameters $v^{\iota}_{\rho}$ and $v^{\iota}_{\sigma_z}$ in terms of the velocities $v_\alpha$ and Landau parameters $F^{j}_{\alpha\alpha'}$, where $L_0 = (v_c + F^{0}_{cc})(v_s + F^{0}_{ss}) - (F^{0}_{cs})^{2}$.
F. D. M. Haldane, J. Phys. C 14, 2585 (1981).
J. Carmelo and A. A. Ovchinnikov, Cargèse lectures, unpublished (1990); J. Phys.: Condens. Matter 3, 757 (1991).
F. D. M. Haldane, Phys. Rev. Lett. 66, 1529 (1991); E. R. Mucciolo, B. Shastry, B. D. Simons, and B. L. Altshuler, Phys. Rev. B 49, 15 197 (1994).
Elliott H. Lieb and F. Y. Wu, Phys. Rev. Lett. 20, 1445 (1968).
Holger Frahm and V. E. Korepin, Phys. Rev. B 42, 10 553 (1990); ibid. 43, 5653 (1991); Masao Ogata, Tadao Sugiyama, and Hiroyuki Shiba, ibid. 43, 8401 (1991).
J. M. P. Carmelo, P. Horsch, and A. A. Ovchinnikov, Phys. Rev. B 45, 7899 (1992).
J. M. P. Carmelo, A. H. Castro Neto, and D. K. Campbell, Phys. Rev. Lett. 73, 926 (1994); Phys. Rev. B 50, 3667 (1994); ibid. 3683 (1994); J. M. P. Carmelo and N. M. R. Peres, ibid. 51, 7481 (1995).
R. Preuss, A. Muramatsu, W. von der Linden, P. Dierterich, F. F. Assaad, and W. Hanke, Phys. Rev. Lett. 73, 732 (1994).
P. W. Anderson, Phys. Rev. Lett. 64, 1839 (1990).
Walter Metzner and Claudio Castellani, preprint (1994).
J. Sólyom, Adv. Phys. 28, 201 (1979); J. Voit, Phys. Rev. B 47, 6740 (1993); Walter Metzner and Carlo Di Castro, ibid. 16 107 (1993).
J. M. P. Carmelo, F. Guinea, P. Horsch, and K. Maki, preprint (1995).
|
[] |
[] |
[
"Martino Marelli \nINAF/IASF Milano\nVia E. Bassini 15 -I20133MilanoItaly\n\nUniversitá degli Studi dell'Insubria -Via Ravasi\n2 -21100VareseItaly\n",
"Andrea De Luca \nINAF/IASF Milano\nVia E. Bassini 15 -I20133MilanoItaly\n\nIUSS -V.le Lungo Ticino Sforza\n56 -27100PaviaItaly\n",
"Patrizia A Caraveo \nINAF/IASF Milano\nVia E. Bassini 15 -I20133MilanoItaly\n"
] |
[
"INAF/IASF Milano\nVia E. Bassini 15 -I20133MilanoItaly",
"Universitá degli Studi dell'Insubria -Via Ravasi\n2 -21100VareseItaly",
"INAF/IASF Milano\nVia E. Bassini 15 -I20133MilanoItaly",
"IUSS -V.le Lungo Ticino Sforza\n56 -27100PaviaItaly",
"INAF/IASF Milano\nVia E. Bassini 15 -I20133MilanoItaly"
] |
[] |
Using archival as well as freshly acquired data, we assess the X-ray behaviour of the Fermi/LAT γ-ray pulsars listed in the First Fermi source catalog (Abdo et al. 2010c). After revisiting the relationships between the pulsars' rotational energy losses and their X and γ-ray luminosities, we focus on the distance-indipendent γ to X-ray flux ratios. When plotting our F γ /F X values as a function of the pulsars' rotational energy losses, one immediately sees that pulsars with similar energetics have F γ /F X spanning 3 decades. Such spread, most probably stemming from vastly different geometrical configurations of the X and γ-ray emitting regions, defies any straightforward interpretation of the plot. Indeed, while energetic pulsars do have low F γ /F X values, little can be said for the bulk of the Fermi neutron stars. Dividing our pulsar sample into radio-loud and radio-quiet subsamples, we find that, on average, radio-quiet pulsars do have higher values of F γ /F X , implying an intrinsec faintness of their X-ray emission and/or a different geometrical configuration. Moreover, despite the large spread mentioned above, statistical tests show a lower scatter in the radio-quiet dataset with respect to the radio-loud one, pointing to a somewhat more constrained geometry for the radio-quiet objects with respect to the radio-loud ones.
|
10.1088/0004-637x/733/2/82
|
[
"https://arxiv.org/pdf/1103.0572v1.pdf"
] | 118,519,586 |
1103.0572
|
b340feaa38f62f38fad6983ddb0ba15fcc359efa
|
2 Mar 2011
Martino Marelli
INAF/IASF Milano, Via E. Bassini 15, I-20133 Milano, Italy
Università degli Studi dell'Insubria, Via Ravasi 2, 21100 Varese, Italy
Andrea De Luca
INAF/IASF Milano, Via E. Bassini 15, I-20133 Milano, Italy
IUSS, V.le Lungo Ticino Sforza 56, 27100 Pavia, Italy
Patrizia A Caraveo
INAF/IASF Milano, Via E. Bassini 15, I-20133 Milano, Italy
A multiwavelength study on the high-energy behaviour of the Fermi/LAT pulsars
Received ; accepted
Subject headings: gamma rays: general; X-rays: general; pulsars: general; stars:
Using archival as well as freshly acquired data, we assess the X-ray behaviour of the Fermi/LAT γ-ray pulsars listed in the First Fermi source catalog (Abdo et al. 2010c). After revisiting the relationships between the pulsars' rotational energy losses and their X and γ-ray luminosities, we focus on the distance-indipendent γ to X-ray flux ratios. When plotting our F γ /F X values as a function of the pulsars' rotational energy losses, one immediately sees that pulsars with similar energetics have F γ /F X spanning 3 decades. Such spread, most probably stemming from vastly different geometrical configurations of the X and γ-ray emitting regions, defies any straightforward interpretation of the plot. Indeed, while energetic pulsars do have low F γ /F X values, little can be said for the bulk of the Fermi neutron stars. Dividing our pulsar sample into radio-loud and radio-quiet subsamples, we find that, on average, radio-quiet pulsars do have higher values of F γ /F X , implying an intrinsec faintness of their X-ray emission and/or a different geometrical configuration. Moreover, despite the large spread mentioned above, statistical tests show a lower scatter in the radio-quiet dataset with respect to the radio-loud one, pointing to a somewhat more constrained geometry for the radio-quiet objects with respect to the radio-loud ones.
Introduction
The vast majority of the 1800 rotation-powered pulsars known to date (Manchester et al. 2005) were discovered by radio telescopes. While only a few pulsars have also been seen in the optical band (see e.g. Mignani 2008, 2010), the contribution of the Chandra and XMM-Newton telescopes increased the number of X-ray counterparts of radio pulsars, bringing the grand total to ∼100 (see e.g. Becker 2009). Such high-energy emission can yield crucial information on the pulsar physics, disentangling thermal components from non-thermal ones, and tracing the presence of pulsar wind nebulae (PWNe).
Chandra's exceptional spatial resolution made it possible to clearly discriminate the PWN and the PSR contributions, while XMM-Newton's high spectral resolution and throughput unveiled the multiple spectral components which characterize pulsars (see e.g. Possenti et al. (2002)). Although the X-ray non-thermal powerlaw index seems somehow related to the gamma-ray spectrum (see e.g. Kaspi et al. 2004), extrapolating the X-ray data underpredicts the γ-ray flux by at least one order of magnitude (see e.g. Abdo et al. 2010d).
Until the launch of Fermi, only seven pulsars were seen in high-energy gamma rays (Thompson 2008), and only one of them, Geminga, was not detected by radio telescopes.
The Fermi Large Area Telescope (LAT) changed dramatically such a scenario, establishing radio-quiet pulsars as a major family of γ-ray emitting neutron stars. After one year of all-sky monitoring Fermi/LAT has detected 54 gamma-ray pulsars, 22 of which are radio-quiet (Abdo et al. 2010a; Camilo et al. 2009). Throughout this paper we shall classify as radio-quiet all the pulsars detected by Fermi through blind searches (Abdo et al. 2010a) but not seen in radio in spite of dedicated deep searches. Containing a sizable fraction of radio-quiet pulsars, the Fermi sample provides, for the first time, the possibility to compare the phenomenology of radio-loud and radio-quiet neutron stars, assessing their similarities and their differences (if any).
While our work rests on the Fermi data analysis and results (Abdo et al. 2010a), for the X-ray side we had first to build a homogeneous data set relying both on archival sources and on fresh observations. In the following we will address the relationship between the classical pulsar parameters, such as age and overall energetics Ė, and their X and γ-ray yields. While the evolution of the X and γ-ray luminosities as a function of Ė and the characteristic age τ_c has already been discussed, we will concentrate on the ratio between the X and γ-ray luminosities, thus overcoming the distance conundrum which has hampered the studies discussed so far in the literature. We note that the F_γ/F_X parameter probes both the pulsar efficiencies at different wavelengths and the distribution of the emitting regions in the pulsar magnetosphere.
Thus, such a distance-indipendent approach does magnify the role of both geometry and geography in determining the high-energy emission from pulsars.
Data Analysis
γ-ray Analysis
We consider all the pulsars listed in the First Year Catalog of Fermi γ-ray sources (Abdo et al. 2010c), which contains the γ-ray pulsars listed in the First Fermi pulsar catalog (Abdo et al. 2010a) as well as the new blind-search pulsars found by Saz . Our sample comprises 54 pulsars:
- 29 detected using radio ephemerides;
- 25 found through blind searches; of these, 3 were later found to have also a radio emission and, as such, they were added to the radio-emitting ones.
Thus, our sample of γ-ray emitting neutron stars consists of 32 radio pulsars and 22 radio-quiet pulsars. Here, we summarize the main characteristics of the analysis performed in the two articles.
The pulsar spectra were fitted with an exponential cutoff powerlaw model of the form:
dN/dE = K E_GeV^(−γ) exp(−E/E_cutoff)
The normalization factor is defined at 1 GeV because that is the energy at which the relative uncertainty on the differential flux is minimal.
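As a concrete illustration, the phase-averaged model can be written as a one-line function (function and parameter names are ours, not the Fermi Science Tools'; the numbers below are placeholders):

```python
import math

def dnde(E_GeV, K, gamma, E_cutoff_GeV):
    """Exponentially cutoff power law, with the normalization K defined
    at 1 GeV as in the text.

    E_GeV        : photon energy in GeV
    K            : differential flux at 1 GeV
    gamma        : photon index
    E_cutoff_GeV : cutoff energy in GeV
    """
    return K * E_GeV ** (-gamma) * math.exp(-E_GeV / E_cutoff_GeV)

# At E = 1 GeV the model reduces to K * exp(-1/E_cutoff):
print(dnde(1.0, K=1e-10, gamma=1.5, E_cutoff_GeV=3.0))
```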
The spectral analysis was performed taking into account the contribution of all the neighboring sources (up to 17°) and the diffuse emission. Sources at more than 3° from any pulsar were assigned fixed spectra, taken from the all-sky analysis. γ-rays with E > 100 MeV have been used, and the contamination produced by cosmic-ray interactions in the Earth's atmosphere was avoided by excluding events with zenith angle > 105°.
At first, all events have been used in order to obtain a phase-averaged spectrum for each pulsar. Next, the data have been split into on-pulse and off-pulse samples. The off-pulse sample has been described with a simple powerlaw while, for the on-pulse emission, an exponentially cutoff powerlaw has been used, with the off-pulse emission (scaled to the on-pulse phase interval) added to the model. Such an approach is adopted in order to avoid a possible PWN contamination of the pulsar spectrum.
For completeness, we included in our sample also the 4 radio pulsars listed in the 4th IBIS/ISGRI catalog (Bird et al. 2009) but, so far, not seen by Fermi. Searching in the 1-year Fermi catalog (Abdo et al. 2010c), we found a potential counterpart for PSR J0540-6919, but the lack of a pulsation prevents us from associating the IBIS/ISGRI pulsar with the Fermi source. We therefore used the 1FGL flux as an upper limit. The three remaining IBIS pulsars happen to be located near the galactic centre, where the intense radiation from the disk of our Galaxy hampers the detection of γ-ray sources. We used the sensitivity map taken from Abdo et al. (2010c) to evaluate the Fermi flux upper limits.
X-ray Data
The X-ray coverage of the Fermi LAT pulsars is uneven, since the majority of the newly discovered radio-quiet PSRs have never been the target of a deep X-ray observation, while for other well-known γ-ray pulsars - such as Crab, Vela and Geminga - one can rely on a wealth of observations. To account for such an uneven coverage, we classify the X-ray spectra on the basis of the public X-ray data available, thus assigning:
-label "0" to pulsars with no confirmed X-ray counterparts (or without a non-thermal spectral component);
-label "1" to pulsars with a confirmed counterpart but too few photons to assess its spectral shape;
-label "2" to pulsars with a confirmed counterpart for which the data quality allows for the analysis of both the pulsar and the nebula (if present).
An "ad hoc" analysis was performed for seven pulsars for which the standard analysis couldn't be applied (e.g. owing to the very high thermal component of Vela or to the closeness of J1418-6058 to an AGN). Table 2 provides details on such pulsars.
We consider an X-ray counterpart to be confirmed if:
- X-ray pulsation has been detected;
- X-ray and radio coordinates coincide;
- the X-ray source position has been validated through the blind-search algorithm developed by the Fermi collaboration (Abdo et al. 2009; Ray et al. 2010).
If none of these conditions applies, the γ-ray pulsar is labelled as type 0.
According to our classification scheme we have 14 type-0, 7 type-1 and 37 type-2 pulsars. In total, 44 γ-ray neutron stars (31 radio-loud and 13 radio-quiet) have an X-ray counterpart.
Since the X-ray observation database is continuously growing, the results available in the literature encompass only fractions of the X-ray data now available. Moreover, they have been obtained with different versions of the standard analysis software or using different techniques to account for the PWN contribution. Thus, with the exception of the well-known and bright X-ray pulsars, such as Crab or Vela, we re-analyzed all the publicly available X-ray data following a homogeneous procedure. If only a small fraction of the data is publicly available, we quote results from a literature search.
In order to assess the X-ray spectra of Fermi pulsars, we used photons with energy 0.3 < E < 10 keV collected by Chandra/ACIS (Garmire et al. 2003), XMM-Newton (Strüder et al. 2001; Turner et al. 2001) and SWIFT/XRT (Burrows et al. 2005). We selected all the public observations (as of April 2010) that overlap the error box of Fermi pulsars or the radio coordinates.
We neglected all Chandra/HRC observations owing to the lack of energy resolution of the instrument. To analyze Chandra data, we used the Chandra Interactive Analysis of Observations software (CIAO version 4.1.2). The Chandra point spread function depends on the off-axis angle: for all the point sources we used an extraction area around the pulsar that contains 90% of the events. For instance, for on-axis sources we selected all the photons inside a 2" radius circle, while, to assess the nebular spectra, we extracted photons from the inner part of the PWNe (excluding the 2" radius circle of the point source); such extended regions vary from pulsar to pulsar as a function of the nebula dimension and flux.
We analyzed all the XMM-Newton data (both from PN and MOS1/2 detectors) with the XMM-Newton Science Analysis Software (SASv8.0). The raw observation data files (ODFs)
were processed using standard pipeline tasks (epproc for PN, emproc for MOS data); we used only photons with event pattern 0-4 for the PN detector and 0-12 for the MOS1/2 detectors. When necessary, an accurate screening for soft proton flare events was done, following the prescription by De Luca & Molendi (2004).
If, in addition to XMM data, deep Chandra data were also available, we extracted an XMM spectrum of the entire PSR+PWN and exploited Chandra's higher angular resolution in order to disentangle the two contributions. When only XMM-Newton data were available, the point source was analyzed by selecting all the photons inside a 20" radius circle, while the whole PWN (with the exception of the 20" radius circle of the point source) was used in order to assess the nebular spectrum.
We analyzed all the SWIFT/XRT data with HEASOFT version 6.5 selecting all the photons inside a 20" radius circle. If multiple data sets collected by the same instruments were found, spectra, response and effective area files for each dataset were added by using the mathpha, addarf and addrmf HEASOFT tools.
All the spectra have been studied with XSPEC v.12 (Arnaud 1996), choosing, whenever possible, the same background regions for all the different observations of each pulsar. All the data were rebinned in order to have at least 25 counts per channel, as required for the validity of the χ² statistics.
The XMM-Chandra cross calibration studies (Stuhlinger et al. 2008) report only minor changes in flux (<10%) between the two instruments. When both XMM and Chandra data were available, a constant has been introduced to account for such uncertainty. Conversely, when the data were collected only by one instrument, a systematic error was introduced.
All the PSRs and PWNe have been fitted with absorbed powerlaws; when statistically needed, a blackbody component has been added to the pulsar spectrum. Since PWNe typically show a powerlaw spectrum with a photon index which steepens moderately as a function of the distance from the PSR (Gaensler & Slane 2006), we used only the inner part of each PWN. The absorption along the line of sight has been obtained through the fitting procedure, except for the cases with very low statistics, for which we used information derived from observations taken in different bands.
X-ray Analysis
For pulsars with a good X-ray coverage we carried out the following steps.
If only XMM-Newton public observations were available, we tried to take into account the PWN contribution. First we searched the literature for any evidence of the presence of a PWN and, if nothing was found, we analyzed the data to search for extended emission.
If no evidence for the presence of a PWN was found, we used PN and MOS1/2 data in a simultaneous spectral fit. On the other hand, if a PWN was present, its contribution was evaluated on a case by case basis. If the statistics were good enough, we studied simultaneously the inner region, containing both PSR and PWN, and the extended source region surrounding it. The inner region data were described by two absorbed (PWN and PSR) powerlaws, the outer ones by a single (PWN) powerlaw. The N_H and the PWN photon index values were the same in the two (inner and outer) datasets.
When public Chandra data were available, we evaluated separately PSR and PWN (if any) in a similar way.
If both Chandra and XMM public data were available, we exploited Chandra space resolution to evaluate the PWN contribution by:
- obtaining two different spectra: one of the inner region (a), encompassing both PSR and PWN, and one of the outer region (b), encompassing only the PWN;
-extracting a total XMM spectrum (c) containing both PSR and PWN: this is the only way to take into account the XMM's larger PSF;
- fitting simultaneously a, b and c with two absorbed powerlaws and, if statistically significant, an absorbed blackbody, using the same N_H; a multiplicative constant was also introduced in order to account for a possible discrepancy between the Chandra and XMM calibrations;
-forcing to zero the normalization(s) of the PSR model(s) in the Chandra outer region and freeing the other normalizations in the Chandra datasets; fixing the XMM PSR normalization(s) at the inner Chandra dataset one and the XMM PWN normalization at the inner+outer normalizations of the Chandra PWN.
Only for a few well-known pulsars, or pulsars for which the dataset is not yet entirely public, we used results taken from the literature (see Table 2). Where necessary, we used XSPEC in order to obtain the flux in the 0.3-10 keV energy range and to evaluate the unabsorbed flux.
For pulsars with a confirmed counterpart but too few photons to discriminate the spectral shape, we evaluated a hypothetical unabsorbed flux by assuming a single powerlaw spectrum with a photon index of 2 to describe PSR+PWN. We also assumed that the PWN and PSR thermal contributions amount to 30% of the entire source flux (a sort of mean value over all the considered type 2 pulsars). To evaluate the absorbing column, we need a distance value, which can come either from the radio dispersion or - for radio-quiet pulsars - from the pseudo-distance reported in Saz Parkinson et al. (2010):
d = 0.51 Ė_34^(1/4) / F_γ,10^(1/2) kpc
where Ė = Ė_34 × 10^34 erg/s, F_γ = F_γ,10 × 10^−10 erg/cm² s and the beam correction factor f_γ is assumed to be 1 (Watters et al. 2009) for all pulsars.
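The pseudo-distance is straightforward to code; a minimal sketch with the units and scalings defined above (the function name is ours):

```python
def pseudo_distance_kpc(Edot_erg_s, Fgamma_erg_cm2_s):
    """Pseudo-distance d = 0.51 * Edot_34^(1/4) / F_gamma,10^(1/2) kpc,
    assuming a gamma-ray beam correction factor f_gamma = 1."""
    Edot34 = Edot_erg_s / 1e34          # Edot in units of 1e34 erg/s
    Fg10 = Fgamma_erg_cm2_s / 1e-10     # F_gamma in units of 1e-10 erg/cm^2/s
    return 0.51 * Edot34 ** 0.25 / Fg10 ** 0.5

# Edot = 1e34 erg/s and F_gamma = 1e-10 erg/cm^2 s give d = 0.51 kpc:
print(pseudo_distance_kpc(1e34, 1e-10))  # -> 0.51
```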
Then, the HEASARC WebTools (http://heasarc.gsfc.nasa.gov/docs/tools.html) were used to find the galactic column density (N_H) in the direction of the pulsar; with the distance information, we could rescale the column density value for the pulsar. We found the source count rate by using the XIMAGE task (Giommi et al. 1992). Then, we used the WebPimms tool inside the WebTools package to evaluate the source unabsorbed flux. Such a value was then corrected to account for the PWN and PSR thermal contributions. We are aware that each pulsar can have a different photon index, as well as different thermal and PWN contributions, so we used these mean values only as a first approximation. All the low-quality pulsars (type 1) will be treated separately and all the considerations in this paper will be based only on high-quality objects (type 2).
For pulsars without a confirmed counterpart, we evaluated the X-ray unabsorbed flux upper limit assuming a single powerlaw spectrum with a photon index of 2 to describe PSR+PWN and using a signal-to-noise ratio of 3.
The column density has been evaluated as above. Under the previous hypotheses, we used the signal-to-noise definition in order to compute the upper limit to the absorbed flux of the X-ray counterpart. Next, we used XSPEC to find the unabsorbed flux upper limit.
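The conversion from a signal-to-noise threshold to an upper limit on source counts can be sketched as follows. The text does not spell out the exact S/N definition used, so we assume the common n_s/sqrt(n_s + n_bkg) form; names and numbers are illustrative:

```python
import math

def counts_upper_limit(n_bkg, snr=3.0):
    """Source-count upper limit n_s such that n_s / sqrt(n_s + n_bkg) = snr.

    This assumes one common signal-to-noise definition; solving
    n_s**2 = snr**2 * (n_s + n_bkg) for the positive root gives the limit.
    """
    return 0.5 * (snr ** 2 + snr * math.sqrt(snr ** 2 + 4.0 * n_bkg))

# With 100 expected background counts in the extraction region, the
# 3-sigma-equivalent count limit is ~35 source counts:
print(counts_upper_limit(100.0))
```

The count limit, divided by the exposure and fed to a count-rate-to-flux converter such as WebPimms, yields the flux upper limit.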
On the basis of our X-ray analysis, we define a subsample of Fermi γ-ray pulsars for which we have, at once, reliable X-ray data (type 2 pulsars) and satisfactory distance estimates, such as parallax, radio dispersion measure, column density estimate or SNR association. Such a subsample contains 24 radio-emitting neutron stars and 5 radio-quiet ones. The low number of radio-quiet pulsars is to be ascribed to the lack of high-quality X-ray data.
Only one of the IBIS pulsars has a clear distance estimate. Moreover, we have 4 additional radio-quiet pulsars with reliable X-ray data but without a satisfactory distance estimate.
In Tables 2-3 we report the γ-ray and X-ray parameters of the 54 Fermi first-year pulsars. We also include the four hard X-ray pulsars taken from the "4th IBIS/ISGRI soft gamma-ray survey catalog" (Bird et al. 2009). We use Ė = 4π²IṖ/P³, τ_c = P/(2Ṗ) and B_lc = 3.3 × 10^19 (PṖ)^(1/2) × (10 km)³/R_lc³, where R_lc = cP/2π, P is the pulsar spin period, Ṗ its derivative and the standard value for the moment of inertia of the neutron star is I = 10^45 g cm² (see e.g. Steiner et al. 2010, Table 1).
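The derived quantities entering Table 1 can be sketched as below (the Crab-like test values at the end are ours, for illustration):

```python
import math

C_CM_S = 2.998e10   # speed of light [cm/s]
I_NS = 1e45         # neutron-star moment of inertia [g cm^2]
R_NS_CM = 1e6       # neutron-star radius, 10 km in cm
YEAR_S = 3.156e7    # one year in seconds

def derived_params(P, Pdot):
    """Spin-down luminosity [erg/s], characteristic age [yr] and
    light-cylinder magnetic field [G] from the spin period P [s]
    and its derivative Pdot [s/s]."""
    Edot = 4.0 * math.pi ** 2 * I_NS * Pdot / P ** 3
    tau_c = P / (2.0 * Pdot) / YEAR_S
    R_lc = C_CM_S * P / (2.0 * math.pi)                     # light cylinder radius [cm]
    B_lc = 3.3e19 * math.sqrt(P * Pdot) * (R_NS_CM / R_lc) ** 3
    return Edot, tau_c, B_lc

# Crab-like numbers (P ~ 33 ms, Pdot ~ 4.2e-13) give Edot ~ 4.6e38 erg/s:
Edot, tau_c, B_lc = derived_params(0.033, 4.2e-13)
```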
Discussion
Study of the X-ray luminosity
The X-ray luminosity, L X , is correlated with the pulsar spin-down luminosityĖ.
The scaling was first noted by Seward & Wang (1988), who used Einstein data of 22 pulsars (most of them just upper limits) to derive a linear relation between log F_X(0.2-4 keV) and log Ė. Later, Becker & Trümper (1997) investigated a sample of 27 pulsars using ROSAT, yielding the simple scaling L_X(0.1-2.4 keV) ≃ 10^−3 Ė. The uncertainty due to soft X-ray absorption translates into very high flux errors; moreover, it was very hard to discriminate between the thermal and powerlaw spectral components. A re-analysis was performed by Possenti et al. (2002), who studied in the 2-10 keV band a sample of 39 pulsars observed by several X-ray telescopes. However, they could not separate the PWN from the pulsar contribution. Moreover, they conservatively adopted, for most of the pulsars, an uncertainty of 40% on the distance values. A better comparison with our data can be done with the results by Kargaltsev & Pavlov (2008), who recently used high-resolution Chandra data in order to disentangle the PWN and pulsar fluxes. Focussing just on Chandra data, and rejecting XMM observations, they obtained a poor spectral characterization, which translates into large errors on fluxes. They also adopted an uncertainty of 40% on the distance values for most pulsars. Despite the big uncertainties, mainly due to poor distance estimates, all these datasets show that the L_X versus Ė relation is quite scattered. The high values of the χ²_red seem to exclude a simple statistical effect.
We are now facing a different panorama, since our ability to evaluate pulsar distances has improved (Abdo et al. 2010a) and we are now much better at discriminating the pulsar emission from its nebula. The use of XMM data makes it possible to build good-quality spectra, allowing us to disentangle the non-thermal from the thermal contribution, when present. In particular, we can study the newly discovered radio-quiet pulsar population and compare it with the "classical" radio-loud pulsars. We investigate the relations between the X and γ luminosities and the pulsar parameters, making use of the data collected in Tables 1-3.
Using the 29 Fermi type 2 pulsars with a clear distance estimate and with a well-constrained X-ray spectrum, the weighted least square fit yields:
log10 L_X,29 = (1.11 +0.21 −0.30) + (1.04 ± 0.09) log10 Ė_34    (1)
where Ė = Ė_34 × 10^34 erg/s and L_X = L_X,29 × 10^29 erg/s. All the uncertainties are at the 90% confidence level. We can evaluate the goodness of this fit using the reduced chi-square value χ²_red = 3.7; a double linear fit does not significantly change the value of χ²_red. A more precise way to evaluate the dispersion of the dataset around the fitted curve is the parameter:
W² = (1/n) Σ_{i=1}^{n} (y_obs^i − y_fit^i)²
where y_obs^i is the actual i-th value of the dataset (in our case log10 L_X,29) and y_fit^i the expected one. A smaller spread in the dataset translates into a lower value of W². We obtain W² = 0.436 for the L_X-Ė relationship. Such high values of both W² and χ²_red are an indication of an important scattering of the L_X values around the fitted relation.
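The W² statistic above amounts to a mean squared residual; a minimal sketch (the data arrays are illustrative, not the paper's):

```python
import numpy as np

def w2(y_obs, y_fit):
    """Mean squared residual W^2 = (1/n) * sum_i (y_obs_i - y_fit_i)^2,
    used to quantify the scatter around a fitted relation."""
    y_obs, y_fit = np.asarray(y_obs, float), np.asarray(y_fit, float)
    return float(np.mean((y_obs - y_fit) ** 2))

# Scatter of log10(L_X,29) around the fitted line a + b * log10(Edot_34),
# with the best-fit coefficients of Equation 1 and made-up data points:
a, b = 1.11, 1.04
logE = np.array([0.5, 1.0, 2.0, 3.0])   # illustrative log10(Edot_34)
logL = np.array([1.9, 2.3, 3.5, 4.0])   # illustrative log10(L_X,29)
print(w2(logL, a + b * logE))
```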
Our results are in agreement with Possenti et al. (2002) and Kargaltsev & Pavlov (2008).
Study of the γ-ray luminosity
The γ-ray luminosity, L_γ, is correlated with the pulsar spin-down luminosity Ė. Such a trend is expected in many theoretical models (see e.g. Zhang et al. 2004; Muslimov & Harding 2003) and it is briefly discussed in the Fermi LAT catalog of gamma-ray pulsars (Abdo et al. 2010a).
Selecting the same subsample of Fermi pulsars used in the previous section to assess the relation between L_γ and Ė, we found that a linear fit:
log10 L_γ,32 = (0.45 +0.50 −0.17) + (0.88 ± 0.07) log10 Ė_34    (2)
yields a high value of χ²_red = 4.2. Inspection of the distribution of residuals led us to try a double-linear relationship, which yields:
log10 L_γ,32 = (2.45 ± 0.76) + (0.20 +0.27 −0.31) log10 Ė_34 ,  Ė > Ė_crit    (3a)
log10 L_γ,32 = (0.52 ± 0.18) + (1.43 +0.31 −0.23) log10 Ė_34 ,  Ė < Ė_crit    (3b)
with Ė_crit = 3.72 +3.55 −3.44 × 10^35 erg/s and χ²_red = 2.2. An F-test shows that the probability of a chance χ² improvement is 0.00011. Such a result is in agreement with the data reported in Abdo et al. (2010a) for the entire dataset of Fermi γ-ray pulsars. Indeed, the χ²_red obtained for the double linear fit is better than that obtained for the L_X-Ė relationship.
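The F-test for nested models used to assess the chance probability of the χ² improvement can be sketched as follows. The degrees of freedom below are our assumption (29 points, 2 and 4 free parameters), since the text does not list them, so the resulting probability is only indicative:

```python
from scipy.stats import f as f_dist

def ftest_p(chi2_simple, dof_simple, chi2_double, dof_double):
    """Chance probability that the chi^2 improvement of the nested,
    more complex model (the double-linear fit) over the simpler one
    (a single line) is just a statistical fluctuation."""
    dnu = dof_simple - dof_double
    F = ((chi2_simple - chi2_double) / dnu) / (chi2_double / dof_double)
    return f_dist.sf(F, dnu, dof_double)

# chi2 reconstructed from the reduced values of the two fits,
# with assumed degrees of freedom 27 (single) and 25 (double):
p = ftest_p(4.2 * 27, 27, 2.2 * 25, 25)
print(p)
```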
We obtain W² = 0.344 for the double linear L_γ-Ė relationship. Both the χ²_red and W² values are consistent with a slightly higher scatter in the L_X-Ė plot. A difference between the X-ray and γ-ray emission geometries (which translates into different values of f_γ and f_X) could explain such a behaviour.
The existence of an Ė_crit has been posited on theoretical grounds for different pulsar emission models. Revisiting the outer-gap model for pulsars with τ < 10^7 yrs and assuming initial conditions as well as pulsars' birth rates, Zhang et al. (2004) found a sharp boundary, due to the saturation of the gap size, for L_γ = Ė. They obtain the following distribution of pulsars' γ-ray luminosities:
log10 L_γ = log10 Ė + const. ,  Ė < Ė_crit    (4a)
log10 L_γ ∼ 0.30 log10 Ė + const. ,  Ė > Ė_crit    (4b)
By assuming the fractional gap size from Zhang & Cheng (1997), they obtain Ė_crit = 1.5 × 10^34 P^(1/3) erg/s. While Equation 4 is similar to our double linear fit (Equation 3), the Ė_crit they obtain seems to be lower than our best-fit value.
On the other hand, in slot-gap models (Muslimov&Harding 2003), the break occurs at about 10 35 erg/s, when the gap is limited by screening of the acceleration field by pairs.
We can see from Figure 2 that radio-quiet pulsars have higher luminosities than the radio-loud ones for similar values of Ė. As in the L_X-Ė fit, we cannot however discriminate between the two populations, owing to the large errors stemming from the distance estimates.
Study of the γ-to-X ray luminosity ratio
At variance with the X-ray and γ-ray luminosities, the ratio between the two is independent of the pulsars' distances. This makes it possible to significantly reduce the error bars, leading to more precise indications on the pulsars' emission mechanisms. Figure 3 reports the histogram of the F_γ/F_X values using only type 2 (high-quality X-ray data) pulsars. The radio-loud pulsars have <F_γ/F_X> ∼ 800 while the radio-quiet population has <F_γ/F_X> ∼ 4800. Applying the Kolmogorov-Smirnov test to the type 2 pulsars' F_γ/F_X values, we obtain that the chance that the two datasets belong to the same population is 0.0016. By using all the pulsars with a confirmed X-ray counterpart (i.e. including also type 1 objects) this probability increases to 0.00757. We can conclude, with a 3σ confidence level, that the radio-quiet and radio-loud datasets we used are somewhat different.
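The two-sample Kolmogorov-Smirnov comparison can be sketched as below. The F_γ/F_X values are randomly generated around the two sample means quoted above, not the paper's actual measurements:

```python
import numpy as np
from scipy.stats import ks_2samp

# Illustrative (not the paper's) F_gamma/F_X values for the two samples,
# drawn around the quoted means of ~800 (radio-loud) and ~4800 (radio-quiet):
rng = np.random.default_rng(0)
ratio_loud = 10 ** rng.normal(np.log10(800.0), 0.8, size=24)
ratio_quiet = 10 ** rng.normal(np.log10(4800.0), 0.5, size=9)

# Two-sample Kolmogorov-Smirnov test on log10(F_gamma/F_X):
stat, p = ks_2samp(np.log10(ratio_loud), np.log10(ratio_quiet))
print(f"KS statistic = {stat:.2f}, chance probability = {p:.4f}")
```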
3.3.1. A distance-independent spread in F_γ/F_X

Figure 4 shows F_γ/F_X as a function of Ė for our entire sample of γ-ray emitting NSs, while in Figure 5 only the pulsars with "high quality" X-ray data have been selected. Even neglecting the upper and lower limits (shown as triangles) as well as the low-quality points (see Figure 5), one immediately notes the scatter of the F_γ/F_X values for a given value of Ė. Such an apparent spread obviously cannot be ascribed to low statistics.
An inspection of Figure 4 makes it clear that a linear fit cannot satisfactorily describe the data. In a sense, this finding should not come as a surprise, since Figure 4 is a combination of Figures 1 and 2 and we have seen that Figure 2 requires a double linear fit. However, combining the results of our previous fits (Equations 1 and 3) we obtain the dashed line in Figure 4, clearly a very poor description of the data. For Ė ≲ 5 × 10^36 erg/s the F_γ/F_X values scatter around a mean value of ∼1000 with a spread of about a factor of 100. For higher Ė the values of F_γ/F_X seem to decrease drastically to an average value of ∼50, reaching F_γ/F_X ∼ 0.1 for the Crab.
The spread in the F γ /F X values for pulsars with similarĖ is obviously unrelated to distance uncertainties. Such a scatter can be due to geometrical effects. For both X-ray and γ-ray energy bands:
L_γ,X = 4π f_γ,X F_obs D²    (5)
where f_X and f_γ account for the X and γ beaming geometries (which may or may not be related). If the pulse profile observed along the line of sight ζ (where ζ_E is the Earth line of sight) for a pulsar with magnetic inclination α is F(α, ζ, φ), where φ is the pulse phase, then we can write:
f = f(α, ζ_E) = ∫∫ F(α, ζ, φ) sin ζ dζ dφ / [2 ∫ F(α, ζ_E, φ) dφ]    (6)
where f depends only on the viewing angle and the magnetic inclination of the pulsar.
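Equation 6 can be evaluated numerically for any pulse-profile model F(α, ζ, φ); the sketch below is ours, and the isotropic toy profile (for which f = 1 by construction) is only a sanity check, not a physical magnetosphere model:

```python
import numpy as np
from scipy.integrate import trapezoid

def beaming_factor(F, alpha, zeta_E, n_zeta=400, n_phi=400):
    """Numerical estimate of Equation 6: sky-integrated flux divided by
    twice the phase-integrated flux seen along the line of sight zeta_E.
    F(alpha, zeta, phi) is any pulse-profile model."""
    zeta = np.linspace(0.0, np.pi, n_zeta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)
    Z, Ph = np.meshgrid(zeta, phi, indexing="ij")
    sky = trapezoid(trapezoid(F(alpha, Z, Ph) * np.sin(Z), phi, axis=1), zeta)
    seen = trapezoid(F(alpha, zeta_E, phi), phi)
    return sky / (2.0 * seen)

def isotropic(alpha, zeta, phi):
    # Toy profile: uniform emission over the whole sky.
    return np.ones(np.broadcast(zeta, phi).shape)

f_iso = beaming_factor(isotropic, alpha=0.3, zeta_E=1.0)  # ~1 for isotropy
```

A beamed profile concentrated away from ζ_E yields f > 1 (emission disfavoured along our line of sight), while a beam pointing at us yields f < 1.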
With a high value of this correction coefficient, the emission is disfavoured. Obviously F_γ/F_X = (L_γ/L_X) × (f_X/f_γ), so that different f_γ/f_X values for different pulsars can explain the scattering seen in the F_γ/F_X-Ė relationship. Watters et al. (2009) assume a nearly uniform emission efficiency, while Zhang et al. (2004) compute a significant variation of the emission efficiency as a function of the pulsar geometry. In both cases, geometry plays an important role, through the magnetic field inclination as well as through the viewing angle. The very important scatter found in the F_γ/F_X values is thus due to the different geometrical configurations which determine the emission at different wavelengths for each pulsar. While geometry is clearly playing an equally important role in determining pulsar luminosities, the F_γ/F_X plot makes its effect easier to appreciate.
The dashed line in Figures 4 and 5 is the combination of the best fits of the L_γ-Ė and L_X-Ė relationships, computed for f_γ = 1 and f_X = 1, so that it represents the hypothetical value of F_γ/F_X that each pulsar would have if f_γ = f_X: all the pulsars with a value of F_γ/F_X below the line have f_X < f_γ. We have seen in Section 3.3 that the radio-quiet dataset shows a higher mean value of F_γ/F_X. This is clearly visible in Figure 5, where all the radio-quiet points lie above the expected values (dashed line), so that all the radio-quiet pulsars should have f_X > f_γ. Moreover, the radio-quiet dataset shows a lower scatter with respect to the radio-loud one, pointing to more uniform values of f_γ/f_X for the radio-quiet pulsars. A similar viewing angle or a similar magnetic inclination for all the radio-quiet pulsars could explain such a behaviour (see Equation 6).

Figure 6 shows the F_γ/F_X behaviour as a function of the characteristic pulsar age.
In view of the uncertainty of this parameter, we have also built a similar plot using the "real" pulsar ages, as derived from the associated supernova remnants (see Figure 6).
Similarly to the Ė relationship, for τ < 10^4 years the F_γ/F_X values increase with age (both the characteristic and the real one), while for τ > 10^4 years the behaviour becomes more complex.
Study of the selection effects
There are two main selections we have performed in order to obtain our sample of pulsars with both good γ and X-ray spectra (type 2). First, the two populations of radio-quiet and radio-loud pulsars are unveiled with different techniques: for the same dataset, pulsars with known rotational ephemerides have a lower detection threshold than pulsars found through blind period searches. In the First Fermi LAT pulsar catalog (Abdo et al. 2010a) the faintest gamma-ray-selected pulsar has a flux ∼3 times higher than the faintest radio-selected one. Second, we chose only pulsars with a good X-ray coverage. Such a coverage depends on many factors (including the policies of the X-ray observatories) that cannot be modeled.
Our aim is to understand whether these two selections affected the two populations we are studying in different ways: if this were the case, the results obtained would be distorted. The γ-ray selection is discussed at length in the Fermi LAT pulsar catalog (Abdo et al. 2010a). Since the radio-quiet population obviously has a higher detection threshold than the radio-loud one, we can avoid such a bias by selecting all the pulsars with a flux higher than the radio-quiet detection threshold (6 × 10^−8 ph/cm² s). Only five radio-loud type 2 pulsars are excluded (J0437-4715, J0613+1036, J0751+1807, J2043+2740 and J2124-3358), with F_γ/F_X values ranging from 87 to 1464. We performed our analysis on such a reduced sample and the results do not change significantly.
We can, therefore, exclude the presence of an important bias due to the γ-ray selection on type 2 pulsars.
In order to roughly evaluate the selection affecting the X-ray observations, we used the method developed by Schmidt (1968) to compare the spatial distributions of the current radio-quiet and radio-loud samples, following the approach also used in Abdo et al. (2010a). For each object with an available distance estimate, we computed the maximum distance still allowing detection, D_max = D_est (F_γ/F_min)^(1/2), where D_est comes from Table 1 and the photon flux F_γ and F_min are taken from Abdo et al. (2010a); Saz Parkinson et al. (2010). We limited D_max to 15 kpc, and compared V, the volume enclosed within the estimated source distance, to V_max, that enclosed within the maximum distance, for a galactic disk with radius 10 kpc and thickness 1 kpc (as in Abdo et al. (2010a)). The inferred values of <V/V_max> are 0.462, 0.424, 0.443 and 0.516 for the entire γ-ray pulsar dataset, the radio-quiet pulsars, the millisecond pulsars and the radio-loud pulsars, respectively. These are quite close to the expected value of 0.5, even if <V/V_max>_rq is lower than <V/V_max>_rl. If we use only type 2 pulsars we obtain 0.395, 0.335, 0.462 and 0.419. These lower values of <V/V_max> indicate that we have a good X-ray coverage only for close-by (or very bright) pulsars, not a surprising result. By using the X-ray-counterpart dataset, both the radio-loud and radio-quiet <V/V_max> values appear lower by about 0.1: this seems to indicate that we used the same selection criteria for the two populations and we minimized the selection effects in the F_γ/F_X histogram. We can conclude that the γ-ray selection introduced no changes in the two populations, while the X-ray selection excluded objects that are faint and/or far away; any distortion, if present, is not overwhelming.
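The V/V_max estimator can be sketched as follows. For simplicity we take the enclosed volume to scale as D², a thin-disk approximation once D exceeds the 1 kpc disk thickness; the paper instead uses the actual disk of radius 10 kpc and thickness 1 kpc, so this is only an illustration:

```python
import numpy as np

def v_over_vmax(d_est_kpc, photon_flux, flux_min, d_cap_kpc=15.0):
    """<V/Vmax> a la Schmidt (1968): Dmax = Dest * sqrt(F/Fmin),
    capped at 15 kpc as in the text.  The enclosed volume is assumed
    to scale as D^2 (thin galactic disk approximation)."""
    d_est = np.asarray(d_est_kpc, dtype=float)
    d_max = np.minimum(d_est * np.sqrt(np.asarray(photon_flux) / flux_min),
                       d_cap_kpc)
    return float(np.mean((d_est / d_max) ** 2))

# A source observed right at the detection threshold contributes V/Vmax = 1;
# a complete sample uniformly filling the volume would average ~0.5.
print(v_over_vmax([3.0], [1.0], 1.0))  # -> 1.0
```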
Only if deep future X-ray observations centered on radio-quiet Fermi pulsars fail to unveil lower values of F_γ/F_X will it be possible to conclude that radio-quiet pulsars have a different geometry (or a different emission mechanism) than radio-loud ones.
Conclusions
The discovery of a number of radio-quiet pulsars comparable to that of radio-loud ones, together with the study of their X-ray counterparts, made it possible, for the first time, to address their behaviour using a distance-independent parameter such as the ratio of their fluxes at X and γ-ray wavelengths.
First, we reproduced the well-known relationship between the neutron star luminosities and their rotational energy losses. Next, selecting only the Fermi pulsars with good X-ray data,
we computed the ratio between the gamma and X-ray fluxes and studied its dependence on the overall rotational energy loss as well as on the neutron star age.
Much to our surprise, the distance-independent F_γ/F_X values computed for pulsars of similar age and energetics differ by up to 3 orders of magnitude, pointing to important (yet poorly understood) differences both in position and height of the regions emitting at X and γ-ray wavelengths within the pulsars' magnetospheres. Selection effects cannot account for the spread in the F_γ/F_X relationship and any further distortion, if present, is not overwhelming.
In spite of the highly scattered values, a decreasing trend is seen when considering young and energetic pulsars. Moreover, radio-quiet pulsars are characterized by higher values of the F_γ/F_X parameter (<F_γ/F_X>_rl ∼ 800 and <F_γ/F_X>_rq ∼ 4800), so that a KS test points to a chance of 0.0016 for them to belong to the same population as the radio-loud ones. While it would be hard to believe that radio-loud and radio-quiet pulsars belong to two different neutron star populations, the KS test probably points to the different geometrical configurations (possibly coupled with viewing angles) that characterize radio-loud and radio-quiet pulsars. Indeed, the radio-quiet population we analyzed is less scattered than the radio-loud one, pointing to a more uniform viewing or magnetic geometry of radio-quiet pulsars.
Our work is just a starting point, based on the first harvest of γ-ray pulsars. The observational panorama will quickly evolve. The γ-ray pulsar list will certainly grow, and this will trigger more X-ray observations, improving both in quantity and in quality the database of neutron stars detected in X and γ-rays to be used to compute our multiwavelength, distance-independent parameter. However, to fully exploit the information packed in F_γ/F_X, a complete 3D modelling of the pulsar magnetosphere is needed, to account for the different locations and heights of the emitting regions at work at different energies.
Such modelling could provide the clue to account for the spread we have observed for the ratios between γ and X-ray fluxes as well as for the systematically higher values measured for radio-quiet pulsars.
XMM data analysis is supported by contracts ASI-INAF I/088/06/0 and NASA NIPR NNG10PL01I30. Chandra data analysis is supported by INAF-ASI contract n.I/023/05/0.
It is a pleasure to thank Neil Gehrels for granting SWIFT observations of the newly discovered Fermi pulsars. We also thank Pablo Saz Parkinson and Andrea Belfiore for the collaboration in searching for X-ray counterparts of radio-quiet pulsars.
b: Age derived from the associated SNR. Respectively taken from Slane et al. (2004), Gotthelf et al. (2007), Rudie et al. (2008), Hwang et al. (2001), Thorsett et al. (2003), Gorenstein et al. (1974), Winkler et al. (2009), Bock & Gvaramadze (2002), Hales et al. (2009), Roberts & Brogan (2008), Tam & Roberts (2008), Brogan et al. (2005), Bietenholz & Bartel (2008), Migliazzo et al. (2008), Kothes et al. (2006).
c: These distances are taken from Saz Parkinson et al. (2010) and are obtained under the assumption of a beam correction factor f_γ = 1 for the γ-ray emission cone of all pulsars. In this way one obtains:
d = 0.51 Ė_34^(1/4) / F_γ,10^(1/2) kpc, where Ė = Ė_34 × 10^34 erg/s and F_γ = F_γ,10 × 10^−10 erg/cm² s. See also Saz Parkinson et al. (2010).
d: Respectively taken from Campana et al. (2008), Kaspi et al. (1998), Kaspi et al. (2001), Gotthelf & Halpern (2009), Halpern et al. (2007).
e: g = radio-quiet pulsars; r = radio-loud pulsars; m = millisecond pulsars; i = pulsars detected by INTEGRAL/IBIS but not yet by Fermi (see Bird et al. (2009)).
c: here, the column density has been fixed by using the galactic value in the pulsar direction obtained by WebTools (http://heasarc.gsfc.nasa.gov/docs/tools.html) and scaling it for the distance (see Table 1).
d: the beam correction factor f_X is assumed to be 1, which can result in an efficiency > 1. See Watters et al. (2009). Here the errors are not reported.
e: The statistics are very low, so it was necessary to freeze the column density parameter; the values have been evaluated by using WebTools.
g: The spectrum is well fitted also by a single blackbody.
Fig. 3: log(F_γ/F_X) histogram. The step is 0.5; the radio-loud (and millisecond) pulsars are indicated in grey and the radio-quiet ones in black. Only high-confidence pulsars (type 2) have been used, for a total of 24 radio-loud and 9 radio-quiet pulsars.
Fig. 4: Ė-F_γ/F_X diagram. Green: IBIS pulsars; black: radio-quiet pulsars; red: radio-loud pulsars; blue: millisecond pulsars. The triangles are upper and lower limits, the squares indicate pulsars with a type 1 X-ray spectrum (see Table 2) and the stars pulsars with a high-quality X-ray spectrum. The dotted line is the combination of the best fitting functions obtained for Figures 1 and 2 with the geometrical correction factor set to 1 for both the X and γ-ray bands.
Fig. 5: same as Figure 4, but only for pulsars with high-quality X-ray data. Black: radio-quiet pulsars; red: radio-loud pulsars. Triangles are upper limits, squares are pulsars with a type 1 X-ray spectrum while stars are pulsars with a type 2 X-ray spectrum.
. Using the P andṖ values taken from Abdo et al. (2010a); Saz Parkinson et al. (2010), we computed the values reported in Table 1. Most of the distance values are taken from Abdo et al. (2010a); Saz Parkinson et al. (2010) (see
who used Einstein data of 22 pulsar -most of them just upper limits -to derive a linear relation between logF X 0.2−4keV and logĖ. Later, Becker&Trumper (1997) investigated a sample of 27 pulsars by using ROSAT, yielding the simple scaling L 0.1−2.4keV X
values for different pulsars can explain the scattering seen in the F γ /F X -Ė relationship.Watters et al. (2009) assume a nearly uniform emission efficiency whileZhang et al. (2004) compute a significant variation in the emission efficiency as a function of the geometry of pulsars. In both cases, geometry plays an important role through magnetic field inclination as well as through viewing angle.
Figure 3 .
3Figure 3.
most of the values of the distance are taken from Abdo et al. (2010a), Saz Parkinson et al. (2010).
2:: X-ray spectra of the pulsars. The fluxes are unabsorbed and here the non-thermal and total fluxes are shown. The model used is an absorbed powerlaw plus blackbody, where statistically necessary. The only exceptions are PSR J0437-4715 (double PC plus powerlaw), J0633+1746 and J0659+1414 (double BB plus powerlaw): here only the most relevant thermal component is reported. All the errors are at a 90% confidence level. a : This parameter shows the confidency of the X-ray spectrum of each pulsar, based on the available X-ray data. An asterisk mark the pulsars for which ad ad-hoc analysis was necessary. See section 2.2. b : C = Chandra/ACIS ; X = XMM/PN+MOS ; S = SWIFT/XRT ; L = literature. Only public data have been used (at December 2010).
(
http://heasarc.gsfc.nasa.gov/docs/tools.html). f : Respectively taken from Webb et al. (2004a), Kargaltsev&Pavlov (2008), Campana et al. (2008), DeLuca et al. (2005), DeLuca et al. (2005), Webb et al. (2004b), Mori et al. (2004), Kargaltsev et al. (2009), Li et al. (2005).
Fig. 5 .
5-:Ė-Fγ /F X diagram for high confidence pulsars only (type 2). Green: IBIS pulsars; black: radio-quiet pulsars; red: radio-loud pulsars; blue: millisecond pulsars. The triangles are upper limits.
Fig. 6 .
6-: Left: Characteristic Age-Fγ /F X diagram. Right: SNR Age-Fγ /F X diagram. Green: IBIS pulsars; black: radio-quiet
)
Table
(10 −13 erg/cm 2 s) (10 −13 erg/cm 2 s) (10 20 cm −3 )).
f : Only bright PWNs have been considered (with F pwn
x
> 1/5 F psr
x ). The presence or the absence of a bright PWN has been valued by re-analyzing the X-ray data
(except for the X-ray analyses taken from literature, see the following table).
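The pseudo-distance relation in footnote c is simple enough to sketch numerically. The helper below is purely illustrative (the function and variable names are ours); it just restates d = 0.51 Ė34^(1/4) / Fγ,10^(1/2) kpc with the fγ = 1 assumption from the footnote:

```python
def pseudo_distance_kpc(edot_34, f_gamma_10):
    """Pseudo-distance d = 0.51 * Edot_34**(1/4) / F_gamma_10**(1/2) kpc.

    edot_34    : spin-down power in units of 10^34 erg/s
    f_gamma_10 : gamma-ray energy flux in units of 10^-10 erg/cm^2/s
    Assumes a gamma-ray beam correction factor f_gamma = 1.
    """
    return 0.51 * edot_34 ** 0.25 / f_gamma_10 ** 0.5

# A pulsar with Edot = 10^34 erg/s seen at F_gamma = 10^-10 erg/cm^2/s
# is placed at 0.51 kpc by this relation.
print(pseudo_distance_kpc(1.0, 1.0))  # → 0.51
```

Note that, at fixed Ė, a brighter gamma-ray flux pulls the pseudo-distance in, as expected for a flux-based estimator.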
PSR Name | X a | Inst b | F nt X (10^-13 erg/cm² s) | F tot X (10^-13 erg/cm² s) | N_H (10^20 cm^-3) | γ X | kT (keV) | R BB (km) | Eff d X
---------|-----|--------|---------------------------|----------------------------|-------------------|-----|----------|----------|--------
J0007+7303 | 2 | X/C | 0.686±0.100 | 0.841±0.098 | 16.6 +8.9/−7.6 | 1.30±0.18 | 0.102 +0.032/−0.018 | 0.64 +0.88/−0.20 | 2.84×10−5
J0030+0451 | 2 | X | 1.16±0.02 | 2.8±0.1 | 0.244 +7.470/−0.244 | 2.8 +0.5/−0.4 | 0.194 +0.015/−0.021 | 0.6 +0.225/−0.1 | 3.32×10−4
J0205+6449 | 2 | C | 19.9±0.5 | 19.9±0.5 | 40.2±0.11 | 1.82±0.03 | - | - | 5.92×10−5
J0218+4232 | 2 | L f | 4.87 +0.57/−1.28 | 4.87 +0.57/−1.28 | 7.6±4.3 | 1.19±0.12 | - | - | 2.05×10−3
J0248+6021 | 0 | S | <9.00 | <9.00 | 80 c | 2 | - | - | -
J0357+32 | 2 | C | 0.64 +0.09/−0.06 | 0.64 +0.09/−0.06 | 8.0±4.0 | 2.53±0.25 | - | - | -
J0437-4715 | 2 | X/C | 10.1 +0.8/−0.6 | 14.3 +0.9/−0.7 | 4.4±1.7 | 3.17±0.13 | 0.228 +0.006/−0.003 | 0.060 +0.009/−0.008 | 8.23×10−5
J0534+2200 | 2 | L f | 44300±1000 | 44300±1000 | 34.5±0.2 | 1.63±0.09 | - | - | 3.67×10−3
J0540-6919 | 2 | L f | 568±6 | 568±6 | 37±1 | 1.98±0.02 | - | - | 1.13×10−1
J0613-0200 | 2* | X | 0.221 +0.297/−0.158 | 0.221 +0.297/−0.158 | 1 e | 2.7±0.4 | - | - | 3.74×10−5
J0631+1036 | 0 | X | <0.225 | <0.225 | 20 c | 2 | - | - | -
J0633+0632 | 1 | S | 1.53±0.51 | 1.53±0.51 | 20 c | 2 | - | - | -
J0633+1746 | 2 | L f | 4.97 +0.09/−0.27 | 12.6 +0.2/−0.7 | 1.07 e | 1.7±0.1 | 0.190±0.030 | 0.04±0.01 | 8.99×10−5
J0659+1414 | 2 | L f | 4.06 +0.03/−0.59 | 168 +1/−24 | 4.3±0.2 | 2.1±0.3 | 0.125±0.003 | 1.80±0.15 | 8.46×10−5
J0742-2822 | 0 | X | <0.225 | <0.225 | 20 c | 2 | - | - | -
J0751+1807 | 2 | L f | 0.44 +0.18/−0.13 | 0.44 +0.18/−0.13 | 4 e | 1.59±0.30 | - | - | 2.52×10−4
J0835-4510 | 2* | L f | 65.1±15.7 | 281±67 | 2.2±0.5 | 2.7±0.6 | 0.129±0.007 | 2.5±0.3 | 9.78×10−6
J1023-5746 | 2* | C | 1.61±0.27 | 1.61±0.27 | 115 +47/−41 | 1.15 +0.24/−0.22 | - | - | -
J1028-5819 | 1 | S | 1.5±0.5 | 1.5±0.5 | 50 c | 2 | - | - | -
J1044-5737 | 0 | S | <3.93 | <3.93 | 50 c | 2 | - | - | -
J1048-5832 | 2* | C+X | 0.50 +0.35/−0.10 | 0.50 +0.35/−0.10 | 90 +40/−20 | 2.4±0.5 | - | - | 1.74×10−5
J1057-5226 | 2 | C+X | 1.51 +0.02/−0.13 | 24.5 +0.3/−2.5 | 2.7±0.2 | 1.7±0.1 | 0.179±0.006 | 0.46±0.06 | 2.49×10−4
J1124-5916 | 2 | C | 9.78 +1.18/−1.03 | 10.90 +1.32/−1.26 | 30.0 +2.8/−4.8 | 1.54 +0.09/−0.17 | 0.426 +0.034/−0.018 | 0.274 +0.089/−0.077 | 2.27×10−4
J1413-6205 | 0 | S | <4.9 | <4.9 | 40 c | 2 | - | - | -
J1418-6058 | 2 | C+X | 0.353±0.154 | 0.353±0.154 | 233 +134/−106 | 1.85 +0.83/−0.56 | - | - | 1.05×10−5
J1420-6048 | 2* | X | 1.6±0.7 | 1.6±0.7 | 202 +161/−106 | 0.84 +0.55/−0.37 | - | - | 1.11×10−4
J1429-5911 | 0 | S | <16.9 | <16.9 | 80 c | 2 | - | - | -
J1459-60 | 0 | S | <3.93 | <3.93 | 100 c | 2 | - | - | -
J1509-5850 | 2 | C+X | 0.891 +0.132/−0.186 | 0.891 +0.132/−0.186 | 80 e | 1.31±0.15 | - | - | 1.12×10−4
J1614-2230 | 0 | C+X | <0.286 | 0.286 +0.015/−0.086 | 2.9 +4.3/−2.9 | 2 | 0.236±0.024 | 0.92 +0.73/−0.35 | -
J1617-5055 | 2 | L f | 64.2±0.3 | 64.2±0.3 | 345±14 | 1.14±0.06 | - | - | 2.03×10−3
J1709-4429 | 2 | C+X | 3.78 +0.37/−0.94 | 9.04 +0.87/−2.25 | 45.6 +4.4/−2.9 | 1.88±0.21 | 0.166±0.012 | 4.3 +1.72/−0.86 | 6.62×10−5
J1718-3825 | 2 | X | 2.80±0.67 | 2.80±0.67 | 70 e | 1.4±0.2 | - | - | 3.12×10−4
J1732-31 | 0 | S | <2.42 | <2.42 | 50 c | 2 | - | - | -
J1741-2054 g | 1 | S | 4.64 +1.84/−1.63 | 4.64 +1.84/−1.63 | 0 e | 2.10 +0.50/−0.28 | - | - | 9.93×10−4
J1744-1134 | 0 | C | <0.272 | 0.272±0.020 | 12 +42/−12 | 2 | 0.272 +0.094/−0.098 | 0.132 +1.600/−0.120 | -
J1747-2958 | 2* | C+X | 48.7 +21.3/−6.0 | 48.7 +21.3/−6.0 | 256 +9/−6 | 1.51 +0.12/−0.44 | - | - | 7.41×10−4
J1809-2332 | 2 | C+X | 1.40 +0.25/−0.23 | 3.14 +0.57/−0.53 | 61 +9/−8 | 1.85 +1.89/−0.36 | 0.190±0.025 | 1.54 +1.26/−0.44 | 8.98×10−5
J1811-1926 | 2 | C | 26.6 +2.3/−3.7 | 26.6 +2.3/−3.7 | 175 +11/−12 | 0.91 +0.09/−0.08 | - | - | 1.18×10−3
J1813-1246 | 1 | S | 9.675±3.225 | 9.675±3.225 | 100 c | 2 | - | - | 1.13×10−3
J1813-1749 | 2 | C | 24.4±11.5 | 24.4±11.5 | 840 +433/−373 | 1.3±0.3 | - | - | -
J1826-1256 | 2 | C | 1.18±0.58 | 1.18±0.58 | 100 e | 0.63 +0.90/−0.63 | - | - | -
J1833-1034 | 2 | X+C | 66.3±2.0 | 66.3±2.0 | 230 e | 1.51±0.07 | - | - | 4.15×10−4
J1836+5925 | 2 | X+C | 0.459 +0.403/−0.174 | 0.570 +0.500/−0.216 | 0 +0.792/−0 | 1.56 +0.51/−0.73 | 0.056 +0.012/−0.009 | 4.47 +3.03/−1.31 | 5.84×10−5
J1846+0919 | 0 | S | <2.92 | <2.92 | 20 c | 2 | - | - | -
J1907+06 | 1 | C | 3.93±1.45 | 3.93±1.45 | 398 +468/−375 | 3.16 +2.76/−2.28 | - | - | -
J1952+3252 | 2 | L f | 35.0±4.4 | 38.0±3.0 | 30±1 | 1.63 +0.03/−0.05 | 0.13±0.02 | 2.2 +1.4/−0.8 | 3.57×10−4
J1954+2836 | 0 | S | <3.65 | <3.65 | 50 c | 2 | - | - | -
J1957+5036 | 0 | S | <2.98 | <2.98 | 10 c | 2 | - | - | -
J1958+2841 | 1 | S | 1.57±0.53 | 1.57±0.53 | 40 c | 2 | - | - | -
J2021+3651 | 2 | C+X | 2.21 +0.35/−1.27 | 6.01 +0.96/−3.44 | 65.5±6.0 | 2±0.5 | 0.140 +0.023/−0.018 | 4.94±1.40 | 2.75×10−5
J2021+4026 | 1 | C | 0.443±0.148 | 0.443±0.148 | 40 c | 2 | - | - | 1.03×10−4
J2032+4127 | 2* | C+X | 0.423±0.118 | 0.423±0.118 | 38.7 +75.6/−38.7 | 1.87 +0.96/−0.76 | - | - | 1.99×10−4
J2043+2740 | 2 | X | 0.208 +0.480/−0.208 | 0.208 +0.48/−1.08 | 0 +20/−0 | 3.1±0.4 | - | - | 1.44×10−4
J2055+25 | 2 | X | 0.382 +0.197/−0.148 | 0.382 +0.197/−0.148 | 7.3 +10.4/−7.3 | 2.2 +0.5/−0.6 | - | - | 1.79×10−3
J2124-3358 | 2 | X | 0.668 +0.150/−0.344 | 0.959 +0.216/−0.494 | 2.76 +4.87/−2.76 | 2.89 +0.45/−0.35 | 0.268 +0.034/−0.032 | 0.019 +0.012/−0.009 | 1.25×10−4
J2229+6114 | 2 | C+X | 51.3 +9.3/−5.8 | 51.3 +9.3/−5.8 | 30 +9/−4 | 1.01 +0.06/−0.12 | - | - | 2.90×10−4
J2238+59 | 0 | S | <4.49 | <4.49 | 70 c | 2 | - | - | -
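Footnote d of Table 2 defines the X-ray efficiency with fX = 1, i.e. EffX = LX/Ė with LX = 4πd²FX. A minimal sketch of that bookkeeping (the function name and the kpc-to-cm constant are ours; it simply restates the footnote's definition):

```python
import math

KPC_IN_CM = 3.086e21  # 1 kiloparsec in centimetres

def xray_efficiency(f_x, d_kpc, edot):
    """Eff_X = L_X / Edot with L_X = 4*pi*d^2*F_X, assuming f_X = 1.

    f_x   : unabsorbed X-ray flux [erg/cm^2/s]
    d_kpc : distance [kpc]
    edot  : spin-down power [erg/s]
    """
    d_cm = d_kpc * KPC_IN_CM
    return 4.0 * math.pi * d_cm ** 2 * f_x / edot
```

For example, a flux of 10^-13 erg/cm²/s from 1 kpc against Ė = 10^34 erg/s gives an efficiency of about 1.2×10^-3; since the distance enters squared, distance uncertainties dominate the error budget, which is why the table quotes Eff_X without errors.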
Abdo, A. et al., 2009, Sci, 325, 840A
Abdo, A. et al., 2010, ApJ, 187, 460A
Abdo, A. et al., 2010, ApJ, 708, 1254A
Abdo, A. et al., 2010, ApJ, 188, 405A
Abdo, A. et al., 2010, ApJ, 712, 1209A
Abdo, A. et al., 2010, ApJ, 711, 64A
Arnaud, K. A., 1996, in Astronomical Data Analysis Software Systems V, eds. G. Jacoby & J. Barnes (ASP Conf. Ser. 101), 17
Becker, W. & Trumper, J., 1997, A&A, 326, 682
Becker, W., 2009, in Astrophysics and Space Science Library, Vol. 357, ed. W. Becker
Bietenholz, M.F. & Bartel, N., 2008, MNRAS, 386, 1411B
Bird, A.J. et al., 2009, arXiv:0910.1704
Bock, D.C.-J. & Gvaramadze, V.V., 2002, A&A, 394, 533B
Brogan, C.L. et al., 2005, ApJ, 629L, 105B
Burrows, D. N. et al., 2005, Space Sci. Rev., 120, 165
Camilo, F. et al., 2009, ApJ, 705, 1C
Campana, R. et al., 2008, MNRAS, 389, 691C
De Luca, A. & Molendi, S., 2004, ApJ, 419, 837D
De Luca, A. et al., 2005, ApJ, 623, 1051
Gaensler, B.M. & Slane, P.O., 2006, ARA&A, 44, 17
Garmire, G.G. et al., 2003, SPIE, 4851, 28
Giommi, P. et al., 1992, ASPC, 25, 100G
Gorenstein, P. et al., 1974, ApJ, 192, 661G
Gotthelf, E.V. et al., 2007, ApJ, 654, 267G
Gotthelf, E.V. & Halpern, J.P., 2009, ApJ, 700L, 158G
Hales, C.A. et al., 2009, ApJ, 706, 1316H
Halpern, J.P. et al., 2007, ApJ, 668, 1154H
Harding, A.K. et al., 1981, ApJ, 245, 267
Harding, A.K. & Muslimov, A.G., 2002, ApJ, 568, 862
Hwang, U. et al., 2001, ApJ, 560, 742H
Kargaltsev, O. & Pavlov, G.G., 2007, Ap&SS, 308, 287K
Kargaltsev, O. & Pavlov, G.G., 2008, in AIP Conf. Proc. 983, 171
Kargaltsev, O. et al., 2009, ApJ, 690, 891
Kaspi, V.M. et al., 1998, ApJ, 503L, 161K
Kaspi, V.M. et al., 2001, ApJ, 560, 371K
Kaspi, V.M. et al., 2004, astro.ph. 2136K
Kothes, R. et al., 2006, ApJ, 238, 225
Li, X.H. et al., 2005, ApJ, 628, 931
Manchester, R.N. et al., 2005, AJ, 129, 1993
Migliazzo, J.M. et al., 2008, arXiv:astro-ph/0202063v1
Mignani, Roberto P., 2009, arXiv:0912.2931M
Mignani, Roberto P., 2010, ihea.book, 47M
Mori, K. et al., 2004, AdSpR, 33, 503M
Muslimov, A.G. & Harding, A.K., 2003, ApJ, 588, 430
Mattana, F. et al., 2009, ApJ, 694, 12M
Possenti, A. et al., 2002, A&A, 387, 993
Ray, P.S. et al., 2010, arXiv:1011.2468R
Roberts, M.S.E. & Brogan, C.L., 2008, ApJ, 681, 320R
Rudie, G.C. et al., 2008, MNRAS, 384, 1200R
Saz Parkinson, P. et al., in preparation
Seward, F.D. & Wang, Z.R., 1988, ApJ, 332, 199
Schmidt, M., 1968, ApJ, 151, 393S
Slane, P. et al., 2004, ApJ, 601, 1045S
Steiner, A.W. et al., 2010, ApJ, 722, 33
Struder, L. et al., 2001, A&A, 365, L18
Stuhlinger, M. et al., 2008, XMM-SOC-CAL-TN-0052
Tam, C. & Roberts, M.S.E., 2008, arXiv:astro-ph/0310586v1
Thompson, D.J., 2008, Reports on Progress in Physics, 71, 116901
Thorsett, S.E. et al., 2003, ApJ, 592L, 71T
Turner, M.J.L. et al., 2001, A&A, 365, L27
Watters, K.P. et al., 2009, ApJ, 695, 1289
Webb, N.A. et al., 2004a, A&A, 417, 181
Webb, N.A. et al., 2004b, A&A, 419, 269
Winkler, P.F. et al., 2009, ApJ, 692, 1489W
Zavlin, V.E., 2005, ApJ, 638, 951
Zavlin, V.E. et al., 2006, ApJ, 638, 951
Zhang, L. & Cheng, K.S., 1997, ApJ, 487, 370
Zhang, L. et al., 2004, ApJ, 604, 317Z

This manuscript was prepared with the AAS LaTeX macros v5.2.
Table 3:: γ-ray spectra of the pulsars. A broken powerlaw spectral shape is assumed for all the pulsars and the values are taken from Abdo et al. (2010a); Saz Parkinson et al. (2010). The gamma-ray flux is above 100 GeV. The 4 sources with an upper limit flux are taken from Bird et al. (2009) (see section 2.1). The radio flux densities (at 1400 MHz) are taken from Abdo et al. (2010a); Saz Parkinson et al. (2010). All the errors are at a 90% confidence level.
a : fγ is assumed to be 1, which can result in an efficiency > 1. See Watters et al. (2009). Here the errors are not reported.
b : taken from Abdo et al. (2010e).
Fig. 2.-: Ė-Lγ diagram for all pulsars classified as type 2 and with a clear distance estimation, assuming fγ=1 (see Equation). Black: radio-quiet pulsars; red: radio-loud pulsars; blue: millisecond pulsars. The double linear best fit of the logs of the two quantities is shown.
|
[] |
[
"REGULARITY OF THE ETA FUNCTION ON MANIFOLDS WITH CUSPS",
"REGULARITY OF THE ETA FUNCTION ON MANIFOLDS WITH CUSPS"
] |
[
"Paul Loya ",
"Sergiu Moroianu ",
"Jinsung Park "
] |
[] |
[] |
On a spin manifold with conformal cusps, we prove under an invertibility condition at infinity that the eta function of the twisted Dirac operator has at most simple poles and is regular at the origin. For hyperbolic manifolds of finite volume, the eta function of the Dirac operator twisted by any homogeneous vector bundle is shown to be entire.
|
10.1007/s00209-010-0769-3
|
[
"https://arxiv.org/pdf/0901.2404v1.pdf"
] | 6,216,388 |
0901.2404
|
8774be38fdb8af06914d2b6e82024e326c2bdae6
|
REGULARITY OF THE ETA FUNCTION ON MANIFOLDS WITH CUSPS
16 Jan 2009
Paul Loya
Sergiu Moroianu
Jinsung Park
On a spin manifold with conformal cusps, we prove under an invertibility condition at infinity that the eta function of the twisted Dirac operator has at most simple poles and is regular at the origin. For hyperbolic manifolds of finite volume, the eta function of the Dirac operator twisted by any homogeneous vector bundle is shown to be entire.
Introduction
The eta invariant was first introduced in [1] as a real number associated to certain elliptic first-order differential operators on compact manifolds with boundary, which happened to equal the difference between the Atiyah-Singer integral and the index with respect to the Atiyah-Patodi-Singer spectral boundary condition. During the thirty years since its discovery, this invariant has risen from the status of "error term" to that of a subtle tool, highly efficient in solving otherwise intractable problems from various fields of mathematics. Let us mention in this respect its recent application in finding obstructions for hyperbolic and flat 3-manifolds to bounding hyperbolic 4-manifolds [17].
For a Hermitian vector bundle E over a closed manifold M, consider an elliptic self-adjoint first-order differential operator D : C∞(M, E) → C∞(M, E). Then the L² spectrum of D is purely discrete and distributed according to the classical Weyl law. It follows that the complex function

η(D, s) := Σ_{0≠λ∈Spec(D)} sign(λ) |λ|^{−s}

is well-defined and holomorphic for ℜ(s) large, and extends meromorphically to C [6].
It is a byproduct of the index theorem of Atiyah, Patodi and Singer that when M is a boundary and D is the tangential part of an elliptic operator as above, the point s = 0 is always regular for the eta function. In fact, the eta function is always regular at the origin for general elliptic pseudodifferential operators; this was proved using K-theory in the spirit of early index theory by Atiyah, Patodi, and Singer [3] in the odd-dimensional case, and by Gilkey [9] for arbitrary dimensions. The eta invariant of D is defined as η(D) = η(D, 0) + dim ker(D).
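For a toy operator with only finitely many nonzero eigenvalues, the defining sum η(D, s) = Σ_{λ≠0} sign(λ)|λ|^{−s} is finite (hence entire in s), and the role of spectral asymmetry becomes transparent: a spectrum symmetric about 0 gives η ≡ 0. The snippet below is ours and purely illustrative:

```python
def eta(spectrum, s):
    """Finite eta sum: sum of sign(lam) * |lam|**(-s) over nonzero eigenvalues."""
    return sum((1.0 if lam > 0 else -1.0) * abs(lam) ** (-s)
               for lam in spectrum if lam != 0)

print(eta([-2, -1, 1, 2], 0.5))  # symmetric spectrum → 0.0
print(eta([1, 2], 1.0))          # 1 + 1/2 = 1.5
```

For genuine Dirac operators the spectrum is infinite and the sum only converges for ℜ(s) large, which is why the analytic continuation and the behaviour at s = 0 discussed above are the real issues.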
Still on a closed manifold another question arises: it is easy to note that the possible residue of the eta function at the origin is the integral on M of a well-defined density determined locally by D, called the local eta residue. Is this density zero? For arbitrary differential operators the answer is negative, but for Dirac operators it was proved by Bismut and Freed [5] that this is indeed the case. The proof uses the properties of the heat coefficients in terms of Clifford filtration, along the lines of Bismut's heat equation proof of the local index formula.
In this note we first revisit the vanishing of the local eta density from the point of view of conformal invariance. We give a self-contained proof, using the APS formula, of the fact that the eta invariant of the spin Dirac operator is insensible to conformal changes. This important fact belongs to the mathematical folklore but we could not find a complete proof in the literature. The existing proofs (e.g. [2, pp. 420-421]) tend to use the index formula of [1] for metrics which are not of product type near the boundary, without explaining why one can do so. In section 2 we show how to apply the APS formula to a true producttype metric in order to prove the conformal invariance of the eta invariant. We deduce the vanishing of the local eta residue from this conformal invariance, by interpreting the variation of the eta invariant in terms of the Wodzicki residue.
Our main results concern the eta function on noncompact spin manifolds with conformal cusps, in particular on complete finite-volume hyperbolic spin manifolds. More precisely, let M be a compact manifold with boundary and [0, ǫ) x × ∂M a collar neighborhood of its boundary. The interior M • of M is called a conformally cusp manifold if it is endowed with a metric g p which near x = 0 takes the form (1) g p = x 2p dx 2 x 4 + h , for some p > 0, where h is a metric on ∂M independent of x.
The main examples are complete hyperbolic manifolds of finite volume, for which p = 1 and h is flat. Assume now that M is spin. Let E be a bundle with connection over M which is of product-type on the collar, and let D p denote the associated twisted Dirac operator on Σ ⊗ E where Σ is the spinor bundle over M. We assume that the spin structure and the connection on E are "nontrivial" (Assumption 1 in Section 5) in the sense that the twisted Dirac operator D (∂M,h) for the induced spin structure on (∂M, h) is invertible. Under this assumption, the twisted Dirac operator D p is essentially self-adjoint with discrete spectrum obeying the Weyl law and the corresponding eta function η(D p , s) has a meromorphic extension to C with possible double poles [23]. For the untwisted Dirac operator on finite-volume hyperbolic manifolds, it was already noted by Bär [4] that the spectrum of D = D 1 is discrete if and only if the spin structure is "nontrivial" on the cusps in the above sense, otherwise the continuous spectrum of D is R. In Appendix A we prove that the same occurs for conformal cusp metrics: If D (∂M,h) fails to be invertible, then for p ≤ 1 the twisted Dirac operator D p has essential spectrum equal to R. In the case p > 1, D p fails to be essentially self-adjoint and although every self-adjoint extension of D p has discrete spectrum, nothing is known about the meromorphic properties of the corresponding eta functions. These are the reasons the "nontriviality" assumption plays an important rôle in this theory. The main results of this paper are that under Assumption 1 the eta function η(D p , s) in fact has at most simple poles, and is always regular at the origin. Moreover, the poles disappear for hyperbolic manifolds, thus the eta function is entire in that case.
Main Theorem. Let M• be an odd-dimensional spin manifold with conformal cusps, E a twisting bundle of product type on the cusps, and let D_p be the twisted Dirac operator on Σ ⊗ E associated to (1), satisfying Assumption 1. Then the eta function η(D_p, s) has at most simple poles and is regular at the origin; if moreover M• is hyperbolic of finite volume, then η(D_p, s) is entire.

In even dimensions the eta function vanishes identically since the spectrum is symmetric, see Section 2.
A related question may be asked on more complicated metrics at infinity, like the fiberedcusp metrics arising on Q-rank one locally symmetric spaces. A similar problem arose from [18], where we could obtain a meromorphic extension of the eta function for cofinite quotients of PSL 2 (R) by using the Selberg trace formula. The methods employed here do not seem to extend easily to such spaces.
We now outline this paper. We begin in Section 2 by proving that on a closed spin manifold the eta invariants are identical for two Dirac operators associated to conformal metrics. In Section 3 we review the Guillemin-Wodzicki residue density and residue trace and derive some of their elementary properties that we need in the sequel. In Section 4 we give a new proof that on any spin manifold, the local eta residue of a twisted Dirac operator vanishes. In Sections 5 and 6 we prove the main theorem based on Melrose's cusp calculus [20], which we review in Appendix B.
Conformal invariance of eta invariants on closed manifolds
The eta invariant of the spin Dirac operator and of the odd signature operator are known to be invariant under conformal changes of the metric. Since on one hand we need to understand this fact in depth, and on the other hand we were unable to find a good reference, we chose to give here a complete proof.
Let (M, g) be a closed spin Riemannian manifold of dimension n, (E, ∇) a Hermitian vector bundle on M with compatible connection ∇, and D the twisted Dirac operator. Note that when n is even, the eta invariant reduces to dim ker(D) (which is known to be a conformal invariant, see (2)). Indeed, in even dimensions the operator D is odd with respect to the splitting in positive and negative spinors. Thus the eta function itself vanishes in these dimensions because the spectrum is symmetric around 0. For the untwisted spin Dirac operator, the same vanishing occurs in dimensions 4k + 1: for n = 8k + 1 the spinor bundle has a real structure (i.e. a skew-complex map C with C 2 = 1) which anti-commutes with D, while in dimensions 8k + 5 it has a quaternionic structure (i.e. a skew-complex map J with J 2 = −1) which anti-commutes with D [1, pp. 61, Remark (3)].
Let f ∈ C∞(M) be a real conformal factor, g′ := e^{−2f} g a metric conformal to g and D′ the corresponding Dirac operator.

Proposition 1. η(D′) = η(D).

Proof. The map of dilation by e^f gives an SO(n)-isomorphism between the orthonormal frame bundles of g and g′. Thus the principal Spin(n)-bundles (for the fixed spin structure) corresponding to g and g′ are also isomorphic via the lift of this map. This identifies the spinor bundles for the two metrics; the Dirac operators are linked by the formula

(2)   g′ = e^{−2f} g,   D′ = e^{(n+1)f/2} D e^{−(n−1)f/2}
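Formula (2) immediately explains why the null-spaces have the same dimension; the one-line check below is ours, spelled out for convenience:

```latex
D\varphi = 0 \;\Longrightarrow\;
D'\!\left(e^{\frac{n-1}{2}f}\varphi\right)
  = e^{\frac{n+1}{2}f}\, D\, e^{-\frac{n-1}{2}f}\!\left(e^{\frac{n-1}{2}f}\varphi\right)
  = e^{\frac{n+1}{2}f}\, D\varphi = 0,
```

so φ ↦ e^{(n−1)f/2} φ maps ker D isomorphically onto ker D′, giving dim ker(D′) = dim ker(D).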
(see e.g. [25, Proposition 1] for a proof). In particular, the null-spaces of these two operators have the same dimension. Let ψ : I = [0, 1] → R be a smooth function which is 0 for t < 1/3 and which is identically 1 for t > 2/3. Set f t := ψ(t)f and define a metric on X := I × M by
h = dt² + e^{−2f_t} g.
We denote again by E the pull back of E from the second factor, together with its connection. Therefore, the curvature tensor of E on X satisfies
(3)   ∂_t R^E = 0.
The metric h is of product type near ∂X and hence the Atiyah-Patodi-Singer formula can be applied to the (chiral) twisted Dirac operator D + on X:
index(D⁺) = ∫_X Â(R^h) ch(R^E) − (1/2) η(D) + (1/2) η(D′).
On the other hand, this index is also equal to the spectral flow in the space of Riemannian metrics from D to D ′ ; again by (2), there is no spectral flow so the index vanishes. The proof will be concluded by showing that the top component of the integrand in the APS formula vanishes.
From (3) we deduce ∂_t exp(R^E) = 0, so it is enough to show that ∂_t Â(R^h) = 0. Recall that Â is a polynomial in the Pontrjagin forms tr((R^h)^{2k}) ∈ Ω^{4k}(X). Also, recall that the Pontrjagin forms are conformally invariant (they only depend on the Weyl tensor - first proved by Chern and Simons [8]). Let

h′ = e^{2f_t} h = e^{2f_t} dt² + g.
We claim that ∂_t tr((R^{h′})^{2k}) = 0 for all k. Indeed, let ∇ be the Levi-Civita connection of h′. For every vector field V on M denote by Ṽ its pull-back to X, which is orthogonal to the length-1 vector field T := e^{−f_t} ∂_t. Note that

[Ṽ, T] = −ψ(t) V(f) T,   [Ṽ, Ũ] = [V, U].

From the Koszul formula we deduce that

2⟨∇_Ṽ T, Ũ⟩ = Ṽ⟨T, Ũ⟩ + T⟨Ṽ, Ũ⟩ − Ũ⟨Ṽ, T⟩ + ⟨[Ṽ, T], Ũ⟩ + ⟨[Ũ, Ṽ], T⟩ + ⟨[Ũ, T], Ṽ⟩ = 0.

Clearly, since also ⟨∇_Ṽ T, T⟩ = 0, we infer ∇_Ṽ T = 0. Directly from the definition of the curvature this implies that R^{h′}(Ṽ, Ũ)T = 0. If we split TX into TI ⊕ TM, we see from the symmetry of the curvature tensor that R^{h′}(Ṽ, Ũ) is a diagonal linear map, while R^{h′}(Ṽ, T) is off-diagonal. It follows that ∂_T(R^{h′})^{2k} is an off-diagonal form-valued endomorphism (since it contains exactly one curvature term involving T). Hence its trace is zero.
Remark 2. The same proof applies as well to the (twisted) odd signature operator on any orientable manifold, since the Hirzebruch L-form, like theÂ-form, is also a polynomial in the Pontrjagin forms.
The residue trace and the residue density
We review a refined construction of the residue trace. Let A be a classical pseudodifferential operator A of integer order on a smooth closed manifold M of dimension n, acting on the sections of a vector bundle E. We will later be interested in twisted spinor bundles over spin manifolds, but the description of the residue density does not need these assumptions. Let κ A (m, m ′ ) be the Schwartz kernel of A, which is a distributional section in E ⊠ (E * ⊗ |Ω|) over M × M with singular support contained in the diagonal. Choose a diffeomorphism
(4)   Φ : U → V ⊂ M × M,   Φ(m, v) = (m, φ_m(v))
from a neighborhood of the zero section in TM to a neighborhood of the diagonal in M × M, extending the canonical identification of M with the diagonal. Cut off κ_A away from the diagonal, i.e., multiply it by a function ψ with support in V which is identically 1 near the diagonal. Fix a connection in E, so that we can identify E*_{φ_m(v)} with E*_m using parallel transport along the curve t ↦ φ_m(tv). Then Φ*(ψκ_A) is a compactly-supported distributional section over TM in the bundle End(E) pulled back from the base, tensored with the fiberwise density bundle. This distribution is conormal to the zero section, thus by definition there exists a classical symbol a(m, ξ) on T*M (with values at (m, ξ) in End(E_m)) such that
Φ*(ψκ_A)(m, v) = (1/(2π)^n) ∫_{T*M/M} e^{iξ(v)} a(m, ξ) ω^n.
Here ω is the canonical symplectic form on T * M, and T * M/M means integration along the fibers of T * M. The result on the right-hand side is an End(E)-valued density in the base variables; however since the vertical tangent bundle to T M at (m, v) is canonically isomorphic to T m M, this can be interpreted as a vertical density.
Let R be the radial (vertical) vector field in the fibers of T * M. Let a [−n] denote the component of homogeneity −n of the classical symbol a. Fix a Euclidean metric g in the vector bundle T * M (this amounts to choosing a Riemannian metric on M), thus defining a sphere bundle S * M inside T * M.
Definition 3. The residue density of A is the smooth End(E)-valued density

res(A) := (1/(2π)^n) ∫_{S*M/M} a_{[−n]} ι_R(ω^n).
At this stage, res(A) depends on a number of choices: the embedding Φ, the cut-off ψ, the connection in E and the metric g.
One way to show that res(A) is defined independently of the choices involved is through holomorphic families. Let (A s ) s∈C be a holomorphic family of pseudodifferential operators
on E such that A s is of order k − s, where k is the order of A, and A 0 − A ∈ Ψ −∞ (M, E).
Then for ℜ(s) sufficiently large, restricting the Schwartz kernel of A s to the diagonal ∆ gives a well-defined and holomorphic End(E)-valued density
F(s) := κ_{A_s}|_∆.
This density extends to C with possible simple poles at s ∈ n + k − N, where k is the order
of A and N = {0, 1, 2, 3, . . .}. One natural choice of a holomorphic family is A s = AQ s or A s = Q s A where (Q s ) s∈C is a holomorphic family of pseudodifferential operators on E such that Q s is of order −s and Q 0 − Id ∈ Ψ −∞ (M, E)
. It is then straightforward to check that

(5)   res(A) = Res_{s=0} κ_{A_s}|_∆   and   (6)   Tr_R(A) := ∫_M tr(res(A)) = Res_{s=0} Tr(A_s),

independently of the auxiliary family chosen.

Armed with this holomorphic family interpretation of res(A) and Tr_R(A), one can deduce without effort various properties of res and Tr_R. For example, it follows that Tr_R vanishes on commutators. Indeed, given integer order operators A and B and taking any auxiliary family Q_s as explained above, we can write
Tr([A, B]Q s ) = Tr(C s ) + Tr([AQ s , B]),
where C s = ABQ s − AQ s B is a holomorphic family of operators that is smoothing at s = 0. One can check that Tr[S, T ] = 0 for any pseudodifferential operators S and T with ord(S) + ord(T ) < −n, so for ℜ(s) sufficiently large, the trace Tr([AQ s , B]) vanishes. Therefore, by analytic continuation, Tr([AQ s , B]) vanishes for all s ∈ C; in particular, we have Tr([A, B]Q s ) = Tr(C s ). Now C 0 is smoothing, and since the residue density of a smoothing operator is zero, we have Res s=0 Tr(C s ) = Tr R (C 0 ) = 0. Thus,
Tr R ([A, B]) = 0.
Furthermore, if S is any section in End(E), using the fact that κ SAs = Sκ As and κ AsS = κ As S, where A s is a holomorphic family as in the formula (5), we have res(SA) = Sres(A) and res(AS) = res(A)S. That res(SA) = Sres(A) also follows directly from Definition 3. In particular, we have (7) res(uA) = res(Au) = u res(A)
for every function u ∈ C ∞ (M, C). Finally, observing that
κ_{A_s}|_∆ = (κ_{A*_s}|_∆)*,

taking the residue at s = 0 of both sides we obtain res(A) = res(A*)*; that is, we have res(A*) = res(A)*.
Vanishing of the local eta residue
Let (D t ) t∈I be a smooth 1-parameter family of elliptic self-adjoint pseudodifferential operators of order 1 on a closed manifold M. For simplicity assume for a moment that D 0 is invertible, hence D t is invertible for small enough t. Then
∂_t η(D_t, s) = ∂_t Tr(D_t (D_t²)^{−(s+1)/2})
  = Tr(Ḋ_t (D_t²)^{−(s+1)/2}) − ((s+1)/2) Tr(D_t (D_t²)^{−(s+3)/2} (Ḋ_t D_t + D_t Ḋ_t))
  = −s Tr(Ḋ_t (D_t²)^{−(s+1)/2})
  = −s Tr(Ḋ_t |D_t|^{−1} (D_t²)^{−s/2}).

Now Q_s := (D_t²)^{−s/2} is an analytic family of operators of order −s and Q_0 = Id, therefore

(8)   ∂_t η(D_t) = [−s Tr(Ḋ_t |D_t|^{−1} Q_s)]_{s=0} = −Tr_R(Ḋ_t |D_t|^{−1}).
From (6), the Wodzicki residue trace vanishes on smoothing operators, so as a corollary we see that the eta invariant is constant under smoothing perturbations. By this argument, the above expression makes sense even when D t is not invertible.
In the same spirit, let D be an elliptic self-adjoint invertible pseudodifferential operator of order k ∈ (0, ∞). Then the residue at s = 0 of the eta function η(D, s) is
Res_{s=0} Tr(D|D|^{−1} (D²)^{−s/2}) = (1/k) Tr_R(D|D|^{−1}) = (1/k) ∫_M tr(res(D|D|^{−1})).
We have been assuming that $M$ is closed, so that the trace on the left is defined. However, notice that by definition, $\mathrm{tr}(\mathrm{res}(D|D|^{-1}))$ is a local quantity in the sense that it depends only on finitely many terms of the local symbol of $D|D|^{-1}$; moreover, each homogeneous term of $D|D|^{-1}$ is given by a universal formula in terms of the local symbol of $D$ in any coordinate patch. Using this universal formula for the degree $-n$ homogeneous term allows us to define the local eta residue on the interior of any manifold (with or without boundary, compact or not), even in the case that $|D|^{-1}$ does not exist. From the definition of the residue density, the local eta residue is constant under smoothing perturbations, so the definition makes sense when $D$ is not invertible or not symmetric. The local eta residue can be non-vanishing in general (when $M$ is compact, its integral always vanishes for self-adjoint operators of positive order since the eta function is regular at $s = 0$ [9]). However, for Dirac operators we have the vanishing result of Theorem 5.

Proof of Theorem 5. We give here a new, easy proof. Assume that $M$ is closed; this theorem is a local question, so this case suffices. Let $f$ be an arbitrary smooth real function on $M$. Define $D_t$ as the Dirac operator associated to the family of conformal metrics $e^{-2tf}g$. This operator is an unbounded operator in the $L^2$ space associated to the measure $e^{-ntf}\mu_g$. To work in the fixed Hilbert space $L^2(\mu_g)$, conjugate through the unitary transformation
$$L^2(M, \Sigma \otimes E, e^{-ntf}\mu_g) \to L^2(M, \Sigma \otimes E, \mu_g), \qquad \phi \mapsto e^{-\frac{ntf}{2}}\phi,$$
where $\Sigma$ denotes the spinor bundle over $M$. Using (2), $D_t$ conjugates to
$$\tilde D_t = e^{\frac{tf}{2}}\, D\, e^{\frac{tf}{2}} \quad \text{acting in } L^2(M, \Sigma \otimes E, \mu_g)$$
and we compute
$$\partial_t \tilde D_t = \tfrac{1}{2}\big(f \tilde D_t + \tilde D_t f\big). \tag{9}$$
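For completeness, (9) is a one-line check using that the function $f$ commutes with $e^{tf/2}$:

```latex
\partial_t \tilde D_t
  = \partial_t\big(e^{tf/2}\, D\, e^{tf/2}\big)
  = \tfrac{f}{2}\, e^{tf/2}\, D\, e^{tf/2} + e^{tf/2}\, D\, e^{tf/2}\, \tfrac{f}{2}
  = \tfrac{1}{2}\big(f \tilde D_t + \tilde D_t f\big).
```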
Using Proposition 1 we have on one hand
$$\partial_t \eta(\tilde D_t) = \partial_t \eta(D_t) = 0.$$
On the other hand, plugging (9) at t = 0 into (8) we write:
$$\begin{aligned}
-\partial_t \eta(\tilde D_t)\big|_{t=0} &= \mathrm{Tr}_R\Big(\tfrac{1}{2}(f D + D f)\,|D|^{-1}\Big) &&\text{since } \tilde D_0 = D\\
&= \tfrac{1}{2}\,\mathrm{Tr}_R\big(f D|D|^{-1} + f |D|^{-1} D\big) &&\text{since } \mathrm{Tr}_R \text{ is a trace}\\
&= \mathrm{Tr}_R\big(f D|D|^{-1}\big) &&\text{since } D \text{ commutes with } |D|\\
&= \int_M \mathrm{tr}\big(\mathrm{res}(f D|D|^{-1})\big).
\end{aligned}$$
From the definition (see also (7)), res(f D|D| −1 ) = f res(D|D| −1 ). Since f was arbitrary, we deduce tr(res(D|D| −1 )) = 0 as claimed.
We will need such a vanishing result for a larger class of first-order symmetric differential operators: Corollary 6. Let D be a twisted Dirac operator on a spin manifold (M, g). For any u ∈ C ∞ (M, R), the operator D u := e −u De u is symmetric on M with respect to the measure µ u := e 2u µ g , and the local eta residue of D u vanishes.
Proof. It is clear that $D_u$ is formally self-adjoint with respect to the measure $\mu_u$. As in the proof of Theorem 5, to prove that the local eta residue of $D_u$ vanishes we can assume that $M$ is compact. Then $|D_u| = e^{-u}|D|e^u$, hence $D_u|D_u|^{-1} = e^{-u}D|D|^{-1}e^u$. By (7), $\mathrm{res}(D_u|D_u|^{-1}) = \mathrm{res}(e^{-u}D|D|^{-1}e^u) = \mathrm{res}(D|D|^{-1})$. The trace of this last endomorphism-valued density vanishes by Theorem 5.
Eta function on conformally cusp manifolds
We turn now to our main object of study. Let M • be the interior of a compact manifold with boundary M of dimension n. We assume that M is spin with a fixed spin structure, and that the metric is of conformally cusp type as in [23]. To explain this notion, let x : M → [0, ∞) be a boundary-defining function for the smooth structure of M, namely
(1) $x \in C^\infty(M)$;
(2) $\{x = 0\} = \partial M$;
(3) the 1-form $dx$ is non-vanishing on $\partial M$.
There exists a neighbourhood $U \subset M$ of $\partial M$ and a diffeomorphism $\Phi_U : U \to [0, \epsilon) \times \partial M$
such that x |U is the composition of Φ U with the projection on the first factor. In the sequel we fix such a product decomposition near the boundary.
The metric g on M • is said to be of conformally cusp type if on U ∩ M • it is of the form
$$g_p = x^{2p}\Big(\frac{dx^2}{x^4} + h\Big) \tag{10}$$
where p ∈ (0, ∞) and h is a metric on ∂M which does not depend on x. Thus g p = x 2p g c , where g c is a particular case of an exact cusp metric as in [20,23] and also an exact b-metric in the sense of Melrose. Geometrically, g c , which takes the form g c = dx 2 x 4 + h on U ∩ M • , is simply a metric with infinite cylindrical ends, as one can see by switching to the variable v = 1/x. Recall that x is a global function, thus g p is defined on M • . The motivating example is given by complete hyperbolic manifolds of finite volume. Outside a compact set, such a hyperbolic manifold is isometric to an infinite cylinder (1, ∞) × T where (T, h) is a (possibly disconnected) flat manifold; the metric takes the form
$$dt^2 + e^{-2t}h$$
which is easily seen to be of the form (10) with p = 1 if we set x := e −t .
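Indeed, with $x := e^{-t}$ we have $dx = -x\,dt$, hence $dt^2 = dx^2/x^2$, and

```latex
dt^2 + e^{-2t}h \;=\; \frac{dx^2}{x^2} + x^2 h \;=\; x^2\Big(\frac{dx^2}{x^4} + h\Big),
```

which is exactly (10) with $p = 1$.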
Let E be a twisting bundle on M, with a connection which is flat in the direction of ∂ x . This implies that near the boundary, E together with its connection are pull-backs of their restrictions to the boundary E |∂M . Finally, let D p (where 2p is the power in the conformal metric g p ) denote the twisted Dirac operator associated to the aforementioned data.
The main assumption under which we work is the invertibility of the boundary Dirac operator. More precisely, Assumption 1. For each connected component N of ∂M, we assume that the Dirac operator D (∂M,h) on N with respect to the metric h and twisted by E, is invertible.
Under this assumption, the results of [23] imply that the $L^2$ spectrum of the essentially self-adjoint operator $D_p$ is discrete and obeys a Weyl-type law; moreover the eta function $\eta(D_p, s)$ is holomorphic for $\Re(s) > n$ and extends to a meromorphic function with possible double poles at certain points. In particular, for $n$ odd, $s = 0$ is such a possible double pole. It follows that we can define an "honest" eta invariant, depending on the eigenvalues of $D_p$, as the regular value at $s = 0$ of $\eta(D_p, s)$.
Proof of Theorem 7. We need to revisit the construction giving the meromorphic extension and the structure of the poles of the eta function. The main tool is the calculus of cusp pseudodifferential operators first introduced in [20], whose definition we review in Appendix B.
The spinor bundles for conformal metrics are canonically identified together with their metrics. It follows that D c , the Dirac operator for g c , is linked to D p by formula (2). However these two operators act on different L 2 spaces because the measures µ p and µ c induced by the metrics g p , g c are not the same. We view D p as acting in L 2 (M • , Σ ⊗ E, µ p ) and we conjugate it through the Hilbert space isometry
$$L^2(M^\bullet, \Sigma \otimes E, \mu_p) \to L^2(M^\bullet, \Sigma \otimes E, \mu_c), \qquad \sigma \mapsto x^{np/2}\sigma.$$
It follows that D p is unitarily equivalent to the operator
$$A = x^{\frac{np}{2}}\, D_p\, x^{-\frac{np}{2}} \quad \text{acting in } L^2(M^\bullet, \Sigma \otimes E, \mu_c).$$
Using the formula (2) with $e^{-f} = x^p$, we see that
$$D_p = x^{-p\frac{n+1}{2}}\, D_c\, x^{p\frac{n-1}{2}}.$$
In the sequel we thus replace D p by the unitarily equivalent operator
$$A = x^{-\frac{p}{2}}\, D_c\, x^{-\frac{p}{2}} \quad \text{acting in } L^2(M^\bullet, \Sigma \otimes E, \mu_c).$$
The normal operator of a cusp operator $P$ is the 1-parameter family of operators on $\partial M$ defined by
$$N(P)(\xi)\phi = \big[e^{i\xi/x}\, P\big(e^{-i\xi/x}\,\tilde\phi\big)\big]\Big|_{x=0}$$
for $\xi \in \mathbb{R}$, where $\tilde\phi$ is any extension of the spinor $\phi$ from $\partial M$ to $M$.
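Combining the last two displays gives the expression for $A$ directly, since powers of $x$ commute with each other:

```latex
A = x^{\frac{np}{2}}\, D_p\, x^{-\frac{np}{2}}
  = x^{\frac{np}{2}}\, x^{-p\frac{n+1}{2}}\, D_c\, x^{p\frac{n-1}{2}}\, x^{-\frac{np}{2}}
  = x^{-\frac{p}{2}}\, D_c\, x^{-\frac{p}{2}}.
```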
Since g c and the twisting bundle E and its connection are products near infinity, we have
$$D_c = c(\nu)\, x^2\partial_x + D_{(\partial M,h)}, \qquad A = x^{-p}\Big(c(\nu)\, x^2\partial_x - \tfrac{px}{2}\, c(\nu) + D_{(\partial M,h)}\Big) \tag{11}$$
where $D_{(\partial M,h)}$ is the Dirac operator on $\partial M$, $\nu = \frac{dx}{x^2}$, and $c(\nu)$ is Clifford multiplication by $\nu$. It follows from the definition that
$$N(D_c)(\xi) = i\,c(\nu)\,\xi + D_{(\partial M,h)}. \tag{12}$$
The boundary operator $D_{(\partial M,h)}$ anti-commutes with $c(\nu)$ for algebraic reasons and is invertible by Assumption 1. Since
$$N(D_c)(\xi)^2 = \xi^2 + D^2_{(\partial M,h)}$$
is strictly positive, it follows that $N(D_c)(\xi)$ is invertible for all $\xi$. Such an operator is called fully elliptic. From [23], we know that $A$ has essentially the same properties as an elliptic operator on a closed manifold: it is Fredholm, has compact resolvent and (in the self-adjoint case) has pure-point spectrum, and the eta function extends meromorphically with possible poles when $s \in \{n, n-1, n-2, \ldots\}$ and when $ps \in \{1, 0, -1, -2, \ldots\}$; the poles are at most double at points in the intersections of these two sets, otherwise they are at most simple. The content of the theorem is that $s = 0$ is in fact a regular point. We will see later that some of the above singularities do not occur in our setting.
To start the proof, consider the holomorphic family in two complex variables
$$(s, w) \mapsto x^w A(s), \qquad A(s) = x^{-ps}\, A\,(A^2)^{-\frac{s+1}{2}} \in \Psi^{-s}_c(M, \Sigma \otimes E) \tag{13}$$
and the function $F(s, w) := \mathrm{Tr}(x^w A(s))$.
Clearly $F(s, ps) = \eta(D_p, s)$.

Proof of Lemma 8. The operator kernel of $x^w A(s)$ is smooth outside the diagonal and continuous at the diagonal for $\Re(s) > n$. Its restriction to the diagonal is a smooth multiple of the cusp volume density for such $s$, and has an asymptotic expansion in powers of $x$ as $x \to 0$, starting from $x^w$. This is due to the fact that $A(s)$ is a conormal distribution on the cusp double space $M^2_c$, with Taylor expansion at the front face. The trace of $x^w A(s)$ equals the integral on the lifted diagonal of the above density. The normal bundle to $\Delta_c$ in $M^2_c$ is canonically identified with ${}^cTM$. By the Fourier inversion formula, this is equal to
$$\int_{{}^cT^*M} x^w\, a_s(p, \xi)\, \omega^n \tag{14}$$
where $a_s(p, \xi)$ is a holomorphic family of classical symbols of order $-s$ on ${}^cT^*M$ (smooth down to $x = 0$) and $\omega$ is the canonical symplectic form on ${}^cT^*M$. The volume form $\omega^n$ is singular at $x = 0$, however $x^2$ times it extends smoothly to the boundary of ${}^cT^*M$. It follows that the integral is absolutely convergent (hence holomorphic in $s, w$) for $\Re(s) > n$, $\Re(w) > 1$.
It is now easy to construct the analytic extension of (14) in w by expanding a s (p, ξ) in Taylor series at x = 0, using that for any k ∈ N, we have
$$\int_0^1 x^{w+k}\,\frac{dx}{x^2} = \frac{1}{w+k-1}.$$
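This elementary identity (valid for $\Re(w) + k > 1$) is easy to confirm symbolically; a quick check with sympy, using sample values of our choosing:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Check  int_0^1 x^{w+k} dx/x^2 = 1/(w+k-1)  for sample w with Re(w) > 1, k in N.
for w in [sp.Rational(3, 2), 2, sp.Rational(7, 3)]:
    for k in [0, 1, 2]:
        val = sp.integrate(x**(w + k) / x**2, (x, 0, 1))
        assert sp.simplify(val - 1 / (w + k - 1)) == 0
print("identity verified")
```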
To get the analytic extension of (14) in s we expand a s (p, ξ) in homogeneous components in ξ of order −s − k where k ∈ N, then switching to polar coordinates and using that
$$\int_1^\infty r^{-s-k+n-1}\, dr = \frac{1}{s-n+k}.$$
We first note that there is no pole at $s = 0$. From here on, we assume that the dimension of the manifold $M$ is odd, otherwise the eta function is 0 so there is nothing to prove.

Proof of Proposition 9. From the construction of the analytic extension of $F$ it follows that for every $w$ with $\Re(w) > 1$,
$$\mathrm{Res}_{s=0} F(s, w) = \int_M x^w\, \mathrm{tr}\big(\mathrm{res}(A|A|^{-1})\big).$$
As A is unitarily conjugated to D p by a real function, we see from Corollary 6 that the density tr(res(A|A| −1 )) vanishes identically. In other words, the holomorphic function w → Res s=0 F (s, w) is identically 0 on a half-plane, and by unique continuation it is identically zero for all w.
It remains to show that there is no pole in w at w = 0 either. This will imply Theorem 7 since η(D p , s) = F (s, ps).
For this, we fix s with ℜ(s) > n and we examine F (s, w) as a meromorphic function in the complex variable w.
For any cusp operator $B \in x^z\Psi^{-s}_c(M)$ (where we suppress the bundles for brevity) consider the power series expansion of its Schwartz kernel $x^{-z}\kappa_B$ at the front face of the cusp double space. Although $x$ is not everywhere a defining function for the front face, we do get such an expansion since $\kappa_B$ vanishes in Taylor series at faces other than the front face. Taking the inverse Fourier transform of the coefficients we can regard the coefficients as lying in the suspended calculus $\Psi^{-s}_{\mathrm{sus}}(\partial M)$. This gives a short exact sequence of spaces of operators
$$0 \to x^\infty \Psi^{-s}_c(M) \hookrightarrow x^z \Psi^{-s}_c(M) \xrightarrow{\ q\ } x^z \Psi^{-s}_{\mathrm{sus}}(\partial M)[[x]] \to 0. \tag{15}$$
For a weighted cusp operator $B \in x^z\Psi^{-s}_c(M)$, we write
$$q(B) = x^z\big(q_0(B) + x\,q_1(B) + x^2 q_2(B) + \cdots\big).$$
It is easy to see that we have $q_0(B) = N(x^{-z}B)$, see [20].
We use a result from [14]. Let $P \in \Psi^{-s}_c(M)$ with $\Re(s) > n$. Then $x^w P$ is trace-class for $\Re(w) > 1$, and for $k \in \mathbb{N}$,
$$\mathrm{Res}_{w=1-k}\,\mathrm{Tr}(x^w P) = \frac{1}{2\pi}\int_{\mathbb{R}} \mathrm{Tr}\big(q_k(P)(\xi)\big)\, d\xi. \tag{16}$$
Proposition 10. The function F (s, w) does not have any poles in w.
Proof. Let $R \in \mathrm{Diff}^1_c(M, \Sigma \otimes E)$ be any cusp differential operator which equals $D_\partial := c(\nu)D_{(\partial M,h)}$ near the boundary. This makes sense since we have a fixed product decomposition near the boundary. By definition we have
$$q(R) = D_\partial.$$
We notice that near $\partial M$, $D_\partial$ anticommutes with the cusp differential operator $A$ from (11). Therefore, if we denote by $I_s := \ker(q) \subset \Psi^s_c(M, \Sigma \otimes E)$ the subspace of operators which vanish to every order at the front face, we have
$$RA + AR \in I_2.$$
This implies
$$[R, A^2] \in I_3, \qquad [R, (A^2)^{-\frac{s+1}{2}}] \in I_{-s};$$
the latter, by the construction of the complex powers [7]. Together with the obvious commutation $[R, x^{-ps}] \in I_0$, we get for the operator $A(s)$ defined in (13)
$$RA(s) + A(s)R \in I_{-s}.$$

Proof of Theorem 11. We have written $\eta(D_p, s) = F(s, ps)$ for an analytic function $F(s, w)$ in $w \in \mathbb{C}$, $s \in \mathbb{C} \setminus \{n, n-1, \ldots, 1, -1, -2, -3, \ldots\}$ by Theorem 7, Lemma 8, and Proposition 10. We claim that $F(s, w)$ is in fact regular at $s \in \mathbb{C} \setminus \{-2, -4, \ldots\}$. Indeed, for $\Re(w) > 1$ the residue in $s$ at $s = n - k$ is given by
$$\int_M x^{w - p(n-k)}\, \mathrm{tr}\big(\mathrm{res}(A|A|^{k-n-1})\big).$$
Since $A = x^{\frac{np}{2}} D_p x^{-\frac{np}{2}}$, we have $A|A|^{k-n-1} = x^{\frac{np}{2}} D_p |D_p|^{k-n-1} x^{-\frac{np}{2}}$, so by (7),
$$\mathrm{Res}_{s=n-k} F(s, w) = \int_M x^{w - p(n-k)}\, \mathrm{tr}\big(\mathrm{res}(D_p |D_p|^{k-n-1})\big).$$
Consider the well-known odd heat kernel small-time expansion given in Lemma 1.9.1 of [10]
$$D_p e^{-tD_p^2}(y, y) \sim \sum_{k=0}^{\infty} t^{\frac{-n+k-1}{2}}\, b_k(y) \tag{17}$$
valid on the interior of any manifold, by locality of the coefficients; moreover, for $k$ even, we have $b_k \equiv 0$. From the relationship
$$D_p |D_p|^{-s-1} = \frac{1}{\Gamma\big(\frac{s+1}{2}\big)} \int_0^\infty t^{\frac{s-1}{2}}\, D_p e^{-tD_p^2}\, dt \tag{18}$$
we deduce that
$$\mathrm{res}\big(D_p |D_p|^{k-n-1}\big) = \frac{2}{\Gamma\big(\frac{n-k+1}{2}\big)}\, b_k. \tag{19}$$
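To see where the constant in (19) comes from, insert the expansion (17) into (18); the term $b_k$ contributes to the local trace of $D_p|D_p|^{-s-1}$ the quantity (we sketch only the small-$t$ part of the integral, which carries the poles):

```latex
\frac{1}{\Gamma\big(\frac{s+1}{2}\big)} \int_0^1 t^{\frac{s-1}{2}}\, t^{\frac{-n+k-1}{2}}\, b_k\, dt
  \;=\; \frac{1}{\Gamma\big(\frac{s+1}{2}\big)}\, \frac{2}{s-n+k}\, b_k,
```

so the residue at $s = n - k$ equals $\frac{2}{\Gamma(\frac{n-k+1}{2})}\, b_k$, which is the density $\mathrm{res}(D_p|D_p|^{k-n-1})$.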
In particular, when k is even, this residue vanishes. By the regularity results of Bismut-Freed [5], the pointwise traces of the local coefficients b 0 , b 1 , . . . , b n vanish identically, i.e. tr(b k (y)) ≡ 0 for k = 0, . . . , n so for these k the density tr(res(D p |D p | k−n−1 )) also vanishes.
(The vanishing of the term with k = n, corresponding to the residue of the eta function at the origin, has already been proved in Proposition 9 by using conformal invariance, although here we could have also deduced this fact from the regularity results of Bismut-Freed [5].) This proves that F (s, w) is regular at s ∈ C \ {−2, −4, . . .}.
In the hyperbolic case we have a stronger result. The necessary local vanishing of the heat trace is proved in the next section.

Proof of Theorem 12. By Proposition 13 in Section 6, we have that for $m \in H^n(\mathbb{R})$ with $n = 2d + 1$, $\mathrm{tr}\big(De^{-tD^2}(m, m)\big) = 0$. This implies that $\mathrm{tr}(b_k) = 0$ for all the coefficients $b_k$ on $H^n(\mathbb{R})$, and thus also on the locally symmetric space $M^\bullet$. In view of (19), the possible poles in $s$ of the function $F(s, w)$ actually do not occur. Together with Proposition 10, this shows that the eta function is entire since $\eta(D, s) = F(s, s)$.
Odd heat kernels for homogeneous vector bundles over hyperbolic space
The real hyperbolic space H n (R) is given as the symmetric space SO(n, 1)/SO(n). But, for our purpose, we use the realization of H n (R) = G/K where G = Spin(n, 1), K = Spin(n), which are the double covering groups of SO(n, 1), SO(n). We denote the Lie algebras of G and K by g and k, respectively. The Cartan involution θ on g gives the decomposition g = k ⊕ p where k and p are, respectively, the +1 and −1 eigenspaces of θ. The subspace p can be identified with the tangent space T o (G/K) ∼ = g/k at o = eK ∈ G/K. Let a be a fixed maximal abelian subspace of p. Then the dimension of a is one. Let M = Spin(n − 1) be the centralizer of A = exp(a) in K with Lie algebra m. We put β to be the positive restricted root of (g, a). Note that A ∼ = R via a r = exp(rH) with H ∈ a, β(H) = 1.
From now on we assume that $n$ is odd, that is, $n = 2d + 1$. The spinor bundle $\Sigma$ over $H^n(\mathbb{R}) = G/K = \mathrm{Spin}(n,1)/\mathrm{Spin}(n)$ is defined by
$$\Sigma = \mathrm{Spin}(n,1) \times_{\tau_s} V_{\tau_s} \longrightarrow H^n(\mathbb{R}) = \mathrm{Spin}(n,1)/\mathrm{Spin}(n) \tag{20}$$
where (τ s , V τs ) denotes the spin representation of Spin(n). Here points of Spin(n, 1) × τs V τs are given by equivalence classes [g, v] of pairs (g, v) under (gk, v) ∼ (g, τ s (k)v). In general, any G-homogeneous Clifford module bundle over H n (R) is associated to (τ s ⊗ τ, V τs ⊗ V τ ) for a unitary representation (τ, V τ ) of Spin(n) as in (20), which we denote by Σ ⊗ E. For instance, the representation τ s ⊗ τ s of Spin(n) determines a homogeneous vector bundle Σ ⊗ Σ over H n (R) whose fiber is V τs ⊗ V τs ∼ = ⊕ d k=0 ∧ k (p ⊗ C). The space of smooth sections from H n (R) to Σ ⊗ E is denoted by C ∞ (H n (R), Σ ⊗ E) and can be identified with [C ∞ (G) ⊗ V τs ⊗ V τ ] K where K acts on C ∞ (G) by the right regular representation R. Now a natural connection ∇ : C ∞ (H n (R), Σ ⊗ E) → C ∞ (H n (R), Σ ⊗ E ⊗ T * (G/K)) is given by
$$\nabla f = \sum_{i=1}^{n} (R(X_i) \otimes \mathrm{Id})f \otimes X_i^* \tag{21}$$
where {X i } is an orthonormal basis of p and {X * i } is its dual basis. This connection is the unique connection on C ∞ (Σ ⊗ E) which is G-homogeneous and anti-commutes with the Cartan involution θ (see Lemma 3.2 of [24]). Now the Dirac operator D on Σ ⊗ E associated to the connection ∇ is defined by
$$D = \sum_{i=1}^{n} R(X_i) \otimes c(X_i)$$
where c(X i ) denotes the Clifford multiplication.
Proposition 13. For $m \in H^n(\mathbb{R})$, we have $\mathrm{tr}\big(De^{-tD^2}(m, m)\big) = 0$.
Proof. Recalling $C^\infty(H^n(\mathbb{R}), \Sigma \otimes E) \cong [C^\infty(G) \otimes V_{\tau_s} \otimes V_\tau]^K$, the Schwartz kernel of $De^{-tD^2}$ is given by a section $H_t$ in $[C^\infty(G) \otimes \mathrm{End}(V_{\tau_s} \otimes V_\tau)]^{K \times K}$ satisfying
$$H_t(k_1 g k_2) = (\tau_s \otimes \tau)^{-1}(k_2)\, H_t(g)\, (\tau_s \otimes \tau)(k_1)^{-1} \tag{22}$$
for k 1 , k 2 ∈ K, g ∈ G, which acts on [C ∞ (G) ⊗V τs ⊗V τ ] K by convolution. For each t > 0, H t lies in [S(G) ⊗ End(V τs ⊗ V τ )] K where S(G) = ∩ p>0 S p (G) with S p (G) the Harish-Chandra L p -Schwartz space. For more details, we refer to Section 3 of [24]. Taking the local trace of H t , we have that h t := tr(H t ) ∈ S(G). From (22) and recalling that a point in the homogeneous vector bundle Σ ⊗ E is given by an equivalence class through the relation (gk, v) ∼ (g, (τ s ⊗ τ )(k)v), we can see that the local trace of De −tD 2 (m, m) is given by h t (e) for the identity element e ∈ G. By the Plancherel theorem (see Theorem 4.1 in [22]), we have the following expression for h t at e ∈ G (up to a constant depending on a normalization),
$$h_t(e) = \sum_{\sigma \in \hat M} \int_{-\infty}^{\infty} \Theta_{\sigma, i\lambda}(h_t)\, p(\sigma, i\lambda)\, d\lambda, \tag{23}$$
where $\hat M$ denotes the set of equivalence classes of irreducible unitary representations of $M$,
$$\Theta_{\sigma, i\lambda}(h_t) = \mathrm{Tr}\Big(\int_G h_t(g)\, \pi_{\sigma, i\lambda}(g)\, dg\Big),$$
and $p(\sigma, i\lambda)$ denotes the Plancherel measure associated to the unitary principal series representation $\pi_{\sigma, i\lambda}$. Here $\pi_{\sigma, i\lambda} = \mathrm{Ind}^G_{MAN}(\sigma \otimes e^{i\lambda} \otimes \mathrm{Id})$ acts by the left regular representation on
$$H_{\sigma, i\lambda} = \{\, f : G \to V_\sigma \mid f(g\, m\, a_r\, n) = e^{-(i\lambda + d)r}\, \sigma(m)^{-1} f(g) \,\}$$
where $n = 2d + 1$. By Proposition 3.6 in [24], it follows that
$$a_i - b_i \in \mathbb{Z} \quad (i = 1, 2, \ldots, d), \qquad a_1 \ge b_1 \ge \cdots \ge a_{d-1} \ge b_{d-1} \ge a_d \ge |b_d|,$$
where $\tau = \sum_{i=1}^d a_i e_i$ and $\sigma = \sum_{i=1}^d b_i e_i$ denote the highest weights of the representations $\tau, \sigma$ with respect to the standard basis. This implies that
$$[(\tau_s \otimes \tau)|_M : \sigma][\sigma : \sigma^+] = [(\tau_s \otimes \tau)|_M : \sigma^+] = [(\tau_s \otimes \tau)|_M : \sigma^-] = [(\tau_s \otimes \tau)|_M : \sigma][\sigma : \sigma^-] \tag{24}$$
since $\sigma^\pm = \tfrac{1}{2}(e_1 + e_2 + \cdots + e_{d-1} \pm e_d)$. Now, by Theorem 3.1 of [21], we also have $p(\sigma^+, i\lambda) = p(\sigma^-, i\lambda)$. This implies $h_t(e) = 0$ by (23) and (24), which completes the proof.
Remark 14. It can be proved, using the Selberg trace formula, that the eta function vanishes at negative odd integers under the condition Γ ∩ P = Γ ∩ N for the fundamental group Γ of the given hyperbolic manifold where P = MAN (Langlands decomposition) denotes a parabolic subgroup of G fixing the infinity point of a cusp. Recall that this fact is true for arbitrary operators of Dirac type over closed manifolds, as follows immediately from (18) and from the odd heat trace expansion (17). We believe that this vanishing holds also in the context of manifolds with conformal cusps without this technical condition but the necessary work, which surpasses the scope of this paper, is left for a future publication.
Appendix A. The spectrum of the Dirac operator
Recall that under Assumption 1, the Dirac operator $D_p$ is always essentially self-adjoint with discrete spectrum [23]. One may ask what happens with the eta invariant when Assumption 1 does not hold. As in [4], for $p \le 1$, when the manifold is complete with respect to $g_p$, the answer is that the continuous spectrum of the twisted Dirac operator (which is essentially self-adjoint by [28], [27]) becomes the full real line, hence the usual definition of the eta invariant breaks down. We will not attempt here to extend the definition in that case; note however that for finite-volume hyperbolic manifolds this has been done in [26]. The proof of the following result is very similar to the corresponding statements from [11, 12] concerning magnetic and Hodge Laplacians.
Theorem 15. Let $M$ be a spin manifold with conformal cusps and $E$ a twisting bundle of product type near the cusps. Let $D_p$ denote the Dirac operator associated to the metric (10) on $M$, twisted by $E$. If Assumption 1 does not hold, then
• if $0 < p \le 1$, the essential spectrum of $D_p$ is $\mathbb{R}$;
• if $p > 1$, then $D_p$ is not essentially self-adjoint, and every self-adjoint extension of $D_p$ in $L^2$ has purely discrete spectrum.
Proof. The idea is to reduce the problem to a 1-dimensional problem, essentially to the computation of the spectrum of $i\partial_t$ on an interval. When $p > 1$, the metric is of metric horn type, and self-adjoint extensions of $D_p$ on $M$ (given by boundary conditions at $x = 0$) are in 1-to-1 correspondence with Lagrangian subspaces of $\ker(D_{(\partial M,h)})$ with respect to the symplectic form $\omega(u, v) := \langle c(\nu)u, v\rangle_{L^2(\partial M, \Sigma \otimes E)}$, see [16]. Such subspaces exist by the cobordism invariance of the index (note that $\partial M$ may be disconnected). Moreover, since Assumption 1 does not hold, there exist infinitely many Lagrangian subspaces in $\ker(D_{(\partial M,h)})$, thus $D_p$ is not essentially self-adjoint.
We work with the operator $A$ from (11), which is unitarily conjugated to $D_p$ hence has the same spectrum as $D_p$. When $p > 1$, for each Lagrangian subspace $W \subset \ker(D_{(\partial M,h)})$, $A$ is essentially self-adjoint on the initial domain
$$D_W(A) = C^\infty_c(M, \Sigma \otimes E) \oplus \{\phi(x)\, x^{p/2}\, w;\ w \in W\}$$
for some fixed cut-off function $\phi$ supported in the cusps which equals 1 near infinity. When $0 < p \le 1$, $A$ is essentially self-adjoint on $D(A) = C^\infty_c(M, \Sigma \otimes E)$. The essential spectrum of $A$ (with the above boundary condition when $p > 1$) can be computed on the complement of any compact set in $M$, i.e., on the union of the cusps, by imposing self-adjoint boundary conditions. More precisely, consider the non-compact manifold with boundary $M_\epsilon := \{x \le \epsilon\}$. We need to specify a self-adjoint boundary condition for $A$ at $x = \epsilon$, which is obtained by the APS condition and by choosing yet another Lagrangian subspace in $\ker(D_{(\partial M,h)})$. With these self-adjoint boundary conditions, the decomposition principle (see [4, Prop. 1]) states that the essential spectrum of $A$ on $M_\epsilon$ coincides with the essential spectrum of $A$ on $M$.
We decompose the space of L 2 spinors on ∂M twisted by E into the space of zero-modes (i.e., the kernel of D (∂M,h) ) and its orthogonal complement consisting of "high energy modes". Accordingly we get an orthogonal decomposition of L 2 (M ǫ , Σ ⊗ E) into zeromodes and high energy modes, the main point being that A preserves this decomposition. As in [12,Prop. 5.1] the high energy modes do not contribute to the essential spectrum. The reason is that there exists a cusp pseudodifferential operator R ∈ x −2p Ψ −∞ c (M, Σ⊗E), localized on the cusps and acting as 0 on high energy modes, such that A 2 + R is fully elliptic. Therefore this operator has discrete spectrum so in particular, on high energy modes A 2 has discrete spectrum as claimed. We are left with the formally self-adjoint operator
$$A_0 := x^{1-p}\, c(\nu)\Big(x\partial_x - \frac{p}{2}\Big)$$
acting in $L^2(0, \epsilon) \otimes \ker(D_{(\partial M,h)})$ with respect to the volume form $\frac{dx}{x^2}$ (with a certain boundary condition at $x = \epsilon$).
We claim that $A_0$ is unitarily equivalent to $c(\nu)\, t\partial_t$ over a certain interval depending on $\epsilon$ and $p$, with respect to the measure $\frac{dt}{t}$. We start by conjugating with $x^{1/2}$, so the volume form becomes $\frac{dx}{x}$, and
$$x^{-\frac{1}{2}}\, A_0\, x^{\frac{1}{2}} = x^{2-p}\, c(\nu)\,\partial_x + \frac{c(\nu)}{2}(1-p)\, x^{1-p}.$$
For $p = 1$, we already obtain the desired expression for our operator by setting $t := x$. For $p \neq 1$, we write
$$x^{2-p}\partial_x = y^2\partial_y, \qquad y := (1-p)\, x^{1-p}.$$
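The substitution is checked by the chain rule: with $y = (1-p)x^{1-p}$ we have $dy = (1-p)^2 x^{-p}\, dx$, so

```latex
x^{2-p}\,\partial_x \;=\; x^{2-p}\,(1-p)^2 x^{-p}\,\partial_y \;=\; (1-p)^2 x^{2-2p}\,\partial_y \;=\; y^2\,\partial_y.
```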
Then, after conjugating with $y^{-1/2}$, we obtain the operator $c(\nu)\, y^2\partial_y$ acting in $L^2$ with respect to the measure $\frac{dy}{y^2}$. With the cusp-to-b change of variable $t := e^{-1/y}$ we get the desired operator. Now for $p \le 1$ the operator $c(\nu)\, t\partial_t$ acts on an interval of the form $(0, \beta)$, while for $p > 1$ it acts on $(1, \beta)$ for some strictly positive $\beta$. Here $c(\nu)$ is a diagonalizable automorphism with $\pm i$ eigenvalues. For every self-adjoint extension, in the first case the spectrum is $\mathbb{R}$ while in the second case it is discrete.
Alternatively, one could prove the first part of the theorem similarly to [4] by constructing Weyl sequences for each real number.
Although for p > 1 the spectrum of any self-adjoint extension of D p is discrete even when Assumption 1 does not hold, the methods of this paper do not show the meromorphic extension of the eta function in that case.
Figure 1. Compactifying $X$ into $M$: the cylindrical end $X \cong (1, \infty)_t \times N$ is compactified to $M \cong [0, 1)_x \times N$ via $x = 1/t \iff t = 1/x$.
Appendix B. Elements of the cusp calculus
In this appendix we give a short introduction to the cusp calculus (first defined by Melrose and Nistor [20]). Consider a Riemannian manifold X with a cylindrical end as shown in the left-hand side of Figure 1. The metric takes the form dt 2 + h on the cylinder where h is a metric on the cross section N. Changing coordinates to x = 1/t and noting that for t ∈ (1, ∞) we have x ∈ (0, 1), and t → ∞ implies x → 0, it follows that we can view X as the interior of the compact manifold M obtained from X by replacing the infinite cylinder (1, ∞) t × N with the finite cylinder [0, 1) x × N, see Figure 1.
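The compactification is just the change of variable $x = 1/t$, under which the translation-invariant derivative $\partial_t$ on the cylinder becomes $-x^2\partial_x$ (the building block of cusp differential operators below). A symbolic sanity check with sympy, using an arbitrary sample function of our choosing:

```python
import sympy as sp

t, xs = sp.symbols('t x', positive=True)

F = sp.sin(xs) * sp.exp(xs)            # sample smooth function of x
lhs = sp.diff(F.subs(xs, 1 / t), t)    # d/dt applied after substituting x = 1/t
rhs = (-xs**2 * sp.diff(F, xs)).subs(xs, 1 / t)   # (-x^2 d/dx F) evaluated at x = 1/t
assert sp.simplify(lhs - rhs) == 0
print("d/dt = -x^2 d/dx under x = 1/t")
```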
A cusp pseudodifferential operator is just a "usual" pseudodifferential operator on $X$ that "behaves nicely" near $t = \infty$. To make this precise, recall that in local coordinates on the cylinder, the Schwartz kernel of an $m$-th order ($m \in \mathbb{C}$) pseudodifferential operator $A$ on $X$ takes the form
$$\kappa_A = \int e^{i(t-t')\tau + i(y-y')\cdot\eta}\, a(t, y, \tau, \eta)\, d\tau\, d\eta, \tag{25}$$
where $(t, t', y, y') \in (1, \infty)^2 \times U^2$ with $U$ local coordinates on $N$ and $a$ is a classical symbol of order $m$ in $\tau$ and $\eta$. We say that the operator $A$ is, by definition, an $m$-th order cusp pseudodifferential operator if in the compactified coordinates, $\tilde a(x, y, \tau, \eta) := a(1/x, y, \tau, \eta)$
is smooth at x = 0 (and is still classical in τ and η). This definition only works on the cylinder, so to be more precise, A is a cusp operator if it can be written as
A 1 + A 2 + A 3 ,
where $A_1$ is of the form (25) such that $\tilde a(x, y, \tau, \eta) := a(1/x, y, \tau, \eta)$ is smooth at $x = 0$, $A_2$ is a usual pseudodifferential operator on the compact part of $X$, and finally, where $A_3$ is a smoothing operator on $X$ that vanishes, with all derivatives, at $\infty$ on the cylinder (or equivalently, the Schwartz kernel of $A_3$ is a smooth function on $M^2$ vanishing to infinite order at $\partial(M^2)$). The space of cusp pseudodifferential operators of order $m \in \mathbb{C}$ is denoted $\Psi^m_c$. If the symbol $a$ is polynomial in $\tau, \eta$, the resulting operator is differential, and can be written near $x = 0$ as sums of compositions of partial derivatives on $N$ and of $\partial_t = -x^2\partial_x$ with smooth coefficients on the compactification $M$. The space of cusp differential operators of order $m \in \mathbb{Z}$ is denoted $\mathrm{Diff}^m_c$. Cusp operators are usually presented geometrically in relation to blown-up spaces, which might obscure their straightforward definition, so we shall explain this relationship. Setting $z = (t - t', y - y')$, which is a normal coordinate to the set $\{z = 0\} = \{t = t', y = y'\}$, the diagonal in $X^2$, we see from (25) that $\kappa_A$ is written as the inverse Fourier transform of a symbol using the normal coordinate $z$. Hence $\kappa_A$ is a distribution on $X^2$ that is, by definition, conormal to the diagonal in $X^2$. Expressing the kernel (25) in the compactified coordinates, we obtain
$$\kappa_A = \int e^{i z\cdot(\tau,\eta)}\, \tilde a(x, y, \tau, \eta)\, d\tau\, d\eta, \quad \text{where } z = \Big(\frac{1}{x} - \frac{1}{x'},\ y - y'\Big). \tag{26}$$
Note that z is not a normal coordinate to the diagonal (given by {x = x ′ , y = y ′ }) in M 2 because the coordinate 1 x − 1 x ′ fails to be smooth at the corner x = x ′ = 0. Thus, it seems like switching to compactified coordinates destroys the conormal distribution portrayal of pseudodifferential operators. However, we now show how to interpret this kernel as being conormal, not in M 2 , but on a related blown-up manifold. The idea is to blow-up the singular point x = x ′ = 0 until the kernel (26) can be interpreted as conormal. To begin this program, we first write
$$M^2 \cong [0, 1)_x \times [0, 1)_{x'} \times N^2$$
near the corner $\{x = x' = 0\}$ as shown pictorially on the left in Figure 2. Next, we introduce polar coordinates $(r, \theta)$ in the $x, x'$ variables, where $r = \sqrt{x^2 + (x')^2}$ and $\theta = \arctan(x'/x)$. Geometrically we can think of introducing polar coordinates as "blowing up" the set $\{x = x' = 0\}$ by replacing it with a quarter-circle (the angular $\theta$ coordinate). The resulting manifold is called the b-double space $M^2_b$, see the middle picture in Figure 2. Actually, instead of using the standard polar coordinates $(r, \theta)$, in the sequel it is helpful to use the projective coordinates $(x, s)$ where $s = x/x'$, which can be used as coordinates instead of $(r, \theta)$ for $\theta$ away from $0$ and $\pi/2$. Here, $x$ represents the radial coordinate and $s$ represents the angular coordinate along the quarter circle. Now we form the cusp double space $M^2_c$. To do so, we blow up the set $\{s = 1, x = 0\}$ in $M^2_b$ (introducing polar coordinates around it), as shown in Figure 2. This blow-up geometrically replaces the set $\{s = 1, x = 0\}$ in $M^2_b$ with a half circle, which is called the cusp front face and which we denote by $\mathrm{ff}_c$. Since the set $\{s = 1, x = 0\}$ is the set of points where $1 - s = 0$ and $x = 0$, we can use the projective coordinate
$$w = \frac{1-s}{x} = \frac{1}{x} - \frac{1}{x'}$$
as an angular coordinate along ff c and we can use x as the radial variable, at least if we stay away from the extremities of ff c . Thus, (x, w) can be used as coordinates near the blown-up face ff c . Note that the set {x = x ′ } corresponds to the set {w = 0} in M 2 c , therefore w is a normal coordinate to {x = x ′ }. Moreover, in view of the formula (26), we see that κ A is, by definition, a distribution conormal to the set {w = 0, y = y ′ } in M 2 c , which is called the cusp diagonal. In fact, one can prove the following theorem.
Theorem 16. The Schwartz kernels of cusp pseudodifferential operators are in one-to-one correspondence with distributions on M 2 c that are conormal to the cusp diagonal and vanish to infinite order at all boundary hypersurfaces of M 2 c except the cusp front face where they are smooth.
Our original definition of a cusp pseudodifferential operator as presented after (25) is usually disregarded in favor of the more geometric definition presented in the above theorem. It is evident how to extend the definition from the scalar case to cusp operators acting on sections of vector bundles which are of product type in t near t = ∞.
-defined (and holomorphic) when ℜ(s) > dim(M). This function admits a meromorphic extension to the complex plane with possible simple poles. If dim(M) is odd, the possible poles are located at dim(M) − 1 − 2N where N = {0, 1, 2, . . .}. If dim(M) is even and D is a Dirac operator associated to a Clifford connection, the eta function is entire
Date: January 16, 2009. 2000 Mathematics Subject Classification. 58J28, 58J50.
(1) The eta function $\eta(D_p, s)$ of the twisted spin Dirac operator is regular for $\Re(s) > -2$ and has at most simple poles at $s \in \{-2, -4, \ldots\}$.
(2) When $p = 1$, $M^\bullet$ is hyperbolic of finite volume and $E$ is a homogeneous vector bundle, the eta function $\eta(D, s)$ of the twisted Dirac operator is entire.
Proposition 1. The eta invariants of $D$ and $D'$ coincide.
$$\mathrm{Res}_{s=0} F(s) = \mathrm{res}(A). \tag{5}$$
Thus, the residue $\mathrm{Res}_{s=0} F(s)$ is well-defined irrespective of the choice of $A_s$, and also $\mathrm{res}(A)$ is well-defined independently of choices. The residue trace of $A$ is defined by
$$\mathrm{Tr}_R(A) = \mathrm{Res}_{s=0}\,\mathrm{Tr}(A_s) = \int_M \mathrm{tr}\big(\mathrm{res}(A)\big). \tag{6}$$
Definition 4. Let $M$ be a possibly non-compact manifold, $E \to M$ a vector bundle and $z \in \mathbb{C}$. For any elliptic pseudodifferential operator $D \in \Psi^z(M, E)$, the density $\mathrm{tr}(\mathrm{res}(D|D|^{-1})) \in |\Omega(M)|$ is called the local eta residue of $D$.
Theorem 5. Let (M, g) be a spin Riemannian manifold and E a twisting bundle. Then the local eta residue tr(res(D|D|^{−1})) of the twisted Dirac operator vanishes.
Theorem 7. Under Assumption 1, the eta function of the twisted Dirac operator on M• is regular at s = 0.
This operator is an elliptic operator in the weighted cusp calculus x −p Diff 1 c (M, Σ ⊗ E). The normal operator of a cusp operator P in Diff 1 c (M, Σ ⊗ E) is the 1-parameter family of operators on ∂M defined by
The eigenvalues are distributed according to a suitable Weyl-type law; in particular, the eta function η(A, s) is well-defined for large real parts of s. Moreover, A(A²)^{−(s+1)/2} is a holomorphic family of cusp operators in x^{ps} Ψ^{−s}_c(M, Σ ⊗ E) if we define it to be 0 on the finite-dimensional null-space of A, see [23, Proposition 15]. It follows from [23, Proposition 14] that the trace of this family (i.e., the eta function) extends meromorphically to C with possible poles when s ∈ {n, n − 1, n − 2, . . .} and when ps ∈ {1, 0, −1, −2, . . .}.
Lemma 8. The operator x^w A(s) is of trace class for ℜ(s) > n, ℜ(w) > 1. Moreover, F(s, w) is holomorphic for ℜ(s) > n, ℜ(w) > 1 and extends to C × C as a meromorphic function with possibly simple poles in s at s ∈ {n, n − 1, n − 2, . . .} and in w at w ∈ {1, 0, −1, −2, . . .}.
Proposition 9. The function F(s, w) is regular in s at s = 0.
Now for every cusp operator Q we have q(RQ) = D_∂ q(Q) and q(QR) = q(Q) D_∂, because R is constant in x near the boundary. Therefore D_∂ q(A(s)) = −q(A(s)) D_∂. Using conjugation with the invertible operator D_∂ on ∂M, we see that for every ξ,

Tr(q(A(s))(ξ)) = Tr(D_∂ q(A(s))(ξ) D_∂^{−1}) = −Tr(q(A(s))(ξ) D_∂ D_∂^{−1}) = −Tr(q(A(s))(ξ)),

so Tr(q(A(s))(ξ)) = 0. Thus for all k ∈ N the integrand in (16) for P = A(s) vanishes. Together with Proposition 9, this finishes the proof of Theorem 7, since η(D_p, s) = F(s, ps).

In fact, by invoking the regularity results of Bismut and Freed [5], we can restrict further the possible poles of the eta function. By a different argument, it turns out that if M• is a hyperbolic manifold, then there are no poles at all (see Theorem 12)! Of course, we assume that n is odd, since otherwise the eta function is 0.

Theorem 11. Under Assumption 1, the eta function η(D_p, s) is regular for ℜ(s) > −2 and has at most simple poles at s ∈ {−2, −4, . . .}.
Theorem 12. If M• is an odd dimensional hyperbolic manifold of finite volume, the eta function of the Dirac operator twisted by a homogeneous vector bundle is entire.
(24) Θ_{σ,iλ}(h_t) = [(τ_s ⊗ τ)|_M : σ]([σ : σ⁺] − [σ : σ⁻]) λ e^{−tλ²},
where σ± denote the half-spin representations of M such that τ_s|_M = σ⁺ ⊕ σ⁻. By the branching rule from K = Spin(2d + 1) to M = Spin(2d) given in Theorem 8.1.3 of [13], we have that for any τ ∈ K̂, σ ∈ M̂, [τ|_M : σ] ≤ 1 and [τ|_M : σ] = 1 if and only if
Figure 2. The blown-up manifold M²_c. (We omit the N² factor.)
M. F. Atiyah, V. K. Patodi, and I. M. Singer, Spectral asymmetry and Riemannian geometry. I, Math. Proc. Cambridge Philos. Soc. 77 (1975), 43-69.
M. F. Atiyah, V. K. Patodi, and I. M. Singer, Spectral asymmetry and Riemannian geometry. II, Math. Proc. Cambridge Philos. Soc. 78 (1975), 405-432.
M. F. Atiyah, V. K. Patodi, and I. M. Singer, Spectral asymmetry and Riemannian geometry. III, Math. Proc. Cambridge Philos. Soc. 79 (1976), 71-99.
C. Bär, The Dirac operator on hyperbolic manifolds of finite volume, J. Differential Geom. 54 (2000), no. 3, 439-488.
J.-M. Bismut and D. S. Freed, The analysis of elliptic families. II. Dirac operators, eta invariants, and the holonomy theorem, Comm. Math. Phys. 107 (1986), no. 1, 103-163.
T. Branson and P. B. Gilkey, Residues of the eta function for an operator of Dirac type, J. Funct. Anal. 108 (1992), no. 1, 47-87.
B. Bucicovschi, An extension of the work of V. Guillemin on complex powers and zeta functions of elliptic pseudodifferential operators, Proc. Amer. Math. Soc. 127 (1999), 3081-3090.
S.-S. Chern and J. Simons, Characteristic forms and geometric invariants, Ann. Math. 99 (1974), 48-69.
P. B. Gilkey, The residue of the global η function at the origin, Adv. Math. 40 (1981), 290-307.
P. B. Gilkey, Invariance theory, the heat equation, and the Atiyah-Singer index theorem, Second Edition, CRC Press, Boca Raton, FL, 1995.
S. Golénia and S. Moroianu, Spectral analysis of magnetic Laplacians on conformally cusp manifolds, Ann. H. Poincaré 9 (2008), 131-179.
S. Golénia and S. Moroianu, The spectrum of Schrödinger operators and Hodge Laplacians on conformally cusp manifolds, preprint arXiv:0705.3559.
R. Goodman and N. R. Wallach, Representations and invariants of the classical groups, Encyclopedia Math. Appl., vol. 68, Cambridge University Press, Cambridge, 1998.
R. Lauter and S. Moroianu, The index of cusp operators on manifolds with corners, Ann. Global Analysis Geom. 21 (2002), no. 1, 31-49.
R. Lauter and S. Moroianu, An index formula on manifolds with fibered cusp ends, J. Geom. Analysis 15 (2005), 261-283.
M. Lesch and N. Peyerimhoff, On index formulas for manifolds with metric horns, Comm. Partial Differential Equations 23 (1998), 649-684.
D. D. Long and A. W. Reid, On the geometric boundaries of hyperbolic 4-manifolds, Geom. Topol. 4 (2000), 171-178.
P. Loya, S. Moroianu and J. Park, Adiabatic limit of the Eta invariant over cofinite quotient of PSL(2, R), Comp. Math. 144 (2008), 1593-1616.
R. R. Mazzeo and R. B. Melrose, Pseudodifferential operators on manifolds with fibred boundaries, Asian J. Math. 2 (1998), no. 4, 833-866.
R. B. Melrose and V. Nistor, Homology of pseudodifferential operators I. Manifolds with boundary, preprint funct-an/9606005.
R. J. Miatello, On the Plancherel measure for linear Lie groups of rank one, Manuscripta Math. 29 (1979), no. 2-4, 249-276.
R. J. Miatello, The Minakshisundaram-Pleijel coefficients for the vector-valued heat kernel on compact locally symmetric spaces of negative curvature, Trans. Amer. Math. Soc. 260 (1980), no. 1, 1-33.
S. Moroianu, Weyl laws on open manifolds, Math. Annalen 340 (2008), 1-21.
H. Moscovici and R. J. Stanton, Eta invariants of Dirac operators on locally symmetric manifolds, Invent. Math. 95 (1989), 629-666.
V. Nistor, On the kernel of the equivariant Dirac operator, Ann. Global Analysis Geom. 17 (1999), 595-613.
J. Park, Eta invariants and regularized determinants for odd dimensional hyperbolic manifolds with cusps, Amer. J. Math. 127 (2005), no. 3, 493-534.
R. S. Strichartz, Analysis of the Laplacian on the complete Riemannian manifold, J. Funct. Anal. 52 (1983), no. 1, 48-79.
J. A. Wolf, Essential self-adjointness for the Dirac operator and its square, Indiana Univ. Math. J. 22 (1972/73), 611-640.
E-mail address: [email protected] Institutul de Matematicȃ al Academiei Române. RO-014700address: [email protected] School of Mathematics. Binghamton, NY 13902, U.S.A; Bucharest, Romania E-mail; Dongdaemungu, Seoul764Department of Mathematics, Binghamton UniversityHoegiro. Korea E-mail address: [email protected] of Mathematics, Binghamton University, Binghamton, NY 13902, U.S.A. E-mail address: [email protected] Institutul de Matematicȃ al Academiei Române, P.O. Box 1-764, RO-014700 Bucharest, Romania E-mail address: [email protected] School of Mathematics, Korea Institute for Advanced Study, Hoegiro 87, Dongdaemun- gu, Seoul 130-722, Korea E-mail address: [email protected]
WISE-2MASS all-sky infrared galaxy catalog for large scale structure

András Kovács
Institute of Physics, Eötvös Loránd University, 1117 Budapest, Pázmány Péter sétány 1/A, Hungary
MTA-ELTE EIRSA "Lendulet" Astrophysics Research Group, 1117 Budapest, Pázmány Péter sétány 1/A, Hungary

István Szapudi
Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822, USA

Mon. Not. R. Astron. Soc. Submitted 2013; printed 3 January 2014 (MN LaTeX style file v2.2).
DOI: 10.1093/mnras/stv063. arXiv:1401.0156. PDF: https://arxiv.org/pdf/1401.0156v2.pdf
Keywords: catalogues - large-scale structure of Universe

ABSTRACT
We combine photometric information of the WISE and 2MASS infrared all-sky surveys to produce a clean galaxy sample for large-scale structure research. Adding 2MASS colors improves star-galaxy separation substantially at the expense of losing a small fraction of the galaxies: 93% of the WISE objects within the W1 < 15.2 mag limit have 2MASS observations as well. We use a class of supervised machine learning algorithms, Support Vector Machines (SVM), to classify objects in our large data set. We used the SDSS PhotoObj table with known star-galaxy separation as a training set for classification, and the GAMA spectroscopic survey for determining the redshift distribution of our sample. Varying the combination of photometric parameters input into our algorithm revealed that W1_WISE − J_2MASS is a simple and effective star-galaxy separator, capable of producing results comparable to the multi-dimensional SVM classification. The final catalog has an estimated ∼ 2% stellar contamination among 5 million galaxies with z_med ≈ 0.17. This full sky galaxy map with well controlled stellar contamination will be useful for cross correlation studies.
INTRODUCTION
In recent years, sky surveys have been producing astronomical data at a rapidly accelerating pace, resulting in what is commonly called the "data avalanche". The large quantity of data necessitates automated algorithms for filtering, photometric selection, and estimation of observables such as redshifts. Each object in a catalog has multiple properties, thus algorithms have to explore high-dimensional configuration spaces in large, often connected, databases. Such high dimensional spaces can be effectively explored with machine learning techniques, such as the support vector machines (SVM) used in our work.
When analyzing an object catalog, the most fundamental, and often most challenging, task is star-galaxy (possibly QSO) separation. A simple separator between stars and galaxies is a morphological measurement, where extended sources are classified as galaxies (Vasconcellos et al. 2011). Morphology, however, loses its power at fainter magnitudes, a problem for wide-field surveys, e.g., Pan-STARRS (Kaiser et al. 2010), Euclid (Amendola et al. 2013), BigBOSS (Schlegel et al. 2011), DES (The Dark Energy Survey Collaboration 2005), and LSST (LSST Science Collaboration et al. 2009). At the fainter end, the most widely used tools for object classification are color-color diagrams: different types of objects will appear in different regions according to the shape of their spectral energy distribution. Classification methods based on color-color selection were employed for star-galaxy separation (Pollo et al. 2010) or for finding special classes of sources, such as high/low redshift QSOs, AGNs, starburst galaxies, or variable stars (e.g., Richards et al. 2002; Chiu et al. 2005; Stern et al. 2012; Brightman & Nandra 2012, and references therein).
Machine learning techniques, in particular SVMs, are gaining popularity in astronomical data mining and analysis due to their effectiveness and relative simplicity. For instance, Woźniak et al. (2004) analyzed variable sources with an SVM in 5 dimensions constructed from their period, amplitude and three colours; Huertas-Company et al. (2009) estimated morphological properties of near-infrared galaxies using SVM with 12 parameters, while Solarz et al. (2012) created a star-galaxy separation algorithm based on mid and near-infrared colors. Recently, Małek et al. (2013) used VIPERS and VVDS surveys to perform object classification into 3 groups: stars, galaxies, and AGNs. The Photometric Classification Server for the prototype of the Panoramic Survey Telescope and Rapid Response System 1 (Pan-STARRS1) uses SVM as well (Saglia et al. 2012).
While star-galaxy separation can be performed from WISE colors alone (Kovács et al. 2013), it comes at the expense of severe cuts that are still sensitive to contamination from the Moon and necessitate complex masks. Adding 2MASS observations to WISE cleans up the data significantly. With several open source implementations and computationally modest cost (Fadely et al. 2012), we set out to use the SVM algorithm for separating stars and galaxies in the matched WISE-2MASS photometric data. Our principal goal is to create a clean catalog of galaxies observed by WISE and 2MASS suitable for large scale structure and cross-correlation studies. At the same time, we will show that our selection algorithms are suitable for producing clean stellar samples as well. The galaxy maps we create are useful for cross-correlation studies, such as ISW measurements and galaxy-CMB lensing correlations, while the large data sets of stars may constrain stellar streams and Galactic structure in general.

Figure 1. Estimated redshift distribution of our WISE-2MASS galaxy sample, after matching to the GAMA spectroscopic dataset. We found a pair for 84% of the WISE-2MASS galaxies. This sample has z_median = 0.17, slightly deeper than the previous full sky WISE galaxy sample produced with similar methods (Kovács et al. 2013).
The paper is organized as follows. Datasets and algorithms are described in Section 2, while our results are presented in Section 3, with detailed discussion, comparisons, and interpretation.
DATASETS AND METHODOLOGY
We combine measurements of two all-sky surveys in the infrared, the Wide-Field Infrared Survey Explorer (WISE, Wright et al. (2010)) and the 2-Micron All-Sky Survey (2MASS, Skrutskie et al. (2006)). We use photometric measurements of the WISE satellite, which surveyed the sky at four different wavelengths: 3.4, 4.6, 12 and 22 µm (W1-W4 bands). Following Goto et al. (2012) and Kovács et al. (2013) we select sources to a flux limit of W1 ≤ 15.2 mag to have a fairly uniform dataset. We add 2MASS J, H and K magnitudes conveniently available in the WISE catalog. We keep the 93% of the WISE objects with W1 ≤ 15.2 mag that have 2MASS observations. We note that this choice allows us to produce a deeper catalog than the 2MASS Extended Source Catalog (XSC, Jarrett et al. (2000)), as proper identification of fainter 2MASS objects becomes possible.
To apply machine learning techniques, one needs to identify a "training set", a set of objects with known classification. We choose a smaller region of Stripe 82 in the Sloan Digital Sky Survey (SDSS, Abazajian et al. (2009)), deeper than our catalog and located at 327.5 < RA < 338.5 and −1.25 < Dec < 1.25. We performed the cross-matching with the KD-Tree (Bentley 1975) algorithm as implemented in the python package scipy. We found an SDSS match for 99.4% of the 46,749 WISE-2MASS objects using a 3" matching radius. As a further refinement, we applied a W1 ≥ 12.0 magnitude cut to exclude bright objects with potentially problematic SDSS classification.
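The KD-tree cross-matching step described above can be sketched as follows. This is an illustrative reconstruction (the function name and the unit-vector conversion are ours, not from the paper): catalog positions are mapped to unit vectors on the sphere so that scipy's KD-tree chord distances encode angular separations.

```python
import numpy as np
from scipy.spatial import cKDTree

def crossmatch(ra1, dec1, ra2, dec2, radius_arcsec=3.0):
    """For each source in catalog 1, find the nearest catalog-2 source and
    flag it as matched if the angular separation is within radius_arcsec."""
    def to_xyz(ra, dec):
        # unit vectors on the sphere, so chord length encodes angular distance
        ra, dec = np.radians(np.asarray(ra, float)), np.radians(np.asarray(dec, float))
        return np.column_stack((np.cos(dec) * np.cos(ra),
                                np.cos(dec) * np.sin(ra),
                                np.sin(dec)))
    tree = cKDTree(to_xyz(ra2, dec2))
    chord, idx = tree.query(to_xyz(ra1, dec1), k=1)
    # convert 3D chord length back to an angle, then to arcseconds
    sep_arcsec = np.degrees(2.0 * np.arcsin(np.clip(chord / 2.0, 0.0, 1.0))) * 3600.0
    return sep_arcsec <= radius_arcsec, idx
```

With a 3" radius this reproduces the kind of nearest-neighbour pairing used here for the SDSS and GAMA matches.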
As an exploratory test, we downloaded 2MASS XSC data from the same coverage, finding 1,195 galaxies. The WISE-2MASS sample contains 5,922 objects classified as galaxies in the SDSS PhotoObj table. We will show that the fraction of properly identified galaxies reaches ∼ 75% even with our simplest algorithms; thus we are able to broaden the 2MASS XSC significantly.
The corresponding redshift distribution of the matched objects classified as galaxies is shown in Figure 1. These redshifts are provided by matching with the Galaxy and Mass Assembly (GAMA, Driver et al. (2011)) dataset, at the full GAMA coverage of 290 deg². We used 3" as a matching radius, and found a pair for 84% of the WISE-2MASS objects, which is in good agreement with the findings of Goto et al. (2012) and Kovács et al. (2013). The estimated median redshift of the WISE-2MASS sample is z_median ≈ 0.17.
Support Vector Machines
SVM designates a subclass of supervised learning algorithms for classification in a multidimensional parameter space. These methods include extensions to nonlinear models of the generic (linear) algorithm developed by Cortes & Vapnik (1995). SVMs carry out object classification and/or regression by calculating decision hyperplanes between sets of points having different class memberships. A central concept of SVM learning is the training set, a special set of objects that supplies the machine with classified examples. Based on its properties, the classifier is tuned, and the hyperplane between different classes is determined. A training set of a few thousand objects is usually sufficient for simpler classification problems.
The algorithm itself, aided by a non-linear kernel function, searches for a hyperplane which maximizes the distance from the boundary to the closest points belonging to the separate classes of objects (Małek et al. 2013). The kernel is a symmetric function that maps data from the input space X to the feature space F. For our analysis we chose a Gaussian radial basis kernel (RBK) function, defined as:
k(xi, xj) = exp(−γ ||xi − xj||²)    (1)
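Equation (1) can be evaluated directly; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def rbf_kernel(xi, xj, gamma):
    """Gaussian radial basis kernel k(xi, xj) = exp(-gamma * ||xi - xj||^2)."""
    xi, xj = np.asarray(xi, float), np.asarray(xj, float)
    return float(np.exp(-gamma * np.sum((xi - xj) ** 2)))
```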
where ||xi − xj|| is the Euclidean distance between xi and xj. The product of the kernel function is a non-linear representation of each parameter from the input to the feature space. The RBK kernel is often used as an SVM kernel function to construct the non-linear feature map; we chose it for its effectiveness and simplicity. Apart from kernel functions, SVM offers a whole set of parametrization choices. We chose 'C-classification' because of its good performance and only two free parameters. C is the cost function, i.e. a trade-off parameter that sets the width of the margin separating classes of objects. One can set a small margin of separation by choosing a larger C, but increasing the C parameter too much can lead to over-fitting. A reduced C value smoothes the hyperplane between different classes of objects, which increases the chance of mis-classifications (Małek et al. 2013). The second parameter, γ, determines the topology of the decision surface. A low value of γ sets a rather rigid, complicated decision boundary, while a value of γ that is too high gives a very smooth decision surface, again causing mis-classifications. We used the SVM implementation in the free python package scikit-learn.

Figure 2. Color-coded maps illustrate star contamination, galaxy completeness, and accuracy for every pair of input parameters (W1, W1−W2, W2−W3, W3−W4, J−H, H−K, W1−J, W2−H, W3−K). All panels suggest that W1−J is the dominant parameter in star-galaxy separation. We note, however, that SVM failed to produce precise results using W1 alone: every object was classified as a star with that choice, thus we excluded the W1-only case from the analysis. Combinations of W1 and other parameters, however, are preserved, as they produce valuable results.
DISCUSSION
SVM outputs
We use the SVM algorithm implemented in python. First we performed tests to tune both the C and γ parameters, and found the lowest classification errors with C = 10.0 and γ = 0.1. Then we experimentally determined the number of input parameters giving the best classification efficiency. We used 8,000 objects as a "training set", and 2,000 objects for control, i.e. for testing the efficiency of our algorithms. We invoke the terminology of machine learning, and use "True" (T) and "False" (F) labels to distinguish between objects that are classified correctly and those with a false identification.
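A minimal scikit-learn sketch of this setup with the quoted C-classification parameters (C = 10.0, γ = 0.1); the two-color clusters below are synthetic stand-ins for the real WISE-2MASS training set, not the paper's data:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for the training set: two well-separated "color" clusters
# (label 0 = star, 1 = galaxy). Real inputs would be WISE/2MASS colors.
rng = np.random.default_rng(0)
stars = rng.normal(loc=[0.6, -0.5], scale=0.05, size=(200, 2))
galaxies = rng.normal(loc=[0.2, -2.0], scale=0.05, size=(200, 2))
X = np.vstack([stars, galaxies])
y = np.array([0] * 200 + [1] * 200)

# C-classification with a Gaussian RBF kernel and the tuned parameters from the text
clf = SVC(kernel="rbf", C=10.0, gamma=0.1)
clf.fit(X, y)

# classify two new objects, one drawn from each cluster center
pred = clf.predict([[0.6, -0.5], [0.2, -2.0]])
```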
We also define five measures of SVM performance:
• Star contamination = F_S / (T_S + F_S)
• Galaxy contamination = F_G / (T_G + F_G)
• Star completeness = T_S / (T_S + F_G)
• Galaxy completeness = T_G / (T_G + F_S)
• Accuracy = (T_G + T_S) / (T_G + F_G + T_S + F_S)
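These five measures can be computed from true and predicted labels as follows (an illustrative helper with our naming; 'S' = star, 'G' = galaxy):

```python
def svm_metrics(true_labels, pred_labels):
    """Contamination, completeness and accuracy as defined in the text."""
    pairs = list(zip(true_labels, pred_labels))
    TS = sum(1 for t, p in pairs if t == 'S' and p == 'S')  # true stars
    FS = sum(1 for t, p in pairs if t == 'G' and p == 'S')  # galaxies called stars
    TG = sum(1 for t, p in pairs if t == 'G' and p == 'G')  # true galaxies
    FG = sum(1 for t, p in pairs if t == 'S' and p == 'G')  # stars called galaxies
    return {
        'star_contamination': FS / (TS + FS),
        'galaxy_contamination': FG / (TG + FG),
        'star_completeness': TS / (TS + FG),
        'galaxy_completeness': TG / (TG + FS),
        'accuracy': (TG + TS) / (TG + FG + TS + FS),
    }
```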
We used the following set of colors/magnitudes as input parameters: W1, W1−W2, W2−W3, W3−W4, J−H, H−K, W1−J, W2−H, and W3−K. Initially, we supplied SVM with all possible pairs of this set, and obtained contamination, completeness, and accuracy. As shown in Figure 2, the parameter W1−J is an astoundingly good star-galaxy separator. Either alone or combined with any other parameter, W1−J guarantees the lowest star contamination, the highest galaxy completeness, and the highest accuracy. For instance, the star contamination for the combination of W1 and W1−J, or H−K and W1−J, is as low as ∼ 3%, while the galaxy completeness is ∼ 94%. We observed improving trends in contamination, completeness, and accuracy for both stars and galaxies. High completeness values for the star sample can be explained by the fact that the sample is dominated by stars, thus false galaxies cannot affect star completeness significantly. We applied a W1−J < −1.7 cut, and now show the consequences.
Next we supplied SVM with more parameters. We started with W1−W2 alone, then added one more parameter in each step. Our findings are summarized in Figure 3. We qualitatively confirmed our former results, namely that the combination of WISE and 2MASS parameters increases the SVM performance. For WISE colors only, the galaxy completeness is at the level of ∼ 40%, while with all parameters it reaches ∼ 95%. At the same time, the star contamination decreased from ∼ 15% to ∼ 3%. Finally, similar trends are seen for the accuracy, which increased by ∼ 10% upon adding the 2MASS parameters.
SVM vs. color-color and color-magnitude cuts
The findings of the previous subsection suggest that the separation of stars and galaxies can be achieved with a linear cut in the W1 vs. W1−J color-magnitude plane. Stellar contamination and galaxy completeness are then comparable to those of the multicolor SVM algorithm, but with a simpler and faster method. Figure 4 shows the estimated stellar contamination and the ratio of used and lost galaxies for different W1−J cuts. We choose W1−J < −1.7 for our purposes, as it guarantees the lowest stellar contamination while ∼ 75% of the galaxies are still classified as galaxies.
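The resulting color-magnitude selection can be applied directly to arrays of magnitudes; a sketch (helper name ours) combining the W1 magnitude limits used in this work with the W1−J < −1.7 criterion:

```python
import numpy as np

def select_galaxies(w1, j):
    """Boolean mask implementing 12.0 <= W1 <= 15.2 and W1 - J < -1.7."""
    w1, j = np.asarray(w1, float), np.asarray(j, float)
    return (w1 >= 12.0) & (w1 <= 15.2) & (w1 - j < -1.7)
```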
Visualizing this parameter choice, in Figure 5 we show the W1 vs. W1−J plane and WISE color-color diagrams for a subsample of 10,000 WISE-2MASS objects. Classes are indicated by SDSS in this comparison. We note that a remarkable separation of stars and galaxies can be seen in the left panel of Figure 5. Objects overplotted with gray crosses justify our W1 ≥ 12.0 magnitude cut, as a large subsample of SDSS "galaxies" is embedded in the definite stellar locus of this plot. This fact suggests that these objects might have been mis-classified by SDSS, and their usage in a training set is unsafe. We emphasize that neither our SVM methods nor the simple W1−J based galaxy selection is affected, since we removed the brightest objects from our sample.
Further comparisons
Next we compare our galaxy sample to those of Goto et al. (2012) and Kovács et al. (2013). While those works used observations in all four WISE bands, we only need W1 from WISE. As a result, artifacts are significantly reduced, most notably the stripe-shaped over-densities at several locations across the sky shown in the upper panel of Figure 6. As pointed out by Kovács et al. (2013), the stripes are associated with the scanning strategy of the WISE survey. Different WISE bands have different sensitivities and sky coverage, and therefore affect the uniformity of a full sky sample through moon-glow contamination. Kovács et al. (2013) handle this issue with a special moon-contamination mask using the 'moonlev' flag of the WISE database. The stripes are not present in a W1−J < −1.7 selected dataset.

Figure 5. Left: a simple star-galaxy separator which uses only W1 and the W1−J color; the separation of stars and galaxies is remarkably strong in this parameter space. Right: color-color plots of the four WISE bands, showing the special galaxy separator cuts applied in earlier work (Goto et al. 2012; Kovács et al. 2013); this illustrates that star-galaxy separation on traditional color-color planes with linear cuts is challenging if one wants to use a large fraction of the achievable galaxy sample. Both: gray over-plotted crosses show objects removed by the W1 ≥ 12.0 magnitude cut. Interestingly, many SDSS "galaxies" lie far from the galaxy locus identified here; our W1−J < −1.7 galaxy selection cut is not affected by this SDSS subsample.
Another difference is the estimated stellar contamination, which is ∼ 7% for the Goto et al. (2012) and Kovács et al. (2013) cut vs. ∼ 1.8% for our present sample, estimated from the same SDSS classification. At the same time, while the previous works were able to keep ∼ 21% of all the galaxies, we presently achieve ∼ 75% galaxy completeness with the new galaxy selection criteria. The SVM results reach ∼ 95% completeness for galaxies, with stellar contamination similar to that of the W1−J cut. Figure 8 summarizes our findings.
We note that the stellar contamination may be higher where the number density of stars is above the average, e.g. close to the Galactic plane, or at the Small and Large Magellanic Clouds. Among others, these regions should be masked out in order to avoid mis-classification problems.
There are other object separation algorithms in the literature, but they are either optimized for QSO-AGN selection, or limited to bright magnitude cuts (Jarrett et al. 2011). We argue, therefore, that a direct and detailed comparison is only possible with the results of Goto et al. (2012) and Kovács et al. (2013).
All-sky galaxy catalog
The W1−J cut appears to be a powerful tool for separating stars and galaxies, and it is the fastest and simplest option for creating a full-sky galaxy map. An SVM run for the full-sky WISE-2MASS sample would last ∼ 7 days on a dual-core laptop, whereas the simple cut can be realized by a query to the WISE-2MASS database. As W1−J < −1.7 gives the lowest contamination according to our tests, we selected galaxies with the following query:
w1mpro between 12.0 and 15.2 and n_2mass > 0 and w1mpro -j_m_2mass < -1.7 and glat not between -10 and 10
After a few hours of running, we downloaded ∼ 5 million WISE-2MASS objects from the IRSA website 1 . The dataset contained W1, W2, W3 and W4 for WISE, and J, H and K for 2MASS as photometric parameters, and we also downloaded 'cc flag' values to retain the possibility of further restrictions. Then we created a galaxy density map instead of galaxy number counts by normalizing the counts with the global average of the nonzero elements of the map. We applied WMAP's 9-year Temperature Mask, and an additional |b| < 20 cut to exclude Galactic regions and point sources.
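The counts-to-density normalization described above divides each kept pixel by the global mean of the nonzero pixels. A minimal pure-Python sketch on a toy pixel list (in practice this would run on a HEALPix map, e.g. with healpy):

```python
def counts_to_density(counts, mask):
    """counts: galaxy counts per pixel; mask: True where the pixel is kept.
    Normalizes by the global mean of the nonzero, unmasked pixels; masked
    pixels (e.g. Galactic plane, point sources) are set to None."""
    kept = [c for c, m in zip(counts, mask) if m and c > 0]
    mean = sum(kept) / len(kept)
    return [c / mean if m else None for c, m in zip(counts, mask)]

counts = [4, 8, 0, 6, 100]                 # last pixel masked out
mask = [True, True, True, True, False]
density = counts_to_density(counts, mask)
print(density)  # [0.666..., 1.333..., 0.0, 1.0, None]
```

The masked pixel's large count does not bias the normalization, which is the point of applying the mask before averaging.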
Our galaxy density map is shown in Figure 7. We note that this galaxy map does not suffer from the strange stripe-shaped over-densities caused by moon-glow that were present in the WISE galaxy sample of Kovács et al. (2013). Dense stripes in the galaxy field are due to non-uniform sky coverage in the W2, W3, and W4 bands, while W1 appears to preserve more uniformity. Thus using only the W1 band when selecting galaxies appears to be the reason for the surprising advantages of our simple cut. We note, however, that visual inspection reveals an under-dense spot in our galaxy sample on the left side of the Northern hemisphere. This region corresponds to the Dec > +60 Northern Equatorial cap, so it could probably be corrected, or at least masked out, with further refinements in the query or using object flags. Further details of this issue and the uniformity of our galaxy sample are left for future tests.
We note that the star-galaxy separation methods we developed are useful for selecting stellar samples as well. For instance, a W1-J -1.3 color cut should result in a clean sample of stars. However, a detailed selection of specific types of stars needs further refinements.
CONCLUSIONS
We focused on creating a full-sky galaxy map based on the joint analysis of the WISE and 2MASS photometric datasets. Using 2MASS colors adds useful information, while ∼ 93% of the WISE objects with W1 < 15.2 mag have 2MASS pairs. We performed star-galaxy separation using a class of widespread machine learning tools, Support Vector Machines. WISE-2MASS objects were cross-identified with SDSS objects, and the available SDSS PhotoObj classification data were used as training and control sets.
Exhaustive testing of the SVM algorithm with different parameters and inputs revealed that a simple W1-J photometric color cut produces a similarly clean data set as the SVM classification, at the expense of losing a modest fraction of the galaxies. Thus we finally opted for the simpler method, and produced a clean galaxy sample with ∼ 2% stellar contamination reaching ∼ 75% completeness. On both counts, the resulting full-sky survey represents a significant improvement over previous samples using WISE colors only for selection. The galaxy catalog we created with the simple W1-J -1.7 color cut contains N gal ≈ 5 million objects, with an estimated stellar contamination of 1.8%.
Further refinements, such as Galactic dust corrections, tests of magnitude limits, and the creation of masks, are left for future work. We plan to make our future, refined galaxy catalogs public. Our ultimate goal, however, is a full-sky WISE-2MASS-SuperCOSMOS matched galaxy map with photometric redshifts, extending the efforts of Bilicki et al. (2014). We plan to perform various cosmological tests with this future full-sky 3D galaxy catalog.
ACKNOWLEDGMENTS
We take immense pleasure in thanking the support of NASA grants NNX12AF83G and NNX10AD53G. In addition, AK acknowledges support from Campus Hungary fellowship program, and OTKA through grant no. 101666. We thank István Csabai for useful comments improving the SDSS-WISE2MASS matching properties, and target selection. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration, and data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This work also makes use of data products of the GAMA survey 2 , and the Sloan Digital Sky Survey 3 .
Figure 2. Measures of SVM performance are presented in the case of pairwise and single usage.

Figure 3. Measures of SVM performance are shown as a function of SVM parameters.

Figure 4. Star contamination and the fraction of the lost galaxies is shown.

Figure 5. Left: A simple star-galaxy separator which uses only W1 and the W1-J color. Separation of stars and galaxies is remarkably strong in this parameter space. Right: Color-color plots of the four WISE bands. We show the special galaxy separator cuts applied by Goto et al. (2012). This result illustrates that star-galaxy separation on traditional color-color planes with linear cuts is challenging, if one wants to use a large fraction of the achievable galaxy sample. Both: Gray over-plotted crosses show objects that were removed by the W1 12.0 magnitude cut. Interestingly, many SDSS "galaxies" lie far from the galaxy locus we have identified here. We note that our W1-J -1.7 galaxy selection cut is not affected by this SDSS subsample.

Figure 6. Gnomonic projection of galaxy number counts in HEALPix pixels at N_side = 128 is shown for the sample of Kovács et al. (2013) (top) and the present approach (bottom). Figures are centered on ℓ, b = 48.0, −48.0.

Figure 7. WISE-2MASS galaxy density map with our 'WMAP + |b| < 20' mask.

Figure 8. Distributions of galaxies on the color-color plane are shown for the previous selection cuts and the galaxies selected in the present work. A significant number of galaxies is identified properly with the W1-J -1.7 galaxy cut.
1 http://irsa.ipac.caltech.edu/
2 http://www.gama-survey.org/
3 http://www.sdss.org/
REFERENCES

Abazajian K. N., Adelman-McCarthy J. K., Agüeros M. A., Allam S. S., Allende Prieto C., An D., Anderson K. S. J., Anderson S. F., Annis J., Bahcall N. A., et al., 2009, ApJS, 182, 543

Amendola L., Appleby S., Bacon D., Baker T., Baldi M., Bartolo N., Blanchard A., Bonvin C., 2013, Living Reviews in Relativity, 16, 6

Bentley J. L., 1975, Communications of the ACM, 18, 509

Bilicki M., Jarrett T. H., Peacock J. A., Cluver M. E., Steward L., 2014, ApJS, 210, 9

Brightman M., Nandra K., 2012, MNRAS, 422, 1166

Chiu K., Zheng W., Schneider D. P., Glazebrook K., Iye M., Kashikawa N., Tsvetanov Z., Yoshida M., Brinkmann J., 2005, AJ, 130, 13

Cortes C., Vapnik V., 1995, Machine Learning, 20, 273

Driver S. P., Hill D. T., et al., 2011, MNRAS, 413, 971

Fadely R., Hogg D. W., Willman B., 2012, ApJ, 760, 15

Goto T., Szapudi I., Granett B. R., 2012, MNRAS, 422, L77

Huertas-Company M., Tasca L., Rouan D., Pelat D., Kneib J. P., Le Fèvre O., Capak P., Kartaltepe J., Koekemoer A., McCracken H. J., Salvato M., Sanders D. B., Willott C., 2009, A&A, 497, 743

Jarrett T. H., Chester T., Cutri R., Schneider S., Skrutskie M., Huchra J. P., 2000, AJ, 119, 2498

Jarrett T. H., Cohen M., Masci F., Wright E., Stern D., Benford D., Blain A., Carey S., 2011, ApJ, 735, 112

Kaiser N., Burgett W., Chambers K., 2010, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, 773

Kovács A., Szapudi I., Granett B. R., Frei Z., 2013, MNRAS, 431, L28

LSST Science Collaboration: Abell P. A., Allison J., Anderson S. F., Andrew J. R., Angel J. R. P., Armus L., Arnett D., Asztalos S. J., Axelrod T. S., et al., 2009, ArXiv e-prints

Małek K., Solarz A., Pollo A., Fritz A., Garilli B., Scodeggio M., Iovino A., Granett B. R., Abbas U., Adami C., Arnouts S., Bel J., Percival W. J., Phleps S., Wolk M., Zamorani G., 2013, A&A, 557, A16

Pollo A., Rybka P., Takeuchi T. T., 2010, A&A, 514, A3

Richards G. T., Fan X., Newberg H. J., Strauss M. A., Vanden Berk D. E., Schneider D. P., Yanny B., Stoughton C., SubbaRao M., York D. G., 2002, AJ, 123, 2945

Saglia R. P., Tonry J. L., Bender R., Greisel N., Seitz S., Senger R., Snigula J., Phleps S., Stubbs C. W., Wainscoat R. J., 2012, ApJ, 746, 128

Schlegel D., Abdalla F., Abraham T., Ahn C., Allende Prieto C., Annis J., Aubourg E., Azzaro M., Zhai C., Zhang P., 2011, ArXiv e-prints

Skrutskie M. F., Cutri R. M., Stiening R., Weinberg M. D., Schneider S., Carpenter J. M., Beichman C., Capps R., Chester T., Elias J., Wheelock S., 2006, AJ, 131, 1163

Solarz A., Pollo A., Takeuchi T. T., Pȩpiak A., Matsuhara H., Wada T., Oyabu S., Takagi T., Goto T., Ohyama Y., Pearson C. P., Hanami H., Ishigaki T., 2012, A&A, 541, A50

Stern D., Assef R. J., Benford D. J., Blain A., Cutri R., Dey A., Eisenhardt P., Griffith R. L., Jarrett T. H., Lake S., Masci F., Petty S., Stanford S. A., Tsai C.-W., Wright E. L., Yan L., Harrison F., Madsen K., 2012, ApJ, 753, 30

The Dark Energy Survey Collaboration, 2005, ArXiv Astrophysics e-prints

Vasconcellos E. C., de Carvalho R. R., Gal R. R., LaBarbera F. L., Capelato H. V., Frago Campos Velho H., Trevisan M., Ruiz R. S. R., 2011, AJ, 141, 189

Woźniak P. R., Williams S. J., Vestrand W. T., Gupta V., 2004, AJ, 128, 2965

Wright E. L., Eisenhardt P. R. M., Mainzer A. K., et al., 2010, AJ, 140, 1868

Yan L., Donoso E., Tsai C.-W., et al., 2012, ArXiv e-prints
Detailed Electronic Structure of the Three-Dimensional Fermi Surface and its Sensitivity to Charge Density Wave Transition in ZrTe 3 Revealed by High Resolution Laser-Based Angle-Resolved Photoemission Spectroscopy

Shou-Peng Lyu, Li Yu, Jian-Wei Huang, Cheng-Tian Lin, Qiang Gao, Jing Liu, Guo-Dong Liu, Lin Zhao, Jie Yuan, Chuang-Tian Chen, Zu-Yan Xu, Xing-Jiang Zhou

Affiliations: National Lab for Superconductivity and National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, 100190 Beijing, China; University of Chinese Academy of Sciences, 100049 Beijing, China; Max-Planck-Institut für Festkörperforschung, Heisenbergstrasse 1, 70569 Stuttgart, Germany; Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, 100190 Beijing, China; Collaborative Innovation Center of Quantum Matter, 100871 Beijing, China

(Dated: February 26, 2019)

DOI: 10.1088/1674-1056/27/8/087503
arXiv: 1902.08872 (https://arxiv.org/pdf/1902.08872v1.pdf)

The detailed information of the electronic structure is the key for understanding the nature of charge density wave (CDW) order and its relationship with superconducting order at the microscopic level. In this paper, we present a high resolution laser-based angle-resolved photoemission spectroscopy (ARPES) study of the three-dimensional (3D) hole-like Fermi surface around the Brillouin zone center in a prototypical quasi-one-dimensional CDW and superconducting system, ZrTe 3 . Double Fermi surface sheets are clearly resolved for the 3D hole-like Fermi surface around the zone center. The 3D Fermi surface shows a pronounced shrinking with increasing temperature. In particular, the quasiparticle scattering rate along the 3D Fermi surface experiences an anomaly near the charge density wave transition temperature of ZrTe 3 (∼63 K). A signature of electron-phonon coupling is observed with a dispersion kink at ∼20 meV; the strength of the electron-phonon coupling around the 3D Fermi surface is rather weak. These results indicate that the 3D Fermi surface is also closely connected to the charge-density-wave transition and suggest a more global impact on the entire electronic structure induced by the CDW phase transition in ZrTe 3 .
How the superconducting phase competes or coexists with various magnetic or charge-ordering phases is a long-standing fundamental issue in modern condensed matter physics.
Especially in low-dimensional systems like high T c cuprates [1], heavy fermion superconductors [2] and iron-based superconductors [3], superconducting order can emerge in the vicinity of a multiple-order environment. More and more experimental evidence collected recently points to a close relationship between superconductivity and phase competition or phase coexistence [4,5]. This triggers, on the other hand, great interest in various classical charge density wave (CDW) systems with coexisting superconductivity [6].
These systems can serve as a playground for investigating the nature of CDW order and, most importantly, its relationship with superconductivity. Examples along this line include systems like 1T-TiSe 2 [7,8], TbTe 3 [9,10], etc. Among all these materials, ZrTe 3 is one of the prototypical quasi-one-dimensional (quasi-1D) systems undergoing both a CDW transition at ∼63 K and a superconducting transition at ∼2 K [11]. Further pressure-dependent measurements [12][13][14] and ion substitution studies of ZrTe 3 [4,[15][16][17] revealed an unusual connection between CDW and superconductivity, suggesting a competing-type relationship between them at low temperature. This makes ZrTe 3 a unique candidate to study the complexity behind the CDW-superconductivity entanglement in such a quasi-1D system.
ARPES is one of the most direct techniques to determine the detailed electronic structure, the related Fermi surface and the electron dynamics, for better understanding the physical properties of materials at a microscopic level. The semi-metallic character of ZrTe 3 and its Fermi surface topology have been studied by several earlier ARPES measurements [18,19]. Up to now, most ARPES studies of ZrTe 3 have concentrated on the quasi-1D Fermi surface sheet, which is believed to be most relevant for the CDW formation. A partial gap or pseudogap feature is observed around the D-point region at the Brillouin zone (BZ) corner, which was associated with strongly fluctuating CDW order that sets in at a high temperature above 200 K [18], even though the CDW transition occurs at a much lower temperature, 63 K. Such a strongly fluctuating character of CDW order is common in many low-dimensional systems.
The nearly commensurate Fermi surface nesting vector q n , deduced by connecting neighboring pseudogap regions, is consistent with the CDW vector q CDW = (1/14, 0, 1/3) determined from direct electron microscope measurements [20], which supports a conventional Fermi surface nesting mechanism of CDW formation in ZrTe 3 [21,22]. However, a recent Raman scattering measurement suggested that electron-phonon coupling can play a dominant role for the CDW order beyond the conventional nesting picture [23]. It remains to be investigated whether the remaining part of the quasi-1D Fermi surface and the 3D Fermi surface around Γ are possibly responsible or responsive to the CDW order in the ZrTe 3 system. It is still an open question how and where superconductivity emerges in momentum space in ZrTe 3 .
In this paper, we present detailed ARPES measurements of the 3D Fermi surface and its associated electron dynamics using a newly developed laser-based ARPES system with ultra-high instrumental resolution. We clearly resolved the double Fermi surface sheets of the 3D hole-like Fermi surface around the Γ point. A large shrinking of the 3D Fermi surface with increasing temperature is observed. In particular, the quasiparticle scattering rate along the 3D Fermi surface exhibits an anomaly near the CDW transition temperature ∼63 K, indicating the sensitivity of the 3D Fermi surface to the CDW transition. A signature of electron-phonon coupling is revealed with a dispersion kink at ∼20 meV; the electron-phonon coupling strength is rather weak. These observations provide new and comprehensive information on the Fermi surface topology and the electron dynamics of the 3D Fermi surface in ZrTe 3 that are important for understanding the CDW formation, the superconductivity and their relationship in ZrTe 3 .
The ZrTe 3 single crystals used in this work were prepared by the chemical vapor transport method with iodine as the transport agent. As shown in the resistivity measurement of ZrTe 3 in Fig. 1(b), a clear hump-like resistivity anomaly appears at about 63 K along the a-axis (blue solid line), which is attributed to the CDW formation, while a filamentary superconducting transition occurs below 2 K. Such a CDW signature is not present in the resistivity-temperature dependence along the b-axis (red dashed line in Fig. 1(b)). ARPES measurements were performed on our new laser-based system equipped with a 6.994 eV vacuum ultraviolet laser based on the non-linear optical crystal KBBF. It is equipped with a time-of-flight electron energy analyser (ARToF 10K by Scienta Omicron) which has two-dimensional probing capability in momentum space, i.e., it can detect all photoelectrons simultaneously within a detector acceptance angle of 30° (±15°). The energy resolution is ∼1 meV, and the angular resolution is ∼0.1°. A detailed description of the ARPES system can be found in reference [24]. Samples were cleaved in situ at 20 K and measured in ultrahigh vacuum with a base pressure better than 5×10 −11 mbar.
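The ±15° acceptance window translates into in-plane momentum coverage through the standard free-electron relation k∥ [Å −1 ] ≈ 0.5123 √(E_kin [eV]) sin θ. A sketch for the 6.994 eV laser; the work function value below is an assumed placeholder, not a number from the paper:

```python
import math

def k_parallel(e_kin_ev, theta_deg):
    """In-plane photoelectron momentum (1/Angstrom):
    k = sqrt(2*m_e*E_kin)/hbar * sin(theta) ~= 0.5123 * sqrt(E_kin) * sin(theta)."""
    return 0.5123 * math.sqrt(e_kin_ev) * math.sin(math.radians(theta_deg))

h_nu = 6.994             # photon energy (eV)
work_fn = 4.3            # assumed work function (eV), for illustration only
e_kin = h_nu - work_fn   # kinetic energy at the Fermi level
print(round(k_parallel(e_kin, 15.0), 3))  # ~0.218, momentum reach at the window edge
```

This small momentum window around Γ is precisely where the 3D hole-like Fermi surface sits, which is why a low-photon-energy laser suits this study.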
The crystal structure of ZrTe 3 (space group P21/m, Fig. 1(a)) consists of ZrTe 3 trigonal prisms stacked infinitely along the b-direction, forming quasi-1D prismatic chains [25,26]. The monoclinic unit cell contains two neighboring chains, oriented oppositely to each other and bound through the nearest inter-chain Zr-Te(1) bonds to form layers in the ab-plane.
The inter-layer bonding, however, relies on van der Waals forces, which makes the single crystal samples cleave naturally along the ab-planes. Within each layer, the nearest Te(2) and Te(3) atoms bond together to form a dimerised chain along the a-axis [26]. This is widely believed to be the essential element for the electronic properties in the CDW state. Band structure calculations [27] suggest that the Fermi surface of ZrTe 3 consists of two major components, as sketched in Fig. 2(a): a 3D hole-like Fermi surface sheet (blue) around the BZ center, which is formed by the hybridization between the Zr-4d orbitals and the Te(1)-5p orbitals via nearest inter-prism hopping, and a quasi-1D electron-like Fermi surface (green) at the BZ boundary along the b* direction, which is formed by the nearest hopping via the Te(2) and Te(3) chains.

A very clear band splitting feature can be observed in ZrTe 3 , as seen in Fig. 3. Two sets of Fermi surface sheets can be extracted, as shown in Fig. 3(a), where the red and blue lines represent the main Fermi surface and the split Fermi surface, respectively. The corresponding band structures measured along the k y direction are shown in Fig. 3(d) for four typical momentum cuts, as marked in Fig. 3(a). The corresponding momentum distribution curves (MDCs) at the Fermi level are plotted in Fig. 3(c). It is clear that, for each momentum cut, the corresponding MDCs have two sets of bands, one main band and one shoulder split band, with a total of four peaks which can be well fitted by four Lorentzian peaks, as marked by the four black arrows in each panel of Fig. 3(c). The presence of four bands can be directly seen from Fig. 3(b), which represents the second derivative image of the leftmost panel of Fig. 3(d). The detailed analysis of the band splitting around the Fermi surface gives the two-Fermi-surface-sheet picture shown in Fig. 3(a). Fig.
3(e) presents the energy distribution curves (EDCs) corresponding to the measured band structure of Cut 1 in Fig. 3(d). In addition to the band splitting on the Fermi surface, it is clear that, for the hole-like band near Γ (leftmost panel in Fig. 3(d)), there is a sharp band with its top at ∼240 meV (see also the EDCs in Fig. 3(e)), and a broad distribution of spectral weight extending to a binding energy of ∼120 meV. A similar result was reported before and was suggested to be caused by the bilayer-splitting effect in the sample surface layers [19]. Our results indicate that the feature near the binding energy of ∼200 meV does not represent two well-defined bands, but one well-defined band plus an envelope of broad spectral weight distribution.
Similar results were observed in recent work on ZrTe 5 [28]. Such a broad shoulder and hump-like feature that appears on top of a sharp band can be caused by the k z -effect [28]. The momentum cuts are shown in Fig. 4(a) and the corresponding band structures are shown in Fig. 4(b). Such a free choice of the cut direction in momentum space only becomes possible because of the unique ARToF 3D-data property, which maintains the same data continuity and measurement conditions over the entire 2D area. For each image in Fig. 4(b), the band structure is analysed by MDC fitting with two Lorentzian peaks. The obtained band dispersion of the main band is plotted in each panel of Fig. 4(b). From this quantitative MDC fitting, the MDC width at the Fermi level along the Fermi surface, and the Fermi velocity, can be obtained, as shown in Fig. 5. Fig. 5(a) shows the positions of the ten Fermi momenta along the Fermi surface corresponding to the 10 momentum cuts in Fig. 4(a).
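The MDC fitting described above models each momentum distribution curve as a sum of Lorentzian peaks (four per cut across both branches, or two per branch). A minimal sketch of the model; the peak parameters below are hypothetical, chosen only to mimic a main band plus a split shoulder on each side of Γ:

```python
def lorentzian(k, amp, k0, gamma):
    """Single Lorentzian peak; gamma is the half width at half maximum (HWHM)."""
    return amp * gamma**2 / ((k - k0)**2 + gamma**2)

def mdc_model(k, peaks):
    """MDC intensity as a sum of Lorentzian peaks (zero background assumed).
    peaks: list of (amplitude, center, HWHM) tuples."""
    return sum(lorentzian(k, *p) for p in peaks)

# Hypothetical parameters: main band and split shoulder on both sides of Gamma.
peaks = [(1.0, -0.30, 0.01), (0.5, -0.26, 0.01),
         (0.5, 0.26, 0.01), (1.0, 0.30, 0.01)]
print(round(mdc_model(-0.30, peaks), 3))  # 1.03, dominated by the main-band peak
```

In an actual analysis this model would be fitted to each measured MDC by least squares (e.g. with scipy.optimize.curve_fit); the fitted centers give the Fermi momenta and dispersions, and the fitted HWHMs give the scattering-rate proxy plotted against temperature.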
The EDCs at these 10 Fermi momenta along the Fermi surface are shown in Fig. 5(b), and the corresponding MDCs at the Fermi level along the 10 momentum cuts are shown in Fig. 5(c). The EDC width (full width at half maximum) is plotted in Fig. 5(d).

The major topic of this work is to investigate the temperature dependence of the 3D Fermi surface and its relationship with the CDW transition. The first issue addressed here is the relative change of the 3D Fermi surface topology with temperature. Because of the oval shape and the crystal symmetry, the Fermi surface distance is defined from the MDCs at the Fermi level (Fig. 6(b)) measured along typical momentum cuts, as indicated in Fig. 6(a). The distance between the two branches of the main band (black arrows) along the Fermi surface short axis, i.e., the vertical direction (k_x = 0, 0.05, 0.1 Å−1), at the Fermi level can be used to measure the 3D Fermi surface size change; its temperature dependence is presented in Fig. 6(d). A clear increase of the Fermi surface distance with decreasing temperature, on the scale of 0.01 Å−1, is revealed between 120 K and 20 K, which represents nearly a 3% relative expansion with decreasing temperature. This change of the Fermi surface distance can be accurately measured thanks to the ARToF-ARPES technique. The three momentum cuts show a similar variation of the Fermi surface distance with temperature, as shown in Fig. 6(d).
This suggests an overall expansion of the 3D Fermi surface with decreasing temperature in ZrTe 3. It is natural to ask whether this observation can be caused by lattice shrinkage with decreasing temperature. Neutron scattering measurements [29] of ZrTe 3 demonstrate a shrinkage of the lattice constant of about 0.1% from 120 K to 20 K. Such a small change of the lattice constant clearly cannot account for the ∼3% change of the 3D Fermi surface size we have observed.
In order to understand the origin of the 3D Fermi surface change with temperature, we also measured the variation of the band position for the 200 meV band near Γ at different temperatures. To be more specific, the top-most energy location can be determined by a Lorentzian fit of the EDC spectra extracted at the Γ point, as plotted in Fig. 6(c). The fitted temperature dependence of the peak position is shown in Fig. 6(e). It reveals an upward band shift of about 10 meV when the sample temperature changes from 120 K to 20 K. To the best of our knowledge, this is the first time that such a significant band shift with temperature has been observed for the 3D Fermi surface of ZrTe 3. If we assume a rigid band shift, such a 10 meV shift of the chemical potential would give rise to a ∼0.006 Å−1 change of the Fermi surface distance when the Fermi velocity is assumed to be 3 eV·Å; this is close to the observed change of the Fermi surface distance.

In Fig. 7(b) and (c), the black circles or lines are experimental data and the red lines represent the corresponding Lorentzian fits. As the temperature decreases from 120 K to 30 K, the fitted MDC width in Fig. 7(d) clearly shows a non-monotonic behavior. The MDC width first decreases from 120 K to about 60 K, suggesting a drastic decrease of the quasiparticle scattering rate. On further lowering the temperature towards 30 K, the MDC width rises again, implying an increase of the quasiparticle scattering rate at low temperature. The overall temperature dependence of the MDC width reveals a clear minimum that is close to the CDW coherent transition temperature T_CDW ∼ 63 K. We applied the same analysis to all the cuts along the entire 3D Fermi surface. The fitted MDC widths from several momentum cuts along the Fermi surface as a function of temperature are summarized in Fig. 7(d). They all show similar behavior with a minimum located at T_CDW ∼ 63 K.
This finding suggests that the MDC width anomaly across T_CDW is a general property along the whole 3D Fermi surface. To the best of our knowledge, this is the first spectroscopic signature showing that the 3D Fermi surface is directly associated with the CDW order in ZrTe 3. Following the same procedure, one can also apply a similar linewidth analysis to the EDCs at k_F of the main band, which is likewise associated with the quasiparticle scattering rate. It reveals a temperature dependence similar to that of the MDC linewidth, as shown in Fig. 7(e). The quasiparticle scattering rate decreases from 120 K, reaches a minimum around T_CDW, and rises again at low temperature.
The consistent finding from both the MDC and EDC linewidth analyses uncovers a new quasiparticle scattering channel in the main band in the CDW state. On the one hand, the reduction of the quasiparticle scattering rate above T_CDW is consistent with the increase of the metallicity observed in the resistance measurement (Fig. 1(b)). On the other hand, its rise below T_CDW is not compatible with the transport results. Except for the hump structure appearing along the a-axis around T_CDW, the resistance along both the a- and b-axes shows metallic behavior towards low temperature. This inconsistency between the temperature evolution of the scattering rate and the resistance suggests two important points. First, the hump anomaly and the further metallicity below T_CDW in the transport properties can be attributed to the quasi-1D Fermi surface at the BZ boundary, which plays a dominant role in the CDW formation at low temperature. This finding also agrees with earlier works [18,19]. Second, it is quite unusual that the CDW formation impacts the quasiparticle properties of the 3D Fermi surface even though the latter plays a minor role in generating the CDW order in ZrTe 3. In a classical CDW system, when part of the Fermi surface is gapped out at low temperature, the remaining electronic states should be less coupled and scattered, since some scattering channels have been blocked. On the contrary, here the electronic states of the main band of the 3D Fermi surface suffer from an extra scattering process while part of the quasi-1D Fermi surface has been gapped out. Especially if one stays with the conventional nesting picture for the CDW in ZrTe 3, it is quite unusual to find such an involvement of electronic states that are far from the nested part of the Fermi surface in the CDW state.
To gain more insight into this electronic renormalization effect that develops in the CDW state, a semi-quantitative self-energy analysis of the measured ARPES spectral function has been applied to the very same data, based on the single-particle
Green's function model, as shown in Fig. 8. The measurement temperature is 30 K, which is below T_CDW. The E-k image (Cut 1) is shown in Fig. 8(a), and the fitted dispersions measured along several typical momentum cuts, as indicated in Fig. 7(a), are shown in Fig. 8(b). Here the so-called bare band is approximated as a single straight line linking k_F and the band position at E_b = 100 meV. The MDCs at each binding energy are fitted with the Green's function, from which the real and imaginary parts of the self-energy can be extracted, as shown in Fig. 8(c) and (d), respectively. The overall self-energy is small; it remains almost unchanged except in the low-energy region, where a clear kink appears in the real part of the self-energy at an energy of ∼20 meV. For a metallic system like ZrTe 3, electron-phonon coupling is the most natural explanation for the self-energy anomaly observed here. This means that low-energy phonons, especially those on the 20 meV energy scale, can couple to the electronic states of the main band in the CDW order. However, we note that the overall electron-phonon coupling is weak. At present, there is no straightforward answer to the question of how the CDW formation is related to the electron-phonon coupling along the 3D Fermi surface.
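With a linear bare band, the self-energy extraction described above reduces to simple arithmetic on the MDC fit results: Re Σ(E) = E − ε_bare(k_m(E)) and Im Σ(E) = |v_bare|·Δk(E)/2. The sketch below uses invented fit results, not the measured data:

```python
import numpy as np

# Illustrative MDC fit results (all numbers invented): binding energies E (eV,
# negative below E_F), MDC peak positions km and FWHM dk (both in 1/Angstrom).
E = np.linspace(0.0, -0.100, 11)
kF = 0.20
km = kF + E / 2.5 - 0.004 * np.exp(E / 0.02)   # dispersion with a low-energy kink
dk = 0.015 + 0.05 * np.abs(E)

# Bare band: straight line through (kF, 0) and the measured point at E_b = 100 meV
v_bare = E[-1] / (km[-1] - kF)                 # bare velocity (eV * Angstrom)
eps_bare = v_bare * (km - kF)                  # bare-band energy at the MDC positions

re_sigma = E - eps_bare                        # effective Re(Sigma)(E)
im_sigma = np.abs(v_bare) * dk / 2             # Im(Sigma) from the MDC half-width
```

By construction Re Σ vanishes at the 100 meV anchor point, and the kink in the invented dispersion shows up as a low-energy bump in Re Σ, analogous to the ∼20 meV feature in Fig. 8(c).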
In summary, we have performed high-resolution laser-based ARPES measurements on ZrTe 3, focusing on the 3D hole-like Fermi surface around Γ and the related electron dynamics. We have clearly resolved double Fermi surface sheets for the 3D hole-like Fermi surface around the zone center. The 3D Fermi surface shows a pronounced shrinkage with increasing temperature. In particular, the quasiparticle scattering rate along the 3D Fermi surface experiences an anomaly near the charge density wave transition temperature of ZrTe 3 (∼63 K). A signature of electron-phonon coupling is observed as a dispersion kink at ∼20 meV; the strength of the electron-phonon coupling around the 3D Fermi surface is rather weak. These results indicate that the 3D Fermi surface is closely coupled to the charge-density-wave transition and suggest a more global impact of the CDW phase transition on the entire electronic structure of ZrTe 3.
Acknowledgement
and Te(3)-5p orbitals along the dimerised chains. The corresponding 2D projected BZ, which is used in the ARPES measurements, is plotted underneath the 3D BZ. The letters Γ, B, Y, and D denote the high-symmetry points in the projected BZ.
Figure 2 shows the Fermi surface and constant energy contours of ZrTe 3 measured at 30 K. Typical band structures along some typical momentum cuts are shown in Fig. 3. Thanks to the high efficiency of our new ARToF-analyzer-based ARPES measurements, the entire 3D hole-like Fermi surface can be covered by two measurements: one covers the central Fermi surface region (Fig. 2(b)) while the other covers the Fermi surface tip region (Fig. 2(d)). After carefully joining these two measurement results and symmetrizing them in a twofold manner within the first BZ, a complete description of the 3D hole-like Fermi surface is obtained, as shown in Fig. 2(b). It reveals a clear Fermi surface with an anisotropic oval shape centered at the Γ point. This Fermi surface shape is consistent with earlier ARPES results [18,19]. Fig. 2(c) and (d) present the constant energy contours of ZrTe 3 at different binding energies E_b; three characteristic energies are picked here: E_b = 0, E_b = 180 meV, and E_b = 300 meV. The contour area increases with increasing E_b, which is consistent with the hole-like nature of the 3D Fermi surface. At E_b = 180 meV, a shadow feature appears near the BZ center due to the touching of another hole-like band near Γ. At E_b = 300 meV, a new oval feature appears near the BZ center due to the second hole-like band near Γ, as shown in the band structure in Fig. 3.
In Fig. 3(d), sharp EDC peaks are observed near the Fermi momenta k_L and k_R.
Figure 4 shows the band structure of ZrTe 3 with momentum cuts perpendicular to the 3D Fermi surface, which facilitates the study of the band dispersion and electron dynamics. The locations of the 10 momentum cuts are marked in Fig. 4(a).
The EDC width along the Fermi surface varies between 30 and 40 meV. The MDC width (full width at half maximum), on the other hand, increases when the momentum shifts from the central region to the tip region along the 3D Fermi surface. The Fermi velocity, obtained from the extracted MDC dispersions in Fig. 4(b), shows a slight maximum between the central and tip regions of the 3D Fermi surface. The maximum and minimum Fermi velocities are about 4.19 eV·Å (corresponding to 6.6×10^5 m·s−1) and 2.18 eV·Å (corresponding to 3.4×10^5 m·s−1), respectively. The extraction of these quantities in Fig. 5 is important for understanding the role of the 3D Fermi surface in dictating the physical properties of ZrTe 3.
This is close to the observed 0.01 Å−1 change of the Fermi surface distance, indicating that the chemical potential shift plays a major role in causing the Fermi surface distance change with temperature. The reason behind such a considerable chemical potential shift with temperature can be either the lattice constant variation or a strong imbalance of the density of states (DOS) distribution around the Fermi level, similar to the mechanism in semiconductors. It is well known in the semiconductor community that the Fermi level can be tuned away from the middle of the band gap. In this case, the model for the Fermi level or chemical potential is E_F = E_c − k_B T·log(N_c/N_v), where E_c is the conduction band energy, k_B is the Boltzmann constant, T is the absolute temperature, N_c is the effective DOS of the conduction band, and N_v is the effective DOS of the valence band. E_F shifts if N_c/N_v is not equal to one. Recent examples from WTe 2 [30] and ZrTe 5 [28] have already shown that the Fermi level E_F can change with temperature. The semi-metallic character of ZrTe 3 suggests that the E_F shift with temperature might arise from a mechanism similar to that found in WTe 2 and ZrTe 5.
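The rigid-band estimate above is elementary arithmetic: a chemical-potential shift Δμ moves each Fermi crossing by Δk ≈ Δμ/v_F, so the distance between the two branches changes by about 2Δμ/v_F. A quick numerical check with the values quoted in the text (a 10 meV shift and an assumed Fermi velocity of 3 eV·Å):

```python
delta_mu = 0.010   # chemical potential shift in eV (~10 meV between 120 K and 20 K)
v_fermi = 3.0      # assumed Fermi velocity in eV * Angstrom

dk_branch = delta_mu / v_fermi   # shift of a single Fermi crossing (1/Angstrom)
dk_distance = 2 * dk_branch      # change of the Fermi surface distance (two branches)
# dk_distance is about 0.0067 1/Angstrom, consistent with the ~0.006 quoted above
```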
Figure 7 presents the quasiparticle scattering rate change with temperature along the 3D Fermi surface of ZrTe 3. The temperature dependence of the MDCs at the Fermi level and of the EDCs at the Fermi momentum k_F of the main band is presented in Fig. 7(b) and (c), respectively.
FIG. 4: Band structure of ZrTe 3 with momentum cuts perpendicular to the Fermi surface. (a) Fermi surface lines fitted by two Lorentzian functions. Red and blue lines represent the main Fermi surface and the split Fermi surface, respectively. (b) Band structures measured along momentum cuts 1 to 10. The locations of these momentum cuts are indicated in (a). The dispersions of the main band fitted by Lorentzian functions are shown in (b) and indicated by black lines.
FIG. 5: Scattering rate and Fermi velocity of ZrTe 3 along the 3D Fermi surface. (a) Fermi surface lines fitted by two Lorentzian functions. EDCs and MDCs along momentum points 1 to 10 are shown in (b) and (c), respectively. The locations of these momentum points are indicated in (a); they are acquired from Fig. 4(b). (d) and (e) indicate the EDC width and MDC width, respectively. (f) Evolution of the Fermi velocity along the Fermi surface, acquired from the fitted dispersions of Fig. 4(b).
FIG. 6: Observation of 3D Fermi surface shrinkage with temperature in ZrTe 3. (a) Fermi surface acquired in a single measurement. (b) MDCs at the Fermi level of Cut 1 at different temperatures. The arrows represent the peak positions, and the distance between the two peaks is marked as the Fermi surface distance (FSD). The evolution of the FSD with temperature is shown in (d); the locations of these momentum cuts are indicated in (a). (c) EDCs at the Γ point at different temperatures. The arrow represents the EDC peak position. The evolution of the EDC peak position with temperature is shown in (e).
FIG. 7: Scattering rate of ZrTe 3 at different temperatures. (a) Fermi surface acquired in a single measurement. (b) MDCs at the Fermi level of Cut 1 at different temperatures. Black circles and red lines represent the raw data and the results of Lorentzian fits, respectively. The evolution of the MDC width with temperature is shown in (d). (c) EDCs at k_F of the main band at different temperatures. The evolution of the EDC width with temperature is shown in (e). The locations of these momentum cuts are indicated in (a).
FIG. 8: Self-energy of ZrTe 3 measured at T = 30 K. (a) Band structure of Cut 1 indicated in Fig. 7(a). (b) Band dispersions in a small energy window from E_b = 0 to E_b = 100 meV acquired from MDC fitting. Black, blue, and red circles represent Cut 1, Cut 2, and Cut 3, respectively. For clarity, Cut 2 and Cut 3 are offset by 0.05 and 0.1 Å−1, respectively. (c) The effective real part of the electron self-energy for the three cuts indicated in Fig. 7(a). The bare bands are chosen for each band as straight lines connecting the two points on the measured dispersion at the Fermi level and at 100 meV binding energy. The arrow indicates the kink position. (d) The imaginary part of the electron self-energy for the three cuts.
atoms. (b) Temperature dependence of the normalized resistivity of our ZrTe 3 single-crystal samples. There is a clear resistivity hump at ∼63 K along the a-axis and a superconducting transition at ∼2 K along both the a- and b-axes. The cleaved surface morphology of a ZrTe 3 sample is shown in the inset of (b); there are one-dimensional thread-like structures running along the b-axis.
This work is supported by Grant No. 2015CB921301, the National Natural Science Foundation of China (Grant Nos. 11574360, 11534007, and 11334010), and the Strategic Priority Research Program (B) of the Chinese Academy of Sciences (Grant No. XDB07020300).
Correspondence and requests for materials should be addressed to L.Y. ([email protected]) and X.J.Z. ([email protected]).
Gabovich A. M., Voitenko A. I., Ekino T., Li M. S., Szymczak H., Pekala M., Adv. Cond. Matter Phys. 2010, 40 (2010).
Pfleiderer C., Reviews of Modern Physics 81, 1624 (2009).
He Shaolong, He Junfeng, Zhang Wenhao, Zhao Lin, Liu Defa, Liu Xu, Mou Daixiang, Ou Yun-Bo, Wang Qing-Yan, Li Zhi, Wang Lili, Peng Yingying, Liu Yan, Chen Chaoyu, Yu Li, Liu Guodong, Dong Xiaoli, Zhang Jun, Chen Chuangtian, Xu Zuyan, Chen Xi, Ma Xucun, Xue Qikun, Zhou X. J., Nature Materials 12, 605 (2013).
Zhu X., Ning W., Li L., Ling L., Zhang R., Zhang J., Wang K., Liu Y., Pi L., Ma Y., Du H., Tian M., Sun Y., Petrovic C., Zhang Y., Sci. Rep. 6, 26974 (2016).
Cui Shan, He Lan-Po, Hong Xiao-Chen, Zhu Xiang-De, Petrovic Cedomir, Li Shi-Yan, Chin. Phys. B 25, 077403 (2016).
Denholme S. J., Yukawa A., Tsumura K., Nagao M., Tamura R., Watauchi S., Tanaka I., Takayanagi H., Miyakawa N., Sci. Rep. 7, 45217 (2017).
Kidd T. E., Miller T., Chou M. Y., Chiang T. C., Phys. Rev. Lett. 88, 226402 (2002).
Kolekar Sadhu, Bonilla Manuel, Ma Yujing, Diaz Horacio Coy, Batzill Matthias, 2D Mater. 5, 015006 (2017).
Kwang-Hua, Chu W., Chem. Phys. 409, 40 (2012).
Schmitt F., Kirchmann P. S., Bovensiepen U., Moore R. G., Rettig L., Krenz M., Chu J.-H., Ru N., Perfetti L., Lu D. H., Wolf M., Fisher I. R., Shen Z.-X., Science 321, 1652 (2008).
Yamaya Kazuhiko, Takayanagi Shigeru, Tanda Satoshi, Phys. Rev. B 85, 18 (2012).
Yomo R., Yamaya K., Abliz M., Hedo M., Uwatoko Y., Phys. Rev. B 71, 13 (2005).
Tsuchiya S., Matsubayashi K., Yamaya K., Takayanagi S., Tanda S., Uwatoko Y., New J. Phys. 19, 063004 (2017).
Yamaya K., Yoneda M., Yasuzuka S., Okajima Y., Tanda S., J. Phys.: Condens. Matter 14, 10770 (2002).
Mirri C., Dusza A., Zhu Xiangde, Lei Hechang, Ryu Hyejin, Degiorgi L., Petrovic C., Phys. Rev. B 89, 035144 (2014).
Zhu Xiangde, Lei Hechang, Petrovic C., Phys. Rev. Lett. 106, 246404 (2011).
Lei Hechang, Zhu Xiangde, Petrovic C., EPL 95, 17011 (2011).
Yokoya T., Kiss T., Chainani A., Shin S., Yamaya K., Phys. Rev. B 71, 140504 (2005).
Hoesch Moritz, Cui Xiaoyu, Shimada Kenya, Battaglia Corsin, Fujimori Shin-ichi, Berger Helmuth, Phys. Rev. B 80, 075423 (2009).
Eaglesham D. J., Steeds J. W., Wilson J. A., J. Phys. C: Solid State Phys. 17, L698 (1984).
Gleason S. L., Gim Y., Byrum T., Kogar A., Abbamonte P., Fradkin E., MacDougall G. J., Van Harlingen D. J., Zhu Xiangde, Petrovic C., Cooper S. L., Phys. Rev. B 91, 155124 (2015).
Hoesch M., Bosak A., Chernyshov D., Berger H., Krisch M., Phys. Rev. Lett. 102, 086402 (2009).
Hu Yuwen, Zheng Feipeng, Ren Xiao, Feng Ji, Li Yuan, Phys. Rev. B 91, 144502 (2015).
Zhou X. J., He S. L., Liu G. D., Zhao L., Yu L., Zhang W. T., Reports on Progress in Physics 81, 6 (2018).
Zhu Xiyu, Lv Bing, Wei Fengyan, Xue Yuyi, Lorenz Bernd, Deng Liangzi, Sun Yanyi, Chu Ching-Wu, Phys. Rev. B 87, 2 (2013).
Stowe Klaus, Wagner Frank R., J. Solid State Chem. 138, 168 (1998).
Felser C., Finckh E. W., Kleinke H., Rocker F., Tremel W., J. Mater. Chem. 8, 1798 (1998).
Zhang Yan, Wang Chenlu, Yu Li, Liu Guodong, Liang Aiji, Huang Jianwei, Nie Simin, Sun Xuan, Zhang Yuxiao, Shen Bing, Liu Jing, Weng Hongming, Zhao Lingxiao, Chen Genfu, Jia Xiaowen, Hu Cheng, Ding Ying, Zhao Wenjuan, Gao Qiang, Li Cong, He Shaolong, Zhao Lin, Zhang Fengfeng, Zhang Shenjin, Yang Feng, Wang Zhimin, Peng Qinjun, Dai Xi, Fang Zhong, Xu Zuyan, Chen Chuangtian, Zhou Xingjiang, Nature Communications 8, 15512 (2017).
Seshadri Ram, Suard Emmanuelle, Felser Claudia, Finckh E. Wolfgang, Maignanc Antoine, Tremel Wolfgang, J. Mater. Chem. 8, 2874 (1998).
Wu Y., Jo N. H., Ochi M., Huang L., Mou D., Budko S. L., Canfield P. C., Trivedi N., Arita R., Kaminski A., Phys. Rev. Lett. 115, 166602 (2015).
FIG. 2: Fermi surface and constant energy contours of ZrTe 3. (a) Schematic of the ZrTe 3 Brillouin zone. The upper part is a schematic diagram of the first Brillouin zone and the two Fermi surface sheets. The central cylindrical shape is the three-dimensional Fermi surface. The quasi-one-dimensional Fermi surface is near the boundary of the first Brillouin zone. The projection of the first Brillouin zone and the Fermi surface onto the ab plane is shown below the 3D Brillouin zone. High-symmetry points are indicated. (b) The complete 3D Fermi surface, joined and symmetrized from the two measurements. (c, d) Constant energy contours of ZrTe 3 from the two measurements at binding energies of 0, 180, and 300 meV, respectively. The spectral intensity is integrated within 10 meV with respect to each binding energy. The measurement geometry is set under s polarization.
FIG. 3: Observation of splitting features in ZrTe 3. (a) Fitted Fermi surface of ZrTe 3, which has two Fermi surface sheets. Red and blue lines represent the main sheet and the split sheet, respectively. Band structures measured along momentum cuts 1, 2, 3, and 4 are shown in (d). The locations of these momentum cuts are indicated in (a). The corresponding momentum distribution curves (MDCs) at the Fermi level for the band structures in (d) are shown in (c). Four peak positions are marked by the arrows in (c). (b) MDC second-derivative band structure of Cut 1. (e) Energy distribution curves (EDCs) of Cut 1. The EDCs at the Γ point and at k_F are indicated by the red line and the blue line, respectively.
|
[] |
[
"One-step multi-qubit GHZ state generation in a circuit QED system",
"One-step multi-qubit GHZ state generation in a circuit QED system"
] |
[
"Ying-Dan Wang \nDepartment of Physics\nUniversity of Basel\nKlingelbergstrasse 824056BaselSwitzerland\n",
"Stefano Chesi \nDepartment of Physics\nUniversity of Basel\nKlingelbergstrasse 824056BaselSwitzerland\n",
"Daniel Loss \nDepartment of Physics\nUniversity of Basel\nKlingelbergstrasse 824056BaselSwitzerland\n",
"Christoph Bruder \nDepartment of Physics\nUniversity of Basel\nKlingelbergstrasse 824056BaselSwitzerland\n"
] |
[
"Department of Physics\nUniversity of Basel\nKlingelbergstrasse 824056BaselSwitzerland",
"Department of Physics\nUniversity of Basel\nKlingelbergstrasse 824056BaselSwitzerland",
"Department of Physics\nUniversity of Basel\nKlingelbergstrasse 824056BaselSwitzerland",
"Department of Physics\nUniversity of Basel\nKlingelbergstrasse 824056BaselSwitzerland"
] |
[] |
We propose a one-step scheme to generate GHZ states for superconducting flux qubits or charge qubits in a circuit QED setup. The GHZ state can be produced within the coherence time of the multi-qubit system. Our scheme is independent of the initial state of the transmission line resonator and works in the presence of higher harmonic modes. Our analysis also shows that the scheme is robust to various operation errors and environmental noise.
|
10.1103/physrevb.81.104524
|
[
"https://arxiv.org/pdf/0911.1396v1.pdf"
] | 18,100,088 |
0911.1396
|
01b2b384f72ce87d6a60649723137437bec4d8d9
|
One-step multi-qubit GHZ state generation in a circuit QED system
7 Nov 2009
Ying-Dan Wang
Department of Physics
University of Basel
Klingelbergstrasse 824056BaselSwitzerland
Stefano Chesi
Department of Physics
University of Basel
Klingelbergstrasse 824056BaselSwitzerland
Daniel Loss
Department of Physics
University of Basel
Klingelbergstrasse 824056BaselSwitzerland
Christoph Bruder
Department of Physics
University of Basel
Klingelbergstrasse 824056BaselSwitzerland
One-step multi-qubit GHZ state generation in a circuit QED system
7 Nov 2009
We propose a one-step scheme to generate GHZ states for superconducting flux qubits or charge qubits in a circuit QED setup. The GHZ state can be produced within the coherence time of the multi-qubit system. Our scheme is independent of the initial state of the transmission line resonator and works in the presence of higher harmonic modes. Our analysis also shows that the scheme is robust to various operation errors and environmental noise.
Entanglement is the most important resource for quantum information processing. Therefore, the question of how to prepare maximally entangled states, i.e., the GHZ state, or the Bell states in the two-qubit case, in various systems remains an important issue. Superconducting Josephson-junction qubits are among the promising solid-state candidates for a physical realization of the building blocks of a quantum information processor, see, e.g., [1,2,3,4]. They are undergoing rapid experimental development, in particular in circuit QED setups. Two-qubit Bell states have been demonstrated experimentally [5,6,7]. There are also theoretical proposals for generating maximally entangled states of two or three qubits [8,9,10,11,12,13,14,15,16]. However, how to scale up to multi-qubit GHZ state generation remains an open question. Some general schemes based on a fully connected qubit network have been proposed, but no specific circuit design was provided [17]. Most recently, the preparation of multi-qubit GHZ states based on measurement was proposed [18,19]. This type of state preparation is probabilistic, and the probability of achieving a GHZ state decreases exponentially with the number of qubits. In this paper, we propose a GHZ-state preparation scheme based on the non-perturbative dynamic evolution of the qubit-resonator system. The preparation time is short, and the preparation is robust to environmental decoherence and operation errors.
I. THE COUPLED CIRCUIT QED SYSTEM
The GHZ state preparation scheme described below is based on a circuit QED setup in which superconducting qubits are strongly coupled to a 1D superconducting transmission line resonator (TLR). Figure 1(a) shows the type of circuit we have in mind: a qubit array is placed in parallel with a line of length $L_0$. The superconducting transmission line is essentially an LC resonator with distributed inductance and capacitance [20,21]. The oscillating supercurrent vanishes at the ends of the transmission line, and this provides the boundary condition for the electromagnetic field of this on-chip resonator. The qubits are fabricated around the central position $x = L_0/2$. Since the qubit dimension (several micrometers) is much smaller than the wavelength of the fundamental electromagnetic mode (centimeters), the coupling between the qubits and the TLR is approximately homogeneous. Since $x = L_0/2$ is an antinode of the magnetic field, where the electric field is zero, the qubits are coupled only to the magnetic component, which induces a magnetic flux $\Phi'$ through the superconducting loop given by
$\Phi' = \eta^{(i)}\,\dfrac{\Phi_0}{\pi}\,(a + a^\dagger)$  (1)
with
$\eta^{(i)} = \dfrac{M^{(i)}\pi}{\Phi_0}\sqrt{\dfrac{\omega}{2L}}\,.$  (2)
Here, $M^{(i)}$ is the mutual inductance between the resonator and the i-th qubit, $\omega = \pi/\sqrt{LC}$ is the frequency of the fundamental resonator mode, $\Phi_0 = h/2e$ is the magnetic flux quantum, and $L$ ($C$) is the total self-inductance (capacitance) of the stripline. Here, we have assumed that the qubit array is coupled only to a single mode of the resonator, and $a$ ($a^\dagger$) is the annihilation (creation) operator of this fundamental mode. The stripline resonator can be used to couple both charge qubits and flux qubits, as described below.
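For orientation, Eq. (2) can be evaluated numerically. All circuit parameters below (mutual inductance, mode frequency, total inductance) are assumed placeholder values for illustration, not values taken from the text, and ℏ is restored inside the square root:

```python
import math

hbar = 1.054571817e-34        # reduced Planck constant (J s)
Phi0 = 2.067833848e-15        # magnetic flux quantum h/2e (Wb)

M = 20e-12                    # assumed mutual inductance (H)
omega = 2 * math.pi * 5e9     # assumed fundamental mode frequency (rad/s)
L = 2e-9                      # assumed total resonator inductance (H)

i_zpf = math.sqrt(hbar * omega / (2 * L))   # zero-point current scale (A)
eta = M * math.pi / Phi0 * i_zpf            # dimensionless coupling of Eq. (2)
# for these placeholder numbers, eta is of order 1e-3
```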
A. Charge qubit system
We first consider the charge qubit case. Suppose each qubit is a charge qubit (see Fig. 1(b)) consisting of a dc-SQUID formed by a superconducting island connected to two Josephson junctions. The Coulomb energy of each qubit is modified by an external bias voltage, and the effective Josephson tunneling energy is determined by the magnetic flux Φ_x^(i) threading the dc-SQUID. The Hamiltonian of a single charge qubit reads 22
H^(i) ≡ (E_C^(i)/4) (1 − 2n_g^(i)) σ_z^(i) − E_J^(i) cos(πΦ_x^(i)/Φ_0) σ_x^(i) ,  (3)
where E_C^(i) (E_J^(i)) is the Coulomb (Josephson) energy of the i-th qubit, and n_g^(i) is the bias charge number that can be controlled by an external gate voltage. The Pauli matrices σ_z = |0⟩⟨0| − |1⟩⟨1| and σ_x = |0⟩⟨1| + |1⟩⟨0| are defined in terms of the charge eigenstates |0⟩ and |1⟩, which denote 0 and 1 excess Cooper pairs on the island, respectively. The flux Φ_x^(i) includes two parts: the flux Φ_e^(i) from the external control line and the flux Φ′^(i) induced by the TLR. For small η^(i), the Josephson energy can be expanded to linear order in η^(i), which results in an additional linear coupling between the x-component of the qubits and the bosonic mode. If all the qubits are biased at the degeneracy point n_g^(i) = 1/2, the total Hamiltonian becomes
H = Σ_i [ Ω^(i)(Φ_e^(i)) σ_x^(i) + g^(i)(Φ_e^(i)) (a + a†) σ_x^(i) ] + H_LC ,  (4)
with the single charge qubit energy splitting Ω^(i)(Φ_e^(i)) = −E_J^(i) cos(πΦ_e^(i)/Φ_0), the coupling strength g^(i)(Φ_e^(i)) = η^(i) E_J^(i) sin(πΦ_e^(i)/Φ_0), and the free Hamiltonian of the TLR, H_LC = ω a†a. Note that the coupling between the qubits and the TLR can be turned off by setting Φ_e^(i) = nΦ_0.
B. Flux qubit system
For a flux qubit system, a circuit example to realize our proposal is shown in Fig. 1(c). The i-th qubit contains four Josephson junctions in three loops, instead of one or two loops as in the conventional flux qubit design 23,24 . The two junctions in the dc-SQUID have identical Josephson energies α_0^(i) E_J^(i), where α_0^(i) is the ratio between the Josephson energy of the smaller junctions and that of the two bigger junctions 23,24 . The other two junctions are assumed to have the Josephson energy E_J^(i). The superconducting loops are penetrated by the magnetic fluxes Φ_q1^(i), Φ_q2^(i), and Φ_d^(i), respectively. The corresponding phase relations are
ϕ_4^(i) − ϕ_3^(i) = 2πΦ_d^(i)/Φ_0 ,  (5)
ϕ_1^(i) + ϕ_2^(i) + (ϕ_3^(i) + ϕ_4^(i))/2 = 2π(Φ_q1^(i) − Φ_q2^(i))/Φ_0 ,  (6)
Φ_q1^(i) + Φ_q2^(i) + Φ_d^(i) = nΦ_0 ,  (7)
where ϕ_k^(i) (k = 1, 2, 3, 4) is the phase difference across the k-th junction. The total Josephson energy of the circuit is

−U_0^(i) = E_J^(i) cos ϕ_1^(i) + E_J^(i) cos ϕ_2^(i) + α^(i) E_J^(i) cos[ 2πΦ_t^(i)/Φ_0 − (ϕ_1^(i) + ϕ_2^(i)) ] ,  (8)

with Φ_t^(i) ≡ Φ_q1^(i) − Φ_q2^(i) and α^(i) = 2α_0^(i) cos(πΦ_d^(i)/Φ_0). If Φ_t^(i) is biased close to Φ_0/2, the circuit becomes a flux qubit, i.e., a two-level system in the quantum regime 23,24 . Together with the charging energy, the total Hamiltonian for the i-th qubit is
H^(i) = ε^(i)(Φ_t^(i)) σ_z^(i) + ∆^(i)(Φ_d^(i)) σ_x^(i) .  (9)
The Pauli matrices read σ_z = |0⟩⟨0| − |1⟩⟨1| and σ_x = |0⟩⟨1| + |1⟩⟨0|, and are defined in terms of the classical current states, where |0⟩ and |1⟩ denote the states with clockwise and counterclockwise currents in the loop. The energy spacing of the two current states is
ε^(i)(Φ_t^(i)) ≡ I_p^(i)(Φ_t^(i) − Φ_0/2), and the tunneling matrix element between the two states is ∆^(i)(Φ_d^(i)) ≡ ∆^(i)(α^(i)). Note that, in contrast to the original flux qubit design 23,24 , this gradiometer flux qubit is insensitive to homogeneous fluctuations of the magnetic flux 25 . More importantly, it enables the TLR to couple to the dc-SQUID loop without changing the total bias flux of the qubit. As in the case of the charge qubit, the magnetic flux in the dc-SQUID loop includes two parts: Φ_d^(i) = Φ_e^(i) + Φ′^(i), where Φ_e^(i) is due to the external control line and Φ′^(i) is due to the TLR.
For η^(i) ≪ 1, one can expand the Hamiltonian in terms of η^(i). The second-order terms ∼ η^(i)² d²∆/dα² are much smaller than the zeroth- and first-order terms. The Hamiltonian of each qubit can be written as 26
H^(i) = ε^(i)(Φ_t^(i)) σ_z^(i) + ∆^(i)(Φ_e^(i)) σ_x^(i) + g^(i)(Φ_e^(i)) σ_x^(i) (a + a†) .  (10)
The coupling coefficient is

g^(i)(Φ_e^(i)) = −2α_0^(i) η^(i) sin(πΦ_e^(i)/Φ_0) [ d∆(α^(i))/dα^(i) ]|_{Φ_d^(i) = Φ_e^(i)} .  (11)

Therefore, by setting Φ_e^(i) = nΦ_0, the qubit-resonator interaction can be turned off. When the interaction is on, Φ_e^(i) can be tuned to compensate the difference of the fabrication parameters and realize a homogeneous coupling g^(i) = g. Then, if each qubit is biased at the degeneracy point Φ_t^(i) = (n + 1/2)Φ_0, the total Hamiltonian becomes

H = Σ_i Ω^(i)(Φ_e^(i)) σ_x^(i) + g^(i)(Φ_e^(i)) σ_x^(i) (a + a†) + H_LC ,  (12)

where Ω^(i)(Φ_e^(i)) = ∆^(i)(Φ_e^(i))
is the single qubit energy splitting. Comparing Eqs. (4) and (12), it is evident that the two Hamiltonians have the same structure: the interaction term commutes with the free term, and the interaction can be switched on and off. In the next section, we show how to generate a multi-qubit GHZ state by utilizing these features.
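As a quick numerical illustration of this structure (an independent sketch, not code from the paper), one can build the operators of Eqs. (4) and (12) for two qubits and a truncated oscillator and verify that the qubit term commutes with the coupling term, while the resonator term H_LC does not — which is why the interaction picture is used below:

```python
# Sanity check of the Hamiltonian structure of Eqs. (4)/(12): the qubit term
# sum_i Omega sigma_x^(i) commutes with the coupling sum_i g sigma_x^(i)(a+a^dag),
# while H_LC does not. Ordering: qubit1 (x) qubit2 (x) oscillator; the Fock
# cutoff and the GHz-scale numbers are illustrative choices.
import numpy as np

def kron_all(ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

sx = np.array([[0., 1.], [1., 0.]])
I2 = np.eye(2)
n_fock = 6                               # oscillator truncation (illustrative)
a = np.diag(np.sqrt(np.arange(1, n_fock)), 1)

Omega, g, w = 10.0, 0.144, 1.0           # GHz-scale numbers from the text
sx1 = kron_all([sx, I2, np.eye(n_fock)])
sx2 = kron_all([I2, sx, np.eye(n_fock)])
x_op = kron_all([I2, I2, a + a.T])       # (a + a^dag)

H_qubit = Omega * (sx1 + sx2)
H_int = g * (sx1 + sx2) @ x_op
H_LC = w * kron_all([I2, I2, a.T @ a])

comm_q = H_qubit @ H_int - H_int @ H_qubit
comm_r = H_LC @ H_int - H_int @ H_LC
print(np.linalg.norm(comm_q))            # ~0: qubit term commutes with coupling
print(np.linalg.norm(comm_r))            # nonzero: resonator term does not
```

Because every qubit operator in H appears only through σ_x, the qubit frequencies never interfere with the entangling dynamics generated by the coupling.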
II. GENERATION OF A GHZ STATE
In the interaction picture,
H_I(t) = Σ_i g^(i) (a† e^{iωt} + a e^{−iωt}) σ_x^(i) .  (13)
Since {σ_x^(i) σ_x^(j), a σ_x^(i), a† σ_x^(i), 1} form a closed Lie algebra, the time evolution operator in the interaction picture can be written in a factorized way as 27
U_I(t) = Π_{i≠j} e^{−iA_ij(t) σ_x^(i) σ_x^(j)} Π_i e^{−iB_i(t) a σ_x^(i)} × Π_i e^{−iB_i*(t) a† σ_x^(i)} e^{−iD(t)} ,  (14)
and U I (t) satisfies
i (∂U_I(t)/∂t) U_I^{−1}(t) = H_I(t) .  (15)
Solving this equation with the initial conditions A_ij(0) = B_i(0) = D(0) = 0, we obtain

B_i(t) = (i g^(i)/ω) (e^{−iωt} − 1) ,  (16)
A_ij(t) = (g^(i) g^(j)/ω) [ (1/(iω)) (e^{iωt} − 1) − t ] ,  (17)
D(t) = Σ_i ((g^(i))²/ω) [ (1/(iω)) (e^{iωt} − 1) − t ] .  (18)
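These closed-form solutions can be spot-checked numerically. The sketch below (an independent illustration, not the paper's code) takes a single qubit, where the A_ij term is absent, and verifies by finite differences that the factorized U_I(t) built from Eqs. (16) and (18) indeed satisfies Eq. (15) on a test state:

```python
# Finite-difference check of the Wei-Norman solution (single qubit, so no A_ij):
# compare i dU_I/dt |psi> against H_I(t) U_I(t) |psi>. Fock cutoff, g, omega,
# and the test time are illustrative assumptions.
import numpy as np
from scipy.linalg import expm

n_fock, g, w = 20, 0.05, 1.0
a = np.diag(np.sqrt(np.arange(1, n_fock)), 1)
sx = np.array([[0., 1.], [1., 0.]])
A = np.kron(sx, a)                      # a sigma_x
Adag = np.kron(sx, a.conj().T)          # a^dag sigma_x

def B(t):  return 1j * g / w * (np.exp(-1j * w * t) - 1)        # Eq. (16)
def D(t):  return g**2 / w * ((np.exp(1j * w * t) - 1) / (1j * w) - t)  # Eq. (18)

def U_I(t):                             # ordering as in Eq. (14)
    return expm(-1j * B(t) * A) @ expm(-1j * np.conj(B(t)) * Adag) * np.exp(-1j * D(t))

def H_I(t):                             # Eq. (13), single qubit
    return g * (np.exp(1j * w * t) * Adag + np.exp(-1j * w * t) * A)

t, dt = 0.7, 1e-3
psi = np.zeros(2 * n_fock, dtype=complex)
psi[0] = 1.0                            # a convenient test basis state
lhs = 1j * (U_I(t + dt) - U_I(t - dt)) / (2 * dt) @ psi
rhs = H_I(t) @ U_I(t) @ psi
print(np.linalg.norm(lhs - rhs))        # small: factorized U_I solves Eq. (15)
```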
In the Schrödinger picture,

U_s(t) = U_0(t) U_I(t) = e^{−iω a†a t} Π_i e^{−iΩ^(i) σ_x^(i) t} U_I(t) .  (19)

Note that B_i(t) is a periodic function of time and vanishes at t = T_n = 2πn/ω for integer n. At these instants of time, the time evolution operator takes the form
U(T_n) = exp(−i Σ_{i≠j} θ_ij(n) σ_x^(i) σ_x^(j)) exp(−iD(T_n)) ,  (20)
in the interaction picture. Here, θ_ij(n) = g^(i) g^(j) T_n/ω = g^(i) g^(j) 2πn/ω². Thus, at these times, the time evolution is equivalent to that of a system of coupled qubits with an interaction Hamiltonian of the form ∝ σ_x^(i) σ_x^(j). Therefore, by choosing appropriate coupling pulse sequences, an effective XX-coupling can be realized for multiple qubits. This coupling can be utilized to construct a CNOT gate for two qubits 26 . If the couplings are homogeneous for all qubits, i.e., g^(i) = g (for i = 1, ..., N),
θ_ij(n) ≡ θ(n) = (g²/ω²) 2πn ,  (21)
Eq. (20) can be written as
U(T_n) = exp(−i 4θ(n) J_x²) exp(iθ(n) N) exp(−iD(T_n))  (22)

with J_x = Σ_i σ_x^(i)/2.
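A small check of the decoupling instants (illustrative only): B_i(t) from Eq. (16) vanishes exactly at t = T_n = 2πn/ω and is nonzero in between, so the qubit-resonator factors in Eq. (14) drop out precisely at those times:

```python
# Evaluate Eq. (16) at and between the decoupling times T_n = 2*pi*n/omega.
# The values of omega and g are illustrative.
import numpy as np

w, g = 1.0, 0.144
B = lambda t: 1j * g / w * (np.exp(-1j * w * t) - 1)
for n in (1, 2, 3):
    print(abs(B(2 * np.pi * n / w)))   # 0 up to rounding at every T_n
print(abs(B(np.pi / w)))               # nonzero in between (= 2g here)
```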
Suppose the initial state of the qubits is
|Ψ(0)⟩ = Π_{i=1}^N |−⟩_z^(i) ,  (23)
where |±⟩_z denote the eigenstates of σ_z, σ_z |±⟩_z = ±|±⟩_z. This initial state can be prepared by biasing the qubits far away from the degeneracy point, letting them relax to the ground state, and then biasing them back adiabatically. Starting from this initial state, under the time evolution described by Eq. (22), the state evolves into a GHZ state 28,29 (up to a global phase factor)
|Ψ(T_n)⟩ = (1/√2) [ Π_{i=1}^N |−⟩_z^(i) + e^{iπ(N+1)/2} Π_{i=1}^N |+⟩_z^(i) ] ,  (24)
if θ(n) = (1 + 4m)π/8, where m is an arbitrary integer. A comparison with Eq. (21) shows that the integers n and m are related by
n = m ω²/(4g²) + ω²/(16g²) ,  (25)
which is possible only if the (experimentally controllable) parameter g 2 /ω 2 is chosen to be
g²/ω² = (1 + 4m)/(16n) .  (26)
Since it is difficult in practice to realize g comparable to ω, we assume m = 0. Hence Eq. (26) determines the value n min (typically larger than 1) which corresponds to the minimum preparation time of the GHZ state
T_min = 2πn_min/ω = πω/(8g²) .  (27)
The optimal case n min = 1 could be realized if it were possible to achieve g = ω/4. The same GHZ state is periodically generated at later times, with preparation time T p = T min (1 + 4m).
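For concreteness, here is a short calculation of n_min and T_min from Eqs. (26) and (27) with m = 0, using the flux-qubit numbers quoted later in the text (ω = 1 GHz, g ≈ 144 MHz); frequencies are treated as plain, non-angular quantities, an assumption consistent with the quoted 19 ns:

```python
# Worked numbers for Eqs. (26)-(27): with m = 0, g is tuned so that
# g^2/omega^2 = 1/(16 n) holds for an integer n.
import numpy as np

omega = 1.0e9                                    # resonator frequency (Hz, as quoted)
n_min = round(omega**2 / (16 * (144e6)**2))      # nearest admissible integer
g = omega / np.sqrt(16 * n_min)                  # retuned to satisfy Eq. (26) exactly
T_min = np.pi * omega / (8 * g**2)               # Eq. (27)
print(n_min, g / 1e6, T_min)                     # n = 3, g ~ 144 MHz, T_min ~ 19 ns
```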
For both types of qubits, g is proportional to √ω (since g is proportional to η; see Eqs. (2) and (4)). If we assume g = ξ√ω, we obtain T_min = π/(8ξ²). Therefore, the preparation time does not depend on ω. Furthermore, the preparation time Eq. (27) does not increase with the number of qubits. If the qubits evolve under the time evolution described by Eq. (22) with θ(n) = (3 + 4m)π/8, another N-qubit GHZ state is realized,
|Ψ(T_n)⟩ = (1/√2) [ Π_{i=1}^N |−⟩_z^(i) + e^{−iπ(N+1)/2} Π_{i=1}^N |+⟩_z^(i) ] .  (28)
In the following discussion, we focus on the GHZ state Eq. (24) since it can be prepared in a shorter time.
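The even-N preparation can be verified directly with a few lines of linear algebra (an independent numpy sketch, not the paper's code; the global phases exp(iθN) and exp(−iD) in Eq. (22) drop out of the fidelity):

```python
# Check of Eqs. (22)-(24): evolving |-z>^{(x)N} under exp(-i 4 theta Jx^2) with
# theta = pi/8 (m = 0) yields the GHZ state of Eq. (24), up to a global phase.
import numpy as np
from scipy.linalg import expm

def kron_all(ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

sx = np.array([[0., 1.], [1., 0.]])
plus = np.array([1., 0.])            # |+>_z (sigma_z eigenvalue +1)
minus = np.array([0., 1.])           # |->_z (sigma_z eigenvalue -1)

fids = {}
for N in (2, 4):
    Jx = sum(kron_all([np.eye(2)] * i + [sx] + [np.eye(2)] * (N - 1 - i))
             for i in range(N)) / 2
    theta = np.pi / 8                # theta(n) = (1 + 4m)*pi/8 with m = 0
    psi = expm(-1j * 4 * theta * (Jx @ Jx)) @ kron_all([minus] * N)
    ghz = (kron_all([minus] * N)
           + np.exp(1j * np.pi * (N + 1) / 2) * kron_all([plus] * N)) / np.sqrt(2)
    fids[N] = abs(np.vdot(ghz, psi))**2
print(fids)                          # both fidelities equal 1 up to rounding
```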
The treatment discussed up to now is valid if the qubit number N is even. For odd N, the single-qubit rotation U′ = exp(−iπJ_x/2) is needed in addition to the time evolution Eq. (22). The GHZ state that can be realized for odd N has the form
|Ψ(T_n)⟩ = (1/√2) [ Π_{i=1}^N |−⟩_z^(i) + e^{iπN/2} Π_{i=1}^N |+⟩_z^(i) ] .  (29)
To conclude: one can prepare an N -qubit GHZ state by turning on the qubit-resonator interaction for a specified time.
For this GHZ state to be useful for quantum information processing, the preparation time has to be shorter than the quantum coherence time of the whole system. In general, a short preparation time results from a strong qubit-qubit coupling. However, this conflicts with the weak-coupling condition assumed in many schemes in order to utilize virtual photon excitation or the rotating-wave approximation. Our preparation scheme for the GHZ state is based on real excitations of the quantum bus, and no weak-coupling condition is required. In principle, it can be applied in the 'ultra-strong' coupling regime, in which the coupling strength between the quantum bus (i.e., the TLR) and the qubits is comparable to the free system energy spacing. Hence it is possible to implement the GHZ state preparation in a very short time. To get an idea of the time scale under realistic experimental conditions, we now estimate the preparation time using typical experimental parameters.
Assuming the mutual inductance between qubit and resonator M^(i) = 20 pH, the self-inductance L = 100 pH, and the resonator frequency ω = 1 GHz leads to η ∼ 1.76 × 10⁻³ for both types of qubits. For charge qubits, we assume a qubit frequency Ω^(i) = 10 GHz and a bias during the coupling period satisfying sin(πΦ_e^(i)/Φ_0) = 0.8, which leads to a coupling strength of g = 19.71 MHz. For flux qubits, we assume a qubit frequency Ω^(i) = 10 GHz, E_J^(i) = 345 GHz, α_0^(i) = 0.42, a bias satisfying sin(πΦ_e^(i)/Φ_0) = 0.71, and, at this bias, d∆/dα = 112 GHz; both 2α_0^(i) and α^(i) should be within the interval (0.6, 0.85) so that the circuits can always work as flux qubits both with and without bias. This leads to g ≈ 144 MHz 26 . The coupling strength is much stronger for flux qubits than for charge qubits because of the direct magnetic coupling to the phase degree of freedom. Therefore, the interaction time to realize a GHZ state is T_min = 1 µs for charge qubits and T_min = 19 ns for flux qubits. The preparation time for flux qubits is much shorter than the coherence time of the TLR, which can be several hundred microseconds. The typical single-qubit coherence time at the degeneracy point is several microseconds. Hence, in principle, the scheme is able to prepare GHZ states for several tens of qubits. If the coupling strength can be further increased to the 'ultra-strong' regime in experiment, the preparation time of a multi-qubit GHZ state can become comparable to the time of a single-qubit operation.
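The quoted η can be roughly reproduced from Eq. (2). The sketch below restores ℏ explicitly and reads ω as an angular frequency 2π × 1 GHz; both are assumptions about the paper's conventions, made because they reproduce the quoted number:

```python
# Rough reproduction of eta ~ 1.76e-3 from Eq. (2), with hbar restored and
# omega interpreted as 2*pi x 1 GHz (assumptions on units/conventions).
import numpy as np

hbar = 1.0546e-34             # J s
Phi0 = 2.0678e-15             # Wb, flux quantum h/2e
M = 20e-12                    # H, mutual inductance (quoted)
L = 100e-12                   # H, resonator self-inductance (quoted)
omega = 2 * np.pi * 1e9       # rad/s

eta = M * np.pi / Phi0 * np.sqrt(hbar * omega / (2 * L))
print(eta)                    # ~1.75e-3, close to the quoted 1.76e-3
```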
III. PREPARATION ERRORS
From the above calculation, it is clear that the essential point in preparing the GHZ state is to control the length of the dc pulse that manipulates the flux bias Φ_e^(i). In the beginning, the external magnetic flux Φ_e^(i) is set to nΦ_0, and all the qubits and the transmission line resonator are relaxed to their respective ground states. Then the interaction between the qubits and the resonator is turned on by biasing Φ_e^(i) away from nΦ_0 to some appropriate value for a time T_min. Finally, the interaction is switched off by setting Φ_e^(i) = nΦ_0 again, and the multi-qubit GHZ state is realized. Note that all the qubit biases are modified during the preparation by the same pulse; therefore, all the qubits can share one control line for the magnetic flux. To accomplish this operation, two practical issues have to be considered.
The first one is the precision of the control of the pulse length needed to keep the error acceptable. If the pulse length is not exactly T_min, the state realized is not a GHZ state, and this error can be evaluated by calculating the fidelity 30,31 F(t) = Tr[ρ_GHZ ρ_q(t)], where ρ_q(t) is the reduced density matrix of the qubits and ρ_GHZ is the density matrix of the N-qubit GHZ state. In Fig. 2, the blue curves show the fidelity of the state preparation; the regime with fidelity larger than 90% is marked by two green dotted lines. To realize a preparation with above 90% fidelity, the time control of the pulse should be precise to around 2.5 ns in the four-qubit case, which is possible in experiment.
The second problem is the influence of a non-ideal pulse shape. In the above calculation, we have assumed that a perfect square pulse can be applied, so that g^(i) is constant during the preparation. However, in experiment the dc pulse always has a finite rise and fall time. Since the coupling strength g^(i) depends on the bias flux Φ_e^(i), the modulation of the magnetic flux results in a time-dependent coupling strength g^(i) = g^(i)(t). If g^(i) varies slowly with time (compared with e^{−iωt}), the above discussions still hold, except that the decoupling time T at which the qubit-resonator coupling can be canceled is shifted to satisfy
e^{−iωT} g^(i)(T) − g^(i)(0) = 0 .  (30)
A GHZ state is prepared if
πω/8 = ∫_0^T dt′ { e^{iωt′} [ g^(i)(t′) g^(j)(0) + g^(i)(0) g^(j)(t′) ] − 2 g^(i)(t′) g^(j)(t′) }  (31)
for all i, j. This means a GHZ state can be realized by dc pulses of finite bandwidth without introducing additional errors.
Another systematic error appears because the parameter g 2 /ω 2 cannot be controlled with arbitrary accuracy, i.e., Eq. (26) will be satisfied only approximately. In experiment, Φ (i) e is tuned to get the desired value of g; whereas ω is fixed by the geometry of the device. Suppose the experimental inaccuracy leads to a modified value for the coupling strength, g(1 + δ), where δ quantifies the magnitude of error. Hence the prepared state deviates from the GHZ state. The fidelity of the prepared state depends on δ as
F(δ) = |⟨Ψ(T)|GHZ⟩|² = (1/2^{2N}) | Σ_{r=0}^{N} C_N^r e^{i(π/2)(δ² + 2δ)(N/2 − r)²} |² ,  (32)
where C_N^r = N!/(r!(N − r)!) is the binomial coefficient. This expression is valid for even N. For odd N, the fidelity turns out to be given by Eq. (32) with N → N + 1. Figure 3 shows that the fidelity decreases as the error in the coupling coefficient increases. In the case of a 4-qubit GHZ state, a fidelity of 98% can be achieved if the error in g is within 3%. However, as the number of qubits increases, the fidelity drops more rapidly. Hence a more precise control of the flux bias is required to realize many-qubit GHZ states.
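Eq. (32) is easy to evaluate directly; the short sketch below reproduces the quoted behavior (F(δ=0) = 1, F ≈ 0.986 for N = 4 at a 3% coupling error, and a faster drop for larger N):

```python
# Direct evaluation of the fidelity formula Eq. (32) for even N.
import numpy as np
from math import comb

def fidelity(N, delta):
    amp = sum(comb(N, r) * np.exp(1j * np.pi / 2 * (delta**2 + 2 * delta)
                                  * (N / 2 - r)**2) for r in range(N + 1))
    return abs(amp)**2 / 2**(2 * N)

print(fidelity(4, 0.0))       # 1.0: no error, perfect GHZ state
print(fidelity(4, 0.03))      # ~0.986 for four qubits at 3% error
print(fidelity(8, 0.03))      # lower: larger N is more sensitive
```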
IV. ERROR CAUSED BY DECOHERENCE
An important advantage of our proposal is that the state preparation is independent of the initial state of the resonator. In general, it is not easy to prepare the system to be exactly in the ground state. For example, at typical dilution fridge temperatures, say 50 mK, there is a non-negligible probability (30%) for the first excited state of a 1 GHz resonator to be occupied. This problem is less severe for the qubits since their energy scale is much higher. Therefore, a scheme which is insensitive to the initial state is desirable. Figure 2 shows the fidelity of the prepared GHZ state for two different initial states of the resonator (the ground state and the thermal state at 50 mK). Although the time evolutions are different in general, the fidelities at the decoupling times T_n (indicated by black dots in the figures) are the same. This can be explained from Eq. (14): at the times T_n, only the first factor of Eq. (14) remains, i.e., the qubits and resonator are decoupled. No matter what the initial state of the resonator is, at these times the resonator has evolved back to its initial state. This means the GHZ state preparation is not influenced by the initial state, or, in other words, the preparation is insensitive to decoherence that occurred before the interaction was switched on.
But the decoherence during the operation certainly changes the final output state. In general, environmental fluctuations induce both dephasing and relaxation in the system. Since the qubits are all biased at the degeneracy point, the strong dephasing effect due to 1/f noise is largely suppressed. Thus we can use a master equation which only includes relaxation as damping, instead of the unitary operator Eq. (14), to fully characterize the time evolution:

dρ(t)/dt = −i[H, ρ(t)] + L_Q ρ(t) + L_R ρ(t) ,  (33)
where ρ(t) is the density matrix of the system (qubits + resonator) in the interaction picture and L R represents the decoherence of the resonator
L_R ρ = (κ/2)(N_th + 1)(2aρa† − a†aρ − ρa†a) + (κ/2) N_th (2a†ρa − aa†ρ − ρaa†) .  (34)
Here, κ is the resonator decay rate and N_th = (exp(ω/k_B T) − 1)^{−1} is the average number of photons in the resonator. Finally, L_Q represents the decoherence of the qubits
L_Q ρ = (γ/2)(2σ̃_− ρ σ̃_+ − ρ σ̃_+ σ̃_− − σ̃_+ σ̃_− ρ) ,  (35)
where γ is the qubit decay rate and σ̃_± are written in the diagonal basis of σ_x. The quality factor of a TLR can be as high as 10^6. The qubit T_1-time at the degeneracy point is several µs at most in present experiments. To be on the safe side, we assume for the resonator Q = 2×10^3, κ = 0.5 MHz, and for the qubit T_1 = 100 ns, i.e., a decay rate γ = 10 MHz. Here we neglect excitations of the qubit since its energy spacing is much larger than the thermal fluctuations. To investigate the influence of decoherence, we compare the fidelity of preparing a GHZ state with and without decoherence. The result is shown in Fig. 4, where the difference of the two fidelities ∆F = F − F_d (where F_d is the fidelity in the presence of decoherence) is plotted as a function of time. The red dots mark the difference at the GHZ preparation times T_p. Obviously, the error due to decoherence increases with time. As we analyzed in the previous section, the preparation time is much shorter than the decoherence time. Therefore the error is still quite small at the minimum preparation time T_min (indicated by the first dot): the error caused by decoherence is around 3.7% in the 4-qubit case.
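As a minimal illustration of the resonator damping term Eq. (34) (a sketch under the stated parameters, not the simulation code behind Fig. 4), one can vectorize the Lindblad superoperator for a single truncated mode and check that it relaxes toward the thermal occupation N_th:

```python
# Vectorized Lindblad evolution for the damped resonator of Eq. (34).
# Row-major vec convention: vec(A rho B) = kron(A, B^T) vec(rho).
# Fock cutoff and initial state |2> are illustrative choices.
import numpy as np
from scipy.linalg import expm

n_fock = 12
a = np.diag(np.sqrt(np.arange(1, n_fock)), 1)
ad = a.conj().T
I = np.eye(n_fock)
kappa, N_th = 0.5e6, 0.3      # decay rate (Hz) and thermal photon number

def lind(c):                  # dissipator D[c] as a matrix on vec(rho)
    cd_c = c.conj().T @ c
    return (np.kron(c, c.conj())
            - 0.5 * np.kron(cd_c, I)
            - 0.5 * np.kron(I, cd_c.T))

Lsup = kappa * (N_th + 1) * lind(a) + kappa * N_th * lind(ad)

rho0 = np.zeros((n_fock, n_fock)); rho0[2, 2] = 1.0   # start in |2>
rho_t = expm(Lsup * 20 / kappa) @ rho0.reshape(-1)    # evolve 20 decay times
n_avg = np.real(np.trace((ad @ a) @ rho_t.reshape(n_fock, n_fock)))
print(n_avg)                  # ~N_th = 0.3 after many decay times
```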
V. DISCUSSION AND CONCLUSION
In the above discussion, for simplicity, we assumed that the qubits only interact with a single mode of the resonator. However, since we did not invoke the rotating-wave approximation in our calculation, higher modes of the TLR 20,32 will also contribute to the coupling. Therefore, the interaction Eq. (13) should include a sum over multiple modes whose frequencies are below a cut-off ω_c. The cut-off is determined by a number of practical issues, e.g., by the superconducting gap, or the fact that the resonator is not strictly one-dimensional 20 . The time evolution including higher modes is of the same form as Eq. (14) but includes a product over all the relevant modes. Neglecting the small nonlinear effect due to output coupling, the frequencies of all higher modes are multiples of the frequency of the fundamental mode, ω_ñ = ñω and ω_c = ñ_c ω, and all the coupling coefficients between the qubits and the different modes of the TLR, B_{i,ñ}(t) = i g_ñ^(i) (e^{−iω_ñ t} − 1)/ω_ñ, are still zero for t = T_n = 2nπ/ω. Here g_ñ^(i) ≡ g^(i)(ω_ñ). Hence the only correction to our scheme is to include a sum over all the relevant modes in the definition of A_ij(t) in Eq. (17),
A_ij(t) = Σ_{ñ=1}^{ñ_c} (g_ñ^(i) g_ñ^(j) / ω_ñ) t .  (36)
For the low excitation modes whose wavelengths are still much larger than the qubit dimension, the homogeneous coupling assumption is still approximately valid, i.e., g_ñ^(i) ≡ g_ñ. For example, considering ñ_c = 10 for a 10 cm transmission line, around the center there is a 0.32 mm-long region where the magnetic field varies within 5%. The distance between the centers of two qubits is roughly 10 µm. This means that up to around 30 qubits can be coupled to the resonator approximately homogeneously. One can also tune Φ_e^(i) to further compensate the slight inhomogeneity. The correction to the time evolution Eq. (22) can simply be written as θ(n) = (2πn) g² (ñ_c/2)/ω². Therefore, the effect of the higher excitation modes actually amounts to increasing the coupling coefficient g → g √(ñ_c/2), which helps to reduce the operation time.
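In numbers, following the multimode correction θ(n) = (2πn) g²(ñ_c/2)/ω² stated above (ñ_c = 10 is the example value, and T_min from Eq. (27) is rescaled under the assumption that an admissible integer n remains available):

```python
# Effective enhancement g -> g*sqrt(nc/2) from the higher modes, and the
# corresponding reduction of the preparation time of Eq. (27). Illustrative.
import numpy as np

omega, g, nc = 1.0e9, 144e6, 10
g_eff = g * np.sqrt(nc / 2)
T_single = np.pi * omega / (8 * g**2)        # fundamental mode only
T_multi = np.pi * omega / (8 * g_eff**2)     # with nc = 10 modes included
print(T_single, T_multi, T_single / T_multi) # preparation time shrinks by nc/2
```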
The electric field of the higher modes has little effect on the flux qubits but will change the voltage bias of the charge qubits and couple to their σ z component. However, at the degeneracy point, where the free Hamiltonian is proportional to σ x , these coupling terms are rapidly oscillating and are expected to have a small effect on the system. However, the situation is less advantageous than for flux qubits, and higher modes should be suppressed by choosing high fundamental mode frequencies in the charge-qubit case.
For a small number of qubits, a (lumped) LC circuit can also be used as a quantum bus 33,34 to generate a GHZ state following our scheme. In this case, only a single mode contributes.
In conclusion, we have proposed a scheme to prepare an N-qubit GHZ state in a system of superconducting qubits coupled by a transmission line resonator. We have analyzed the preparation scheme for both charge qubits and flux qubits. With this method, a multi-qubit GHZ state can be prepared within the quantum coherence time. The case of flux qubits is especially favorable: the preparation time is two orders of magnitude shorter than the qubit coherence time. The preparation time can be reduced further if the coupling strength is increased to the ultra-strong coupling regime, where the coupling strength is comparable to the free qubit Hamiltonian. The preparation scheme is insensitive to the initial state of the resonator and robust to operation errors and decoherence. The coupling can be switched by dc pulses with finite rise and fall times without introducing additional errors. In addition, the scheme described in this paper utilizes a linear coupling, which is intrinsically error-free if proper dc control is achieved. Due to all these advantages, this proposal could be a promising candidate for GHZ state generation in systems of superconducting qubits.
VI. ACKNOWLEDGEMENT
FIG. 1: (Color online) Schematic diagram of our setup. (a) The qubits are coupled through a superconducting stripline resonator (the blue stripe). Each 'crossed box' denotes one qubit, which can be either a charge qubit or a flux qubit; the dashed red line shows the magnitude of the magnetic field. (b) Detailed schematic of a charge qubit. The crosses denote Josephson junctions. (c) Detailed schematic of a gradiometer-type flux qubit. The crosses denote Josephson junctions.

FIG. 2: (Color online) Time dependence of the fidelity of the prepared GHZ state for two different initial resonator states: the ground state (blue line) and the thermal state (red line) in the case of (a) two qubits, (b) four qubits. The black dots indicate the times when the resonator and qubits are effectively decoupled. The green lines limit the regime in which the fidelity is larger than 90%. The following parameter values were used: qubit frequency Ω^(i) = 10 GHz, resonator frequency ω = 1 GHz, coupling strength g = 144 MHz. The time is given in units of T_min.

FIG. 3: (Color online) Dependence of the fidelity F on the error of the coupling coefficient δ. The curves correspond to N = 2 (top), 4, 6, and 8 (bottom).

FIG. 4: (Color online) Time dependence of the error due to decoherence. ∆F is the difference of the fidelity of the prepared GHZ state with/without environmental decoherence for (a) 2 qubits, (b) 4 qubits. The red dots mark the times at which the GHZ state is prepared. The following parameter values were used: qubit frequency Ω^(i) = 10 GHz, resonator frequency ω = 1 GHz, coupling strength g = 144 MHz, qubit decay rate γ = 10 MHz, and resonator decay rate κ = 0.5 MHz. The time is given in units of T_min.
The authors acknowledge helpful discussions with A. Wallraff and Yong Li. This work was partially supported by the EC IST-FET project EuroSQIP, the Swiss SNF, and the NCCR Nanoscience.
1. Y. Makhlin, G. Schön, and A. Shnirman, Rev. Mod. Phys. 73, 357 (2001).
2. J. Q. You and F. Nori, Phys. Today 58, 42 (2005).
3. G. Wendin and V. Shumeiko, in Handbook of Theoretical and Computational Nanotechnology (ASP, Los Angeles, 2006).
4. J. Clarke and F. K. Wilhelm, Nature (London) 453, 1031 (2008).
5. M. Steffen, M. Ansmann, R. C. Bialczak, N. Katz, E. Lucero, R. McDermott, M. Neeley, E. M. Weig, A. N. Cleland, and J. M. Martinis, Science 313, 1423 (2006).
6. J. Plantenberg, P. C. de Groot, C. J. P. M. Harmans, and J. E. Mooij, Nature 447, 836 (2007).
7. S. Filipp, P. Maurer, P. J. Leek, M. Baur, R. Bianchetti, J. M. Fink, M. Göppl, L. Steffen, J. M. Gambetta, A. Blais, and A. Wallraff, Phys. Rev. Lett. 102, 200402 (2009).
8. F. Plastina, R. Fazio, and G. Massimo Palma, Phys. Rev. B 64, 113306 (2001).
9. L. F. Wei, Y.-X. Liu, and F. Nori, Phys. Rev. Lett. 96, 246803 (2006).
10. F. Bodoky and M. Blaauboer, Phys. Rev. A 76, 052309 (2007).
11. M. D. Kim and S. Y. Cho, Phys. Rev. B 77, 100508(R) (2008).
12. J. Zhang, Y.-X. Liu, C.-W. Li, T.-J. Tarn, and F. Nori, Phys. Rev. A 79, 052308 (2009).
13. A. Galiautdinov and J. M. Martinis, Phys. Rev. A 78, 010305(R) (2008).
14. B. Röthlisberger, J. Lehmann, D. S. Saraga, P. Traber, and D. Loss, Phys. Rev. Lett. 100, 100502 (2008).
15. B. Röthlisberger, J. Lehmann, and D. Loss, Phys. Rev. A 80, 042301 (2009).
16. C. L. Hutchison, J. M. Gambetta, A. Blais, and F. K. Wilhelm, Can. J. Phys. 87, 225 (2009).
17. A. Galiautdinov, M. W. Coffey, and R. Deiotte, arXiv:0907.2225.
18. F. Helmer and F. Marquardt, Phys. Rev. A 79, 052328 (2009).
19. L. S. Bishop, L. Tornberg, D. Price, E. Ginossar, A. Nunnenkamp, A. A. Houck, J. M. Gambetta, J. Koch, G. Johansson, S. M. Girvin, and R. J. Schoelkopf, New J. Phys. 11, 073040 (2009).
20. A. Blais, R. S. Huang, A. Wallraff, S. M. Girvin, and R. J. Schoelkopf, Phys. Rev. A 69, 062320 (2004).
21. A. Wallraff, D. I. Schuster, A. Blais, L. Frunzio, R. S. Huang, J. Majer, S. Kumar, S. M. Girvin, and R. J. Schoelkopf, Nature (London) 431, 162 (2004).
22. Y. Nakamura, Y. A. Pashkin, and J. S. Tsai, Nature (London) 398, 786 (1999).
23. J. E. Mooij, T. P. Orlando, L. Levitov, L. Tian, C. H. van der Wal, and S. Lloyd, Science 285, 1036 (1999).
24. T. P. Orlando, J. E. Mooij, L. Tian, C. H. van der Wal, L. S. Levitov, S. Lloyd, and J. J. Mazo, Phys. Rev. B 60, 15398 (1999).
25. F. G. Paauw, A. Fedorov, C. J. P. M. Harmans, and J. E. Mooij, Phys. Rev. Lett. 102, 090501 (2009).
26. Y. D. Wang, A. Kemp, and K. Semba, Phys. Rev. B 79, 024502 (2009).
27. J. Wei and E. Norman, J. Math. Phys. 4, 575 (1963).
28. K. Molmer and A. Sorensen, Phys. Rev. Lett. 82, 1835 (1999).
29. L. You, Phys. Rev. Lett. 90, 030402 (2003).
30. A. Uhlmann, Rep. Math. Phys. 9, 273 (1976).
31. R. Jozsa, J. Mod. Opt. 41, 2315 (1994).
32. M. Göppl, A. Fragner, M. Baur, R. Blanchetti, S. Filipp, J. M. Fink, P. J. Leek, G. Puebla, L. Steffen, and A. Wallraff, J. Appl. Phys. 104, 113904 (2008).
33. I. Chiorescu, Y. Nakamura, C. J. P. M. Harmans, and J. E. Mooij, Science 299, 1869 (2003).
34. J. Johansson, S. Saito, T. Meno, H. Nakano, M. Ueda, K. Semba, and H. Takayanagi, Phys. Rev. Lett. 96, 127006 (2006).
|
[] |
[
"Semantic Annotation and Search for Educational Resources Supporting Distance Learning",
"Semantic Annotation and Search for Educational Resources Supporting Distance Learning"
] |
[
"Mrs Nithya C #1 \nDepartment of Computer Science\nEngineering Regional Centre of Anna University Tirunelveli (\nIndia\n",
"Mr K Saravanan \nDepartment of Computer Science\nEngineering Regional Centre of Anna University Tirunelveli (\nIndia\n",
"# Student \nDepartment of Computer Science\nEngineering Regional Centre of Anna University Tirunelveli (\nIndia\n"
] |
[
"Department of Computer Science\nEngineering Regional Centre of Anna University Tirunelveli (\nIndia",
"Department of Computer Science\nEngineering Regional Centre of Anna University Tirunelveli (\nIndia",
"Department of Computer Science\nEngineering Regional Centre of Anna University Tirunelveli (\nIndia"
] |
[
"International Journal of Engineering Trends and Technology"
] |
Multimedia educational resources play an important role in education, particularly for distance learning environments. With the rapid growth of the multimedia web, large numbers of education articles video resources are increasingly being created by several different organizations. It is crucial to explore, share, reuse, and link these educational resources for better e-learning experiences. Most of the video resources are currently annotated in an isolated way, which means that they lack semantic connections. Thus, providing the facilities for annotating these video resources is highly demanded. These facilities create the semantic connections among video resources and allow their metadata to be understood globally. Adopting Linked Data technology, this paper introduces a video annotation and browser platform with two online tools: Notitia and Sansu-Wolke. Notitia enables users to semantically annotate video resources using vocabularies defined in the Linked Data cloud. Sansu-Wolke allows users to browse semantically linked educational video resources with enhanced web information from different online resources. In the prototype development, the platform uses existing video resources for education articles. The result of the initial development demonstrates the benefits of applying Linked Data technology in the aspects of reusability, scalability, and extensibility.
|
10.14445/22315381/ijett-v8p252
|
[
"https://arxiv.org/pdf/1403.0068v1.pdf"
] | 9,980,032 |
1403.0068
|
5b675f118da97a3b4be3282964744f2e4cb078ec
|
Semantic Annotation and Search for Educational Resources Supporting Distance Learning
Feb 2014
Mrs Nithya C #1
Department of Computer Science
Engineering Regional Centre of Anna University Tirunelveli (
India
Mr K Saravanan
Department of Computer Science
Engineering Regional Centre of Anna University Tirunelveli (
India
# Student
Department of Computer Science
Engineering Regional Centre of Anna University Tirunelveli (
India
Semantic Annotation and Search for Educational Resources Supporting Distance Learning
International Journal of Engineering Trends and Technology
Volume 8 Number 6, Feb 2014, Page 277. Keywords: Linked Data, Semantic search, Cloud Applications, Web services, Semantic annotation, Ontology
Multimedia educational resources play an important role in education, particularly for distance learning environments. With the rapid growth of the multimedia web, large numbers of educational video resources are being created by several different organizations. It is crucial to explore, share, reuse, and link these educational resources for better e-learning experiences. Most video resources are currently annotated in an isolated way, which means that they lack semantic connections. Thus, providing facilities for annotating these video resources is in high demand. These facilities create the semantic connections among video resources and allow their metadata to be understood globally. Adopting Linked Data technology, this paper introduces a video annotation and browser platform with two online tools: Notitia and Sansu-Wolke. Notitia enables users to semantically annotate video resources using vocabularies defined in the Linked Data cloud. Sansu-Wolke allows users to browse semantically linked educational video resources with enhanced web information from different online resources. In the prototype development, the platform uses existing educational video resources. The result of the initial development demonstrates the benefits of applying Linked Data technology in the aspects of reusability, scalability, and extensibility.
I. INTRODUCTION
In the modern world, e-learning activities are essential for distance learning in higher education. Digital video, as one type of multimedia resource, plays a vital role in distance learning. With the increasing number of video resources being created, it is important to accurately describe video content and enable searching of potential videos to enhance the quality and features of e-learning. It is critical to efficiently search all related distributed educational video resources together to support the e-learning activities of students. This paper adopts Semantic Web technology, more precisely the Linked Data approach, and identifies some primary challenges: video resources should be described precisely; descriptions of video resources should be accurate and machine-readable to support related search; and video resources should be linked to useful knowledge data from the web.
Video annotation ontology is designed by following Linked Data principles and reusing existing ontologies. It provides the foundation for annotating videos based on both time instances and durations in the video stream, allowing more precise description details to be added to the video. A semantic video annotation tool (Notitia) is implemented for annotating and publishing educational video resources based on the video annotation ontology. Notitia allows annotators to use domain-specific vocabularies from the Linked Open Data cloud to describe the video resources. These annotations link the video resources to other web resources. A semantic-based video searching browser (Sansu-Wolke) is provided for searching videos. It generates links to further videos and educational video resources from the Linked Open Data cloud and the web.
A. Cloud Computing
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models. The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming. Typically this is done on a pay-per-use or charge-per-use basis. A cloud infrastructure is the collection of hardware and software that enables the five essential characteristics of cloud computing. The cloud infrastructure can be viewed as containing both a physical layer and an abstraction layer. The physical layer consists of the hardware resources that are necessary to support the cloud services being provided, and typically includes server, storage, and network components. The abstraction layer consists of the software deployed across the physical layer, which manifests the essential cloud characteristics. Conceptually, the abstraction layer sits above the physical layer.
B. Ontology
Ontology defines a common vocabulary for researchers who need to share information in a domain. It includes machine-interpretable definitions of basic concepts in the domain and relations among them.
Sharing common understanding of the structure of information among people or software agents is one of the more common goals in developing ontologies. For example, suppose several different Web sites contain medical information or provide medical e-commerce services. If these Web sites share and publish the same underlying ontology of the terms they all use, then computer agents can extract and aggregate information from these different sites. The agents can use this aggregated information to answer user queries or as input data to other applications.
Enabling reuse of domain knowledge was one of the driving forces behind recent surge in ontology research. For example, models for many different domains need to represent the notion of time. This representation includes the notions of time intervals, points in time, relative measures of time, and so on. If one group of researchers develops such ontology in detail, others can simply reuse it for their domains. Additionally, if we need to build a large ontology, we can integrate several existing ontologies describing portions of the large domain. We can also reuse a general ontology, such as the UNSPSC ontology, and extend it to describe our domain of interest.
Making explicit domain assumptions underlying an implementation makes it possible to change these assumptions easily if our knowledge about the domain changes. A hardcoding assumption about the world in programming-language code makes these assumptions not only hard to find and understand but also hard to change, in particular for someone without programming expertise. In addition, explicit specifications of domain knowledge are useful for new users who must learn what terms in the domain mean. Separating the domain knowledge from the operational knowledge is another common use of ontologies. We can describe a task of configuring a product from its components according to a required specification and implement a program that does this configuration independent of the products and components themselves.
Analysing domain knowledge is possible once a declarative specification of the terms is available. Formal analysis of terms is extremely valuable when both attempting to reuse existing ontologies and extending them. Often ontology of the domain is not a goal in itself. Developing ontology is akin to defining a set of data and their structure for other programs to be used. Problem-solving methods, domain-independent applications, and software agents use ontologies and knowledge bases built from ontologies as data.
C. Linked Data Technology
Traditional video annotations using free-text keywords or predefined vocabularies are insufficient for a collaborative and multilingual environment. They do not properly handle annotation issues such as accuracy, disambiguation, completeness, and multi-linguality. For example, free-text keyword annotation easily fails on accuracy because it may contain spelling errors or be ambiguous. Our approach uses Linked Data to tackle the above issues in video annotations. It brings the following benefits: each vocabulary is controlled and accurately defined in the Linked Data cloud, and each owns a unique URI to distinguish it from other vocabularies, so there are no conflicts between different vocabularies and meanings.
Linked Data for improving the student experience in searching e-learning resources attracts more and more people every day because of rapidly spreading web-based systems. Consequently, the web provides not only several data sources with useful and relevant information for e-learning purposes, but also information that is not easy to retrieve, and therefore wasted data. Sometimes the information is inappropriate and must be filtered in order to be relevant. Improving the way the web is exploited is an important step in the development of e-learning technology. The web would be a useful mechanism in the learning process if we take advantage of the information that is placed there. These improvements are related to the distribution of tasks: computers can be faster than humans at searching information with organized data. For humans, the search process on the web becomes a difficult and very tedious task, as a result of the large amount of information available to be consulted. For computers, a large amount of data is not a problem.
II. PROBLEM DEFINITION
Video resources should be described precisely. It is difficult to use only one general description to accurately tell the whole story of a video, because one section of the video stream may contain plenty of information (e.g., on historical figures and hidden events in the conversations), some of which may not relate to the main points of the video when it was created. Therefore, the normal paragraph-based description process is not good enough for annotating videos precisely. A more accurate description mechanism, based on the timeline of the video stream, is required. The descriptions of the educational resources should be accurate and machine-understandable, to support related search functionality. Although a unified and controlled terminology can provide accurate and machine-understandable vocabularies, it is impossible in practice to build such a unified terminology to satisfy the different description requirements of different domains. The video resources should also be linked to useful knowledge data from the web. More and more knowledge and scientific data is published on the web by different educational organizations (e.g., Linked Open Data), so it is useful to break the teaching-resource boundaries between closed institutions and the Internet environment to provide richer learning materials to both educators and learners. ISSN: 2231-5381 http://www.ijettjournal.org
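The timeline-based description mechanism called for above can be sketched with a small data model: each annotation attaches a Linked Data URI either to a time instance or to a duration in the video stream, so that tooling can retrieve exactly the annotations that apply at a given playback time. This is an illustrative sketch under assumed names (the `Annotation` class, the example URIs, and the one-second tolerance are not from the paper):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Annotation:
    video_id: str        # identifier of the annotated video
    concept_uri: str     # Linked Data URI describing the content
    start: float         # start time in seconds (an instant if `end` is None)
    end: Optional[float] = None  # optional end time for a duration

    def active_at(self, t: float) -> bool:
        """True if this annotation applies at playback time `t`."""
        if self.end is None:
            return abs(self.start - t) < 1.0  # instant: one-second tolerance
        return self.start <= t <= self.end

annotations = [
    Annotation("java-tutorial",
               "http://dbpedia.org/resource/Inheritance_(object-oriented_programming)",
               120.0, 240.0),
    Annotation("java-tutorial",
               "http://dbpedia.org/resource/Class_(computer_programming)",
               90.0),
]

def annotations_at(video_id: str, t: float):
    """URIs of all annotations active at time `t` of the given video."""
    return [a.concept_uri for a in annotations
            if a.video_id == video_id and a.active_at(t)]

print(annotations_at("java-tutorial", 150.0))
```

Searching such records by time point is what lets a player skip directly to the instants or durations a teacher has marked, rather than relying on one paragraph-level description.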
III. RELATED WORK
A. A Lightweight Approach to Semantic Annotation of Research Papers
This work presents a novel application of a semantic annotation system, named Cerno, to analyse research publications in electronic format. Specifically, it addresses the problem of providing automatic support for authors who need to deal with large volumes of research documents. To this end, the authors developed Biblio, a user-friendly tool based on Cerno. The tool directs the user's attention to the most important elements of the papers and provides assistance by automatically generating a list of references and an annotated bibliography given a collection of published research articles. The tool's performance has been evaluated on a set of papers, and preliminary evaluation results are promising.
B. Video Annotation through Search and Graph Reinforcement Mining
A graph reinforcement method driven by a particular modality (e.g., visual) is used to determine the contribution of a similar document to the annotation target. The graph supplies possible annotations of a different modality (e.g., text) that can be mined for annotations of the target. Multimedia annotation algorithms can be said to be supervised or unsupervised based on whether they use known training data. An annotation method can also be described as a computer vision approach if it builds word-specific models from low-level visual features, or a data mining approach if it mines correlations among annotations or propagates existing information. The graph reinforcement technique represents an inductive learning process that uses the weak predictions afforded by each similar video to create a stronger prediction of appropriate annotations for the set of videos.
C. Semantic Video Search Using Natural Language Queries
The indexing process assumes that the video annotations are made from a fixed set of vocabularies that change infrequently. Although this process can be efficient, the fixed set of vocabulary may introduce a gap between user's knowledge and indexed annotations, especially in the education environment, in which videos are often annotated by different groups of teachers or students, who may apply different annotation terms to the same video in the context of different courses and key points. Ontology is a broader knowledge model with a reasoning mechanism that facilitates knowledge sharing on the semantic web. The knowledge representation language is used to create a set of terms and assumptions (axioms) about the meanings of the terms as well as to specify classes, properties and relationships between classes and objects in the domain.
D. Multimodal Fusion for Video Search Re-ranking
Videos are traditionally searched by syntactic matching mechanisms. Recently, with more videos being annotated or tagged in the Linked Data manner, educationists have begun to search videos in a more Semantic-Web oriented fashion. The two major approaches are the semantic indexing process and the natural language analysis process.
E. Semantic Web for Content Based Video Retrieval
The natural language analysis process focuses more on adding semantic tags to the user's search inputs. However, most of these approaches require machine learning mechanisms to assist dynamically adding tags. Hence, they restrict their applications to small and closed domains of discourse.
IV. PROPOSED SYSTEM
Video annotation ontology is designed by following Linked Data principles and reusing existing ontologies. It provides the foundation for annotating videos based on both time instances and durations in the video stream. This allows more precise description details to be added to the video.
A semantic video annotation tool (Notitia) is implemented for annotating and publishing educational video resources based on the video annotation ontology. Notitia allows annotators to use domain specific vocabularies from the Linked Open Data cloud to describe the video resources. These annotations link the video resources to other web resources.
A semantic-based video searching browser (Sansu-Wolke) is provided for searching videos. It generates links to further videos and educational resources from the Linked Open Data cloud and the web.
A. Notitia -The Annotation Tool
The first tool is Notitia, which handles the input of annotations from users. Notitia provides a simple Web browser interface split into four main areas:
- a video player
- a list of videos, and existing annotations for the current video
- a set of controls for the video player, and input widgets to enter the annotations
- a set of panels to aid in finding suitable Linked Data annotations
Fig. 2 Architecture of Notitia
In operation, the user first selects which of the videos to annotate. The videos of interest at the moment are transcoded at the request of the course team, and we serve them directly from the Notitia server, but the architecture supports using video served from anywhere on the Web. Having selected a video, the user is presented with a timeline of any annotations previously made by herself and others: the user can skip to particular instants or durations noted in the annotations.
Different vocabularies which describe the same thing are linked using the owl:sameAs property as an equivalence definition. Meanwhile, a number of semantic annotations are used to build the relationships between different vocabularies, such as rdfs:subClassOf and rdfs:seeAlso. Once a vocabulary is applied to an annotation, the related vocabularies are associated with the annotation. Therefore, the collaborative and multilingual issues are well addressed. The most basic way to create an annotation is simply to pause the video at the appropriate point, enter a duration if appropriate, and add a Semantic Web/Linked Data URI. This is sent to the server, and the annotation is recorded.
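The owl:sameAs links described above can be resolved at search time by treating sameAs as an equivalence relation: URIs connected through sameAs statements collapse into one group, so an annotation made with one vocabulary's URI is still found when searching with an equivalent URI from another vocabulary. A minimal sketch using a union-find structure (the example URIs are only illustrative):

```python
# Union-find over URIs connected by owl:sameAs statements.
parent = {}

def find(uri):
    """Return the representative URI of the equivalence group."""
    parent.setdefault(uri, uri)
    while parent[uri] != uri:
        parent[uri] = parent[parent[uri]]  # path halving
        uri = parent[uri]
    return uri

def same_as(a, b):
    """Record an owl:sameAs statement, merging the two groups."""
    parent[find(a)] = find(b)

# Two vocabularies describing the same concept, linked in the LOD cloud.
same_as("http://dbpedia.org/resource/Semantic_Web",
        "http://de.dbpedia.org/resource/Semantisches_Web")

def equivalent(a, b):
    """True if the two URIs are connected through sameAs links."""
    return find(a) == find(b)

print(equivalent("http://dbpedia.org/resource/Semantic_Web",
                 "http://de.dbpedia.org/resource/Semantisches_Web"))
```

This is why annotations made in different languages, or with different but linked vocabularies, can still be retrieved by the same search.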
Fig. 3 Annotation of Video Resource
Annotations can optionally be marked as applying to something visible in the video, audible in the soundtrack, or to the conceptual subject matter of the video. Since finding appropriate URIs is non-trivial, the fourth part of the interface is dedicated to helping the user find them. We implemented the system in Java and JSP, using Sesame as an RDF quad store and RDF2Go as an abstraction over the store. Notitia provides programmatic APIs in the form of a SPARQL endpoint for querying, and RESTful interfaces for adding and removing annotations and exploring existing ones. It is our intent that Notitia's RDF becomes part of the Linked Open Data cloud. In the client we use JavaScript with the Yahoo YUI library and jQuery, and Flowplayer for video playback.
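The SPARQL endpoint mentioned above answers basic graph patterns over the annotation triples. Its behaviour can be illustrated with a tiny in-memory triple store; the predicate names (`oac:hasTarget`, `oac:hasBody`) and identifiers are stand-ins for this sketch, not necessarily the ontology Notitia uses:

```python
# Minimal triple store with single-pattern matching, illustrating what a
# SPARQL query such as
#   SELECT ?annotation WHERE { ?annotation oac:hasTarget <video> }
# returns over annotation data.
triples = [
    ("ann:1", "oac:hasTarget", "video:java-tutorial"),
    ("ann:1", "oac:hasBody",
     "http://dbpedia.org/resource/Inheritance_(object-oriented_programming)"),
    ("ann:2", "oac:hasTarget", "video:python-intro"),
]

def match(pattern):
    """Match one (s, p, o) pattern; None acts as a SPARQL variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# All annotations targeting the Java tutorial video:
hits = [s for s, _, _ in match((None, "oac:hasTarget", "video:java-tutorial"))]
print(hits)
```

A real endpoint evaluates conjunctions of such patterns with variable bindings, but each pattern is resolved against the store in essentially this way.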
B. Sansu-Wolke -Semantic Search Browser
The second tool is Sansu-Wolke, used to search the annotated videos and explore related materials through the Linked Data approach. The Sansu-Wolke architecture has two layers: the application layer, responsible for presentation and interaction, and the semantic data layer, which finds and integrates data that has been published in the LOD cloud by different organizations, including annotation data from Notitia.
The Semantic Web data is obtained through RESTful services and SPARQL endpoints. The application layer allows the user to query and navigate the video search results using the semantic data. Sansu-Wolke is designed to help the learner in navigating learning resources using Linked Data, so its first task is to connect the learner with appropriate concepts in the Semantic Web. Having found relevant conceptual URIs in the Linked Data cloud, Sansu-Wolke can then allow the user to navigate the links. Of course, Sansu-Wolke can query Notitia for videos that are tagged using the URIs of interest, but it can also find related video material through YouTube and OpenLearn services. The user can be prompted with related concepts, web documents, and other resources by Sansu-Wolke's analysis of neighbouring points in the LOD cloud, including RESTful RDF services.
In addition to the Linked Data Services that are applied in the Notitia process, some other Linked Data Services and non-semantic services are used in Sansu-Wolke. The OU Linked Data service, currently under development, aims to extract and interlink previously available educational resources in various disconnected institutional repositories of the Open University and publish them into the Linked Open Data cloud. A semantic search engine crawls and collates the Semantic Web (including microformats) and provides services such as keyword-based searching for linked data and accessing cached fragments of the Semantic Web. For example, when a user copies and pastes the learning content from lecture notes into the text field, all related knowledge concepts are listed, which enables the user to select further video searching activities. The Google map service is deployed for gathering the geographic information about a place so that the user may click on the map to search related videos. The search results not only contain the OU educational video resources with their annotations but also include relevant learning resources about the videos and related videos from other services.
The Semantic Data Mining and Reasoning Layer has three different types of mining and reasoning processes: syntax parsing, document analysis, and annotation inferencing. Syntax parsing is the basic reasoning process to match syntax-based keywords to a URI identifier from the Linked Open Data cloud. The syntax parsing process is triggered by the basic concept search functionality. The result of the parsing is an RDF description including the URI identifier. These syntax parsing results are the fundamental elements that drive the further video repository query and advanced reasoning. The document analysis process is used to analyse a document that guides the study topic. Typical documents are lecture notes and online webpages. The analysis results are key learning points, knowledge, and concepts with their URI identifiers from the Linked Open Data cloud. These key points are matched to URI identifiers in DBpedia, Wikipedia, and Freebase to gain further related educational resources. The annotation inferencing process uses the tree-structure advantages of the ontology-based semantic annotations. By using the annotation reasoning process, the search results are more accurate and more widely covered. Although different video resource providers may use different Linked Data vocabularies to annotate their videos, they are linked together as search results through the Sansu-Wolke browser.
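The annotation inferencing step exploits the tree structure of the ontology: a search for a broad concept also matches videos annotated with any of its subclasses. That expansion over rdfs:subClassOf edges can be sketched as follows (the class names are hypothetical):

```python
from collections import defaultdict, deque

# (child, parent) pairs, i.e. rdfs:subClassOf statements.
subclass_of = [
    ("ex:Inheritance", "ex:OOPConcept"),
    ("ex:Polymorphism", "ex:OOPConcept"),
    ("ex:OOPConcept", "ex:ProgrammingConcept"),
]

children = defaultdict(list)
for child, parent in subclass_of:
    children[parent].append(child)

def expand(concept):
    """Return the concept plus all of its (transitive) subclasses."""
    seen, queue = {concept}, deque([concept])
    while queue:
        for c in children[queue.popleft()]:
            if c not in seen:
                seen.add(c)
                queue.append(c)
    return seen

# A search for the broad concept matches videos annotated with
# any of its subclasses as well.
print(sorted(expand("ex:OOPConcept")))
```

Querying the annotation store with the expanded set instead of the single search term is what makes the results "more widely covered" without losing accuracy.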
V. EVALUATION AND RESULT
The members all use e-learning systems on a daily basis for teaching distance learning students. In addition, three of them have experience of using textual tools for annotating video learning materials. The students, aged 18-28, are studying undergraduate courses at first-year level. None of the students had any experience of using professional educational video searching tools before, but they often use Google or Yahoo when searching online. The evaluation process includes four steps.
Demonstration: we used one video as an example to show the annotation functionalities that use different Linked Data resources and web services. For students, we demonstrated how to use Sansu-Wolke with both basic concept search and advanced search.
Practice: two practice tasks are designed. One allows staff to annotate a certain video, and the other allows students to search the videos related to the topics that the staff annotated.
Evaluation: we designed two sets of tasks for evaluating Notitia and Sansu-Wolke. Each set of tasks includes simple activities and more advanced activities, such as using two different URIs to annotate one concept in the video or collaborative annotation. Each task has a 15-minute limit, and we monitored each user's time spent on each of the tasks.
Feedback collection and analysis: we used two evaluation questionnaires to collect feedback from users of Notitia and Sansu-Wolke, inquiring about the quality, performance, and usability of the tools.
A. Notitia-The Annotation Tool
All members used an example "Java Tutorial" video. First, we asked them to use free text or any references they would like in order to annotate the features in the video stream. Second, we asked them to use a YouTube or Google URI to identify the features in the video stream. Using the same video, they then tried to find the concepts behind the video and annotate them. To do this, they should use the mode option to correctly identify whether the event is clearly introduced in the video stream, the conversation, or the audio stream.
Using all the different Linked Data suggestions from the suggestion panel, they annotated the same video with concepts relevant to the "class concept" (at least three annotations), and then tried to find another video related to the "class concept" by searching for the Linked Data annotations. We asked people who sat next to each other to annotate the 2-minute "Inheritance" video stream together: first, monitoring whether they both annotated the same items in the video; second, checking whether they used the same Linked Data URIs for the same annotations; third, if the same items were annotated with different Linked Data URIs, seeing whether they could follow the links to find each other; and finally, checking whether they could correct each other or agree to delete or keep the duplicated annotations.
After the evaluation, we provided a Notitia evaluation questionnaire which consists of a rating for interface simplicity and usability, a rating for quality and accuracy, identifying the most used annotation resources, identifying the most used annotation terms, and comments on using Linked Data technologies.
B. Sansu-Wolke -Semantic Search Browser
Whether students think Sansu-Wolke can help their studies is the most interesting part of our evaluation task for SEG. The Sansu-Wolke evaluation tasks contain: using the basic search functions to find as many videos as possible (until no more related new videos can be found) that relate to the topics and are stored in the OU video repository, where we examined the quantity of the videos found; using the place search function and entering a keyword to search for related videos; and going through all video resources from different video search providers to identify at least five videos that relate to the topics.
Using the person search function, enter any person's name that you believe is related to the class, and identify at least two videos from different resources and two URIs that describe the person searched for in relation to the topics. Use the map search function to search for videos and information related to the topic. Take a particular piece of text content from a lecture note used for the class, and search the annotation resources for related and useful resources to prepare the class based on the highlights in the lecture note.
By monitoring how long it takes the students to identify all the related videos and useful information for a history lecture note (500 words), we found that most of the students could finish this task within 10 minutes. The videos or data voted most useful come from the OU linked open data set, Yahoo, NPTel, and Google. The most important lesson learned from the evaluation at this stage is that students are more interested in data that comes directly from education-oriented services rather than social information websites such as YouTube.
The other parts of the Sansu-Wolke evaluation questionnaire consist of ratings of the usability, quality, and accuracy of the tool. The major concern is the response time of some searches at runtime. As Sansu-Wolke is a search tool that invokes different Linked Data services at the same time after a search request is received, the services' response times differ because of the quality of their own services and the servers' runtime workload. This is a trade-off between response time and accuracy: since we invoke different Linked Data services at runtime, the newest information and varied data are found, which is reflected in the high accuracy satisfaction rate in the survey. Note that since both tools are still in prototype testing, there is still much work to be done to integrate them into the current OU distance learning systems and processes.
The limitation of our evaluation is that we cannot evaluate how much Sansu-Wolke can improve tuition without applying it in live course teaching and examination processes.
VI. CONCLUSION
This paper illustrated the Notitia and Sansu-Wolke platform, which uses Linked Data technologies to semantically annotate and search educational video resources from the Open University video repository and link the videos to other education resources on the web. In the semantic annotation process, an annotation ontology is defined to support Linked Data annotations; dynamic annotation URI suggestions are fully supported by integrating Linked Data services into the Notitia interface; and collaborative functionalities are implemented to enhance the teamwork capability.
In the semantic search process, the search methods are based on the data retrieved through Linked Data services and URIs, which link different resources together to enrich the original video search results. Sansu-Wolke shows that e-learning resources distributed across different education organizations can be linked together to provide more value-added information.
In education, collaboration has long been considered an effective approach for generating knowledge. Around the worldwide web, there are many contributions in terms of e-learning resources. These resources can meet the information needs of people and can enhance the e-learning world. However, there are big challenges related to this resource information. One of them is the correct appropriation of this information, and another is the appropriate storage and use of these resources.
In this paper, we make use of the criteria selected by the students to create an approach for a collaborative e-learning environment, in which information provided by the LOD cloud diagram can be explored and used by different kinds of e-learners. We also open the possibility of extending the Linked Open Data community, giving e-learners a place in which they can contribute not only new resources but also qualifications of the information previously linked. This information can be used by other people in order to enlarge the collaboration regarding knowledge around the worldwide web.
Fig. 1 Architecture of Linked Data Technology with Notitia and Sansu-Wolke. Annotations are accurate and free of spelling errors, ambiguity, and multi-linguality issues; the semantics of the annotations are processable by machine, which fosters the accuracy of searching and collecting related learning resources; the educational resources from different educational institutions are shared, reused, and semantically connected.
Fig. 4 Architecture of Sansu-Wolke
Fig. 5 Search Result of Annotated Video
|
[] |
[
"SOLID PROPELLANTS",
"SOLID PROPELLANTS"
] |
[
"B P Mason \nDEPARTMENT OF PHYSICS\nNAVAL POSTGRADUATE SCHOOL\nMONTEREY\n93943-5216CA\n",
"C M Roland \nDEPARTMENT OF PHYSICS\nNAVAL POSTGRADUATE SCHOOL\nMONTEREY\n93943-5216CA\n"
] |
[
"DEPARTMENT OF PHYSICS\nNAVAL POSTGRADUATE SCHOOL\nMONTEREY\n93943-5216CA",
"DEPARTMENT OF PHYSICS\nNAVAL POSTGRADUATE SCHOOL\nMONTEREY\n93943-5216CA"
] |
[
"NAVAL RESEARCH LABORATORY"
] |
Solid propellants are energetic materials used to launch and propel rockets and missiles. Although their history dates to the use of black powder more than two millennia ago, greater performance demands and the need for ''insensitive munitions'' that are resistant to accidental ignition have driven much research and development over the past half-century. The focus of this review is the material aspects of propellants, rather than their performance, with an emphasis on the polymers that serve as binders for oxidizer particles and as fuel for composite propellants. The prevalent modern binders are discussed along with a discussion of the limitations of state-of-the-art modeling of composite motors.
|
10.5254/rct.19.80456
| null | 91,184,014 |
1904.01510
|
9155b82ba419c27acffbdca990e5128513ca3c4f
|
SOLID PROPELLANTS
2019
B P Mason
DEPARTMENT OF PHYSICS
NAVAL POSTGRADUATE SCHOOL
MONTEREY
93943-5216CA
C M Roland
DEPARTMENT OF PHYSICS
NAVAL POSTGRADUATE SCHOOL
MONTEREY
93943-5216CA
SOLID PROPELLANTS
NAVAL RESEARCH LABORATORY
WASHINGTON, DC, 2019. doi: 10.5254/rct.19.80456
distinguished from double-base propellants, 1 typically composed of nitrocellulose (NC) and nitroglycerin, used in smaller charges for firearms and mortars. The advantages of solid rocket propellants include: (i) simplicity, which is important for maintenance costs and savings in high production rate systems; (ii) storage stability, with service lifetimes that can be as long as 30 years; (iii) resistance to unintended detonation; (iv) reliability, related to their simplicity and chemical stability; and (v) high mass flow rates during launch, and consequently high thrust (propulsion force), a requirement for the initial phase of missiles, all of which use solid propellant boosters. Two disadvantages of solid propellants are the difficulty in varying thrust on demand (i.e., solid fuel rockets generally cannot be throttled or operated in start-stop mode) and relatively low specific impulse (time integral of the thrust per unit weight of propellant), I_sp, in comparison with liquid fuel motors. These preclude their use as the main propulsion method for commercial satellites and space probes, although solid rocket motors (SRM) have a long history as boosters. They also find application in aircraft using jet-assisted takeoff (JATO) to launch quickly or when overloaded. Initial experiments with gliders date to the 1920s, and SRM were developed for JATOs in World War II for aircraft using short runways and for rudimentary barrage rockets.
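The definition of specific impulse given above (total impulse per unit propellant weight) can be illustrated with a short Python sketch; the thrust, burn time, and propellant mass below are assumed, illustrative values, not figures from the text:

```python
# Illustrative sketch: specific impulse (Isp) as total impulse per unit
# propellant weight, and its relation to effective exhaust velocity.
# All motor figures are assumed for illustration.

G0 = 9.80665  # standard gravity, m/s^2


def specific_impulse(total_impulse_Ns, propellant_mass_kg):
    """Isp in seconds: total impulse divided by propellant weight."""
    return total_impulse_Ns / (propellant_mass_kg * G0)


def exhaust_velocity(isp_s):
    """Effective exhaust velocity in m/s from Isp in seconds."""
    return isp_s * G0


# A motor producing a constant 100 kN of thrust for 60 s while burning
# 2400 kg of propellant:
isp = specific_impulse(100e3 * 60.0, 2400.0)
print(f"Isp = {isp:.0f} s")   # ~255 s, in the range typical of solid motors
print(f"v_e = {exhaust_velocity(isp):.0f} m/s")
```

The low I_sp of solid propellants relative to liquid engines (which can exceed 400 s) follows directly from the lower effective exhaust velocity of their combustion products.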
Rocket propellants differ from other fuels by not requiring an external source of oxygen; the oxidizer is a component of the propellant. SRM are by far the most widely used engines for missile propulsion. A related motor type is the ramjet, which typically uses liquid fuel, although solid-fuel ramjets are in limited use. In a ramjet, oxygen is brought in from the surrounding atmosphere. 2 A variation on ramjets is the turbojet motor, which uses a compressor to enhance air speed, allowing subsonic operation. Hybrid rocket motors combine solid and liquid propellants, affording throttling and start-stop capabilities lacking in conventional SRM. 3 Other systems include liquid propellants, used primarily for space applications, and nuclear motors, which offer enormous energy densities and the largest specific impulse of any engine. Nuclear systems have been limited to surface ships and submarines, although very recently there have been unverified claims of nuclear-propelled missiles and underwater drones. 4,5 The performances of the propulsion systems employing inert binders are compared in Figure 1 (adapted from ref 6), which shows representative specific impulse values versus missile speed. Specific impulse is a measure of motor efficiency, and for solid propellants, this depends on operating conditions, such as the combustion chamber pressure, nozzle expansion rate, and the
II. BACKGROUND
The earliest chemical explosives, dating to the first millennium AD, were based on gunpowder (''black powder''), a mixture of potassium nitrate (saltpeter), sulfur, and charcoal (or less often coal). Used for fireworks (Chinese ''fire drug'') and ordnance, gunpowder was limited to moderate propellant grain sizes, because of the difficulty in compressing large quantities without introducing cracks or holes that caused erratic combustion. Although the story is perhaps apocryphal, the first astronaut was reputed to have used solid propellants (Figure 2). (The launch vehicles for the first Soviet and American astronauts employed liquid fuel engines.) A major advance in the late 19th century was the development of ''smokeless powder,'' based on NC, nitroglycerin, and/or nitroguanidine. The term derives from the low concentration of particulates, and thus smoke, among the combustion products. Many developments ensued, mainly directed to improving the stability and reliability of smokeless powder. Note that outside of the United States, smokeless powder is referred to simply as propellant, a generic term for proprietary versions such as Ballistite and Cordite.
Presently, solid propellants are used for the launch systems of many civilian and military rockets, 7 mainly because of their greater safety and reliability in comparison with liquid fuel. Early booster charges were relatively small (<30 kg); in comparison, each booster on the Space Shuttle had 500 000 kg of solid propellant. The thrust-to-weight ratio is a dimensionless quantity that indicates a vehicle's acceleration capability. For a solid propellant booster rocket, it can exceed a ratio of 100, which is two orders of magnitude greater than for supersonic fighter aircraft. The largest SRM were the two booster rockets on NASA's Space Launch System (SLS). Each booster burned six tons of poly(butadiene-acrylonitrile)/ammonium perchlorate (AP) propellant per second, achieving a combined maximum thrust of almost 40 MN. The design of the SLS included three additional SRM systems: a jettison motor, an abort system, and an attitude control motor. Other systems using solid propellant motors as boosters and retrorockets (used for deceleration and turning) include the Atlas V and Delta IV Medium+ rockets. Solid propellants also serve as the primary thrust system in missile defense systems such as the Aegis and Patriot missiles and the Minuteman III ICBM. The European commercial launch vehicles Ariane and Vega employ multiple solid rocket boosters. The most common application of solid propellant is automobile airbags, although the propellant is neither a composite nor a polymer. A variety of compounds, including sodium azide mixtures, nitroguanidine, and tetra- and triazoles, are used. In response to various sensors monitoring acceleration, impact, wheel speed, and so forth, electrical ignition of the propellant causes gas production that inflates the airbag. The time from vehicle impact to full deployment is less than 0.1 s. The largest recall in automotive industry history occurred when ammonium nitrate (AN) was used as the propellant. Exposure to moisture or heat destabilizes AN, with its subsequent reaction transpiring too fast. Rather than inflating the airbag, the rapid gas evolution shatters the steel case containing the airbag. About 70 million vehicles in the United States were affected by the recall, with 20 deaths worldwide attributed to the faulty airbags.

FIG. 2. -Wan Hu was the first recorded astronaut (ca. 1500 AD). Forty-seven solid rocket motors based on potassium nitrate, sulfur, and charcoal contained in bamboo tubes were simultaneously lit, resulting in a successful launch or a catastrophic explosion, depending on the account. In neither version was Hu ever seen again. (Image courtesy of L. T. DeLuca, Politecnico di Milano.)
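The thrust-to-weight ratios quoted above can be checked with a one-line calculation; this is a minimal sketch, and the thrust and mass values are hypothetical, chosen only to show a booster-class ratio above 100:

```python
# Thrust-to-weight ratio: dimensionless, thrust divided by vehicle weight.
# The thrust and mass below are assumed for illustration only.

G0 = 9.80665  # standard gravity, m/s^2


def thrust_to_weight(thrust_N, mass_kg):
    """Ratio of thrust to the weight (mass * g0) of the vehicle."""
    return thrust_N / (mass_kg * G0)


# A hypothetical booster delivering 4 MN of thrust on a 3000 kg motor
# comfortably exceeds the ratio of 100 cited for solid boosters:
print(thrust_to_weight(4e6, 3000.0))  # ~136
```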
III. INTERNAL AERODYNAMICS OF SRMs
Considerable effort is devoted to developing accurate descriptions of the internal aerodynamics of SRM, with the objective of predicting operation during ignition, steady-state operation, and termination upon exhaustion of the fuel. Particularly in the unsteady regimes, during which the pressure changes strongly with time, a detailed analysis of flow inside the combustion chamber is required. Even during steady-state burning, which occupies most of the operating time, the propellant burn rate can be affected by pressure, temperature, and gas flow rate. Important issues are motor stability, the relationship of pressure to thrust, and the velocity of the combustion products. The chemistry is complex, 8 involving flowing reactants that reach temperatures that can exceed 3000 K, at pressures as high as 5 MPa (for large booster rockets). Because the combustion area varies during flight, thermodynamic conditions are constantly changing. The severity of these conditions precludes detailed measurements inside the combustion chamber; usually only exit pressures are available. Modeling involves substantial computation times and is hindered by the accuracy of input data and uncertainties regarding the effects of flow turbulence, interactions between metal droplets, unsteady propellant combustion, vibration of motor components, and so forth. For these reasons, calculations intended to optimize motor design are semiempirical and rarely can be quantitatively validated.
IV. PERFORMANCE OF SOLID ROCKET PROPELLANTS
A rocket is defined as a device or vehicle for which the accelerating force (thrust) is achieved by expelling mass. Strictly speaking, a rocket is a means of propulsion, distinguished from a missile, which is something that is propelled (for example, by a rocket). The essential components of SRM are the combustion chamber and its exit port (the nozzle), the igniter, and the propellant. Typically, the propellant consists of oxidizing particles (e.g., AP or AN) embedded in a fuel, which includes both a polymeric binder and metal powder. Light metals such as aluminum are used; their function is to enhance the degree of combustion of the binder. 9,10 The particles comprising the propellant are referred to as the grain. In practice, most solid SRM use nonenergetic binders, which are polymers that require an oxidizer to burn.
By far, the most popular inorganic oxidizer is AP. It produces completely gaseous products, an advantage over potassium nitrate and potassium perchlorate. AN is an alternative to AP but is less energetic and has a slower burn rate. 11 Dual-oxidizer systems, such as AP or AN with ammonium dinitramide (ADN), can offer advantages, such as enhanced specific impulse and lower HCl production. 12 The properties of various oxidizers relevant to propellants are compared in Table I. 13 The use of oxidizers distinguishes composite propellants from inherently energetic materials such as NC. 14 When conventional energetic binders cannot provide the required structural integrity for a particular motor application, inert polymers, such as polyurethanes and polyethers, can be made energetic by introduction of reactive groups. 15 Binders based on polymer networks have superior mechanical properties, with the crosslinking providing shape stability and resistance to cracking and void formation, while also serving as fuel. The amount of binder is determined by the size of the combustion chamber. Upon propellant ignition, combustion products form that are emitted through the nozzle. The thrust for a given motor can be varied by changing the composition of the energetic materials or their burn rates, the latter by changing, for example, the nozzle geometry. Early inert binders were blends of a high concentration of potassium perchlorate with asphalt (see discussion below). The mechanical properties were poor, and these have been replaced entirely with formulations based on polymer networks, most commonly polybutadiene with functional end-groups. An example is hydroxyl-terminated polybutadiene (HTPB), which is reacted with isocyanates or epoxies to form networks. By design,
the crosslinking agent is usually below stoichiometric levels to ensure a significant soluble fraction of polymer remains after network formation. The main requirement of the fuel is that its oxidation is strongly exothermic and accompanied by the production of gaseous products, with the heat and kinetic energy of the effluents serving as the propulsion mechanism. The relation describing this is the classic rocket equation obtained from conservation of momentum and Newton's second law:
v_f - v_i = v_e ln(m_i / m_f)          (1)
where v is the rocket velocity and m its mass, with subscripts i, f, and e denoting initial conditions, post burnout, and the exhaust velocity, respectively. The principal parameters in the engineering of a solid propellant are the burn rate and grain design (e.g., particle size and shape). 7 Except during ignition or when instabilities arise, the burn rate of conventional SRM is essentially constant ( Figure 1) and depends only on the initial temperature and pressure. In pulsed rocket motors, the solid propellant is partitioned into multiple sections. 16 Propellant in the forward segment burns quickly, providing rapid acceleration, with the remaining segments ignited on command. The slower rates of acceleration reduce stresses on the rocket, and the partitioning provides control, including the ability to stop and subsequently reignite the motor. Another method to control burn rate 17 is to incorporate a surfactant into the grain that causes the combustion of the propellant to depend on pressure. At a sufficiently high pressure, the combustion of the propellant is extinguished. Pressure affects combustion rates because of its effect on the gas phase above the combustion surface. 18,19 Ideally, only a thin layer of propellant at the surface combusts, so that the grain temperature prior to combustion remains close to the external temperature.
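A back-of-the-envelope evaluation of the rocket equation (Eq. 1) can be sketched in a few lines; the exhaust velocity and masses below are assumed, illustrative values:

```python
import math


def delta_v(v_e, m_i, m_f):
    """Velocity gain from Eq. 1: v_f - v_i = v_e * ln(m_i / m_f).

    v_e: effective exhaust velocity (m/s)
    m_i: initial mass (propellant + structure)
    m_f: mass after burnout
    """
    return v_e * math.log(m_i / m_f)


# Assumed figures: v_e = 2450 m/s and a mass ratio m_i/m_f of 5
print(f"{delta_v(2450.0, 1000.0, 200.0):.0f} m/s")  # ~3943 m/s
```

The logarithmic dependence on the mass ratio is why high propellant loading (large m_i/m_f) matters so much: doubling the exhaust velocity doubles the velocity gain, but doubling the mass ratio adds only v_e ln 2.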
A typical solid propellant formulation is shown in Table II. Filler volume content is typically in the range of φ = 70 to 80%, although concentrations as high as 90% have been used. Higher levels of solids loading obviously make processing and casting of the propellant mixture more difficult. For the very high particle concentrations relevant to solid propellants, the viscosity, η, of the precured binder is critical. It must be low enough to allow processing but sufficiently high to facilitate dispersion of the particles. 20 The processing problem can be overcome to some extent by using a blend of small and large particles, with small particles occupying the interstitial regions around larger particles. Thus, multimodal size distributions are used in solid propellants to increase the packing fraction, otherwise limited to ~64% by volume for a uniform particle distribution. 21 Having large particles 10-fold greater in size than the smaller particles enables the mixture to be treated as a suspension of the former in fluid containing the latter. The details of the particle distribution are important, potentially affecting binder adhesion, ''hot-spot'' formation during combustion, and agglomerate uptake by the gas flow. 22 Both modeling and experiments have been brought to bear to characterize the particulate structure in solid propellants. [23][24][25][26] Figure 3 shows a representative size distribution of the HTPB solid propellant with φ = 0.76 of AP and aluminum flake. 27 The maximum amount of propellant yields the best motor performance; however, grain design reflects a compromise between the maximum packing, φ_m, and the desired thrust. A greater propellant content increases the bore diameter and ignition area at the nozzle entrance, critical factors in achieving the desired thrust.
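The steep rise of suspension viscosity as solids loading approaches maximum packing, and the benefit of raising the maximum packing fraction with multimodal particle blends, can be illustrated with the Krieger-Dougherty relation. This is a common empirical model, not one the review itself invokes, and all parameter values below are assumed:

```python
def relative_viscosity(phi, phi_m, intrinsic=2.5):
    """Krieger-Dougherty: eta/eta_0 = (1 - phi/phi_m)^(-[eta]*phi_m).

    phi:       particle volume fraction
    phi_m:     maximum packing fraction
    intrinsic: intrinsic viscosity [eta] (2.5 for rigid spheres)
    """
    return (1.0 - phi / phi_m) ** (-intrinsic * phi_m)


# A monomodal bed of spheres (phi_m ~ 0.64) cannot even reach phi = 0.76;
# a multimodal blend that raises phi_m keeps the uncured mix castable.
# Viscosity relative to the neat binder at phi = 0.76:
for phi_m in (0.80, 0.85, 0.90):
    print(phi_m, round(relative_viscosity(0.76, phi_m), 1))
```

The output falls sharply as φ_m grows, which is the quantitative rationale for the bimodal AP size distributions described above.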
V. POLYMERIC BINDERS
The polymer in a solid propellant functions as both the binder for the ingredients of the grain and as a fuel. The first requires a polymer of sufficient strength, but it should have low viscosity prior to curing (crosslinking or chain extension) to facilitate mixing and casting. An obvious requirement of any fuel is a high specific energy yield, and for propellants, the combustion products must be gaseous. Particularly for tactical rockets, in which the propellant is exposed to varying ambient temperatures, an insensitivity of burn rate to temperature and pressure is advantageous. Other requirements include storage stability and good adhesion to the oxidizer and other filler particles and (for case-bonded propellants) to the liner. Note that the compositions of SRM binders are similar to those used for plastic-bonded explosive warheads. 28 The most historically important polymers for modern propellant applications are discussed below.
A. NITROCELLULOSE

SRM using NC occupy an important place in the history of propellants. NC is technically the first modern SRM polymer binder, used initially in nearly pure form (single base) or plasticized with nitroglycerin (double base) for small arms and larger bore weapons in the late 19th century. 29 Early NC-based barrage rockets of the Second World War used essentially the same extruded powders as their gun counterparts. Although accuracy was poor, NC-based rockets were well-regarded when used for concentrated artillery bombardment because of their psychological effect on the enemy. 30 It is worth noting that while the elastomeric polymer binders discussed below were in development, parallel efforts using NC-based composite rocket motors reached a high level of maturity. These motors used NC ''casting powders,'' a simple mixture of NC and various solid ingredients, extruded and cut to form right circular cylinders of roughly 1 mm. The powder was then loaded into a mold (often the bond-lined rocket case itself, or a free-standing grain subsequently cartridge loaded), with the interstitial space filled with nitroglycerin and other plasticizers. Done competently, interdiffusion of the solid and liquid ingredients created a single monolithic rocket motor grain. Such motors were used for the Scout family of rockets and the Naval Research Laboratory's Project Vanguard. 31 Like their elastomer-based counterparts, double-base rocket motor technology culminated in composite-modified double-base (CMDB) propellants, which contained aluminum, AP, and on occasion high explosives like Her Majesty's Explosive (HMX). CMDB is typically used in ballistic missile motors, such as the third stage of the Minuteman I missile and the second stage of the Polaris A2/A3 missiles. 32 Double-base formulations incorporating NC and NG are not obsolete; they are low cost, reliable, easy to ignite, and composed of readily available ingredients.
Double-base motors are still in production and used in current U.S.-made tactical rocket motors, often in the form of a ''carpet roll,'' that is, a sheet of double base that has been rolled around a removable mandrel to form a cylindrical motor having a center perforation.
B. ASPHALT
Modern elastomer-based propellants are typically traced to the ''first'' SRM propellant, a mixture of asphalt and potassium perchlorate developed at the California Institute of Technology Guggenheim Aeronautical Laboratory (GALCIT, which subsequently became the Jet Propulsion Laboratory), led by Frank Malina, a graduate student working under the direction of GALCIT director Theodore von Kármán. The goal was to build a rocket that could attain an altitude of 100 000 feet. 33 Discouraged by repeated experimental failures, von Kármán and Malina showed mathematically that a constant ratio between the burning surface area of the solid motor and the nozzle area would result in stable pressure in the rocket body. 34 Using this idea, their initial black powder motors were redesigned, with an unprecedented 12 s of burning with 125 pounds of thrust achieved. 33 However, these motors were overly sensitive to changes in ambient temperature, failing catastrophically in hot weather.
The solution was a new formulation developed by GALCIT member Jack Parsons, consisting of 76% potassium perchlorate oxidizer in a slurry of 7 parts road asphalt binder and 3 parts plasticizing oil. 35 This mixture was heated and cast directly into the motor case without a liner. 33 Although the asphalt-perchlorate formulation was used by the U.S. Navy for JATO rockets, the motors had a number of drawbacks, in particular a limited operational temperature range (ambient ± 30 °C) and low solids loading that resulted in poor mechanical properties. 36 However, this type of motor is usually deemed the first viable SRM, and Parsons has been called the ''Father of Rocket Science,'' 37 an honorific shared with Wernher von Braun, Robert Goddard, and Konstantin Tsiolkovsky.
C. POLYSULFIDES
In the latter stages of World War II, SRM binder work for JATOs led to the use of a styrene-butadiene rubber, Buna-S, developed by IG Farben in Germany before the war. 38 Although Buna-S performed well over a broad temperature range, dewetting of the motor grain from the case proved an insurmountable problem. Adhesion at the motor's bond line is critical because the debonded surface of the grain will burn prematurely, causing motor failure. By 1945, polychloroprene (CR) binder formulations with potassium perchlorate were developed at the Jet Propulsion Laboratory. These operated over an acceptable temperature range and adhered well to the liner. 34 However, the CR was high molecular weight, and thus processing the binders required large roll mills. The problem was exacerbated by the high loading of the perchlorate solid oxidizer. Various plasticizers were tried, but none adequately improved the processing. The next step was to replace the CR with a curable liquid prepolymer made by Thiokol Chemical Corporation. Such telechelic prepolymers had been discovered in 1920 39,40 and manufactured by Thiokol since the 1930s. In fact, these liquid rubbers were already in use by the American military for sealing fuel tanks. The first domestic synthetic rubber was Thiokol polysulfide, a widely used sealant, for which the inventor, J. C. Patrick, received the Goodyear Medal in 1958. (Ironically, Thiokol gained notoriety for its role in the Space Shuttle Challenger disaster, in which an SRM booster failed at liftoff due to leakage of a fluoroelastomer O-ring seal. 41 ) The initial formulations were based on polythiol prepolymers made by reacting dichloroether with sodium polysulfide (Scheme 1). 42 This rubber could then be treated with sodium hydrosulfide and sodium sulfite to cleave it in a somewhat controlled fashion into liquid prepolymers featuring thiol end groups (M_W = 1000-4000 Da).

SCHEME 1. -Synthesis of polysulfide prepolymer.
These oligomers could then be formulated with oxidizers and other ingredients and cured to oxidize the thiols back into disulfides, typically with p-quinonedioxime.
Polysulfide-based propellant formulations became the standard for U.S.-made SRM. The technology reached maturity as the fuel for the MGM-29 Sergeant surface-to-surface missile fielded by the U.S. Army in 1961, as well as the second and third stages of the Jupiter-C sounding rocket. The end of polysulfide-based rocket motors came with the discovery in 1955 that aluminum powder added to the standard perchlorate binder significantly increased specific impulse. This discovery could not be applied to polysulfide systems because reactions between the aluminum and binder caused ''storage instabilities'' (i.e., they exploded). 43
D. PLASTISOLS
In 1950, a group at Atlantic Research Corporation (ARC) focused their SRM binder efforts on plastisols of NC and poly(vinyl chloride) (PVC). 44 Similar in concept to NC casting powders, plastisol particulates were much smaller, on the order of 50 microns or less, 45 which allowed them to be suspended in an equal amount of plasticizer and mixed with the oxidizers and metal fuels. One notable quality of PVC plastisol was its long pot life; PVC in a plasticizer such as dibutyl sebacate was stable for years without dissolution. 46 There would be no appreciable hardening of the slurry until it reached 104 °C, and at 150 °C, the mixture would ''cure'' (that is, the PVC would dissolve in the plasticizer, solidifying the binder in about 5 min). The obvious drawback of such a material was the absence of chemical crosslinks, which limited the working temperature range and prevented bonding to a rocket motor case. Despite these limitations, PVC plastisols found a number of applications in gas generators (e.g., the Polaris A-3 thrust vector control) and in rocket sustainer motors, such as the Mark 30 in ARC's Standard Missile, as well as in the Stinger launch motor and the fairing motor for the Trident I C-4. 46 These motors typically showed good aging characteristics and could remain in service for several decades.
Plastisol nitrocellulose (PNC), sometimes called ''spheroidal'' or ''pelletized'' nitrocellulose, was developed at ARC and also featured a long pot life, although not as long as PVC's. PNC used NC nitrated at 12.6%, dissolved in nitromethane and ethyl centralite, and emulsified in water. The nitromethane was then leached out and 5-50 micron spheres collected by centrifugation. 47 While used sparingly in SRM, PNC was employed in warheads and gun propellants. The U.S. Navy deemed PNC important enough that it manufactured the material itself at the Naval Propellant Plant in Indian Head, Maryland, starting in 1958. Production ended when accidents forced closure of the facility in the 1990s. 48
E. POLY(BUTADIENE-ACRYLIC ACID)
In the mid-1950s, butadiene-containing copolymers reappeared with poly(butadiene-acrylic acid; PBAA), a random copolymer developed as a binder by Thiokol at Redstone Arsenal. 43 PBAA had a molecular weight of about 3000 Da, but being made by free radical emulsion polymerization, it was a mixture of various polyfunctional chains, including some nonfunctional, resulting in poor reproducibility. Crosslinking was carried out by ring opening of difunctional epoxides and aziridines by the acrylic acid groups. 49 Although the prepolymer had a sufficiently low viscosity to allow high solids loading, the resulting crosslinked polymer had poor mechanical properties because of the irregular network structure and for this reason was abandoned. However, PBAA was used by Thiokol as the binder for the first stage of the Minuteman I ICBM. 32
F. POLY(BUTADIENE-ACRYLONITRILE-ACRYLIC ACID)
Because of the difficulties in reproducibly crosslinking PBAA, Thiokol formulators replaced it with poly(butadiene-acrylonitrile-acrylic acid; PBAN), which was synthesized in a random, free radical polymerization almost identical to that for PBAA. The inclusion of the acrylonitrile improved the spacing between the acrylic acid groups (i.e., the crosslink sites), allowing for better elastomeric properties. PBAN also had less propensity to surface harden in comparison with PBAA, a problem caused by oxidative crosslinking of the unsaturated carbons, common to polybutadienes and their copolymers. 49 Both the PBAA and PBAN prepolymers were synthesized by emulsifying the monomers in water using a quaternary ammonium salt as an emulsifier and azobisisobutyronitrile as the free radical initiator. Although the stoichiometry of the monomers could be varied over a wide range, typically the amount of acrylic acid was kept low enough that a 3000 Da molecular weight chain would nominally have only two carboxylic acid groups, with about 6% by weight cyano groups (thought to limit oxidative crosslinking of the olefins in the polymer backbone).
By any measure, PBAN was a successful prepolymer; more PBAN has been made and consumed than any other rocket motor binder. It was estimated that about 2.6 million kg of PBAN-based propellant was produced through 1997. 50 The large consumption was in part due to the sizes of the motors PBAN was used in, which included the massive Titan III/IV-A UA120 and Space Shuttle strap-on boosters. 32
G. CARBOXYL-TERMINATED POLYBUTADIENE
A needed improvement to PBAA and PBAN was ensuring that the carboxylate groups were spatially separated along the prepolymer chain, to improve elasticity. In the late 1950s, chemists at Thiokol synthesized carboxyl-terminated polybutadiene (CTPB) using free radical polymerization. 51 A dicarboxylic acid peroxide (usually glutaric acid peroxide) or azo dicarboxylic acid initiator was used to polymerize and terminate butadiene in a pressurized solution, with molecular weights around 3500-5000 Da. 49 Because this was an uncontrolled free radical polymerization, the prepolymer had a polydisperse molecular weight and was highly branched (similar to HTPB, see discussion below). The polymer produced using peroxide initiation was made by Thiokol (later Morton International, then Rohm & Haas) under the name HC-434; the azo-initiated polymer was produced by BF Goodrich under the Hycar name.

SCHEME 2. -Side reactions in aziridine-based curing.
Anionic polymerization of butadiene reduced the polydispersity and eliminated branching. The synthesis was a complicated process, with the organolithium salt of methyl naphthalene used to make the initiator, the dilithium salt of isoprene. Once the desired molecular weight of the dilithium salt of polybutadiene was attained, the ends of the polymer chain were capped with carbon dioxide, followed by anhydrous hydrochloric acid to yield CTPB and lithium chloride as a by-product. 52 Anionically polymerized CTPB was produced by Phillips under the Butarez CTL name.
Although both CTPB and PBAN were successful materials, they relied on carboxylate chemistry for curing, which caused problems. The hydrogen bonding from the pendant carboxylates caused the prepolymers to have fairly high viscosities, complicating mixing, and the curing with multifunctional aziridines or epoxides was slow, sometimes tying up production equipment for weeks at a time. The aziridines could also rearrange to form oxazolines, which react far slower with carboxylic acids and could even homopolymerize in the presence of AP 49 (Scheme 2). The homopolymerization and oxazoline problems could be so bad, in fact, that 20-30% of the aziridines added to a mix did not contribute to curing at all. These problems were tolerated because of the belief that the aziridine homopolymer might form a shell around the AP, increasing adhesion between the binder and oxidizer, and because the oxazolines would eventually revert back to aziridines and cure the carboxylic acid groups. However, what generally resulted were ill-defined mixtures of both epoxide and aziridine curatives, chosen empirically based on the acidities and polarities of the mixes and the intuition of the formulators.
Notwithstanding these problems, CTPB was ubiquitous in solid propellant formulations of the U.S. military throughout the 1960s. However, the curing problems were so prevalent that within a decade, most newly qualified rocket motors were using polyurethane chemistry. Legacy systems continued with CTPB, owing mostly to the difficulty and cost of requalifying old motors for new ingredients. The changeover was accelerated, however, by a fire in 1996 at Phillips's Butarez plant in Borger, Texas. Phillips declined to rebuild the plant, ending production of linear CTPB, and thereby leaving a number of U.S. missile programs without a major ingredient. Although attempts were made to replace Butarez with free-radically polymerized CTPB, in 2001 Goodrich was bought out by a group of investors, and Rohm & Haas announced an end to production of CTPB.

H. HYDROXYL-TERMINATED POLYBUTADIENE

So dominant was CTPB throughout the 1960s that although HTPB was first synthesized for use in binders in 1961, it did not see actual use in a rocket motor until 1968, when Aerojet used it in a dual-thrust radial-burning motor grain formulation for the Astrobee D, a NASA-sponsored meteorological sounding rocket. 50 Gradually, HTPB gained widespread use as a replacement binder in older systems employing CTPB, for example, in the Maverick, Stinger, and Sidewinder missiles. After 50 years of use, HTPB remains the standard binder for nearly all U.S.-made SRM. HTPB-based polyurethane binders are relatively inexpensive, have low-viscosity prepolymers, and exhibit good mechanical and aging properties. The binary system enables a high solids content, providing one of the highest specific impulses among solid propellants. Figure 4 shows the I_sp of HTPB-based solid propellants with various metals. 12 Notwithstanding its excellent properties, HTPB's popularity no doubt benefitted from its being the most widely available prepolymer at the time SRM development activity largely ended in the United States.
The synthesis of the HTPB prepolymer is by free radical polymerization of butadiene in ethanol or isopropanol using hydrogen peroxide. 55 As such, it typically has a broad molecular weight distribution and substantial branching. Although there are nominally 2-3 hydroxyls per prepolymer chain, this is an average, and the chemical structure is polydisperse. Various studies of the hydroxyl group content reveal the presence of high-molecular-weight chains bearing several (sometimes more than a dozen) hydroxyl groups per chain, with the details varying from batch to batch. [56][57][58] This makes it more difficult to control the crosslinking of HTPB, in comparison with older systems based on anionically polymerized CTPB, in which a telechelic prepolymer bearing only two reactive groups was doped with a multifunctional crosslinking agent.
Curing of HTPB is typically carried out with an aliphatic diisocyanate, such as isophorone diisocyanate or hexamethylene diisocyanate. Generally, aromatic isocyanates are too reactive, shortening the pot life (toluene diisocyanate being an exception). When greater than two isocyanates per crosslinking agent are required, biuret triisocyanate is the typical choice. Although a catalyst is not necessary for urethane formation, often a Lewis acid catalyst (e.g., iron acetylacetonate, dibutyltin dilaurate, or bismuth trichloride) is used. Presumably, the metal catalyst coordinates to the isocyanate, lowering the activation energy for reaction with the hydroxyl. 59,60 The pot life of HTPB can be extended by fine-tuning of the polymer microstructure. 61 Once Aerojet recognized HTPB's potential, production was established by Sinclair/ARCO under the trade name R-45M. This HTPB production facility, now owned by Total Petrochemicals, continues to operate.
I. HYDROXYL-TERMINATED POLYETHERS

As noted above, polyurethane binders based on polyethers and polyesters had been successfully formulated into fielded missile systems in the early 1950s, superseded first by CTPB in the early 1970s and followed by HTPB thereafter. This was due in part to the fact that polybutadienes, when formulated in a standard motor, yield a slightly higher specific impulse than the polyethers (about a 3 s increase). Also, early polyethers tended to embrittle when exposed to humidity, which was practically unavoidable in a realistic setting. This caused problems with propellant aging and debonding of the motor to its liner. However, use of the first-generation polyethers continued in specialty applications, such as low-burning sustainer motors in dual-thrust rockets. 54 In the late 1980s, the U.S. military began to emphasize reduced sensitivity for their ordnance; that is, a reduction of the violent effects of accidental propellant cook-off. 62 Greater insensitivity of the motor to impact could be achieved by reducing solids loading, but this reduces performance. The requirement for insensitive munitions thus precipitated a reevaluation of propellant binders to achieve better energetic performance to compensate for the lost performance. An obvious way to make a binder more energetic is to incorporate energetic groups on the prepolymer (see below). Alternatively, an energetic plasticizer could be used in the formulation. Energetic plasticizers such as N-butyl-N-nitratoethyl nitramine and trimethylolethane trinitrate are too polar to be soluble in polymers such as polybutadiene. Polyethers are an obvious solution to the solubility problem, particularly with the moisture embrittlement issue largely mitigated by judicious use of bonding agents.
Older polyethers, such as PPG and PTMEG, were discussed above. A more recent example of a hydroxyl-terminated polyether binder is Terathane-PEG, a block copolymer of poly(1,4-butanediol; or Terathane) and poly(ethylene glycol) that was synthesized by DuPont and formulated by ATK for use with highly polar nitroplasticizers 63 in the early 1990s.
VI. ENERGETIC POLYMERS
Under standard conditions, conventional solid propellants produce a specific impulse on the order of 265 s. 8 There are continuing efforts to develop binders that yield higher combustion energy, typically employing azides or nitrate esters. NC was the first energetic binder used in SRM, and efforts to develop more energetic binders continue. Invariably, these are hydroxyl-terminated prepolymers with urethane crosslinking.

SCHEME 3. -Common energetic polymers.
A. POLY(GLYCIDYL NITRATE)
In 1953, formulators at the U.S. Naval Ordnance Test Station in China Lake, California, synthesized 800 to 3400 Da poly(glycidyl nitrate; PGN) using stannic chloride as a catalyst, 64,65 although the generation of acetyl nitrate as a side product limited its scale-up potential at the time 43 (Scheme 3a). Eventually, glycidyl nitrate would be synthesized safely in a flow reactor with dinitrogen pentoxide with high yield and purity by the Defence Research Agency in the United Kingdom in the early 1990s. 66,67 Hydroxyl-functionalized prepolymer was prepared using a boron trifluoride initiator. 68

B. POLYOXETANES

Energetic polymers derived from oxetanes are polymerized using boron trifluoride chemistry, similar to PGN. These polyethers are based on 3,3-bis(azidomethyl)oxetane (BAMO), 3-azidomethyl-3-methyloxetane (AMMO), and 3-nitratomethyl-3-methyloxetane (NIMMO). 69 Naturally, the corresponding polymers (made by Aerojet but now discontinued) are known as poly(BAMO), poly(AMMO), and poly(NIMMO) (Scheme 3c) and have found limited use in small gas-generating rocket motors. Poly(BAMO) is the most energetic of the three but is used with either poly(AMMO) or poly(NIMMO) as a copolymer because of its higher crystallinity.
C. GLYCIDYL AZIDE POLYMER
A chemically simpler polyether featuring the energetic azide group is glycidyl azide polymer, synthesized in the early 1970s 70 and evaluated as a possible binder starting in 1976 71 (Scheme 3b). In practice, glycidyl azide itself was found to be resistant to polymerization, so glycidyl azide polymer (GAP) was generated directly from the substitution of the chloride in polyepichlorohydrin by sodium azide in dimethylsulfoxide. GAP was evaluated by the U.S. Air Force as a propellant binder as early as 1981.
VII. PROCESSING
Processing of solid propellants requires shaping and curing a highly loaded material with minimal heat buildup. A rule of thumb is that the uncured propellant, including solid oxidizer and metal particulate, has a viscosity no greater than ca. 50 Pa·s in order to be successfully cast in a traditional mixer. This is much less than the viscosity of most binders without added plasticizers, which therefore are incorporated at levels of 10% or more by weight (with respect to the prepolymer). It is critical that the working time of the prepolymer is long enough to permit adequate mixing, casting, and curing under vacuum. In the case of a large SRM, such as a strap-on booster, several mixes, of up to 1600 L each, are transported inside the mixers to the casting pit and then poured directly into the motor case. Working time is regulated by temperature control and judicious use of catalysts for the crosslinking reaction.
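To put the batch sizes in perspective, a rough casting-logistics sketch (only the 1600 L mix volume comes from the text; the propellant density and the segment mass are illustrative assumptions):

```python
import math

# Rough casting plan for a large motor segment, assuming a propellant
# density of ~1.8 g/mL (typical of highly loaded AP/Al formulations;
# an assumption here) and a hypothetical 100-tonne segment.
density = 1800.0      # kg/m^3
mix_volume = 1.6      # m^3 (1600 L per mix, from the text)
segment_mass = 100e3  # kg of propellant in the hypothetical segment

mass_per_mix = density * mix_volume                 # kg per 1600 L mix
mixes_needed = math.ceil(segment_mass / mass_per_mix)
print(f"{mass_per_mix:.0f} kg per mix, {mixes_needed} mixes for the segment")
```

Each 1600 L mix then carries close to three tonnes of propellant, which is why a single large booster segment requires a long sequence of mixes poured back to back.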
Resonant acoustic mixing has been investigated as a replacement for traditional stirring. 72 This method uses a low-frequency, high-intensity acoustic field to effect mixing, with micro-mixing regimes generated throughout the precured "dough." 73 In theory, this leads to a more homogeneous mixture and allows the motor ingredients to be mixed directly in their case, obviating the need for separate mixing vessels.
A processing option that has not yet been implemented is additive manufacturing (three-dimensional printing) of the propellant. The advantages would include potentially more intimate mixing and the ability to form more complex grain structures, particularly within the center perforation, and thereby better tune the burning profile. 74,75
VIII. MECHANICAL PROPERTIES OF BINDER
The SRM grain is bonded to the case to immobilize the propellant and prevent premature burning of the outer surface of the grain. Debonding can be a significant problem, and additives are included in propellant formulations to improve bonding characteristics. Thermal stresses during storage are typically small (<0.1 MPa), although thermal cycling can induce fatigue cracks. More serious is shrinkage during the cure, which gives rise to stresses at the case boundary. Stress concentration in the vicinity of oxidizer particles, particularly large ones, can cause debonding from the matrix. Stress due to cure shrinkage can be compensated for by elevating the temperature somewhat above the cure temperature. Typical accelerated aging at 60-65 °C, corresponding to as much as 15 years at ambient temperature, was found to cause only modest changes in mechanical properties of HTPB-based propellants. 76,77 However, prolonged storage at low temperature can cause stresses to develop that result in dewetting of the binder from the oxidizer and consequent voids within the motor. Even relatively small static preloads prior to ignition can initiate void formation. 78 Expansion of a small void can lead to catastrophic failure (bursting) of the SRM, if the developing hole reaches a critical size. [79][80][81][82] The largest stresses are encountered during ignition and flight, with typical magnitudes of ca. 0.5 MPa. Although this is well below the strength of the materials, the stresses are sufficient to cause substantial dewetting of the binder 83 and loss of grain structural integrity. 84 Accumulation of interfacial cracks and holes during cure, storage, and operation affects the mechanical properties and structural stability of the propellant and ultimately can lead to catastrophic failure. Thus, the design of solid propellants is guided by analysis and prediction of the stresses exerted on the binder-grain interface, particle-binder debonding, and the development of voids.
An accurate assessment of damage tolerance is critical. Experiments and modeling are used to deduce the softening of the propellant due to damage, with the assumption usually made that any softening arises due to void formation; that is, mechanical hysteresis due to viscoelasticity or anelastic effects is usually neglected.
IX. MODELING
Modeling the mechanical behavior of a solid propellant is essential for rational design of SRM and to ensure both their performance and safe operation, the latter including estimation of service lifetimes. However, this modeling is challenging, as it entails virtually all factors that complicate such analyses: nonlinear strains, mixed strain modes, high strain rates, failure and fracture of the material, and temperatures and stresses that change over time. The material itself is complex, involving multiple components that have mechanical properties that are very different yet coupled. The initial step is to derive a constitutive equation, which quantifies the relationship of stress to strain, including their evolution over time. For polymers, elasticity models are used, which can be phenomenological or molecular. The former typically describe the mechanical response in terms of derivatives of the strain energy, to arrive at tractable equations that, however, have no connection to the chain molecules. [85][86][87] The strain energy is usually expressed as a series expansion in terms of strain invariants (strain descriptors that are the same for any orthonormal coordinate system used to represent the strain components), with the linear terms corresponding to ideal elasticity and the well-known Mooney-Rivlin equation. 85,88 Nonlinear terms can be added to improve the fitting of experimental data, although this can lead to large errors on extrapolation. Vahapoglu and Karadeniz 89 reviewed the various phenomenological equations through 2003. Molecular models of rubber elasticity, 88,90,91 based on chain entropy, have a degree of accuracy similar to phenomenological approaches but with the advantage of providing insight into structure-property relations.
The effect of filler reinforcement is an added complication of modeling efforts. There is a surfeit of models for the viscosity of fluids that are highly filled with particles of varying size. [92][93][94][95][96] The basic concept is straightforward: strain amplification due to the inextensibility of hard particles enhances the stiffness (albeit limited by chain scission or detachment from the filler particles). 97 That is, the strain energy arises from rubber elasticity, amplified by the presence of filler. The role of the polymer-filler interface, including bound and occluded rubber, is significant. Moreover, particle deagglomeration and slippage of interfacial chains impart a strain dependence, which can dominate the filler effect at large strains.
A polydisperse particulate (Figure 3) is a further complication, although even generic carbon black exists in rubber over a size range that can span from 10 to 10⁴ nm. 98 The number of expressions describing reinforcement reflects this dispersity. For discretely sized particles, these have the general form of a product series
η(φ) = η_{φ=0} ∏_{i=1}^{n} H(φ_i)    (2)

in which φ is the filler volume fraction and each factor corresponds to a discrete mode (bin) in the size distribution. The stiffening function, H(φ), can include the effects of particle shape and interactions. The starting point is the Einstein equation, valid for low particle concentrations:
η = η_{φ=0} (1 + 2.5φ)    (3)

Typically, for carbon black reinforcement, a modification is the Guth-Gold equation, in which η_{φ=0} is the viscosity in the absence of filler: 99
η = η_{φ=0} (1 + 2.5φ + 14.1φ²)    (4)
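The size of the correction grows rapidly with loading; a minimal numerical comparison of the Einstein (Eq. 3) and Guth-Gold (Eq. 4) stiffening factors (the function names are ours, for illustration):

```python
def einstein(phi):
    """Relative viscosity at low loading (Eq. 3)."""
    return 1.0 + 2.5 * phi

def guth_gold(phi):
    """Guth-Gold correction with the phi^2 interaction term (Eq. 4)."""
    return 1.0 + 2.5 * phi + 14.1 * phi ** 2

# at propellant-like loadings the quadratic term dominates
for phi in (0.05, 0.2, 0.4):
    print(f"phi = {phi:.2f}: Einstein {einstein(phi):.2f}, "
          f"Guth-Gold {guth_gold(phi):.2f}")
```

At dilute loadings the two expressions nearly coincide, while at the high volume fractions relevant to propellants the quadratic interaction term more than doubles the predicted stiffening.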
Commonly, φ is taken to be an effective volume fraction, larger than the actual value due to occluded rubber. 100,101 For higher concentrations of monodisperse particles, many equations have been proposed, most of which are empirical. Examples that have been found to represent experimental data well include 102
η = η_{φ=0} [1 + 2.5φ + κφ(φ/(φ_m − φ))²]    (5)

in which κ is an adjustable parameter, and 103
η = η_{φ=0} [1 + 0.75(φ/φ_m)/(1 − φ/φ_m)]²    (6)

which reduces to the Einstein equation for a maximum volume fraction of solids φ_m = 0.605. Given the correspondence between the viscosity and modulus, η/η_{φ=0} = E/E_{φ=0} (for the tensile modulus), Eqs. 2-6 are also used for the effect of hard particles on the modulus. Common assumptions in modeling the mechanics of solid propellants are (i) reversible deformation without permanent set; (ii) no volume changes induced by strain; (iii) an absence of strain localization, such as necking; and (iv) the effects of strain and time (or strain rate) are uncoupled. Propellants are susceptible to flow, and their viscoelastic nature must be included in the constitutive modeling. Very generally in modeling rubber, the stress is expressed as a simple product of functions of time and strain. Using some form of the Boltzmann superposition principle, for uniaxial tension the stress is
σ(t) = ∫₀ᵗ [dE(t − u)/du][ε(t) − ε(u)] du + E(t)ε(t)    (7)

in which ε is the tensile strain. The first term, the integral, represents the portion of the stress that arose at time u and remains at time t. The second term is the isochronal stress. Rubber is arguably the most "linear" of materials, at least using an appropriate definition of E(t). 88,104 With experimental characterization of the strain and rate dependences, this approach works reasonably well for nonreversing deformations. However, when the sign of the strain is changed, Eq. 7 and similar treatments invariably underestimate the amount of energy dissipated. Both phenomenological and molecular constitutive equations share this failure to describe reversing strain histories of rubbery networks. 105,106 This means that when constitutive parameters are obtained by fitting experimental tension or shear data, the stresses predicted during recovery are too large; that is, the measured hysteresis exceeds the calculated energy dissipation. [107][108][109] There are recent efforts to incorporate anelastic features into rubber elasticity models, 110,111 although very generally, the mechanical behavior of rubber networks during recovery remains poorly understood. [112][113][114]

X. BINDER FAILURE

Constitutive equations for solid propellants usually include empirical damage terms to quantify the softening of the propellant, ascribing it to damage accumulation. The damage material parameters in these constitutive equations are determined from the deviation of experimental stress-strain measurements; that is, an empirical softening function is adjusted to agree with the experimental data. 78,[115][116][117][118][119][120][121] These models yield accurate stresses during initial deformation of the propellant; however, the predicted recovery stresses are too large. [122][123][124] The problem is that, when modeling solid propellants, the failure of expressions such as Eq. 7 for reversing strains is neglected, and thus the "excess" energy loss attributed to structural damage such as dewetting and void formation is overestimated.
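For a chosen relaxation modulus and a prescribed strain history, the hereditary integral of Eq. 7 can be evaluated numerically; a sketch with a hypothetical single-exponential E(t) and a constant strain rate (all parameter values are invented for illustration), checked against the closed-form result for that history:

```python
import math

E0, EINF, TAU = 10.0, 2.0, 1.0  # hypothetical moduli (MPa) and relaxation time (s)

def E(t):
    """Single-exponential relaxation modulus (an illustrative choice)."""
    return EINF + (E0 - EINF) * math.exp(-t / TAU)

def dE_du(t, u):
    """Analytic d E(t-u)/du for the exponential modulus."""
    return (E0 - EINF) / TAU * math.exp(-(t - u) / TAU)

def sigma(t, rate=0.1, n=4000):
    """Eq. 7 by the trapezoidal rule for a constant-rate strain eps(u) = rate*u."""
    eps = lambda u: rate * u
    du = t / n
    vals = [dE_du(t, i * du) * (eps(t) - eps(i * du)) for i in range(n + 1)]
    integral = du * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return integral + E(t) * eps(t)

def sigma_exact(t, rate=0.1):
    """Closed form for the same modulus and constant-rate history."""
    return rate * (EINF * t + (E0 - EINF) * TAU * (1.0 - math.exp(-t / TAU)))

print(sigma(2.0), sigma_exact(2.0))
```

The numerical and closed-form results agree to within the quadrature error, which illustrates the point in the text: for a nonreversing (here monotonic) history, Eq. 7 behaves well; the difficulties appear only once the strain reverses.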
Damage models often treat the underlying mechanism as a transition of filler particles to voids, based on the assumption that debonded particles exert no reinforcement. 78,79,116,119,120 The strength of the interfacial adhesion between particle and matrix is assumed to be a material constant, independent of the deformation conditions. [125][126][127] Figure 5 shows results for the void volume calculated for a typical solid propellant, deduced from the deviation of the stress from values calculated from the constitutive equation for the propellant. Cumulative damage obtained by an analysis of this type is shown in Figure 6. 115 The intractable problem in modeling elastomers in general and solid propellants in particular is separating irreversible structural changes such as void formation from the viscoelastic hysteresis intrinsic to an amorphous polymer above its glass transition temperature. 128 A common misperception is that this softening, referred to as the Mullins effect, is especially substantial in highly filled elastomers. This would include propellant binders, and indeed, they exhibit marked mechanical hysteresis. However, all viscoelastic materials exhibit Mullins softening, 106 which in rubber can exceed 20% of the strain energy even at low strain rates (<10⁻³ s⁻¹). 105 In fact, the magnitude of Mullins softening is very similar for filled and unfilled compounds when compared at the same peak stresses. 128,129 Because of the inherent limitations of the constitutive equations for rubber, and because values for the damage parameters must be obtained by fitting experimental data, the models for propellants lack predictive capability.
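The particle-to-void assumption can be illustrated with a toy calculation: let a debonded fraction of the filler contribute no reinforcement, and track the resulting softening with a Guth-Gold-type stiffening function. This is our illustrative sketch, not any of the cited damage models:

```python
def guth_gold(phi):
    """Guth-Gold stiffening factor (Eq. 4), applied here to the modulus."""
    return 1.0 + 2.5 * phi + 14.1 * phi ** 2

def damaged_modulus_ratio(phi, d):
    """Relative modulus when a fraction d of the particles is debonded and
    treated as exerting no reinforcement (the 'particles become voids' picture)."""
    return guth_gold(phi * (1.0 - d)) / guth_gold(phi)

for d in (0.0, 0.25, 0.5):
    print(f"debonded fraction {d:.2f}: "
          f"modulus ratio {damaged_modulus_ratio(0.4, d):.2f}")
```

In this picture the softening is attributed entirely to lost reinforcement; as the text emphasizes, a real measurement also contains viscoelastic hysteresis, so fitting such a model to stress-strain data overestimates the structural damage.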
The range of strains, pressures, and temperatures experienced by solid propellants is very broad, making it difficult to determine the failure limits of a binder from laboratory measurements. One approach used for solid propellants is to develop failure envelopes, 130,131 based on the early work of Smith. 132,133 In this method, the stress at break is plotted versus the failure strain, with values obtained at different temperatures and strain rates presumed to fall on a single curve. This curve, the failure envelope, defines the mechanical limits of the material for arbitrary conditions. The approach assumes (incorrectly 134,135) that time-temperature superpositioning is valid over the range from the low-frequency polymer chain dynamics to fast local segmental motions. Shown in Figure 7 131 is the failure envelope for a typical solid propellant, which ideally would define the safe operating range of the material.
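Assembling the envelope itself is mechanically simple once break points are in hand: each test contributes one (failure strain, break stress) point, and the points from all temperatures and rates are presumed to trace a single curve. A minimal sketch with invented data:

```python
# (temperature C, strain rate 1/s, stress at break MPa, strain at break);
# all values are invented for illustration, not measured propellant data.
tests = [
    (-40, 1e-1, 2.1, 0.15),
    ( 20, 1e-1, 0.9, 0.45),
    ( 20, 1e-3, 0.6, 0.60),
    ( 60, 1e-3, 0.4, 0.70),
]

# The envelope: order the break points by failure strain; under the
# single-curve assumption they trace the failure envelope of Figure 7.
envelope = sorted((strain, stress) for T, rate, stress, strain in tests)
for strain, stress in envelope:
    print(f"eps_b = {strain:.2f} -> sigma_b = {stress:.2f} MPa")
```

Cold, fast tests fall on the high-stress/low-strain end of the curve and hot, slow tests on the opposite end, which is the usual shape of the envelope; the caveat in the text is that the single-curve assumption relies on time-temperature superposition holding across all relevant dynamics.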
XI. AGING
An important consideration in propellant performance is aging, which can entail (i) chemical changes in the binder, affecting the modulus and combustion behavior; (ii) crystallization of some components; (iii) dewetting and porosity development; (iv) phase separation of components within the grain; (v) moisture ingress, affecting modulus and burn characteristics; and (vi) separation of the grain from the liner. Increases in stiffness due to crosslinking during storage can induce cracking. 136 This cracking is exacerbated by the very different material properties of the components of a rocket motor (Table III). 137,138 These differences include both thermal properties and consequent thermal loads as well as differences in component stiffness, which amplify vibrational perturbations. Oxygen diffusion through the rocket motor grain is quite high, and oxidative damage once the antioxidant is exhausted can be extensive, especially at elevated storage temperatures. 139,140

XII. CONCLUDING REMARKS

Their low cost, long shelf life, and immediate readiness ensure that solid propellants will remain in wide use for both military and civilian applications. 141 Although the performance of solid propellants is significantly lower in comparison with liquid fuels, SRM thrust profiles are predictable and achieved with relatively small volumes. Innovative designs even provide the ability to stop and restart an SRM after ignition. Present development work is focused on increasing energy yields, in response to the need to reduce the solids content in order to meet insensitive munitions requirements and to improve the reproducibility and efficiency of processing. The latter includes exploring additive manufacturing, which offers the potential for more control of grain geometries.
There is no lack of modeling efforts that address the failure of composite binders; however, these efforts suffer from the general limitations of rubber modeling, most evident when compounds are subjected to reversing strain histories.
FIG. 1. -Representative values of impulse normalized by fuel weight as a function of speed for three common propulsion systems. Mach number is the ratio of the speed of the missile to that of sound (~340 m/s, depending on temperature and altitude). 6

The I_sp of the solid propellant becomes constant within a few milliseconds of ignition.
FIG. 3. -Cumulative distribution of particles in a representative grain. 27
Hydroxyl-functionalized prepolymers had been combined with multifunctional isocyanates in the early 1950s, even before the introduction of PBAN and CTPB. General Tire & Rubber experimented with urethane crosslinked polyethers and polyesters for propulsion formulations as early as 1947. However, it was Aerojet that had the initial successes with these materials under the leadership of Karl Klager, an Austrian chemist who had worked for IG Farben during the Second World War and was brought to the United States by the U.S. Office of Naval Research under Operation Paperclip in 1949. 53,54 Klager's polyurethanes were used in both stages of the Polaris A1 and in the second stage of the Minuteman I. These materials were a combination of poly(1,2-propylene oxide) (PPG) and poly(1,4-tetramethylene oxide) (PTMEG), cured with toluene diisocyanate and crosslinked with triethanolamine. Other examples of such binders included poly(ethylene oxide) (PEG), poly(neopentylglycol azelate), and poly(butylene oxide), commonly referred to as B-2000. These types of binders had excellent stability, mechanical properties, and reliability for their era but nevertheless were passed over in favor of CTPB-based formulations.
FIG. 4. -Specific impulse vs binder content for solid propellants employing HTPB with ammonium nitrate (squares), ammonium perchlorate (circles), ammonium dinitramide (diamonds), and an ammonium dinitramide formulation bound with glycidyl azide polymer (line). Adapted from ref 12.
FIG. 5. -Development of voids, as predicted from a model ascribing softening to debonding of the binder from the particles. From ref 78.

FIG. 6. -Stress (symbols) and cumulative damage (line) for an HTPB solid propellant. From ref 15.
FIG. 7. -Failure envelope for a typical solid rocket propellant. 131
TABLE I
OXIDIZER PROPERTIES 13,a

                             AP      AN      RDX     HMX     CL-20   AND     HNF
Molecular weight, Da         118     80      222     296     438     124     183
Density, g/mL                1.95    1.72    1.81    1.91    2.04    1.81    1.86
Heat of formation,b kJ/g     −2.51   −4.95   0.325   0.28    0.85    −1.21   −0.39
Oxygen balance,c %           34      20      −22     −22     −11     26      13
Impact sensitivity,d cm      15      >49     7.5     7.4     2.5     3.7     3
Friction sensitivity, N      >100    350     120     120     124     >350    20

a AP, ammonium perchlorate; AN, ammonium nitrate; RDX, Royal Demolition Explosive: cyclotrimethylenetrinitramine; HMX, Her Majesty's Explosive: cyclotetramethylenetetranitramine; CL-20, China Lake 20: 2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane; AND, ammonium dinitramide; HNF, hydrazinium nitroformate.
b Negative value indicates exotherm.
c Ratio of oxygen in a material to amount required for its complete oxidation.
d Drop height of arbitrary weight for which explosion induced.
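The oxygen balance entries in Table I can be reproduced with the usual formula, counting combustion to CO2, H2O, and (for AP) HCl; a quick check for three of the oxidizers (molecular formulas are standard; the helper function is ours):

```python
def oxygen_balance(c, h, n, o, cl, mw):
    """Oxygen balance (%) assuming combustion to CO2, H2O, and HCl:
    OB = 1600 * (o - 2c - (h - cl)/2) / mw.
    Nitrogen (n) is assumed to leave as N2 and consumes no oxygen."""
    return 1600.0 * (o - 2 * c - (h - cl) / 2.0) / mw

ob_ap  = oxygen_balance(c=0, h=4, n=1, o=4, cl=1, mw=117.49)  # NH4ClO4
ob_an  = oxygen_balance(c=0, h=4, n=2, o=3, cl=0, mw=80.04)   # NH4NO3
ob_rdx = oxygen_balance(c=3, h=6, n=6, o=6, cl=0, mw=222.12)  # C3H6N6O6

print(round(ob_ap), round(ob_an), round(ob_rdx))
```

Rounding gives +34% for AP, +20% for AN, and −22% for RDX, matching the table; note that for AP the chlorine must be counted toward HCl, otherwise the calculated balance comes out low.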
TABLE II
REPRESENTATIVE HTPB-BASED SOLID PROPELLANT FORMULATION

Component            Amount, %
Polymer              8-10
Isocyanate           1-2
Plasticizer          2-10
Oxidizer             60-85
Metal                0-20
Bonding agent        1
Burn rate modifier   1
Antioxidant          0.1
Catalyst             0.1
TABLE III
TYPICAL MATERIAL PROPERTIES OF SOLID ROCKET MOTOR COMPONENTS 136,137

                                     Propellant    Steel case     Insulation
Modulus, MPa                         ~1            210 × 10³      11
Poisson's ratio                      0.5           0.3            0.3
Thermal expansion coefficient, K⁻¹   1.1 × 10⁻⁴    0.11 × 10⁻⁴    2.3 × 10⁻⁴
Thermal conductivity, W/(m K)        0.61          42.5           0.36
Heat capacity, J/(g K)               0.83          0.46           0.36
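The property mismatches in Table III give a feel for the magnitude of thermal loads on the grain; a back-of-the-envelope estimate assuming full constraint and linear elasticity (a deliberate oversimplification, with an assumed 40 K temperature excursion):

```python
# Thermal stress estimate for a propellant grain bonded to a steel case,
# using Table III values; full constraint gives an upper-bound estimate.
E_prop = 1.0e6       # propellant modulus, Pa (~1 MPa from Table III)
alpha_prop = 1.1e-4  # propellant CTE, 1/K
alpha_steel = 0.11e-4  # steel case CTE, 1/K
dT = 40.0            # assumed temperature excursion, K

sigma = E_prop * (alpha_prop - alpha_steel) * dT  # Pa
print(f"thermal stress ~ {sigma / 1e6:.4f} MPa")
```

The result is on the order of 0.004 MPa, consistent with the statement earlier that storage thermal stresses are typically below 0.1 MPa; it is the fatigue of repeated thermal cycling, not the magnitude of any single excursion, that damages the grain.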
RUBBER CHEMISTRY AND TECHNOLOGY, Vol. 92, No. 1, pp. 1-24 (2019)
D. C. Sayles, RUBBER CHEM. TECHNOL. 39, 112 (1996).
R. S. Fry, J. Prop. Power 20, 27 (2004).
L. Galfetti, M. Boiocchi, C. Paravan, E. Toson, A. Sossi, F. Maggi, G. Colombo, and L. T. DeLuca, ''Hybrid Combustion Studies on Regression Rate Enhancement and Transient Ballistic Response,'' in Chemical Rocket Propulsion: A Comprehensive Survey of Energetic Materials, L. T. DeLuca, T. Shimada, V. P. Sinditskii, and M. Calabro, Eds., Springer, New York, 2017.
J. Trevithick, ''Russia Releases Videos Offering an Unprecedented Look at Its Six New Super Weapons,'' July 19, 2018, www.thedrive.com/the-war-zone/22270/russia-releases-videos-offering-an-unprecedented-look-at-its-six-new-super-weapons.
K. Mizoyaki, ''Russia Introduces Two New Nightmare Missiles,'' March 5, 2018, www.popularmechanics.com/military/weapons/a19121693/russia-introduces-two-new-nightmare-missiles.
F. S. Billig, ''Tactical Missile Design Concepts,'' in Tactical Missile Propulsion, G. E. Jensen and D. W. Netzer, Eds., American Institute of Aeronautics and Astronautics, Reston, VA, 1996.
C. E. Carr, A. Neri, and R. E. Black, Aerospace Am. 54, 59 (2016).
P. Kuentzmann, Introduction to Solid Rocket Propulsion, RTO-EN-023 (2004).
A. Davis, Comb. Flame 7, 359 (1963).
D. Sundaram, V. Yang, and R. A. Yetter, Prog. Ener. Comb. Sci. 61, 293 (2017).
C. Oommen and S. R. Jain, J. Haz. Matl. A67, 253 (1999).
L. T. DeLuca, Eurasian Chem. Technol. J. 18, 181 (2016).
H. Singh, ''Survey of New Energetic and Eco-Friendly Materials for Propulsion of Space Vehicles,'' in Chemical Rocket Propulsion: A Comprehensive Survey of Energetic Materials, L. T. DeLuca, T. Shimada, V. P. Sinditskii, and M. Calabro, Eds., Springer, New York, 2017.
S. A. Rashkovskiy, Y. M. Milyokhin, and A. V. Fedorychev, ''Combustion of Solid Propellants with Energetic Binders,'' in Chemical Rocket Propulsion: A Comprehensive Survey of Energetic Materials, L. T. DeLuca, T. Shimada, V. P. Sinditskii, and M. Calabro, Eds., Springer, New York, 2017.
H. G. Ang and S. Pisharath, Energetic Polymers: Binders and Plasticizers for Enhancing Performance, Wiley, New York, 2012.
G. E. Jensen and D. W. Netzer, ''Tactical Missile Propulsion,'' AIAA Progress in Astronautics and Aeronautics, Vol. 170, American Institute of Aeronautics and Astronautics, Reston, VA, 1996.
E. L. Petersen, S. Seal, M. Stephens, D. L. Reid, R. Carro, T. Sammet, and A. Lepage, U.S. Patents 8,336,287B1 and 8,114,229B1, 2012.
H. L. Girdhar and A. J. Arora, Comb. Flame 34, 303 (1979).
R. P. Rastogi and D. Deepak, Am. Inst. Aero. Astro. 14, 988 (1976).
M. M. Rueda, M.-C. Auscher, R. Fulchiron, T. Périé, G. Martin, P. Sonntag, and P. Cassagnau, Prog. Polym. Sci. 66, 22 (2017).
L. E. Nielsen, Polymer Rheology, Marcel Dekker, New York, 1977.
B. Lieberthal and D. S. Stewart, Comb. Theor. Model. 20, 373 (2016).
S. Gallier and F. Hiernard, J. Propul. Power 24, 154 (2008).
N. Hosomi, K. Otake, N. Uegaki, A. Iwasaki, K. Matsumoto, M. R. Asakawa, H. Habu, and S. Yamaguchi, Trans. Jap. Soc. Aero. Space Sci. Aero. Technol. Jap. 17, 14 (2019).
F. Maggi, L. T. DeLuca, and A. Bandera, AIAA J. 53, 3395 (2015).
S. A. Rashkovskiy, Comb. Sci. Technol. 189, 1277 (2017).
G. Jung and S.-K. Youn, Int. J. Sol. Struc. 36, 3755 (1999).
E. Anderson, ''Explosives,'' in Tactical Missile Warheads: Progress in Astronautics and Aeronautics, Vol. 155, J. Carleone, Ed., American Institute of Aeronautics and Astronautics, Washington, DC, 1993.
T. L. Davis, The Chemistry of Powder and Explosives, Angriff Press, Las Vegas, NV, 1943, Ch. VI.
J. E. Burchard, Rockets, Guns, and Targets: Rockets, Target Information, Erosion Information, and Hypervelocity Guns Developed during World War II by the Office of Scientific Research and Development, Atlantic Monthly Press, Boston, 1948.
R. Steinberger and P. D. Drechsel, ''Manufacture of Cast Double-Base Propellant,'' in Propellants Manufacture, Hazards and Testing, Advances in Chemistry Series, Vol. 88, C. Boyars and K. Klager, Eds., American Chemical Society, Washington, DC, 1967.
J. D. Hunley, ''History of Solid Propellant Rocketry. What We Do and Do Not Know,'' presented at the 35th AIAA/SAE/ASME Joint Propulsion Conference and Exhibit, Los Angeles, CA, 1999.
F. J. Malina, ''The U.S. Army Air Corps Jet Propulsion Research Project GALCIT Project No. 1, 1939-1946: A Memoir,'' in Proceedings of the Third Through the Sixth History Symposia of the International Academy of Astronautics, Vienna, Austria, October 13, 1972, R. Cargill, Ed., NASA Science and Technical Information Office, 1977.
P. T. Carroll, ''Historical Origins of the Sergeant Missile Powerplant,'' Eighth History of Astronautics Symposium of the International Academy of Astronautics, Amsterdam, 1974.
J. W. Parsons, U.S. Patent 2,783,138 (to Aerojet-General Corp.), February 26, 1957.
M. S. Cohen, ''Advanced Binders for Solid Propellants-A Review,'' in Advanced Propellant Chemistry, Adv. Chem. Ser., Vol. 54, R. F. Gould, Ed., American Chemical Society, Washington, DC, 1966.
R. Metzger, Book of Lies: The Disinformation Guide to Magick and the Occult, 2nd ed., Red Wheel/Weiser/Conari, Newburyport, MA, 2008.
E. Konrad and E. Tschunkur, U.S. Patent 1,973,000 (to I.G. Farben), September 11, 1934.
J. C. Patrick, U.S. Patent 1,996,486, April 2, 1935.
D. C. Edwards, ''Liquid Rubber,'' in Handbook of Elastomers, 2nd ed., A. K. Bhowmick and H. L. Stephens, Eds., Marcel Dekker, New York, 2000.
R. P. Feynman, What Do You Care What Other People Think? Norton, New York, 1988.
J. S. Jorczak and E. M. Fettes, Ind. Eng. Chem. 43, 324 (1951).
H. G. Ang and S. Piharath, Energetic Polymers, Wiley-VCH, Weinheim, Germany, 2012.
L. L. Weil, U.S. Patent 2,966,403 (to Atlantic Research Corp.), December 27, 1960; U.S. Patent 2,967,098, January 3, 1961.
K. E. Rumbel, ''Poly(vinyl chloride) Plastisol Propellants,'' in Propellants Manufacture, Hazards and Testing, Advances in Chemistry Series, Vol. 88, C. Boyars and K. Klager, Eds., American Chemical Society, Washington, DC, 1967.
J. D. Martin, ''Polyvinyl Chloride Plastisol Propellants,'' presented at the 20th AIAA/SAE/ASME Joint Propulsion Conference, Cincinnati, OH, June 11-13, 1984.
R. C. Wilson, T. Liggett, G. C. Cox, and J. E. Geist, ''Continuous Process for Producing Pelletized Nitrocellulose,'' Technical Report for Naval Ordnance Systems Command (AD-764 515), National Technical Information Service, Springfield, VA, June 1973.
R. Carlisle, Powder and Propellants: Energetic Materials at Indian Head, Maryland 1890-2001, 2nd ed., University of North Texas Press, Denton, 2002.
E. J. Mastrolia and K. Klager, ''Solid Propellants Based on Polybutadiene Binders,'' in Propellants Manufacture, Hazards and Testing, Advances in Chemistry Series, Vol. 88, C. Boyars and K. Klager, Eds., American Chemical Society, Washington, DC, 1967.
T. L. Moore, ''Assessment of HTPB and PBAN Propellant Usage in the United States,'' presented at the AIAA/ASME/SAE/ASEE Joint Propulsion Conference, Seattle, WA, 1997.
M. B. Berenbaum, G. F. Bulbenko, R. H. Gobran, and R. F. Hoffman, U.S. Patent 3,235,589 (to Thiokol Chemical Corp.), February 15, 1966.
C. A. Uraneek, J. N. Short, and R. P. Zelinski, U.S. Patent 3,135,716 (to Conoco Phillips), November 11, 1958.
M. Gruntman, Blazing the Trail: The Early History of Spacecraft and Rocketry, American Institute of Aeronautics and Astronautics, Reston, VA, 2004.
K. Klager, ''Polyurethanes, the Most Versatile Binder for Solid Composite Propellants,'' presented at the AIAA/SAE/ASME 20th Joint Propulsion Conference, Cincinnati, OH, 1984.
J. A. Verdol and P. W. Ryan, U.S. Patent 3,427,366 (to Sinclair Research, Inc.), February 11, 1969.
J. N. Anderson, S. K. Baczek, H. E. Adams, and L. E. Vescelius, J. Appl. Polym. Sci. 19, 2255 (1975).
S. K. Baczek, J. N. Anderson, and H. E. Adams, J. Appl. Polym. Sci. 19, 2269 (1975).
K. N. Ninan, V. P. Balagangadharan, and K. B. Catherine, Polymer 32, 628 (1991).
A. E. Oberth and R. S. Bruenner, ''Polyurethane-Based Propellants,'' in Propellants Manufacture, Hazards and Testing, Advances in Chemistry Series, Vol. 88, C. Boyars and K. Klager, Eds., American Chemical Society, Washington, DC, 1969.
J. W. Baker and J. Gaunt, J. Chem. Soc. 9, 19 (1949).
V. Sekkar, A. S. Alex, V. Kumar, and G. G. Bandyopadhyay, J. Macro. Sci. A Pure Appl. Chem. 54, 171 (2017).
Hazard Assessment Tests for Non-Nuclear Munitions, MIL-STD-2105D, 2011.
J. P. Agrawal, High Energy Materials: Propellants, Explosives and Pyrotechnics, Wiley-VCH, Weinheim, Germany, 2010.
W. J. Murbach, W. R. Fish, and R. W. Dolah, ''Polyglycidyl Nitrate. Part 1. Preparation and Polymerization of Glycidyl Nitrate,'' Technical Report for Naval Ordnance Systems Command (NAVORD Report 2028, Part 1), Naval Ordnance Test Station, Inyokern, CA, May 1953.
J. G. Meitner, C. J. Thelen, W. J. Murbach, and R. W. Dolah, ''Polyglycidyl Nitrate. Part 2. Preparation and Characterization of Polyglycidyl Nitrate,'' Technical Report for Naval Ordnance Systems Command (NAVORD Report 2028, Part 2), Naval Ordnance Test Station, Inyokern, CA, May 1953.
A. Provatas, ''Characterisation and Polymerisation Studies of Energetic Binders,'' Technical Report for Defence Science & Technology Organisation (DSTO-1171), Victoria, Australia, 2001.
N. C. Paul, R. W. Millar, and P. G. Golding, U.S. Patent 5,145,974 (to U.K. Government), September 8, 1992.
R. L. Willer, R. S. Day, and A. G. Stern, U.S. Patent 5,120,827 (to Thiokol Corp.), June 9, 1992.
G. E. Manser, U.S. Patent 4,483,978 (to SRI International), November 20, 1984.
E. J. Vandenberg, U.S. Patent 3,645,917 (to Hercules Inc.), February 29, 1972.
M. B. Frankel, L. R. Grant, and J. E. Flanagan, J. Prop. Power 8, 560 (1992).
M. D. McPherson, U.S. Patent Application Publication 2010/0294113 (to Aerojet Rocketdyne), November 25, 2010.
J. G. Osorio and F. J. Muzzio, Powder Technol. 278, 46 (2015).
J. Fuller, D. Ehrlich, P. Lu, R. Jansen, and J. Hoffman, ''Advantages of Rapid Prototyping for Hybrid Rocket Motor Fuel Grain Fabrication,'' presented at the 47th AIAA/ASME/SAE/ASEE Joint Propulsion Conference & Exhibit, San Diego, CA, 2011.
R. A. Chandru, N. Balasubramanian, C. Oommen, and B. N. Raghunandan, J. Prop. Power 34, 1090 (2018).
R. López, A. Ortega de la Rosa, A. Salazar, and J. Rodríguez, J. Prop. Power 34, 75 (2018).
R. V. Binda, R. J. Rocha, and L. E. Nunes Almeida, ''Study of Factors Influencing the Life Predictions of Solid Rocket Motor,'' in Energetic Materials Research, Applications, and New Technologies, R. F. B. Goncalves, J. A. F. F. Rocco, and K. Iha, Eds., IGI Global, Hershey, PA, 2018.
F. Xu, N. Aravas, and P. Sofronis, J. Mech. Phys. Sol. 56, 2050 (2008).
P. A. Kakavas and A. V. Perig, Rev. Mat. 20, 407 (2015).
W. V. Mars and A. Fatemi, RUBBER CHEM. TECHNOL. 77, 391 (2004).
G. R. Hamed, RUBBER CHEM. TECHNOL. 78, 548 (2005).
P. A. Kakavas and W. V. Chang, J. Appl. Polym. Sci. 45, 865 (1992).
G. Li, Y. Wang, A. Jiang, M. Yang, and J. F. Li, Prop. Expl. Pyro. 43, 642 (2018).
H. C. Yildirim and S. Ozupek, Aerosp. Sci. Technol. 15, 635 (2011).
R. S. Rivlin, RUBBER CHEM. TECHNOL. 65, 51 (1992).
R. W. Ogden, Proc. Roy. Soc. A 326, 565 (1972).
C. O. Horgan and G. Saccomandi, RUBBER CHEM. TECHNOL. 79, 152 (2006).
C. M. Roland, Viscoelastic Behavior of Rubbery Materials, Oxford University Press, Oxford, UK, 2011.
V. Vahapoglu and S. Karadeniz, RUBBER CHEM. TECHNOL. 79, 489 (2006).
J. E. Mark and B. Erman, Rubber Elasticity: A Molecular Primer, 2nd ed., Cambridge University Press, Cambridge, UK, 2007.
G. Marckmann and E. Verron, RUBBER CHEM. TECHNOL. 79, 835 (2006).
R. J. Farris, Trans. Soc. Rheol. 12, 281 (1968).
J. H. J. Brouwers, Phys. Rev. E 81, 051402 (2010); 82, 049904 (2010).
F. Qi and R. I. Tanner, Rheol. Acta 51, 289 (2012).
A. Dorr, J. Rheol. 57, 43 (2013).
S. A. Faroughi and C. Huber, Phys. Rev. E 90, 052303 (2014).
B. Jiang, RUBBER CHEM. TECHNOL. 90, 743 (2017).
T. Koga, T. Hashimoto, M. Takenaka, K. Aizawa, N. Amino, M. Nakamura, D. Yamaguchi, and S. Koizumi, Macromolecules 41, 453 (2008).
E. Guth, J. Appl. Phys. 16, 20 (1945).
A. I. Medalia, RUBBER CHEM. TECHNOL. 45, 1171 (1972).
A. I. Medalia, RUBBER CHEM. TECHNOL. 46, 877 (1971).
B. A. Horri, P. Ranganathan, C. Selomulya, and H. Wang, Chem. Eng. Sci. 66, 2798 (2011).
J. S. Chong, E. B. Christiansen, and A. D. Baer, J. Appl. Polym. Sci. 15, 2007 (1971).
C. M. Roland, RUBBER CHEM. TECHNOL. 62, 863 (1989).
C. M. Roland, RUBBER CHEM. TECHNOL. 62, 880 (1989).
P. G. Santangelo and C. M. Roland, RUBBER CHEM. TECHNOL. 65, 965 (1992).
P. H. Mott and C. M. Roland, Macromolecules 29, 6941 (1996).
C. M. Roland and P. H. Mott, Macromolecules 31, 4033 (1998).
C. M. Roland, P. H. Mott, and G. Heinrich, Comput. Theor. Polym. Sci. 9, 197 (1999).
J. Diani, RUBBER CHEM. TECHNOL. 89, 22 (2016).
L. Khalalili, A. I. Azad, J. Lin, and R. Dargazany, RUBBER CHEM. TECHNOL. 92, 51 (2019).
C. R. Taylor and J. D. Ferry, J. Rheol. 23, 533 (1979).
P. H. Mott, A. Rizos, and C. M. Roland, Macromolecules 34, 4476 (2001).
D. E. Hanson and C. M. Roland, J. Polym. Sci. Polym. Phys. Ed. 48, 1795 (2010); 52, 1178 (2014).
E. J. S. Duncan and J. Margetson, Prop. Expl. Pyro. 23, 94 (1998).
J. Hur, J.-B. Park, G.-D. Jung, and S.-K. Young, Int. J. Sol. Struc. 87, 110 (2016).
G.-D. Jung, S.-K. Youn, and B.-K. Kim, Int. J. Solids Struct. 37, 4715 (2000).
S.-Y. Ho, J. Prop. Power 18, 1106 (2002).
M. E. Canga, W. B. Becker, and S. Ozupek, Comput. Methods Appl. Mech. Eng. 190, 2207 (2001).
Z. Wang, H. Qiang, T. Wang, G. Wang, and X. Hou, Mech. Time Dep. Matl. 22, 291 (2018).
L. Zhang, S. Zhi, and Z. Shen, Prop. Expl. Pyro. 43, 234 (2018).
S. W. Park and R. A. Schapery, Int. J. Solids Struct. 34, 931 (1997).
K. Ha and R. A. Schapery, Int. J. Solids Struct. 35, 3497 (1998).
R. M. Hinterhoelzl and R. A. Shapery, Mech. Time Dep. Mater. 8, 65 (2004).
G. H. Lindsey and J. E. Woods, J. Eng. Mat. Technol. 97, 271 (1975).
R. M. Christensen, J. Mech. Phys. Solids 38, 379 (1990).
K. S. Yun, J.-B. Park, G.-D. Jung, and S.-K. Youn, Int. J. Solid Struct. 80, 118 (2016).
J. A. C. Harwood, L. Mullins, and A. R. Payne, J. Appl. Polym. Sci. 9, 3011 (1965).
J. A. C. Harwood and A. R. Payne, J. Appl. Polym. Sci. 10, 315 (1966); 10, 1203 (1966).
N. Fishman and J. A. Rinde, ''Solid Propellant Mechanical Properties Investigations,'' Quarterly Progress Report No. II, SRI Project No. PRU-4142, 1962.
T. H. Duerr and B. P. Marsh, ''Solid Propellant Grain Structural Behavior,'' in Tactical Missile Propulsion, G. E. Jensen and D. W. Netzer, Eds., American Institute of Aeronautics and Astronautics, Reston, VA, 1996.
T. L. Smith, J. Polym. Sci. A 1, 3597 (1963).
T. L. Smith, J. Appl. Phys. 35, 27 (1964).
K. L. Ngai and D. J. Plazek, RUBBER CHEM. TECHNOL. 68, 376 (1995).
C. M. Roland, RUBBER CHEM. TECHNOL. 79, 429 (2006).
H. Cui, Z. Shen, and H. Li, Meccanica 53, 2393 (2018).
O. Yilmaz, B. Kuran, and G. O. Ozgen, J. Spacecraft Rockets 54, 1356 (2017).
B. Deng, Z.-B. Shen, J.-B. Duan, and G.-J. Tang, Sci. Chi. Phys. Mech. Astro. 57, 908 (2014).
M. Celina, A. C. Graham, K. T. Gillen, R. A. Asink, and L. M. Minier, RUBBER CHEM. TECHNOL. 73, 678 (2000).
M. Celina and K. T. Gillen, Macromolecules 38, 2754 (2005).
A. G. Johnson, National Defense C111, 16 (2019).
|
[] |
[
"Contrastive analysis for scatterplot-based representations of dimensionality reduction",
"Contrastive analysis for scatterplot-based representations of dimensionality reduction"
] |
[
"Wilson E Marcílio-Jr \nFaculty of Sciences and Technology\nSão Paulo State University (UNESP)\n19060-900Presidente PrudenteSPBrazil\n",
"Danilo M Eler \nFaculty of Sciences and Technology\nSão Paulo State University (UNESP)\n19060-900Presidente PrudenteSPBrazil\n",
"Rogério E Garcia \nFaculty of Sciences and Technology\nSão Paulo State University (UNESP)\n19060-900Presidente PrudenteSPBrazil\n"
] |
[
"Faculty of Sciences and Technology\nSão Paulo State University (UNESP)\n19060-900Presidente PrudenteSPBrazil",
"Faculty of Sciences and Technology\nSão Paulo State University (UNESP)\n19060-900Presidente PrudenteSPBrazil",
"Faculty of Sciences and Technology\nSão Paulo State University (UNESP)\n19060-900Presidente PrudenteSPBrazil"
] |
[] |
ABSTRACT: Cluster interpretation after dimensionality reduction (DR) is a ubiquitous part of exploring multidimensional datasets. DR results are frequently represented by scatterplots, where spatial proximity encodes similarity among data samples. In the literature, techniques support the understanding of scatterplots' organization by visualizing the importance of the features for cluster definition with layout enrichment strategies. However, current approaches usually focus on global information, hampering the analysis whenever the focus is to understand the differences among clusters. Thus, this paper introduces a methodology to visually explore DR results and interpret clusters' formation based on contrastive analysis. We also introduce a bipartite graph to visually interpret and explore the relationship between the statistical variables employed to understand how the data features influence cluster formation. Our approach is demonstrated through case studies, in which we explore two document collections related to news articles and tweets about COVID-19 symptoms. Finally, we evaluate our approach through quantitative results to demonstrate its robustness to support multidimensional analysis.
|
10.1016/j.cag.2021.08.014
|
[
"https://arxiv.org/pdf/2101.12044v2.pdf"
] | 231,719,661 |
2101.12044
|
588109b1fc1ac654af0044547212adceba22877d
|
Contrastive analysis for scatterplot-based representations of dimensionality reduction
Wilson E Marcílio-Jr
Faculty of Sciences and Technology
São Paulo State University (UNESP)
19060-900Presidente PrudenteSPBrazil
Danilo M Eler
Faculty of Sciences and Technology
São Paulo State University (UNESP)
19060-900Presidente PrudenteSPBrazil
Rogério E Garcia
Faculty of Sciences and Technology
São Paulo State University (UNESP)
19060-900Presidente PrudenteSPBrazil
Contrastive analysis for scatterplot-based representations of dimensionality reduction
ARTICLE INFO — Keywords: visual interpretation, dimensionality reduction, contrastive analysis
ABSTRACT: Cluster interpretation after dimensionality reduction (DR) is a ubiquitous part of exploring multidimensional datasets. DR results are frequently represented by scatterplots, where spatial proximity encodes similarity among data samples. In the literature, techniques support the understanding of scatterplots' organization by visualizing the importance of the features for cluster definition with layout enrichment strategies. However, current approaches usually focus on global information, hampering the analysis whenever the focus is to understand the differences among clusters. Thus, this paper introduces a methodology to visually explore DR results and interpret clusters' formation based on contrastive analysis. We also introduce a bipartite graph to visually interpret and explore the relationship between the statistical variables employed to understand how the data features influence cluster formation. Our approach is demonstrated through case studies, in which we explore two document collections related to news articles and tweets about COVID-19 symptoms. Finally, we evaluate our approach through quantitative results to demonstrate its robustness to support multidimensional analysis.
Introduction
The analysis of high-dimensional datasets through dimensionality reduction (DR) [1,2,3] presents unprecedented opportunities to understand various phenomena. Using scatterplot representations of DR results, analysts inspect clusters to understand data nuances and features' contribution to the layout organization in the projected space.
One promising strategy to analyze DR results, called contrastive analysis [4,5], is understanding how clusters differ. Thus, the motivation is to find and comprehend the unique characteristics of each cluster. For instance, a tool for labeling textual data would benefit from contrastive analysis to highlight the differences among the clusters (e.g., topics); these clusters would represent candidates for new classes because each cluster has its unique characteristics. Another important application is understanding which features describe two separated groups of patients after a medical experiment [6].
Only a few works focus on providing contrastive analysis [4,5]. Instead, the literature presents a handful of studies that support the analysis of DR results through the visualization of global information [7,8,9,10,11], emphasizing the importance of the data features (attributes) used by DR techniques to organize the embedded space. A few main approaches [7,8] use the principal components (PCs) from PCA [12] to find which features contribute to cluster formation. However, these contributions do not emphasize the unique characteristics of the clusters. On the other hand, ccPCA [4] finds contrastive information (i.e., the unique characteristics) for each cluster through the systematic application of contrastive PCA [13]. ccPCA's main limitation comes from PCA itself: a prohibitive run-time for high-dimensional datasets, since it takes O(d^3) time in the number of dimensions d. Another approach, ContraVis [5], applies only to textual data and cannot help interpret DR layouts since it is itself a dimensionality reduction technique. Furthermore, that study does not present a strategy to understand the terms' contribution to the layout organization of the visual space.
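For intuition, contrastive PCA (the building block that ccPCA applies once per cluster) seeks directions with high variance in a target group but low variance in a background group; its cost is dominated by the eigendecomposition of a d × d contrast matrix, which is cubic in the number of dimensions d. The sketch below is an illustrative reimplementation of that idea, not ccPCA's actual code; the function name and the fixed contrast parameter `alpha` are our own choices.

```python
import numpy as np

def contrastive_pca(target, background, alpha=1.0, n_components=2):
    """Illustrative contrastive PCA: directions with high variance in
    `target` (e.g., one cluster) but low variance in `background`
    (e.g., the remaining samples), via the eigendecomposition of
    C_target - alpha * C_background."""
    c_target = np.cov(target, rowvar=False)
    c_background = np.cov(background, rowvar=False)
    contrast = c_target - alpha * c_background
    # The contrast matrix is symmetric, so eigh applies; it costs O(d^3).
    eigvals, eigvecs = np.linalg.eigh(contrast)
    # Sort eigenvalues in descending order and keep the top directions.
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:n_components]]
```

Projecting the full dataset onto the returned directions (`X @ W`) then highlights what distinguishes the target group from the background.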
In this work, we propose cExpression, an approach to analyze DR results using contrastive analysis along with a carefully designed visualization technique. More precisely, we use statistical variables to find the most distinctive features of clusters (t-scores) and the confidence of the results (p-values). Although the t-test is a common method for feature selection in machine learning and in bioinformatics for defining cell types, it is not well explored for analyzing dimensionality reduction results. In our visualization design, users can interact with scatterplot representations of multidimensional datasets to visualize the clusters' summaries, designed after the definition of several requirements. We use focus+context interaction on a bipartite graph to communicate the relationship between t-scores and p-values. The focus+context interaction helps users explore a greater amount of information while inspecting small multiples of features' histograms. A heatmap representation of the most distinctive features for each cluster also helps to overview the structures. Finally, we propose an encoding strategy to simultaneously communicate the distribution of feature values in the scatterplot representation.
As we demonstrate in the numerical experiments, cExpression can be applied to various data types and is scalable enough to handle big datasets with thousands of dimensions. While our approach's computational components help generate contrastive information rapidly, our visualization design is simple and effective for analyzing even complex textual data.
In summary, our contributions are:
• A strategy to analyze and interpret dimensionality reduction through clusters using contrastive analysis;
• Novel visualization strategies to analyze the relationship among statistical variables and simultaneously visualize various features in the scatterplot;
• An annotated dataset of COVID-19 tweets retrieved from March 2020 to August 2020.
This work is organized as follows: Section 2 presents the related works; Section 3 delineates our methodology accompanied with motivation and the visualization design; Section 4 shows the case studies; Section 5 shows numerical evaluation; Section 6 presents discussions about the work; the work is concluded in Section 7.
Related Works
To support the analysis of dimensionality reduction (DR) results, layout enrichment strategies [14] unite visualization approaches with valuable information extracted from the data in high-dimensional space concerning the low-dimensional representations, usually on ℝ². Examples include using bar charts and color encoding to understand three-dimensional projections [9] or encoding attribute variation using Delaunay triangulation to assess neighborhood relations in projections on ℝ² [15]. Another interesting work, proposed by Tian et al. [16], extends Silva et al.'s [15] work by aggregating the explanatory mechanisms in a visual analytics system, together with a new local explanation method (variance ratio) that describes the dimensionality of local neighborhoods. Probing Projections [10], for example, depicts error information by displaying a halo around each dot in a DR layout, besides providing interaction mechanisms to understand distortions in the projection process. The majority of the works use traditional statistical charts to visualize attribute variability [11], neighborhood and class errors [17], or quality metrics [18]. For instance, Martins et al. [17] use space-filling techniques to help users reason about the influence of parameters on neighborhood preservation and other quality aspects of parameterized projections.
More related to our work are techniques that find important features given clusters of data points. For example, the Linear Discriminative Coordinates [19] technique uses LDA [20] to produce cohesive clusters by discarding the least important features. Joia et al. [8] use PCA to find the most important features through a simple matrix transposition and later visualize the feature names as word clouds within each cluster region; such an approach tends to be influenced by classes with higher variation. Our approach overcomes these problems by comparing the differences between distributions and returning the confidence levels associated with such differences. Another work, proposed by Turkay et al. [7], also uses the principal components computed by PCA to obtain representative features for MDS [21]. The problem with using PCA is that it focuses on the global characteristics of datasets, making it challenging to analyze the unique characteristics of clusters, which we address using contrastive analysis.
Recently, Fujiwara et al. [4] proposed the contrastive cluster PCA (ccPCA) technique, which finds the most important features for each cluster in a projection. Fujiwara et al.'s approach differs from Joia et al.'s and Turkay et al.'s works in providing a way to understand which features highly contribute to the differentiation of clusters. ccPCA's main limitation is its prohibitively long run time for datasets with many dimensions, such as document collections [22,23,24,25], gene expression [26,27,28,29], or filter activations [30]. Finally, there is no consistent way to visually relate the distribution of values with the feature contributions returned by contrastive PCA. Unlike ccPCA, our proposal successfully analyzes datasets with very high dimensionality while presenting more consistent results, as shown in the following sections. Furthermore, our approach yields more interpretable results since it is based upon well-known statistical measures. Another exciting work, proposed by Le et al. [5], combines topic extraction with visualization of the unique features of document collections. Although their technique can differentiate clusters well, it cannot be used to analyze other dimensionality reduction approaches, and it is limited to textual data. Our approach is uncoupled from specific dimensionality reduction techniques and can be used to analyze various data types.
In the following section, we detail cExpression, which comprises a method for retrieving contrastive information and an associated visualization tool.
cExpression: a tool for contrastive analysis of DR results
Before detailing our approach in the following sections, Fig. 1 shows the workflow for using cExpression to interpret clusters after dimensionality reduction. First, the user preprocesses a high-dimensional dataset by applying a dimensionality reduction technique and annotating the clusters perceived in the visual space, which results in the state (A). Then, our approach uses the t-test to compute a measure of deviation (t-score) and an associated confidence level (p-value) for each pair of feature and cluster. For instance, the state (B) in Fig. 1 represents two distributions associated with one feature: one for a cluster of interest and another for the remainder of the dataset. The comparison using the t-test answers the question "Is this feature a unique characteristic of this cluster?". Finally, the data generated from this process feeds a visualization tool (C) that uses coordinated views to help in the exploratory process.
Computing contrastive information
Figure 1: The cExpression pipeline. The user provides a high-dimensional dataset with ℝ² coordinates after dimensionality reduction and cluster labels (A). The clustering labels and high-dimensional data are used to compute the contrastive information for each pair of cluster and feature (B). The data generated from this process is used to create visualization approaches that highlight the unique characteristics of each cluster (C).

Figure 2: Defining distinctive features. The feature's distribution (a) in a cluster of interest is compared to its distribution on the remaining dataset (b). Then, Student's t-test (c) returns the probability of these two distributions being different (d). Finally, the features are arranged in decreasing order of t-score to communicate discriminative power (e).

Our method is based on comparing distributions of values of different clusters and using the t-test to verify whether the differences among these distributions are enough to describe clusters. Given a dataset, let F and F′ be the values of a feature f for the data observations in a cluster c and the values of the same feature for the data observations not in c (the complement c′); c and c′ are clusters defined in the visual space from which users want to find unique characteristics. Then, we compute the summary statistics needed for the t-test: the averages and variances of F and F′, denoted F̄, F̄′ and var(F), var(F′). Finally, the t-statistic is calculated as follows:
t = (F̄ − F̄′) / √( var(F)/|F| + var(F′)/|F′| )    (1)
in which | · | stands for the size of the sets. Having the t-statistic, we use a statistical library to find the p-value, which corresponds to the probability of rejecting a null hypothesis. In other words, the p-value represents the probability with which we can assume that the distribution of values in cluster c for feature f and the distribution of values not in cluster c for feature f are equal. Notice that, fixing a cluster and a feature, smaller p-values stand for higher importance since they mean (by the null hypothesis) that the distributions are different; the t-scores are measures of standard deviation. In this case, very high or very low (negative) t-scores are associated with very small p-values and are found in the tails of the t-distribution, as illustrated in Fig. 2(d). A similar approach to compute explanations, the variance-based explanation [15,16], computes the variance of a neighborhood relative to the entire dataset, thus emphasizing how dimensions contribute to similarity within local neighborhoods. Such an explanation approach emphasizes which dimensions make neighbors similar, while the t-score measures how much two distributions deviate from each other. The main difference between these two approaches consists of the way they find the features' relevance: while the t-score compares disjoint sets, the variance-based explanation looks at the contribution to neighborhoods over the entire dataset.
To interpret clusters using contrastive analysis, we perform the process described above for each pair of cluster and feature of the dataset. Then, t-scores are used as a ranking metric. It is important to emphasize that these clusters must be perceived in the visual space, so the insight generated from the exploratory process is consistent with the organization visualized in the scatterplot. Another important aspect is that the cluster labels, encoded by different colors and responsible for defining the clusters, are not computed by our tool. Instead, users have to provide their own processed file with cluster labels and (x, y) coordinates resulting from dimensionality reduction. Fig. 2 illustrates the whole idea for a fixed cluster and a fixed feature. From the dataset in (a), the distributions of values for the cluster and feature are generated (b): red encodes the feature distribution of data samples in the cluster, and gray encodes the distribution of data samples outside the cluster. Based on these distributions, we use Student's t-test (c) to obtain the probability (p-value, d) with which the distributions are similar. Finally, ordering the cluster's features in decreasing order of absolute t-score places the most defining features of the cluster on top.
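The cluster-versus-rest scoring and ranking described above can be sketched in Python with SciPy; `ttest_ind` with `equal_var=False` computes the Welch form of the t-statistic in Eq. (1). All names (`contrastive_scores`, `X`, `labels`) are illustrative and not taken from the paper's implementation:

```python
import numpy as np
from scipy import stats

def contrastive_scores(X, labels, cluster):
    """For every feature, compare its values inside `cluster` against the
    rest of the dataset with Welch's t-test (Eq. (1) does not assume equal
    variances). Returns one (t-score, p-value) pair per feature."""
    inside = X[labels == cluster]
    outside = X[labels != cluster]
    return stats.ttest_ind(inside, outside, axis=0, equal_var=False)

def rank_features(t_scores, p_values, feature_names):
    """Order features by decreasing absolute t-score so the most
    distinctive features of the cluster come first."""
    order = np.argsort(-np.abs(t_scores))
    return [(feature_names[i], t_scores[i], p_values[i]) for i in order]

# Toy data: only feature f0 clearly separates cluster 1 from the rest.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 3))
labels = np.array([0] * 100 + [1] * 100)
X[labels == 1, 0] += 5.0  # strong shift on f0 for cluster 1
t_scores, p_values = contrastive_scores(X, labels, cluster=1)
ranking = rank_features(t_scores, p_values, ["f0", "f1", "f2"])
```

On this toy data, f0 comes out on top with a large positive t-score and a vanishingly small p-value, matching the intuition that very small p-values mark distinctive features.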
Let us apply the concept delineated above to a multivariate dataset. Fig. 3 (left) shows a UMAP [3] projection of the Vertebral dataset, which consists of 310 data instances described by six bio-mechanical features derived from the shape and orientation of the pelvis and lumbar spine: pelvic incidence, pelvic tilt, lumbar lordosis angle, sacral slope, pelvic radius, and degree of spondylolisthesis (see the Supplementary File for a case study with this dataset). The class of interest corresponds to patients with Spondylolisthesis, a disturbance of the spine where a bone (vertebra) slides forward over the bone below it.
One critical feature for patients with Spondylolisthesis is the degree of spondylolisthesis, which is known to be high in those patients. It is reasonable to think that an algorithm that tries to find important features would select the degree of spondylolisthesis for defining such a class. To investigate cExpression on this task, we explore the distribution of values of the important features throughout the projection. Fig. 3 (right) shows the two most important features retrieved by cExpression. The scatterplot showing the normalized feature values also confirms that degree of spondylolisthesis is a reasonable candidate for a distinctive feature. That is, data instances from the Spondylolisthesis class present much higher values for such a feature than data instances from the other classes.
Visualization Design
We designed a few visualization strategies to analyze dimensionality reduction results with cExpression. The tool in Fig. 4 shows two main views: the scatterplot view and the heatmap view. Firstly, users provide the high-dimensional dataset and the points after dimensionality reduction (ℝ²), define the cell size for the cell-based encoding visualization, and specify the number of features automatically inspected using the summary visualization (a). The scatterplot representation of the projected dataset (b) serves as the basis for the interaction with the bipartite graph (c), which represents the relationship among the statistical variables. In Fig. 4, for example, the bipartite graph shows information for the selected cluster. Users can toggle the distribution plots (d and e) to inspect the feature distribution in the scatterplot representation. The heatmap view (f) summarizes the feature importance, where a diverging color scale from purple to green encodes the t-scores. The sizes of the heatmap's tiles encode the number of decimal places of the associated p-value. High t-scores (represented by high color saturation) are associated with lower p-values (greater tile sizes) in the heatmap visualization. We explain the decisions about the visualization encodings by first delineating the task and visual requirements in the next section.
Visual and task requirements
We have followed the layout enrichment [14] strategy of scatterplot representations after dimensionality reduction to design our visualizations. Thus, the following task requirements (TRs) help us to understand the cluster formation in DR results (projection):
• TR1: Compare a cluster against the rest of the dataset;
• TR2: Compare two selected clusters.
The task requirements (TRs) help users to understand the projection structures by comparing how clusters differ. To achieve these tasks, we delineate the minimum requirements that our visualization must comprise:
• VR1: Visualize the importance of a feature;
• VR2: Visualize the distribution of values of the compared components;
• VR3: Know the feature name;
• VR4: Assess how trustworthy the result is;
• VR5: Understand the organization of features distribution in the DR layout.
These visualization requirements (VRs) are analyzed together during exploratory data analysis. That is, knowing the most discriminative features for a cluster (VR1 and VR3), users understand how the cluster differs from the rest of the dataset (VR2) and how feature values are distributed throughout the DR layout (VR5). VR4, related to the p-values, informs users of the statistical relevance of the difference.
Lastly, since tasks TR1 and TR2 are not new when dealing with scatterplot visualizations [31,32,33], other works address the same problem. One similar work [34] re-projects high-dimensional data onto ℝ² based on feature selection to induce cluster separation and help in classification tasks. Instead, we focus on contrastive analysis to explain which features are important for cluster formation; we are interested in improving the understanding, not the projection itself.
Visualizing the feature importance
To compactly fulfil requirements VR1-4, we visualize the importance of the features as positions over a horizontal axis while providing the distribution plots for each visualized feature. Fig. 5 shows the visualization design, where the color hue of the distribution plots corresponds to the selected cluster, following the operation illustrated in Fig. 4 (c).
The features' importance is encoded by a line segment representing the relationship between p-values and t-scores. The axes (a) and (b) in Fig. 5 present the same information, i.e., the number of decimal places after zero of each p-value. These two axes are central to our focus+context interaction, detailed in Section 3.2.6. The gray circles in the context axis (a) represent the analyzed features, whose horizontal position encodes the p-values; since there is no information to encode on the vertical position, we assign a random value to reduce overlapping among circles. Notice that with this design, we accomplish requirements VR1 and VR4. However, we still need to address the other visualization requirements to differentiate between the features while understanding the confidence in the feature differentiation. Finally, the bipartite graph visualization in Fig. 5 shows the min(m, 50) most important features to reduce overplotting, where m stands for the dimensionality of the dataset. We focus on a selected number of features independently of the dataset dimensionality since we empirically found that the most helpful information remains in the few top ones. Furthermore, the use cases show how this approach is adequate even for textual data, which usually contains hundreds of dimensions. The ℝ² coordinates are used for the scatterplot layout (c), from which the analysis starts. Users click on points of a cluster to investigate their contrastive information. For instance, the bipartite graph (c) shows the distinctive terms for the dark-orange cluster. There is also an interaction mechanism to visualize the feature values in the scatterplot visualization (d, e). Finally, an overview analysis can be achieved through the heatmap encoding; red-to-blue and continuous diverging color scales encode the p-values and t-scores, respectively. Notice that we visualize the number of decimal places after zero instead of visualizing the p-value itself.
For focus+context interaction, the first axis (a) gives the context of the available features while the second axis (b) corresponds to the focus; in this case, the two are the same. The links between the focus axis and the t-score axis represent the relationship between p-values and t-scores, in which green encodes positive values and red encodes negative values. Finally, the four most distinctive features are pre-selected and visualized as histogram plots (c).
A red-to-blue color scale helps users to identify the feature confidence: blueish colors encode more confidence (lower p-values). Besides the confidence in the difference represented by the p-values, we encode the difference between distributions (t-scores) using another axis. The t-score represents the deviation between the distributions of a feature for the analyzed cluster and the rest of the dataset. The relationship among these statistical variables is shown through line segments, and the color indicates the sign of the t-score (pink for negative and green for positive). These line segments are drawn with higher transparency when they are not selected, and an edge bundling algorithm is employed to reduce the over-plotting; see the effect on the left part of Fig. 5. As we will show in the following section, the rectangles below the features' names help dynamically assign color hues to features when exploring their distribution in the scatterplot representation.
The requirements VR2 and VR3 are fulfilled with a distribution plot enriched with other information. The distribution of the values is encoded using a histogram (see Fig. 5(c)). As in Fujiwara et al.'s work [4], the vertical axis encodes the relative frequency of the bins, that is, the number of data observations with a particular value divided by the number of data observations in the cluster (for histograms of the cluster) or outside the cluster (for histograms of the remaining data). The color of the bars of the distribution plots (and their borders) assumes the corresponding cluster's color, while the distribution plots of the values for the rest of the dataset receive gray. For example, the distribution plots (c) have the same color as the selected cluster (b) in Fig. 4.
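The relative-frequency histograms described above can be sketched with NumPy; dividing bin counts by the group size makes the in-cluster and out-of-cluster plots comparable regardless of cluster size. The function and variable names below are illustrative, not from the tool's code:

```python
import numpy as np

def relative_histogram(values, bins=10, value_range=None):
    """Histogram whose bar heights are relative frequencies: bin counts
    divided by the number of observations in the group, so in-cluster and
    out-of-cluster plots are comparable regardless of group size."""
    counts, edges = np.histogram(values, bins=bins, range=value_range)
    return counts / len(values), edges

# In-cluster and out-of-cluster values share the same bin edges so the
# two histograms can be overlaid in a distribution plot.
rng = np.random.default_rng(1)
in_cluster = rng.normal(5.0, 1.0, size=50)
out_cluster = rng.normal(0.0, 1.0, size=450)
span = (min(in_cluster.min(), out_cluster.min()),
        max(in_cluster.max(), out_cluster.max()))
freq_in, edges = relative_histogram(in_cluster, value_range=span)
freq_out, _ = relative_histogram(out_cluster, value_range=span)
```

Because every value falls inside the shared range, each group's relative frequencies sum to one.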
Visualizing feature value distribution
Visualizing the distribution of feature values helps to understand the influence of features on the DR layout. In Figs. 4 (d) and (e), for instance, the user interacts with the tool to visualize the values of the features peru and humala. A continuous color map solves the problem for a single feature (e.g., see Fig. 4 (d)). However, a higher number of features requires further attention to maintain the similarity relations and the analysis power of proximity-based scatterplot representations (e.g., see Fig. 4 (e)). Inspired by Sarikaya et al.'s [35] work on visualizing classification metrics of protein surfaces, we discretize the projected space into cells of fixed size (in pixels) to plot the contribution of each feature to the cell. Fig. 6 exemplifies the construction of the encoding for three features and only one cell. A continuous color scale is a simple yet effective option to inspect only one feature (b) since the markers only change according to the hue. The problem comes when inspecting more than one feature while trying to maintain the organization imposed by the dimensionality reduction technique on the resulting layout. Thus, we use the grid structure to group similar data points (similarly to the SADIRE [36] sampling technique) and divide the space according to the number of features expressed inside each cell. In (c), after adding a second feature to the inspection, the cell is divided and colored according to the features' color encoding. Moreover, color opacity is used to communicate feature expression inside the cell: the most opaque segment for a feature indicates the highest expression of that feature. The same happens when a third feature is added to the analysis (d), dividing the cell into three parts. Fig. 7 illustrates the encoding strategy discussed above. We selected four clusters (a).
After inspecting a cluster, the selection of the term syrian shows where the term is mostly expressed (notice that the colors representing the cluster and the term itself are not related) using a color scale, which helps users to investigate where the documents are talking about syrian. The aggregated encoding uses color saturation to communicate the expression level in regions defined by cells of a fixed size (defined by users) to visualize the co-expression of terms. Notice that the space inside each cell is divided by the number of features being inspected. The steps of Fig. 7 are achieved through user interaction with the distribution plots. As shown in Fig. 4(d-e), borders with straight lines indicate the feature inspection, and users use combinations to visualize multiple values. Notice that cells in which no data observations express any feature would not be perceived in the visualization using the opacity strategy discussed above. We use a texture to fill these cells and to maintain the general structure of the projected dataset. Lastly, the division of each cell corresponds to the number of features expressed in the cell; that is, if three features are selected but only two are expressed, the cell is divided into two equal parts.
Another important aspect of the cell-based encoding is the trade-off, governed by the cell size in pixels, between computational performance and quality of explanations. As we increase the cell size, finding which points belong to a cell becomes faster, although the resemblance to the original scatterplot decreases. The contrary is also true: as we reduce the cell size, the cell-based encoding approaches the scatterplot structure more closely, but it adds a computational burden, and a small cell size also hampers the visualization of the color encoding. To tackle these problems, we use a KD-tree to efficiently identify which points belong to a cell and use a cell size of 20x20 pixels, which we find a good compromise between the visualization of feature distribution and scatterplot structures. Finally, users can define the cell size before generating the visualizations (see Fig. 4).
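The cell-based aggregation can be sketched as follows. For clarity this sketch bins points by integer division of their screen coordinates instead of the KD-tree lookup mentioned above; per-cell mean feature values are rescaled to [0, 1] to drive segment opacity. The names and toy points are illustrative:

```python
import numpy as np
from collections import defaultdict

def cell_opacities(points, feature_values, cell_size=20):
    """Group projected points into cell_size x cell_size pixel cells and
    return, per cell, the mean feature value rescaled to [0, 1], to be
    used as the opacity of that feature's segment in the cell."""
    cells = defaultdict(list)
    for (x, y), v in zip(points, feature_values):
        cells[(int(x // cell_size), int(y // cell_size))].append(v)
    means = {c: float(np.mean(vs)) for c, vs in cells.items()}
    lo, hi = min(means.values()), max(means.values())
    span = (hi - lo) or 1.0  # all means equal: avoid division by zero
    return {c: (m - lo) / span for c, m in means.items()}

# Four projected points, one selected feature; 20x20 px cells.
points = [(5, 5), (15, 8), (25, 5), (45, 30)]
values = [0.1, 0.3, 0.9, 0.5]
opacity = cell_opacities(points, values)
```

The first two points fall into the same cell and are averaged; the cell with the highest mean expression gets full opacity.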
Comparing clusters
Besides comparing a cluster of interest to the rest of the dataset, it is helpful to understand the differences between two clusters, for instance, to understand why subclusters are formed.
For comparison, users select two clusters of interest in the scatterplot view. In this case, the contrastive information is retrieved from two arbitrary clusters. Fig. 8 shows an augmented version of the visualization of Fig. 5, where the information about each cluster is mirrored; the color scale representing the p-values is constructed based on the cluster with the longer range. When comparing clusters, the information provided to contrast clusters against the rest of the dataset is presented for both clusters in comparison (such as distribution plots, p-values, and t-score axes). The first cluster is encoded as usual (c), the information for the second is positioned on top (a), and the context axis assumes the middle of the visualization (b). In addition, the four most distinctive features for both clusters are shown (d).
Summary View
Although our technique emphasizes differences of the dataset even for a reasonable number of features, for some datasets it is interesting to understand more about a cluster's structure by visualizing the importance of more features. A heatmap provides an overview of the feature importance and visualizes the statistics used by our technique, as shown in Fig. 4(f). A purple-to-green color scale encodes the t-score, and the dimensions of the tiles encode the p-values. It is important to remember that t-scores are measures of standard deviation and p-values are probabilities. Both statistics are associated with the standard t-distribution, which relates standard deviations with probabilities and allows significance and confidence to be attached to t-scores and p-values. For this particular example, Fig. 4(f) shows how the distinctive features help explain each cluster through contrastive analysis. That is, none of the features with green tiles are shared across columns.
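One plausible reading of the "number of decimal places after zero" encoding (an assumption on our part, since the exact formula is not given) is the position of the first significant digit of the p-value, i.e., ⌈−log₁₀ p⌉. The sketch below maps that quantity to a tile size and picks the diverging hue from the sign of the t-score; names and the pixel cap are ours:

```python
import math

def decimal_places(p_value, cap=16):
    """Decimal places down to the first significant digit of a p-value
    (e.g., 0.05 -> 2, 0.001 -> 3); capped so p == 0 stays drawable."""
    if p_value <= 0:
        return cap
    return min(cap, max(0, math.ceil(-math.log10(p_value))))

def tile(t_score, p_value, max_px=30, cap=16):
    """Heatmap tile: side length grows with confidence (smaller p-value),
    and the diverging hue follows the sign of the t-score."""
    size = max_px * decimal_places(p_value, cap) / cap
    hue = "green" if t_score > 0 else "purple"
    return size, hue
```

Under this reading, smaller p-values yield larger tiles, matching the description that high-confidence features stand out in the heatmap.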
Interaction mechanisms
As shown in Fig. 5, the four most distinctive features are presented when inspecting a cluster. However, users might inspect other features based on their p-values. Such an interaction is carried out using a selection box, as illustrated in Fig. 5 (smallest box). All of the corresponding features inside the box are inspected in detail through the visualization of their respective distribution plots. The line segments of the bipartite graph are also updated to communicate which features are selected. Another differential of our tool is assigning colors to the features being inspected in the scatterplot. While users can freely visualize many distribution plots, there is a limit to the number of colors that humans can differentiate [37]. With a limit of ten features to be simultaneously visualized, users can toggle the distribution plots. Fig. 7(c) exemplifies this operation for the terms kill and protest and their result in the scatterplot representation using our encoding.
The selection mechanism helps users to understand other aspects of the clusters by inspecting many features. However, when these features present similar p-values, such an interaction is not enough to select features of interest; in other words, the visual space dedicated to the various features could be too small. We employ focus+context [38] interaction on the p-value axis to mitigate this issue, as shown in Fig. 10. On top of the axis showing the p-values, another axis corresponds to the context, i.e., all of the available information. At first, the focus axis corresponds to the context axis (1). Then, users specify a range where the focus axis will be defined (2), as illustrated in the figure by a red arrow. The focus selection makes the p-value axis show only the information inside the selection box (3). Finally, such a change in the focus induces a change in the features visualized as distribution plots.
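A minimal sketch of the focus selection logic, under the assumption that brushing simply filters features by their p-value range and the distribution plots then show the most significant ones up to the ten-color limit (function and variable names are ours, and the term list is a toy example):

```python
def focus_selection(features, p_values, low, high, max_plots=10):
    """Keep the features whose p-values fall inside the brushed focus
    range [low, high], most significant first, capped at the number of
    distribution plots that can receive distinguishable colors."""
    inside = sorted((p, f) for f, p in zip(features, p_values)
                    if low <= p <= high)
    return [f for _, f in inside[:max_plots]]

# Brushing the range [1e-3, 1e-2] keeps only the mid-confidence terms.
features = ["naoto", "kan", "prime", "minist", "pm"]
p_values = [1e-4, 2e-4, 5e-3, 8e-3, 2e-2]
focused = focus_selection(features, p_values, 1e-3, 1e-2)
```

Narrowing the brushed range changes which features receive distribution plots, mirroring step (3) above.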
Implementation
The visualization metaphors were all implemented using the D3.js [39] library, while the statistical variables were generated in the backend by using Python. For the Supplementary File with additional analyses, please visit the paper's page 1 .
Case studies
To validate the proposed technique, we explore two document collections: a dataset of news articles from 2011 collected from different sources and a dataset of tweets about COVID-19 symptoms collected inside the São Paulo state (Brazil) territory from March 2020 to August 2020. We also analyze multivariate data using a medical dataset in the Supplementary File.
The news dataset
In this first case study, we inspect a document collection of 495 news articles in English made available in RSS format by the Reuters, BBC, CNN, and Associated Press agencies. Fig. 11 shows the UMAP [3] projection of the dataset color-coded based on the Leiden [40] algorithm. We used the first 40 Principal Components (PCs) of the dataset to compute the neighborhood graph (15 neighbors) for the UMAP technique. The resolution parameter of the Leiden algorithm was set to 0.3. Looking at the most distinctive terms presented in Fig. 12, and recalling that the news dataset contains articles from 2011, the first inspected cluster represents news articles about the earthquake that hit Japan in 2011. The terms show that the incident at the nuclear plant of Fukushima I is present in almost all of the news articles due to the earthquake. Further, the news articles related to the nuclear power plant incident are concentrated in just one part of the cluster. The projection technique successfully uncovered a subcluster of news articles referring to the same aspects of the earthquake.
Figure 12: After selecting the five most discriminative features, highlighting the terms nuclear, plant, and fukushima shows that UMAP separated the news articles related to the nuclear accident on the left side of the projection.

To further explore this cluster, we focus on the terms that were not assigned much confidence (with p-values between 1 × 10⁻³ and 1 × 10⁻⁵) and then select a few terms, as illustrated in Fig. 13 (a). In this case, the terms refer to Japan's former prime minister, Naoto Kan, who resigned his role
after the crisis provoked by the earthquake and the consequent incident at Fukushima I. The terms naoto, kan, prime, minist, pm, and japanes further support this idea. Fig. 13 (b) shows the scatterplot representation of the expression intensity of the terms naoto and kan, which validates our approach to understand the organization imposed by the dimensionality reduction technique. While news articles regarding the Fukushima I incident were positioned at the bottom of the cluster, news articles about the former prime minister were positioned at the top of the cluster.
Proceeding to the cluster in Fig. 14, there are many terms with high confidence that could be used to understand the main topics of the news articles. The first terms (highlighted in red) correspond to Ratko Mladić, a former Serbian military officer, head of the Serbian Republic Army during the Bosnian War of 1992-1995. The other group represents more specific information about the news articles contained in the cluster: the terms refer to the imprisonment of Mladić due to war crimes. More specifically, the terms tribun and hagu refer to the fact that Mladić was extradited to The Hague in the Netherlands to answer for his crimes before the International Criminal Tribunal for the former Yugoslavia. We finish the analysis by inspecting the cluster illustrated in Fig. 15. From the terms highlighted as discriminative, such a cluster corresponds to news articles about a strain of Escherichia coli O104:H4 bacteria that broke out in northern Germany from May to June 2011. The majority of the news articles mention coli and outbreak, while the terms europ and germani being expressed on opposite sides of the cluster could indicate that some news articles focus specifically on Germany and others refer to the whole of Europe. The other terms describe the whole cluster: health, deadli, and infect.

Figure 15: Inspection of the terms germani and europ shows that the dimensionality reduction separated news articles focused on the European continent from the news articles focused on Germany.
Tweets about COVID-19 symptoms
For this particular use case, we aim to analyze a complex document collection of tweets about common COVID-19 symptoms (fever, high fever, cough, dry cough, difficulty breathing, shortness of breath) retrieved from São Paulo state (Brazil) from March 2020 to August 2020. To create the dataset, we retrieved tweets mentioning one of the symptoms listed above; then, we classified the tweets according to their relevance. We used the BERT [41] language model to train a classifier (the Supplementary File contains the model's performance details) to select only relevant tweets.
We manually classified ten thousand tweets as relevant or not and used the BERT model to automatically classify another 30 thousand tweets, which we analyze here. Relevant tweets are those with serious comments about COVID-19 and where there is a chance of infection. Non-relevant tweets correspond to news, jokes, and other informative comments. Further, to inspect the dataset projected onto ℝ², we used the UMAP technique with the number of nearest neighbors set to 20. Finally, the clusters were manually defined, as shown in Fig. 16. The projection of the tweets shows four main clusters: red , dark orange , orange , and a cluster divided into five other subclusters ( , , , , ). To analyze the projection, we first analyze the separated larger clusters and then proceed to the very cohesive cluster . Fig. 17 shows that cluster is mainly related to tweets about shortness of breath, one of the most severe COVID-19 symptoms; the term chest, for example, could indicate people describing how they were feeling and where the sensations were occurring in their body. Another interesting aspect of such a cluster is the term anxiety, which indicates a problem arising from the necessity of social isolation during the pandemic. In fact, during data reading, many tweets corresponded to people asking whether their shortness of breath was caused by a COVID-19 infection or an anxiety crisis. Proceeding to cluster , we focus on the terms with p-value ≤ 1e-5. Fig. 18 shows that such a cluster is mainly related to the dry cough symptom of COVID-19 due to the presence of the terms cough and dry; the term sneeze also appears when Twitter users were describing their symptoms. The other terms (annoying and each) usually appear in phrases commonly identified in the tweets, such as (directly translated from informal Portuguese) "I am with an annoying cough..." or "It is each cough...". Finally, we analyze the subclusters on the bottom right of Fig. 16. As shown in Fig.
19, both clusters and refer to characteristics of the fever symptom; see how the terms face, fever, body, and to-measure are expressed in cluster while the terms I-think, fever, getting, thermometer, and hot are expressed in cluster . Particularly for cluster , the terms indicate users who had started to feel febrile when they posted the symptom on Twitter. Such an insight could be significant in revealing which cities have people with developing symptoms of COVID-19; that is, regulatory policies could be based on the geolocalization of the tweets. Cluster indicates more general aspects of the symptoms related to fever. Fig. 20 shows that the discriminative terms are related to fever; notice that we omitted the terms sick, was, dipyrona, pain, night, to-be, and bath since they are terms used to form the phrases. An interesting aspect of this cluster is the presence of the term sore, an uncommon symptom. Unlike the other clusters, cluster presents many different levels of understanding. Firstly, Fig. 21 shows that although one of the five most discriminative terms refers to fever, such a cluster might represent Dengue symptoms. That is, while citizens were posting how worried they were about COVID-19, the symptoms indicated in the tweets match those used to describe a Dengue infection (pain, head, throat, body, and fever), a common disease in Brazil. Finally, Fig. 22 shows that a severe symptom of COVID-19 is also present in cluster : difficulty breathing. Besides that, other terms indicate common symptoms of seasonal flu, such as coryza. Finally, the term tiredness reinforces the idea of Dengue symptoms or consists of an uncommon COVID-19 symptom.
In this case study, the tweets presented three main topics: cluster is related to respiratory problems induced by COVID-19 infection, as well as to anxiety crises due to long periods of social isolation; cluster is related to dry cough, a common COVID-19 symptom; finally, the cluster with subclusters covers many aspects of high fever, where each one of the subclusters ( , , , , ) has its particularities. Finally, to investigate cluster , the heatmap in Fig. 23 shows that much of this very cohesive cluster comprises tweets concerning worry (see worry and risk) about coryza and wearing masks.
Evaluation
To further demonstrate that cExpression supports the analysis of dimensionality reduction results, we compare it against well-known topic extraction techniques using a cohesion metric. Finally, we assess the run-time execution of cExpression and ccPCA [4] for various dimensionality values on a document collection.
Cohesion
A useful way to analyze cExpression results is by assessing how the terms selected from a document collection describe the clusters, similarly to topic extraction tasks. Fig. 24 shows the pipeline for such a task.

Figure 24: Retrieving topics using cExpression. Given a bag-of-words representation of a document collection (a), we compute the dataset clusters (b). cExpression is executed to retrieve distinctive terms for each cluster (c), and the topics correspond to the first terms of each cluster.
After creating a bag-of-words representation (a) of a document collection, we apply a clustering algorithm to group documents based on similarity (b). Then, cExpression returns the terms in each cluster that are not likely to appear in the other clusters (c). As explained in Section 3.1, these terms are ordered based on their contrastive score, and the first terms are returned to compose the topics of each cluster (d).
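The pipeline just described can be sketched as follows. This is a minimal stand-in for cExpression, not its actual implementation: it assumes a dense bag-of-words matrix, precomputed cluster labels, and a Welch t-score ranking (p-values, which require the t-distribution CDF, are omitted), and all names are ours.

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t-statistic between two samples (p-values omitted: they
    would need the t-distribution CDF, e.g. from scipy.stats)."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def cluster_topics(bow, labels, vocab, top_k=2):
    """Topic of a cluster = its terms ranked by contrast against the rest."""
    topics = {}
    for c in sorted(set(labels)):
        inside = [row for row, lab in zip(bow, labels) if lab == c]
        outside = [row for row, lab in zip(bow, labels) if lab != c]
        scored = [(welch_t([r[j] for r in inside], [r[j] for r in outside]), term)
                  for j, term in enumerate(vocab)]
        # decreasing t-score: terms over-expressed in this cluster come first
        topics[c] = [term for _, term in sorted(scored, reverse=True)[:top_k]]
    return topics
```

For example, on a tiny corpus where cluster 0 is dominated by the term apple and cluster 1 by ball, `cluster_topics` returns apple and ball as the leading topic terms of the respective clusters.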
We use the topic coherence [42] metric to evaluate the robustness of the terms returned by cExpression. This metric is applied to the top terms of a topic and consists of the average of the pairwise word-similarity scores over the topic. A good model will generate coherent topics, i.e., topics with high topic coherence scores. Thus, topic extraction technique A is better than technique B if it achieves a greater score.
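Under the assumption that word similarity is measured as the cosine between term occurrence vectors over the documents (one common instantiation; the text above does not fix the similarity function), topic coherence can be sketched as:

```python
import math

def cosine(u, v):
    """Cosine similarity between two occurrence vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def topic_coherence(topic_terms, bow, vocab):
    """Average pairwise similarity between the document-occurrence vectors
    of a topic's top terms: terms that co-occur yield a coherent topic."""
    col = {t: [row[vocab.index(t)] for row in bow] for t in topic_terms}
    pairs = [(a, b) for i, a in enumerate(topic_terms) for b in topic_terms[i + 1:]]
    return sum(cosine(col[a], col[b]) for a, b in pairs) / len(pairs)
```

A topic whose terms always appear together (e.g., cough and fever in the same documents) scores 1.0, while a topic mixing terms from disjoint documents scores lower.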
The evaluation was performed using three datasets: 495 news articles (205 terms), 40794 tweets related to COVID-19 symptoms (295 terms), and the 20newsgroups 2 dataset. For the 20newsgroups, we use a subset comprising the classes alt.atheism, comp.graphics, comp.windows.x, rec.motorcycles, sci.electronics, sci.med, talk.politics.guns, talk.politics.misc, and talk.religion.misc. After preprocessing the dataset (removal of English stop-words and of terms appearing in a proportion of documents below 0.5), the result was a bag-of-words representation of 4828 documents by 3347 terms. Finally, we compared cExpression against the well-known and commonly used topic extraction approaches LDA [43] and NMF [44]. Fig. 25 shows the cohesion when varying the number of terms in each topic for the news and COVID-19 datasets. cExpression presents a slight advantage for topics with fewer than 20 terms and emphasizes the terms that are not expressed in other clusters; terms of topic A (cluster A) are not likely to appear in other topics. So, for a lower number of terms, cExpression emphasizes the differences while showing good coherence. To analyze the 20newsgroups dataset, besides varying the number of terms, we also varied the number of topics returned by the algorithm. As shown in Table 1, we computed the Area Under the Curve (AUC) to summarize the results. cExpression surpasses LDA and NMF for most numbers of topics; it only presents lower AUC for two and four topics. Fig. 26 shows that, for only two topics, our technique could not uncover topics as good as those of LDA. Using four topics, our technique presents slightly lower results when using few terms (below ten). Again, cExpression performs better with only a few terms in the topic (six and eight). The remaining plots are in the Supplementary File.
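As a rough, stdlib-only sketch of this preprocessing step (the tiny stop-word list and the reading of the document-frequency threshold as a minimum proportion are our assumptions, not the paper's exact setup):

```python
from collections import Counter

# tiny illustrative stop-word list; real pipelines use a full English list
STOP_WORDS = {"the", "a", "of", "and", "to", "in"}

def build_vocabulary(documents, min_df=0.005):
    """Keep terms that are not stop-words and occur in at least a
    `min_df` proportion of the documents."""
    df = Counter()
    for doc in documents:
        df.update(set(doc.lower().split()) - STOP_WORDS)
    threshold = min_df * len(documents)
    return sorted(t for t, n in df.items() if n >= threshold)

def to_bow(documents, vocab):
    """Dense bag-of-words rows restricted to the kept vocabulary."""
    index = {t: j for j, t in enumerate(vocab)}
    rows = []
    for doc in documents:
        row = [0] * len(vocab)
        for tok in doc.lower().split():
            if tok in index:
                row[index[tok]] += 1
        rows.append(row)
    return rows
```

With `min_df=0.5`, only terms present in at least half of the documents survive, which mimics the aggressive filtering that reduced the corpora above to a few hundred terms.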
Run-time execution
A technique must cope with high dimensionality to successfully analyze real-world data. Thus, we analyze the run-time execution of cExpression and ccPCA. To evaluate the techniques, we generated different versions of the dataset by ranging the number of features between ten and 2000 and the number of samples between 2000 and 40000. Fig. 27 shows that, for eight clusters, cExpression is faster than ccPCA when we augment the dataset dimensionality. The Supplementary File (Fig. 5) also shows run-time executions for different numbers of clusters, presenting the same pattern as Fig. 27, which testifies to the superiority of cExpression over ccPCA regarding run-time execution.
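The shape of this experiment can be sketched as follows. This is not the paper's benchmark harness: it is a minimal timing loop over a pure-Python contrastive scorer with synthetic data, with all names ours.

```python
import math
import random
import time

def t_scores(inside, outside):
    """Per-feature Welch t-scores of `inside` rows against `outside` rows."""
    scores = []
    for j in range(len(inside[0])):
        a = [row[j] for row in inside]
        b = [row[j] for row in outside]
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
        vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
        # small epsilon guards the synthetic benchmark against zero variance
        scores.append((ma - mb) / math.sqrt(va / len(a) + vb / len(b) + 1e-12))
    return scores

random.seed(0)
timings = {}
for n_features in (10, 100, 500):
    data = [[random.random() for _ in range(n_features)] for _ in range(200)]
    start = time.perf_counter()
    t_scores(data[:100], data[100:])  # one cluster vs the rest
    timings[n_features] = time.perf_counter() - start
```

The scorer is linear in the number of features, which is why `timings` grows with dimensionality; the actual experiment repeats this for each technique and dataset size.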
Discussions and Limitations
Contrastive analysis of dimensionality reduction results offers important mechanisms to understand how clusters differ in the projected space. Although the literature already presents a method for this task, we demonstrated that interactive visualizations with well-known statistics can enhance the interpretation of the differences among clusters. There are a few other aspects of our approach, which we discuss in the following.
Cell-based encoding of scatterplot. Visualizing the distribution of feature values directly on the scatterplot facilitates local and global analyses among data points and helps analysts gain insight into the dataset's structure. Such a characteristic also reduces change blindness since users are not required to switch between color scales to visualize the feature value distribution on the scatterplot.
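The proportional subdivision behind this encoding (Fig. 6) can be sketched as follows; `cell_layout` and its slice representation are our own illustrative names, not the tool's API.

```python
from collections import Counter

def cell_layout(defining_features, cell_width=1.0):
    """Split a cell's horizontal span into one slice per feature, with
    slice width proportional to how many points in the cell have that
    feature as their most defining one."""
    counts = Counter(defining_features)
    total = sum(counts.values())
    x, slices = 0.0, []
    for feature, n in counts.most_common():
        width = cell_width * n / total
        slices.append((feature, x, x + width))  # (feature, x_start, x_end)
        x += width
    return slices
```

A cell containing 16 points where half have f1 as their most defining feature would dedicate half of the cell's width to f1's color, and so on for the remaining features.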
Usability. In the use cases, we evaluated the proposed tool regarding its strengths in helping to understand cluster formation after dimensionality reduction. However, we plan to evaluate the tool's usability through user experiments to understand how it benefits analysis in well-defined tasks.
Analyzing different DR techniques. An important aspect of cExpression is how its results vary for different DR techniques. In the Supplementary File, we provide two additional analyses of this characteristic. The results show, as expected, that if the clusters are the same (i.e., the same data points in the same clusters), the organization of the projected space does not matter, which can be explained by the fact that we feed the same data subsets to cExpression. We plan to use nearest neighbors to define the subsets, making the results more dependent on the projection structures than on the projection clusters.
Features ordering. Particularly for datasets where numbers indicate the presence of elements, such as in document collections where the bag-of-words representation indicates a term in a document, we do not order the features based on the absolute value of their t-score. By not ordering using the absolute value, the features in the first positions (notice that we order in decreasing fashion) are those present in the cluster of interest. That is, in such scenarios, features with negative t-scores are the ones absent from the cluster.
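This ordering choice can be illustrated with a small sketch (function name and the t-scores are ours, chosen for illustration):

```python
def order_features(t_by_feature, use_absolute=False):
    """Order feature names by t-score, decreasing. With use_absolute=False
    (the setting described for bag-of-words data), features with negative
    t-scores, i.e., features absent from the cluster of interest, sink to
    the bottom instead of competing with the over-expressed ones."""
    key = (lambda item: abs(item[1])) if use_absolute else (lambda item: item[1])
    return [name for name, _ in sorted(t_by_feature.items(), key=key, reverse=True)]

# hypothetical t-scores for one cluster of tweets
scores = {"cough": 3.1, "fever": 0.4, "soccer": -5.0}
```

With the signed ordering, cough leads and soccer (strongly absent from the cluster) comes last; with the absolute ordering, soccer would jump to the front despite not occurring in the cluster.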
Although our work addresses important problems in the literature, there are a few other aspects that need further research.
Visual scalability of the number of clusters. In Section 3.2.2, we comment on the visual scalability related to the dataset dimensionality. Another point of view is related to the number of clusters: the higher the number of clusters, the more iterations the user has to perform to cover the whole dataset. While we do not address this problem in this work, our methodology is fast enough not to add a burden during analysis. Beyond that, we plan to investigate how multilevel dimensionality reduction strategies [45,46,47] could help alleviate the analysis by applying overview-first and details-on-demand operations.
Visual scalability of the number of features. When visualizing the distribution of a greater number of features on the scatterplot view, users can better distinguish among different classes by augmenting the boxes' area to circumvent the limited space dedicated to visualizing feature intensity. However, such an increase can remove the scatterplot context. The number of features visualized simultaneously can also impact the ability of users to distinguish the different intensities among the features. We plan to investigate glyph-based visualizations to encode more information, for example, using star plots and color encoding to visualize feature values and features, respectively.
Assumptions for the t-Student test. Although the t-test provides the discriminative power of the dataset features, we have assumed that the samples follow normal distributions. Besides that, computing the t-statistic can be problematic because the variance estimates can be skewed by features having a very low variance [48]. Consequently, these features are associated with a large t-statistic and are falsely selected as discriminative terms [49]. Another drawback comes from its application to small sample sizes, which implies low statistical power [26]. One way to address these limitations is already in our tool: it allows users to investigate the distribution of values in the scatterplot representation using the color scale, or to use the cell-based encoding to visualize various features simultaneously.
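One common mitigation for the low-variance issue, not part of cExpression, is the SAM-style "fudge constant": a small offset s0 added to the standard-error denominator so that tiny variances cannot inflate the statistic. A minimal sketch with hypothetical data:

```python
import math

def welch_t(a, b, s0=0.0):
    """Welch's t-statistic with an optional offset s0 added to the
    standard-error denominator (a known variance-stabilizing remedy;
    this is our illustration, not the tool's implementation)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / (math.sqrt(va / len(a) + vb / len(b)) + s0)

# hypothetical feature with a tiny mean difference and a tiny variance:
# the plain t-score looks large, the damped one does not
inside, outside = [1.001, 1.002, 1.001], [1.000, 1.000, 1.001]
plain = welch_t(inside, outside)
damped = welch_t(inside, outside, s0=0.05)
```

Here the mean difference is only 0.001, yet the plain t-score exceeds 2 because the variances are minuscule; with s0 = 0.05 the score collapses toward zero, removing the spurious "discriminative" feature.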
Selection of clusters in the visual space. As discussed in the paper, we focus on explaining the visual clusters, i.e., the clusters perceived in the visual space. Thus, the input data containing the cluster labels must match the clusters in the visual space for trustworthy and consistent exploratory analysis. When this is not true, and cluster labels are assigned to different clusters in the visual space, the cExpression technique will not identify truly distinctive features since the feature value distributions would be similar (see Section 3.1). However, this limitation is easily overcome by preprocessing the input file with a consistent assignment of cluster labels.
Conclusion
Understanding the influence of features on the formation of clusters and sub-clusters is a promising approach when analyzing dimensionality reduction results represented by scatterplots. Existing methods to address this task emphasize global characteristics and are not capable of differentiating clusters. On the other hand, current methods for contrastive analysis require run-times that are unrealistic for practical applications.
This paper presents a novel approach for the contrastive analysis of dimensionality reduction results represented by scatterplots. We use a bipartite graph metaphor to represent the relation between the statistics (t-score and p-value) of each cluster's features. Using focus+context analysis, we show how our approach can retrieve insights about cluster organization even for complex datasets. Finally, we show that such an approach is robust by comparing it against well-known topic extraction techniques.
Figure 2: Defining distinctive features. The feature's distribution (a) in a cluster of interest is compared to its distribution on the remaining dataset (b). Then, the t-Student test (c) returns the probability of these two distributions being different (d). Finally, the features are arranged in decreasing order according to their t-score to communicate discriminative power (e).
Figure 3: UMAP projection of the Vertebral Column dataset and two distinctive features for the Spondylolisthesis class.
Figure 4: The cExpression visualization tool. Users provide high-dimensional points with annotated clusters and positions onto ℝ² (a).
Figure 5: The bipartite graph encoding feature importance.
Figure 6: Steps for the construction of the cell-based encoding. The sum of each feature is retrieved for each cell (a). Then, according to user demand, the cell is divided to communicate the features. The space dedicated to a feature corresponds to the number of features present in the cell. For instance, one feature in (b), two features in (c), and three features in (d).
Fig. 6 (a) shows a cell in the projected space that contains a hypothetical number of 16 data points, with color encoding the most defining feature for each data point ( , , ).
Figure 7: Cell-based visualization for feature distribution on the scatterplot. After inspecting the distinctive features on the scatterplot (a), users can visualize the feature values using cell-based visualizations. For instance, when one feature is selected (b, syrian), a simple continuous color scale is sufficient. For more than one feature (c, syrian, kill, protect), cell-based encoding is employed.
Figure 8: Comparison between two clusters. To compare two clusters, the visualization strategy of Fig. 5 is used to communicate the information of two clusters. Essentially, while the information for the first cluster (light-blue in the figure)
Figure 9: Users can inspect the histograms of available features through lasso selection over the points encoding p-values.
Figure 10: Focus+context operation. The visualization starts with the context and focus axes as the same (1). Then, users can select a range of the context axis to focus analysis (2), resulting in an update of the focus axis (3).
Figure 11: UMAP projection of the news dataset.
Figure 13: Inspecting terms with lower confidence. Terms with lower confidence in the light-orange cluster are related to political aspects (a) and were positioned in a different subcluster.
Figure 14: Contrastive information showing levels of understanding. The terms highlighted in red show high-level information, while the terms highlighted in blue give specific hints about the news articles.
Figure 16: UMAP projection of the tweets about COVID-19.
Figure 17: Discriminative terms for the dark-orange cluster: terms related to respiratory problems.
Figure 18: Discriminative terms for the light-orange cluster. Terms indicate users tweeting their complaints about symptoms expressed during COVID-19 infection.
Figure 19: Comparing two clusters with fever as the main subject.
Figure 20: Distinctive terms for the light-blue cluster. In this case, there are more general aspects of symptoms related to fever. For instance, the term sore could be related to sore throat or a new COVID-19 symptom.
Figure 21: Distinctive terms from the dark-blue cluster can be considered as indicative of Dengue symptoms.
Figure 22: Evaluating another term from the dark-blue cluster. Terms such as difficulty and breathing indicate the most severe COVID-19 symptom, shortness of breath.
Figure 23: Heatmap showing that the red cluster comprises the tweets where people are worried about wearing masks and coryza.
Figure 25: cExpression shows competitive results for topics with a low number of terms (≤ 20).
Figure 26: cExpression was able to surpass LDA and NMF for the majority of the number of topics.

We measured the run-time execution of cExpression and ccPCA by varying the number of features and samples used to perform the analysis; we focus only on the techniques used to retrieve contrastive information. The experiment was performed on a dataset composed of 40794 tweets about COVID-19 symptoms (see Section 4 for a detailed description of a smaller version of it).
Figure 27: Time execution in logarithmic scale for retrieving distinctive features using contrastive analysis.
Table 1: Summarization of results in Cohesion using AUC. cExpression only returns lower results for two and four topics.

Number of topics    LDA        NMF        cExpression
2                   41.0876    21.3630    32.9847
3                   38.3072    28.9405    39.0384
4                   37.7737    28.8734    37.4541
5                   37.9169    28.9459    38.2428
6                   37.5037    27.7676    41.6532
7                   37.4346    26.7356    40.2814
8                   33.4367    28.1161    39.3023
9                   33.2409    26.7829    36.5034
10                  35.5672    26.1085    37.2073
https://wilsonjr.github.io/cExpression
http://qwone.com/~jason/20Newsgroups/
Acknowledgements

This research work was supported by FAPESP (São Paulo Research Foundation), grants #2018/17881-3 and #2018/25755-8, and by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brazil (CAPES), grant #88887.487331/2020-00. We also thank the anonymous reviewers for their valuable comments.
F. V. Paulovich, L. G. Nonato, R. Minghim, H. Levkowitz, Least square projection: A fast high-precision multidimensional projection technique and its application to document mapping, IEEE Transactions on Visualization and Computer Graphics 3 (2008) 564-575.

L. van der Maaten, G. E. Hinton, Visualizing high-dimensional data using t-SNE, Journal of Machine Learning Research 9 (2008) 2579-2605.

L. McInnes, J. Healy, J. Melville, UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction, ArXiv e-prints, arXiv:1802.03426.

T. Fujiwara, O.-H. Kwon, K.-L. Ma, Supporting analysis of dimensionality reduction results with contrastive learning, IEEE Transactions on Visualization and Computer Graphics 26 (2019) 45-55.

T. Le, L. Akoglu, ContraVis: Contrastive and visual topic modeling for comparing document collections, in: The World Wide Web Conference, WWW '19, Association for Computing Machinery, New York, NY, USA, 2019, pp. 928-938.

A. Abid, M. J. Zhang, V. K. Bagaria, J. Zou, Exploring patterns enriched in a dataset with contrastive principal component analysis, Nature Communications 9 (1) (2018) 2134.

C. Turkay, A. Lundervold, A. J. Lundervold, H. Hauser, Representative factor generation for the interactive visual analysis of high-dimensional data, IEEE Transactions on Visualization and Computer Graphics 18 (12) (2012) 2621-2630.

P. Joia, F. Petronetto, L. Nonato, Uncovering representative groups in multidimensional projections, Computer Graphics Forum 34 (3) (2015) 281-290.

D. B. Coimbra, R. M. Martins, T. T. Neves, A. C. Telea, F. V. Paulovich, Explaining three-dimensional dimensionality reduction plots, Information Visualization 15 (2) (2016) 154-172. doi:10.1177/1473871615600010.

J. Stahnke, M. Dörk, B. Müller, A. Thom, Probing projections: Interaction techniques for interpreting arrangements and errors of dimensionality reductions, IEEE Transactions on Visualization and Computer Graphics 22 (2016) 629-638.

L. de Carvalho Pagliosa, P. A. Pagliosa, L. G. Nonato, Understanding attribute variability in multidimensional projections, in: 29th Conf. Graphics, Patterns and Images (SIBGRAPI 2016), Sao Paulo, Brazil, 2016, pp. 297-304. doi:10.1109/SIBGRAPI.2016.048.

I. Jolliffe, Principal Component Analysis, Springer Verlag, 1986.

H. Abdi, D. Valentin, Multiple correspondence analysis, Encyclopedia of Measurement and Statistics (2007) 651-657.

L. G. Nonato, M. Aupetit, Multidimensional projection for visual analytics: Linking techniques with distortions, tasks, and layout enrichment, IEEE Transactions on Visualization and Computer Graphics (2018). doi:10.1109/TVCG.2018.2846735.

R. R. O. da Silva, P. E. Rauber, R. M. Martins, R. Minghim, A. C. Telea, Attribute-based visual explanation of multidimensional projections, in: E. Bertini, J. C. Roberts (Eds.), EuroVis Workshop on Visual Analytics (EuroVA), The Eurographics Association, 2015. doi:10.2312/eurova.20151100.

Z. Tian, X. Zhai, D. van Driel, G. van Steenpaal, M. Espadoto, A. Telea, Using multiple attribute-based explanations of multidimensional projections to explore high-dimensional data, Computers & Graphics 98 (2021) 93-104. doi:10.1016/j.cag.2021.04.034.

W. E. Marcilio, D. M. Eler, R. E. Garcia, An approach to perform local analysis on multidimensional projection, in: 30th SIBGRAPI Conf. on Graphics, Patterns and Images (SIBGRAPI), 2017, pp. 351-358.

B. Kwon, B. Eysenbach, J. Verma, K. Ng, C. D. Filippi, W. F. Stewart, A. Perer, Clustervision: Visual supervision of unsupervised clustering, IEEE Transactions on Visualization and Computer Graphics 24 (01) (2018) 142-151.

Y. Wang, J. Li, F. Nie, H. Theisel, M. Gong, D. J. Lehmann, Linear discriminative star coordinates for exploring class and cluster separation of high dimensional data, Computer Graphics Forum 36 (3) (2017) 401-410.

A. J. Izenman, Linear Discriminant Analysis, Springer New York, New York, NY, 2008, pp. 237-280.

J. Kruskal, M. Wish, Multidimensional Scaling, Sage Publications, 1978.

A. Lopes, R. Pinho, F. Paulovich, R. Minghim, Visual text mining using association rules, Computers & Graphics 31 (3) (2007) 316-326. doi:10.1016/j.cag.2007.01.023.

G. Wang, C. Wen, B. Yan, C. Xie, R. Liang, W. Chen, Topic hypergraph: hierarchical visualization of thematic structures in long documents, Science China Information Sciences 56 (5) (2013) 1-14.

D. Eler, D. Grosa, I. Pola, R. Garcia, R. Correia, J. Teixeira, Analysis of document pre-processing effects in text and opinion mining, Information.

C. Felix, A. Dasgupta, E. Bertini, The exploratory labeling assistant: Mixed-initiative label curation with large document collections, in: Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, UIST '18, Association for Computing Machinery, New York, NY, USA, 2018, pp. 153-164. doi:10.1145/3242587.3242596.

C. Murie, O. Z. Woody, A. Y. Lee, R. Nadon, Comparison of small n statistical tests of differential expression applied to microarrays, BMC Bioinformatics 10 (2008) 45.

M. D. Robinson, D. J. McCarthy, G. K. Smyth, edgeR: a Bioconductor package for differential expression analysis of digital gene expression data, Bioinformatics 26 (1) (2009) 139-140. doi:10.1093/bioinformatics/btp616.

M. E. Ritchie, B. Phipson, D. Wu, Y. Hu, C. W. Law, W. Shi, G. K. Smyth, limma powers differential expression analyses for RNA-sequencing and microarray studies, Nucleic Acids Research 43 (7) (2015) e47. doi:10.1093/nar/gkv007.

T. Höllt, N. Pezzotti, V. van Unen, F. Koning, B. Lelieveldt, A. Vilanova, CyteGuide: Visual guidance for hierarchical single-cell analysis, IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE InfoVis 2017) 24 (1) (2018) 739-748. doi:10.1109/TVCG.2017.2744318.

W. E. M. Júnior, D. M. Eler, R. E. Garcia, R. C. M. Correia, L. F. Silva, A hybrid visualization approach to perform analysis of feature spaces, in: S. Latifi (Ed.), 17th International Conference on Information Technology - New Generations (ITNG 2020), Springer International Publishing, Cham, 2020, pp. 241-247.

M. Sedlmair, T. Munzner, M. Tory, Empirical guidance on scatterplot and dimension reduction technique choices, IEEE Transactions on Visualization and Computer Graphics 19 (12) (2013) 2634-2643. doi:10.1109/TVCG.2013.153.

L. Micallef, G. Palmas, A. Oulasvirta, T. Weinkauf, Towards perceptual optimization of the visual design of scatterplots, IEEE Transactions on Visualization and Computer Graphics 23 (6) (2017) 1588-1599. doi:10.1109/TVCG.2017.2674978.

A. Sarikaya, M. Gleicher, Scatterplots: Tasks, data, and designs, IEEE Transactions on Visualization and Computer Graphics 24 (1) (2018) 402-412. doi:10.1109/TVCG.2017.2744184.

P. E. Rauber, R. R. O. D. Silva, S. Feringa, M. E. Celebi, A. Falcão, A. Telea, Interactive image feature selection aided by dimensionality reduction, in: EuroVA@EuroVis, 2015.

A. Sarikaya, M. Gleicher, D. A. Szafir, Design factors for summary visualization in visual analytics, Computer Graphics Forum 37 (2018) 145-156.

Sadire: a context-preserving sampling technique for dimensionality reduction visualizations, Journal of Visualization 23 (2020) 999-1013.
A taxonomy of glyph placement strategies for multidimensional data visualization. M O Ward, 10.1057/palgrave.ivs.9500025Information Visualization. 1M. O. Ward, A taxonomy of glyph placement strategies for mul- tidimensional data visualization, Information Visualization 1 (3/4) (2002) 194-210. doi:10.1057/palgrave.ivs.9500025. URL http://dx.doi.org/10.1057/palgrave.ivs.9500025
T Munzner, Visualization Analysis and Design, AK Peters Visualization Series. CRC PressT. Munzner, Visualization Analysis and Design, AK Peters Visualiza- tion Series, CRC Press, 2015. URL https://books.google.de/books?id=NfkYCwAAQBAJ
D3 data-driven documents. M Bostock, V Ogievetsky, J Heer, IEEE Transactions on Visualization and Computer Graphics. 1712M. Bostock, V. Ogievetsky, J. Heer, D3 data-driven documents, IEEE Transactions on Visualization and Computer Graphics 17 (12) (2011) 2301-2309.
V Traag, L Waltman, N J Van Eck, arXiv:1810.08473From louvain to leiden: guaranteeing well-connected communities. arXiv preprintV. Traag, L. Waltman, N. J. van Eck, From louvain to lei- den: guaranteeing well-connected communities, arXiv preprint arXiv:1810.08473.
J Devlin, M.-W Chang, K Lee, K Toutanova, Bert , Pre-training of deep bidirectional transformers for language understanding. J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, Bert: Pre-training of deep bidirectional transformers for language understanding (2018). URL http://arxiv.org/abs/1810.04805
Exploring the space of topic coherence measures. M Röder, A Both, A Hinneburg, Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, WSDM '15, Association for Computing Machinery. the Eighth ACM International Conference on Web Search and Data Mining, WSDM '15, Association for Computing MachineryNew York, NY, USAM. Röder, A. Both, A. Hinneburg, Exploring the space of topic co- herence measures, in: Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, WSDM '15, Asso- ciation for Computing Machinery, New York, NY, USA, 2015, p. 399-408.
Online learning for latent dirichlet allocation. M Hoffman, F R Bach, D M Blei, Advances in Neural Information Processing Systems. J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, A. CulottaCurran Associates, Inc23M. Hoffman, F. R. Bach, D. M. Blei, Online learning for latent dirich- let allocation, in: J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, A. Culotta (Eds.), Advances in Neural Information Pro- cessing Systems 23, Curran Associates, Inc., 2010, pp. 856-864.
Fast local algorithms for large scale nonnegative matrix and tensor factorizations. A Cichocki, A.-H Phan, IEICE Transactions. A. Cichocki, A.-H. PHAN, Fast local algorithms for large scale non- negative matrix and tensor factorizations, IEICE Transactions (2009) 708-721.
Hierarchical stochastic neighbor embedding. N Pezzotti, T Höllt, B Lelieveldt, E Eisemann, A Vilanova, 10.1111/cgf.12878Proceedings of the Eurographics / IEEE VGTC Conference on Visualization, EuroVis '16. the Eurographics / IEEE VGTC Conference on Visualization, EuroVis '16Goslar Germany, GermanyN. Pezzotti, T. Höllt, B. Lelieveldt, E. Eisemann, A. Vilanova, Hier- archical stochastic neighbor embedding, in: Proceedings of the Eu- rographics / IEEE VGTC Conference on Visualization, EuroVis '16, Eurographics Association, Goslar Germany, Germany, 2016, pp. 21- 30. doi:10.1111/cgf.12878. URL https://doi.org/10.1111/cgf.12878
Explorertree: A focus+context exploration approach for 2d embeddings. Big Data Research. 25100239Explorertree: A focus+context exploration approach for 2d embed- dings, Big Data Research 25 (2021) 100239.
W E Marcílio-Jr, D M Eler, F V Paulovich, R M Martins, arXiv:2106.07718Humap: Hierarchical uniform manifold approximation and projection. W. E. Marcílio-Jr, D. M. Eler, F. V. Paulovich, R. M. Martins, Humap: Hierarchical uniform manifold approximation and projection (2021). arXiv:2106.07718.
Should we abandon the t-test in the analysis of gene expression microarray data: a comparison of variance modeling strategies. M Jeanmougi, A De Reynies, L Marisa, C Paccard, G Nuel, M Guedj, PLoS One. M. Jeanmougi, A. de Reynies, L. Marisa, C. Paccard, G. Nuel, M. Guedj, Should we abandon the t-test in the analysis of gene expres- sion microarray data: a comparison of variance modeling strategies, PLoS One.
Significance analysis of microarrays applied to the ionizing radiation response, Proceedings of the. V G Tusher, R Tibshirani, G Chu, 10.1073/pnas.091062498National Academy of Sciences. 989V. G. Tusher, R. Tibshirani, G. Chu, Significance analysis of microarrays applied to the ionizing radiation response, Proceed- ings of the National Academy of Sciences 98 (9) (2001) 5116- 5121. arXiv:https://www.pnas.org/content/98/9/5116.full.pdf, doi: 10.1073/pnas.091062498. URL https://www.pnas.org/content/98/9/5116
GEDI: Gammachirp Envelope Distortion Index for Predicting Intelligibility of Enhanced Speech

Katsuhiko Yamamoto, Toshio Irino
Graduate School of Systems Engineering, Wakayama University, Sakaedani 930, 640-8510 Wakayama, Japan

Shoko Araki, Keisuke Kinoshita, Tomohiro Nakatani
NTT Communication Science Laboratories, 2-4 Hikaridai, Seika-cho, Soraku-gun, 619-0237 Kyoto, Japan

3 Apr 2019; arXiv:1904.02096; doi:10.1016/j.specom.2020.06.001

Keywords: Speech intelligibility; Objective measure; Speech enhancement

Abstract

In this study, we proposed a new concept, gammachirp envelope distortion index (GEDI), based on the signal-to-distortion ratio in the auditory envelope SDR env , to predict the intelligibility of speech enhanced by nonlinear algorithms. The main objective of using GEDI is to calculate the distortion between enhanced and clean speech representations in the domain of a temporal envelope that is extracted by the gammachirp auditory filterbank and modulation filterbank. We also extended the GEDI with multi-resolution analysis (mr-GEDI) to predict the speech intelligibility of sound under non-stationary noise conditions. We evaluated the GEDI in terms of the speech intelligibility predictions of speech sounds enhanced by a classic spectral subtraction and a state-of-the-art Wiener filtering method. The predictions were compared with human results for various signal-to-noise ratio conditions with additive pink and babble noise. The results showed that mr-GEDI predicted the intelligibility curves more accurately than the short-time objective intelligibility (STOI) measure and the hearing aid speech perception index (HASPI).
Introduction
The development of objective speech intelligibility and quality measures is essential for speech communication technologies such as assistive listening devices, including smart headphones and hearing aids (Falk et al., 2015). As international standards of objective intelligibility measures (OIMs), the speech intelligibility index (SII) (ANSI S3-5, 1997) and the speech transmission index (STI) (ISO 9921, 2003) have been proposed to evaluate the speech transmission quality of public spaces and telecommunication lines modeled as linear transmission systems. However, the SII and STI cannot account for the effects of nonlinear processing, including noise reduction and speech enhancement algorithms. For example, it has been reported that the STI fails to predict the intelligibility of enhanced speech processed by a simple spectral subtraction (SS) algorithm (Jørgensen & Dau, 2011). For these reasons, the evaluation of speech intelligibility still relies on subjective listening tests, even though many noise reduction and speech enhancement algorithms have been developed.
Objective measures for speech enhancement
To solve these problems, several human-auditory-based models have been proposed. These models are commonly based on two approaches, namely correlation analyses and the signal-to-noise ratio (SNR) in an envelope domain. Taal et al. (2011) proposed a short-time objective intelligibility (STOI) measure, which has often been used in recent evaluations of speech enhancement algorithms. The STOI is based on the cross-correlation between the temporal envelopes of clean speech (S) and enhanced speech (Ŝ) at the output of a 1/3-octave filterbank. The STOI is intended to assess the intelligibility of speech processed by ideal time-frequency segregation (ITFS) (Kjems et al., 2009). Kates & Arehart (2014) proposed a hearing-aid speech perception index (HASPI) for hearing-impaired (HI) and normal-hearing (NH) listeners that is an extension of the three-level coherence speech intelligibility index (CSII) (Kates & Arehart, 2005). This measure is a combination of two indices: (1) the coherence between the outputs of an auditory filterbank for clean (S) and enhanced speech (Ŝ), and (2) the cross-correlation between the temporal sequences of the cepstral coefficients of S and Ŝ. The purpose of the HASPI is to assess the results of nonlinear frequency compression and ITFS processing. Jørgensen & Dau (2011) proposed an alternative SNR-based model, which they refer to as the speech-based envelope power spectrum model (sEPSM). The sEPSM assumes that speech intelligibility is related to the signal-to-noise ratio in the envelope domain, SNR_env, which originates from (S/N)_mod in (Dubbelboer & Houtgast, 2008). The SNR_env is calculated from the ratios between the envelope powers of the enhanced speech (Ŝ) and the residual noise (Ñ) in the modulation frequency domain. The sEPSM is intended to assess the intelligibility of speech sounds processed by SS.
The sEPSM was extended to a multiresolution version (mr-sEPSM) to perform more accurate speech intelligibility estimations for speech affected by non-stationary noise (Jørgensen et al., 2013). Chabot-Leclerc et al. (2014) extended the sEPSM with a spectro-temporal receptive field (STRF) to account for phase jitter (Chi et al., 1999). Considering the auditory filter of humans, Yamamoto et al. (2019) extended the sEPSM with a dynamic compressive gammachirp filterbank (dcGC-FB) (Irino & Patterson, 2006), in which the level-dependent frequency selectivity and gain of the auditory filter were reasonably determined by the data obtained from psychoacoustic masking experiments. It was demonstrated that the dcGC-sEPSM predicted the human results of the Wiener filtering more accurately than the original sEPSM (Jørgensen & Dau, 2011), CSII (Kates & Arehart, 2005), STOI measure (Taal et al., 2011), and HASPI (Kates & Arehart, 2014).
However, the SNR_env-based OIMs (such as the sEPSM and dcGC-sEPSM) are inherently limited. As shown in Fig. 1(a), the SNR_env-based OIMs require the "residual noise" (Ñ) estimated by a speech enhancement algorithm. The definition of the residual noise, however, was not clarified in the original papers (Jørgensen & Dau, 2011; Jørgensen et al., 2013; Chabot-Leclerc et al., 2014). Although SS may provide an appropriate estimate of the residual noise, many speech enhancement algorithms aim to estimate the clean speech (S) directly without estimating the residual noise (Ñ). Some nonlinear speech enhancement algorithms (Fujimoto et al., 2012; Weninger et al., 2014; Smaragdis & Venkataramani, 2017) cannot provide a precise and unique estimate of the residual noise because it can be defined in several ways. Therefore, the SNR_env-based OIM approaches are restricted to speech enhancement algorithms, such as SS, that can estimate the residual noise uniquely and properly. In contrast, major correlation-based OIMs, as shown in Fig. 1(b), including the STOI and HASPI, rely solely on the clean speech (S) as the reference signal, without any ambiguity.
Proposed method
In this paper, we first propose a new OIM called the "gammachirp envelope distortion index (GEDI)," which uses the signal-to-distortion ratio in the envelope domain (SDR_env) and the clean speech (S) as the reference signal, as shown in Fig. 1(b). The internal representations in the proposed model are similar to those of the dcGC-sEPSM and the original sEPSM, which use the SNR_env. The original GEDI was initially reported in Yamamoto et al. (2017) with an evaluation under pink background noise. In this study, we extend the GEDI with a weighting function, called GEDI (weight), to normalize the envelope power at the output of the dcGC-FB and demonstrate the effect of this extension. We also demonstrate the prediction performance under non-stationary, babble noise conditions. The results led us to extend the GEDI to a multi-resolution version (mr-GEDI) to improve predictability under babble noise conditions (Yamamoto et al., 2018).
The prediction results of the GEDI, STOI (Taal et al., 2011), and HASPI were compared with human results by using the speech materials produced by two speech enhancement algorithms under pink and babble noise conditions.
In section 2, we give an overview of the GEDI. In sections 3 and 4, we describe the speech materials and experimental conditions of the evaluation. In section 5, the human results and predicted results are discussed. In section 6, we describe the mr-GEDI and its evaluation results.

Overview of the GEDI

Figure 2 is a block diagram of the GEDI. The input sounds to the GEDI are the enhanced speech (Ŝ) and the clean speech (S). The main objective of the GEDI is to calculate the distortion between the temporal envelopes of the clean and enhanced speech from the outputs of an auditory filterbank. We hypothesized that speech intelligibility becomes increasingly degraded as the temporal envelopes of the enhanced speech diverge from those of clean speech.
Auditory filterbank
The first stage is an auditory spectral analysis using the dynamic compressive gammachirp filterbank (dcGC-FB) (Irino & Patterson, 2006), which has 100 channels equally spaced on the ERB_N-number scale (Moore, 2013) and covers the speech range between 100 and 6000 Hz. The auditory filter changes its gain and bandwidth in accordance with the input level. Therefore, the dcGC-FB was carefully set to correspond to the sound pressure level (SPL) used in the subjective listening experiments.
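The ERB_N-number channel placement can be sketched as follows. This is a minimal NumPy illustration of the channel spacing using the Glasberg and Moore ERB_N-number formula; it is not an implementation of the dcGC-FB itself, and the function names are our own.

```python
import numpy as np

def erb_number(f_hz):
    """Frequency in Hz -> ERB_N-number (Cam) scale (Glasberg & Moore)."""
    return 21.4 * np.log10(4.37 * f_hz / 1000.0 + 1.0)

def erb_number_to_hz(e):
    """Inverse mapping: ERB_N-number -> frequency in Hz."""
    return (10.0 ** (e / 21.4) - 1.0) * 1000.0 / 4.37

def dcgc_center_freqs(n_ch=100, f_lo=100.0, f_hi=6000.0):
    """Center frequencies of n_ch channels equally spaced on the ERB_N-number axis."""
    e = np.linspace(erb_number(f_lo), erb_number(f_hi), n_ch)
    return erb_number_to_hz(e)

fc = dcgc_center_freqs()  # 100 center frequencies between 100 and 6000 Hz
```

Because the spacing is uniform on the ERB_N-number axis, the channels lie denser at low frequencies and sparser at high frequencies, as in the dcGC-FB.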
Distortion in the temporal envelope domain
The temporal envelopes of the enhanced (e_Ŝ) and clean speech (e_S) are calculated from the output of each auditory filter by using the Hilbert transform and a low-pass filter with a cutoff frequency of 150 Hz. The absolute difference between the two power envelopes is calculated to determine the temporal "envelope distortion" (e_D) as

    e_{D,i}(n) = |{e_{S,i}(n)}^p − {e_{Ŝ,i}(n)}^p|^{1/p},    (1)

where i {i | 1 ≤ i ≤ 100} is the index of the dcGC-FB channel, and n is the sample number of the temporal envelopes. Here, p is a constant; we set p = 2 in this study. Thus, the envelope distortion e_D represents the difference as an absolute value. Figure 3 shows an example of the envelopes e_S and e_Ŝ, as well as the distortion e_D calculated using Eq. 1. The use of the enhancement algorithms causes the envelope of the enhanced speech to be either emphasized or degraded relative to that of clean speech, so the temporal envelope of the enhanced speech differs from that of the clean speech. The working hypothesis in this study is that the distortion between them (Eq. 1) is negatively correlated with speech intelligibility: the speech intelligibility decreases when the power of the envelope distortion increases, and vice versa.
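The envelope extraction and Eq. 1 can be sketched as follows. As assumptions of this sketch, the Hilbert transform is computed via an FFT-based analytic signal, and the 150-Hz low-pass stage is omitted for brevity.

```python
import numpy as np

def hilbert_envelope(x):
    """Temporal envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

def envelope_distortion(e_s, e_shat, p=2):
    """Eq. 1: e_D(n) = |e_S(n)^p - e_Shat(n)^p|^(1/p), sample by sample."""
    return np.abs(e_s ** p - e_shat ** p) ** (1.0 / p)
```

For identical envelopes the distortion is zero, and it grows whenever the enhanced envelope is either emphasized or degraded relative to the clean one, matching the working hypothesis above.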
SDR in envelope modulation domain
The modulation spectra of the envelope distortion (e_D) and the envelope of the clean speech (e_S) are calculated using the fast Fourier transform (FFT). A filterbank defined on the modulation frequency f_env is applied to the absolute modulation spectra. There are seven modulation filters whose power spectra are W_{f^c_env}(f_env) for the modulation center frequency f^c_env, as illustrated in Fig. 2 and described in previous studies (Jørgensen & Dau, 2011; Yamamoto et al., 2019). The normalized envelope power is calculated as

    P_{env,*} = (1 / E_{Ŝ}(0)^2) ∫_{f_env > 0} |E_*(f_env)|^2 W_{f^c_env}(f_env) df_env,    (2)

where the asterisk (*) represents either S or D, and E_{Ŝ}(0) represents the 0-th order coefficient of the FFT, i.e., the DC component of the temporal envelope.
In the original sEPSM (Jørgensen & Dau, 2011), internal noise in the modulation domain was assumed to restrict the lower limit of P_{env,*}. The formula P_{env,*} = max(P_{env,*}, 0.01) is also used in this simulation. Because the number of dcGC-FB channels is 100 and the number of modulation filters is 7, the total number of envelope power spectra P_{env,*} is 700. The SDR in the modulation frequency domain (SDR_env) is calculated as the ratio of the modulation power spectra of the clean speech P_{env,S} to those of the distortion P_{env,D}. The individual SDR_{env,j} for modulation filter channel j is defined as the ratio of the powers summed across the dcGC-FB channels i, and it can be written as

    SDR_{env,j} = ( Σ_{i=1}^{100} P_{env,S,i,j} ) / ( Σ_{i=1}^{100} P_{env,D,i,j} ).    (3)
The total SDR_env can be calculated as

    SDR_env = √( Σ_{j=1}^{J} SDR_{env,j}^2 ),    (4)

where J is the number of modulation filters {j | 1 ≤ j ≤ 7}.
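Eqs. 3–4 can be sketched compactly as below, assuming the 700 envelope powers have already been computed and stored in 100×7 arrays. Applying the 0.01 internal-noise floor to both the clean-speech and distortion powers is an assumption of this sketch.

```python
import numpy as np

def sdr_env(P_S, P_D, floor=0.01):
    """Eqs. 3-4: per-band SDRs from envelope powers, combined across bands.

    P_S, P_D: (100 audio channels x 7 modulation channels) arrays holding
    P_env,S and P_env,D; `floor` is the internal-noise lower limit.
    """
    P_S = np.maximum(P_S, floor)
    P_D = np.maximum(P_D, floor)
    sdr_j = P_S.sum(axis=0) / P_D.sum(axis=0)  # Eq. 3, one value per band j
    return np.sqrt(np.sum(sdr_j ** 2))         # Eq. 4, overall SDR_env
```

Note that the floor keeps the ratio finite even when the distortion power vanishes, i.e., when the enhanced envelope is identical to the clean one.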
SDR env with a weighting function
The envelope power |E_*(f_env)|^2 in Eq. 2 is proportional to the output power of the dcGC-FB because it is derived linearly by the FFT and the modulation filterbank, as shown in Fig. 2. The output power of an individual auditory filter is proportional to its bandwidth ERB_N (Moore, 2013), where ERB_N(f) = 24.7(4.37 f/1000 + 1), f is the filter center frequency in Hz, and the bandwidth is roughly proportional to f above 500 Hz. The auditory filters in the dcGC-FB are distributed densely on the frequency axis with considerable overlap, as described in section 2.1. To evaluate the speech spectrum uniformly, this frequency-dependent level difference must be compensated for. Thus, Eq. 3 for SDR_env is modified as

    SDR^W_{env,j} = ( Σ_{i=1}^{100} W_i · P_{env,S,i,j} ) / ( Σ_{i=1}^{100} W_i · P_{env,D,i,j} ),    (5)

with a weighting function W_i that is inversely proportional to the bandwidth of the filter at center frequency f_i:

    W_i = ERB_N(1000) / ERB_N(f_i).    (6)

The total SDR_env is then calculated using Eq. 4. We evaluated this version of the GEDI using SDR^W_{env,j} in comparison with the original version to demonstrate the effect of the weighting function. This version will be referred to as "GEDI (weight)" hereinafter.
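The weighting of Eqs. 5–6 needs only the ERB_N bandwidth formula quoted above; a minimal sketch (function names are our own, and the array shapes follow the text):

```python
import numpy as np

def erb_n(f_hz):
    """ERB_N bandwidth in Hz: 24.7 * (4.37 f / 1000 + 1)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def weight(f_hz):
    """Eq. 6: W_i = ERB_N(1000) / ERB_N(f_i)."""
    return erb_n(1000.0) / erb_n(f_hz)

def sdr_env_weighted_band(P_S, P_D, fc):
    """Eq. 5: per-modulation-band SDR with ERB_N-compensated channel weights.

    P_S, P_D: (n_channels x 7) envelope powers; fc: channel center freqs in Hz.
    """
    W = weight(np.asarray(fc))[:, None]
    return (W * P_S).sum(axis=0) / (W * P_D).sum(axis=0)
```

By construction W_i = 1 at 1000 Hz, W_i > 1 below it, and W_i < 1 above it, which deemphasizes the broader high-frequency channels.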
Transformation to speech intelligibility
The following procedure is the same as that used in the sEPSM algorithm (Jørgensen & Dau, 2011; Yamamoto et al., 2019), except that SDR_env is used instead of SNR_env. The SDR_env is converted into the sensitivity index d′ of an "ideal observer" by

    d′ = k · (SDR_env)^q,    (7)

where k and q are empirically determined constants. In practice, they can be tuned such that the predicted speech intelligibility scores for the reference sounds approximately coincide with the human subjective scores. The speech intelligibility as percent correct, I_predict, is predicted from the index d′ using a multiple-alternative forced choice (mAFC) model (Green & Birdsall, 1988) in combination with an unequal-variance Gaussian model (Mickes et al., 2007), and can be written as

    I_predict(d′) = 100 · Φ( (d′ − µ_N) / √(σ_S^2 + σ_N^2) ),    (8)

where Φ denotes the cumulative normal distribution. The values of µ_N and σ_N were determined by the response-set size m. In our study, the value of m was fixed at 20000 as in the previous studies (Yamamoto et al., 2017, 2018); in addition, the values of q, µ_N, and σ_N were set to the same values as those reported in the Appendix of (Jørgensen & Dau, 2011), i.e., q = 0.5, µ_N = 4.0389, and σ_N = 0.3297. σ_S is a parameter related to the redundancy of the speech material (e.g., meaningful sentences or monosyllables) and is determined from the speech intelligibility experiment. The values of σ_S and k will be explained in section 4.2.
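Eqs. 7–8 map SDR_env to percent correct; a direct sketch using the constants listed above, with Φ computed from the error function:

```python
import math

def d_prime(sdr_env, k, q=0.5):
    """Eq. 7: d' = k * (SDR_env)^q."""
    return k * sdr_env ** q

def intelligibility(d, sigma_s, mu_n=4.0389, sigma_n=0.3297):
    """Eq. 8: percent correct; Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))."""
    z = (d - mu_n) / math.sqrt(sigma_s ** 2 + sigma_n ** 2)
    return 100.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

The mapping is monotonic: a larger SDR_env gives a larger d′ and hence a higher predicted percent correct, saturating toward 100 %.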
Speech materials for evaluation
Speech data
Speech sounds of Japanese four-mora words spoken by a male speaker (label ID: mis) from a database called the familiarity-controlled word lists 2007 (FW07) (Kondo et al., 2007) were used for the subjective listening experiments and objective evaluations. The database consists of several word-familiarity ranks that correspond to the degree of lexical information. We used speech sounds from the set with the lowest familiarity rank, which prevents listeners from filling in the answers with their guesses.
Noise conditions
Pink noise and babble noise were used for the subjective listening experiments and objective predictions. Each noise was added to the clean speech to obtain noisy speech sounds, referred to as "unprocessed" sounds. The babble noise has some temporal fluctuation in power and prevents the perception of individual voices. The speech babble was generated from the corpus of spontaneous Japanese (CSJ) database (Furui et al., 2000; Maekawa, 2003) as follows: an 8-minute section was randomly extracted from each of 32 speakers' files, the sentences of each speaker were concatenated into a single-track sound, and all 32 tracks were superimposed. The number of 32 speakers was chosen so that individual voices could not be heard while the noise did not become stationary.
When producing a noisy speech sound, a noise segment was extracted from a randomly chosen starting point in the pink or babble noise, its length was adjusted to that of the speech sound, and it was added to the speech. The SNR conditions ranged from −6 dB to +3 dB in 3-dB steps for the pink noise conditions, and from −6 dB to +6 dB for the babble noise conditions.
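The noisy-speech construction described above (random noise start point, gain set for a target SNR) can be sketched as follows; the power-matching gain rule is a standard choice and an assumption of this sketch.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db, rng=None):
    """Add a noise segment, cut from a random start point, at a target SNR (dB).

    The gain is chosen so that 10*log10(P_speech / P_noise) equals snr_db.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    start = rng.integers(0, len(noise) - len(speech) + 1)
    seg = noise[start:start + len(speech)]  # length adjusted to the speech
    p_s = np.mean(speech ** 2)
    p_n = np.mean(seg ** 2)
    gain = np.sqrt(p_s / (p_n * 10.0 ** (snr_db / 10.0)))
    return speech + gain * seg
```

Scaling the noise (rather than the speech) keeps the speech presentation level fixed across SNR conditions, as is usual in listening tests.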
Speech enhancement algorithms
In this study, we applied two speech enhancement algorithms to the unprocessed sounds. The first is a simple SS algorithm (Berouti et al., 1979), chosen to ensure consistency with the method previously used to evaluate the original sEPSM (Jørgensen & Dau, 2011). The second is a noise-suppression algorithm based on a Wiener filter with a pre-trained speech model (WF_PSM). Current speech enhancement algorithms estimate the speech and noise spectral densities through state-of-the-art approaches including a vector Taylor-series (VTS)-based model (Fujimoto et al., 2012), nonnegative matrix factorization (Weninger et al., 2014), and deep neural networks (Smaragdis & Venkataramani, 2017). Thus, we selected the WF_PSM algorithm based on a vector Taylor series (Fujimoto et al., 2012) for the evaluation.
Spectral subtraction
The amplitude spectrum of the clean speech, Ŝ(f), was estimated by using SS (Berouti et al., 1979), which is defined as

    |Ŝ(f)|^2 = P_{S+N}(f) − α P̂_N(f)   if P_{S+N}(f) > (α + β) P̂_N(f),
               β P̂_N(f)                otherwise,    (9)

where P̂_N(f) represents the noise power spectrum (N) estimated from a non-speech segment, and P_{S+N}(f) is the power spectrum of the noisy speech (S + N). The parameter α denotes the over-subtraction factor (α ≥ 0), and β denotes the spectral flooring parameter (0 < β ≪ 1). The over-subtraction factor α for the SS was fixed at 1.0 as a reference condition for comparison with the results presented in (Jørgensen & Dau, 2011). We calculated the power and phase spectra by using a short-time Fourier transform with a 1024-point Hanning window and a 50 % frame shift at a sampling frequency of 16 kHz. This method will hereafter be referred to as "SS(1.0)".
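Eq. 9 acts independently on each frequency bin of the power spectrum; a minimal per-frame sketch (the STFT framing and phase reconstruction are omitted):

```python
import numpy as np

def spectral_subtraction(P_noisy, P_noise_est, alpha=1.0, beta=0.01):
    """Eq. 9 per frequency bin: over-subtraction with spectral flooring.

    P_noisy: power spectrum of noisy speech; P_noise_est: estimated noise power.
    """
    sub = P_noisy - alpha * P_noise_est            # over-subtracted power
    floor = beta * P_noise_est                     # spectral floor
    return np.where(P_noisy > (alpha + beta) * P_noise_est, sub, floor)
```

The condition P_{S+N} > (α + β) P̂_N guarantees that the subtracted branch never falls below the floor β P̂_N, so the output power is always positive.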
Wiener filter with pre-trained speech model
The WF_PSM used in this study was estimated using a PSM of the clean speech and noise (Fujimoto et al., 2009, 2012). The PSM is defined as a Gaussian mixture model in the Mel-spectrum domain by using a VTS-based model combination algorithm. This algorithm can estimate the speech component of noisy speech based on the PSM, which represents the statistical distribution of the spectral features of clean speech. The PSM was trained with a large speech database consisting of more than 30,000 sentences spoken by 180 speakers taken from the CSJ database (Furui et al., 2000; Maekawa, 2003). In this evaluation, we used the PSM with a 24-channel Mel-filterbank and set the number of Gaussian mixture components for speech and noise to 64 and 1, respectively. The WF gain applied to the noisy speech in the linear frequency domain was calculated using frequency warping from the Mel-frequency domain. The sampling frequency was set to 16 kHz owing to the limit of the program for the WF_PSM.
The WF_PSM can control the amount of residual noise with the parameter ε {ε | 0 ≤ ε ≤ 1} of the Wiener gain shown in Eq. 18 of (Fujimoto et al., 2009). The residual noise increases as the value of ε increases. The WF_PSM with ε values of 0, 0.1, and 0.2 will be referred to as "WF(0.0)", "WF(0.1)", and "WF(0.2)", respectively.
Evaluation conditions
We performed subjective experiments with human listeners to estimate the intelligibility of the enhanced speech described in section 3. The proposed and conventional OIMs were evaluated based on how well they predicted the human results. Note that the speech materials used in the experiments were different for individual subjects; therefore, the predictions were performed for the individual materials.
Subjective intelligibility
Sound presentation
For the pink noise conditions, the sounds were presented diotically via a digital-to-analog (DA) converter (Fostex, HP-A8) over headphones (Sennheiser, HD-580) at a quantization level of 24 bits and a sampling frequency of 48 kHz after upsampling from 16 kHz. The level of the stimulus sounds was 65 dB in L_Aeq. Listeners were seated in a sound-attenuated room with a background noise level of approximately 26 dB in L_Aeq.
For babble noise conditions, the sounds were presented diotically via a DA converter (OPPO, HA-1) over headphones (OPPO, PM-1) at a sampling frequency of 48 kHz. The stimulus sound levels were 63 dB in L Aeq .
Listeners
Nine (four male and five female) young NH listeners participated in the experiments with pink noise conditions, and fourteen (eight male and six female) young NH listeners participated in the experiments with babble noise conditions. Their native language was Japanese. The participants had a hearing level (HL) of less than 20 dB between 125-8000 Hz. They participated in the experiments after providing informed consent.
The participants were instructed to write down the words that they heard by using "hiragana," which roughly corresponds to the Japanese morae or consonant-vowel syllables. The total number of presented stimuli was 400 words, consisting of a combination of speech enhancement algorithm conditions and SNR conditions with 20 words per condition. Note that the words for each condition corresponded to a set of 20 words in the FW07. Each subject listened to a different word set, which was assigned randomly to avoid bias caused by word difficulty. Thus, there were fourteen sets of stimulus sounds.
Objective intelligibility measures
Model evaluations were performed for the prediction of the human results under the conditions arising from the use of the speech enhancement algorithms, under both pink noise and babble noise conditions. The STOI measure (Taal et al., 2011) was selected as a de facto standard OIM for the evaluation of state-of-the-art speech enhancement algorithms. In addition, the HASPI was selected as a competing model because it performed better than other models in a previous study . Note that these models, as well as the sEPSM (Jørgensen & Dau, 2011) and CSII (Kates & Arehart, 2005), were used for the comparison with the dcGC-sEPSM in the previous study . In that comparison, the sEPSM failed to predict speech intelligibility for the WF PSM conditions, and the CSII could not predict speech intelligibility for SS (1.0) . Thus, we only used the STOI and HASPI in the evaluation of this study.
We calculated the speech intelligibility from the same speech sounds, that is, the 3600 words used in the subjective listening experiments. Therefore, the OIM predictions were derived for the word sets provided to the individual listeners. In the following OIMs, several parameters must be tuned depending on the speech material used in the evaluation. In this study, for a fair comparison, the parameter values were determined by a least squared error (LSE) method such that the model predictions matched the human intelligibility scores for the unprocessed conditions in each noise condition. The capabilities of the OIMs to predict speech intelligibility for the individual speech enhancement algorithms were then investigated.
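The LSE tuning step described above can be sketched as follows. This is an illustrative Python reconstruction (the released implementation is MATLAB), and the model outputs and human scores below are hypothetical placeholders, not data from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_map(d, a, b):
    """Map a model output d to percent intelligibility (logistic form, cf. Eq. 10)."""
    return 100.0 / (1.0 + np.exp(a * d + b))

# Hypothetical data: model outputs and human scores (%) for the
# unprocessed condition at several SNRs.
d_unproc = np.array([0.45, 0.55, 0.65, 0.75, 0.85])
human_unproc = np.array([5.0, 25.0, 55.0, 80.0, 95.0])

# Least-squared-error fit of (a, b) to the unprocessed condition only;
# the fitted mapping is then applied unchanged to the enhanced conditions.
(a_fit, b_fit), _ = curve_fit(logistic_map, d_unproc, human_unproc,
                              p0=(-10.0, 5.0))
predicted = logistic_map(d_unproc, a_fit, b_fit)
```

Fitting only on the unprocessed condition keeps the comparison fair: no OIM gets its mapping tuned on the enhanced-speech conditions it is later judged on.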
GEDI and GEDI (weight)
In the GEDI, the values of the four parameters k, q, σ_S, and m in Eqs. 7 and 8 need to be determined. We fixed q = 0.5, as described in (Jørgensen & Dau, 2011), and m = 20000, as described in . Next, k and σ_S were determined by the LSE method to minimize the mean-squared error of the "unprocessed" curves between the human results and the model prediction, as described above. The optimized parameter values for the GEDI and GEDI (weight) are listed in the second and third rows of Table 1.

Table 1: Parameter values for the GEDI, GEDI (weight), STOI, and HASPI described in section 4, as well as for the mr-GEDI described in section 6.
STOI
The STOI consists of a one-third octave band filterbank, envelope extraction, and normalization to calculate the correlation-based intelligibility measure d, as described in (Taal et al., 2011). The speech intelligibility is derived as a percentage value by using a logistic function
I_predict = 100 / (1 + exp(a·d + b)) ,    (10)
as in Eq. 8 of (Taal et al., 2011). The optimized parameter values by the LSE method are listed in the fifth row of Table 1.
HASPI
The HASPI is a recent version developed to predict speech intelligibility for HI listeners using an extended version of the gammatone filterbank. This index is calculated from the normalized cross-correlation of the temporal sequence of the cepstral coefficients in addition to the auditory coherence values. Speech intelligibility using the HASPI is derived by using a logistic function,
I_predict = 100 / (1 + exp(−p)) ,    (11)
as in Eqs. 1 and 7 in (Kates & Arehart, 2014). The parameter p is defined as a linear combination of feature values related to the cepstral correlations (c) and the three levels of auditory coherence (a low , a mid , and a high ) with a bias component, and it can be calculated as
p = B + C·c + 0·a_low + 0·a_mid + A_high·a_high .    (12)
The coefficients for these features are denoted with capital letters as B, C, and A. Note that the coefficients A_low and A_mid have been set to zero, as described in Kates & Arehart (2014). The remaining coefficients, namely B, C, and A_high, were determined by the LSE method. The optimized parameter values are listed in the sixth row of Table 1.

Results

Pink noise conditions

Figure 4 shows the percent correct values of speech intelligibility as a function of the speech SNR. Panel (a) shows the human results. The other panels show the model predictions by (b) the original GEDI, (c) the GEDI with the weighting function, (d) the mr-GEDI, which will be described in section 6, (e) the STOI, and (f) the HASPI. The speech materials for evaluation were unprocessed sounds and enhanced sounds, which were produced by SS (1.0) and three levels of WF PSM, i.e., WF (0.0) PSM, WF (0.1) PSM, and WF (0.2) PSM.

In the human results (Fig. 4(a)), the speech intelligibility curves for WF PSM are roughly the same as the curve for the unprocessed conditions. However, the curve for SS (1.0) is lower than the curve for the unprocessed conditions. The standard deviations across listeners were approximately 10%. Multiple comparison analyses (Tukey-Kramer HSD test, α = 0.05) indicated that the speech intelligibility scores of the enhanced speech processed by SS (1.0) were significantly lower than those of the unprocessed speech. There was no significant difference between the unprocessed condition and any of the other enhancement methods.
The results of the GEDI in Fig. 4(b) show that the speech intelligibility of WF (0.1) PSM is slightly higher and that of WF (0.2) PSM is slightly lower than for the unprocessed conditions. The results are similar to the human results in Fig. 4(a). In contrast, speech intelligibility for WF (0.0) PSM and SS (1.0) above 0-dB SNRs is lower than that in the human results.
The GEDI (weight) in Fig. 4(c) improved the prediction results from the original GEDI ( Fig. 4(b)), particularly for SS (1.0) above 0-dB SNRs. The predicted speech intelligibility for WF (0.1) PSM and WF (0.2) PSM is slightly lower than that for the GEDI and becomes closer to the human results. As a result, the prediction is improved by introducing the weighting function described in Eqs. 5 and 6.
The results of the STOI in Fig. 4(e) show that the speech intelligibility curves for WF (0.0) PSM and WF (0.1) PSM are higher than that for the unprocessed condition. The results are inconsistent with the human results. In contrast, the speech intelligibility curve for SS (1.0) is similar to that for the human results.
The HASPI (Fig. 4(f)) also predicted speech intelligibility similar to that of the STOI (Fig. 4(e)). For the WF (0.0) PSM and WF (0.1) PSM conditions, the speech intelligibility curves are higher than the curve for the unprocessed conditions. The curve for SS (1.0) was slightly higher than that for the human results (Fig. 4(a)).

Babble noise conditions

Figure 5 shows the percent correct values of speech intelligibility as a function of the speech SNR for babble noise conditions. Panel (a) shows the human results; the other panels show the prediction results by (b) the original GEDI, (c) the GEDI (weight), (d) the mr-GEDI described in section 6, (e) the STOI, and (f) the HASPI. Fourteen noisy speech sets were used for both the subjective experiments and the objective predictions.

In the human results (Fig. 5(a)), the speech intelligibility curve for SS (1.0) is lower than the curve for the unprocessed speech condition. For the WF PSM conditions, the speech intelligibility was almost identical to the unprocessed condition at all SNRs. The intelligibility score curves for the enhancement algorithms are roughly parallel across SNR conditions. Multiple comparison analyses (Tukey-Kramer HSD test, α = 0.05) indicated that the speech intelligibility scores of the enhanced speech processed by SS (1.0) were significantly lower than those for the unprocessed speech. There were no significant differences between the other algorithms and the unprocessed speech.
The prediction result of the GEDI in Fig. 5(b) showed lower speech intelligibility than the human results for all speech enhancement algorithm conditions. In particular, the speech intelligibility for SS (1.0) was much lower than the human results.
The GEDI (weight) (Fig. 5(c)) slightly improved the speech intelligibility predictions for the SS (1.0) condition over the prediction results of the GEDI. However, the improvement is smaller than in the case of the pink noise conditions. Moreover, the speech intelligibility predicted by the GEDI (weight) decreased for the WF PSM conditions; the results deviated greatly from the human results. The results of the STOI in Fig. 5(e) show that the speech intelligibility curve for the SS (1.0) condition is the closest to that for the human results (Fig. 5(a)). However, the speech intelligibility for the WF PSM conditions is still higher than for the unprocessed condition, as described for the pink noise conditions in section 5.1.
In Fig. 5(f), the average intelligibility of the SS (1.0) is less than 10%. This implies that the HASPI completely failed to predict it. The curves for the WF PSM conditions are lower than the curve for the unprocessed condition.
Summary of results
In Fig. 4, we found that the GEDI predicted the intelligibility of the enhanced speech sounds relatively well. Moreover, the GEDI (weight) improves the prediction. This means that the weighting function is effective for predicting speech intelligibility under the pink noise conditions. In Fig. 5, however, the GEDI and GEDI (weight) were not good at predicting speech intelligibility under the babble noise conditions, because the model does not account for the effects of temporal fluctuations in non-stationary noise. Therefore, in the next section, we consider the extension of the GEDI with an IIR-based modulation filterbank and short-time frame processing in the temporal envelope domain.
Multi-resolution GEDI for non-stationary noise conditions
We extended the GEDI (weight), which was successful under pink noise conditions, to a multi-resolution version (mr-GEDI), which uses temporal frames that are dependent on the modulation period used in the analysis. The main purpose is to improve the predictability of speech intelligibility under nonstationary noise conditions, such as those in everyday situations with realistic noise. The main differences from the GEDI (weight) are the temporal processing steps using IIR filters, as described in section 6.2, and segmentation using different frame lengths, as described in section 6.3. Figure 6 shows a block diagram of the mr-GEDI. The front-end processing, which includes the dcGC-FB, envelope extraction, and calculation of distortion, is common to the original GEDI shown in Fig. 2 and described in sections 2.1, 2.2, and 2.3.
Front-end processing
IIR-based modulation filterbank
Temporal envelopes e S and distortion e D are filtered using an IIR-based modulation filterbank that includes a third-order low-pass modulation filter and eight second-order modulation bandpass filters. The octave-frequency space, the range, and the Q-value of the modulation filterbank used are the same as in the mr-sEPSM study (Jørgensen et al., 2013).
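The modulation filterbank described here can be sketched as follows. This is an illustrative Python reconstruction (the released implementation is MATLAB); the 1-Hz low-pass cutoff, the Q value of 1, and the envelope sampling rate are assumptions taken from the cited mr-sEPSM design, not values confirmed by this text:

```python
import numpy as np
from scipy.signal import butter, iirpeak, lfilter

fs_env = 1000.0                        # envelope sampling rate (assumed)
centers = 2.0 * 2.0 ** np.arange(8)    # 2, 4, ..., 256 Hz, octave spacing

# Third-order low-pass modulation filter (1-Hz cutoff assumed).
b_lp, a_lp = butter(3, 1.0, fs=fs_env)
# Eight second-order band-pass filters (Q = 1 assumed, as in the mr-sEPSM).
bp = [iirpeak(fc, Q=1.0, fs=fs_env) for fc in centers]

def mod_filterbank(env):
    """Filter a temporal envelope into 9 modulation channels (1 LP + 8 BP)."""
    out = [lfilter(b_lp, a_lp, env)]
    out += [lfilter(b, a, env) for (b, a) in bp]
    return np.stack(out)
```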
Segmentation and envelope power
The output of the j-th modulation filter channel, {j|1 ≤ j ≤ 9}, is segmented into multi-resolution frames using a rectangular window without overlap and is denoted as E i,j (n). The duration of the window is the inverse of the cutoff frequency or the center frequency of the corresponding modulation filter (Jørgensen et al., 2013). For example, when the modulation filters have their center frequencies at 2 Hz, 4 Hz, and 8 Hz, the corresponding frame durations are 500 ms, 250 ms, and 125 ms, respectively. This frame processing enables us to analyze the components with the optimal resolution. The power of each frame, P env , is calculated from the squared sum of each temporal output of the modulation filterbank:
P_env,*,i,j,t = (1 / ([eŜ,i]² / 2)) Σ_n [ E_*,i,j,t(n) − Ē_*,i,j,t ]² ,    (13)
where the asterisk (*) represents components from either the clean speech "S" or the distortion "D", and t, {t | 1 ≤ t ≤ T(j)}, is the frame index in the j-th modulation filter; the bar indicates an average over time, and n is the sample number of the temporal envelopes. The denominator eŜ,i in Eq. 13 represents the normalization factor obtained using the DC component of the temporal envelope of the enhanced speech Ŝ_i. P_env,*,i,j,t was restricted to be greater than −30 dB (0.001 in linear terms), as suggested by Jørgensen et al. (2013).
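A minimal sketch of the multi-resolution segmentation and the envelope-power computation of Eq. 13, in illustrative Python. Whether the bracketed term is summed or averaged over n is read here as a mean; treat that and the argument names as assumptions:

```python
import numpy as np

def envelope_power_frames(E, fc, fs_env, e_dc):
    """Multi-resolution envelope power (cf. Eq. 13).

    E      : modulation-filtered envelope of one dcGC channel
    fc     : cutoff/center frequency of the modulation filter (Hz)
    fs_env : envelope sampling rate (Hz)
    e_dc   : DC component of the enhanced-speech envelope (normalizer)
    """
    frame_len = int(round(fs_env / fc))      # window duration = 1 / fc
    n_frames = len(E) // frame_len           # rectangular windows, no overlap
    powers = []
    for t in range(n_frames):
        seg = E[t * frame_len:(t + 1) * frame_len]
        # variance of the frame, normalized by the DC power of the envelope
        p = np.mean((seg - seg.mean()) ** 2) / (e_dc ** 2 / 2.0)
        powers.append(max(p, 1e-3))          # floor at -30 dB
    return np.array(powers)
```

For a 2-Hz modulation channel this yields the 500-ms frames mentioned in the text, 250 ms at 4 Hz, and so on.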
Calculation of SDR env and speech intelligibility
The SDR in the temporal envelope domain (SDR env ) is calculated as the power ratio between the clean speech (P env,S,i,j,t ) and the distortion signal (P env,D,i,j,t ). The individual SDR env,j,t for modulation filter channel j and frame index t is defined as the ratio of the powers summed across the dcGC-FB channel i, and it can be written as
SDR^W_env,j,t = ( Σ_{i=1}^{100} W_i · P_env,S,i,j,t ) / ( Σ_{i=1}^{100} W_i · P_env,D,i,j,t ) ,    (14)
where W i is a weight function described in section2.4. The total SDR env value is calculated as the root-mean-squared (RMS) value after averaging over the frames, T (j):
SDR_env,j = (1 / T(j)) Σ_{t=1}^{T(j)} SDR^W_env,j,t ,    (15)
The total SDR env is calculated by using Eq. 4 with the following number of modulation filter channels: J = 9. The SDR env in Eq. 15 is transformed into speech intelligibility by Eqs. 7 and 8, which are the same as those used in the GEDI. The parameter values optimized by the LSE method are listed in the fourth row of Table 1.
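Eqs. 14 and 15 can be sketched as follows in illustrative Python. The plain average over frames follows the reconstructed Eq. 15; the text's mention of an RMS value is ambiguous, so treat the averaging step (and the array layout) as assumptions:

```python
import numpy as np

def sdr_env(P_S, P_D, W):
    """Per-modulation-channel SDR_env (cf. Eqs. 14-15).

    P_S, P_D : envelope powers, shape [dcGC channel i, mod filter j, frame t]
    W        : per-channel weights W_i, shape [i]
    """
    # Eq. 14: weighted power ratio, summed over the 100 dcGC channels
    num = np.tensordot(W, P_S, axes=(0, 0))   # -> shape [j, t]
    den = np.tensordot(W, P_D, axes=(0, 0))
    sdr_jt = num / den
    # Eq. 15: average over the T(j) frames of each modulation channel
    return sdr_jt.mean(axis=-1)               # -> shape [j]
```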
Evaluation of the mr-GEDI
Speech intelligibility curves
Figure 4(d) shows the prediction results of the mr-GEDI for pink noise conditions. The prediction results by the mr-GEDI are in sufficiently good agreement with the human results shown in Fig. 4(a). The results are almost identical to those obtained by GEDI (weight) in Fig. 4(c) because the mr-GEDI works in almost the same manner as GEDI (weight) under stationary noise, such as the pink noise. Figure 5(d) shows the prediction results of the mr-GEDI under babble noise conditions. The predicted curves are still lower than the curves of the human results in Fig. 5(a). However, these are significantly improved from those of the original GEDI ( Fig. 5(b)) and GEDI (weight) (Fig. 5(c)). This implies that the multi-resolution analysis works well under non-stationary noise conditions.
Quantitative evaluation
In sections 5 and 7.1, the comparison between the human and OIM results was performed qualitatively using Figs. 4 and 5. In this section, a quantitative comparison is performed with two measures: 1) the RMS error between the human and OIM intelligibility scores for the individual speech enhancement algorithms, and 2) the mean difference between the unprocessed and enhanced conditions, which clarifies whether a prediction is an over-estimation or an under-estimation.

Pink noise conditions

Table 2 shows a comparison of the OIMs in terms of the root-mean-square (RMS) error between the human results and the predicted results for the individual speech enhancement algorithms under pink noise conditions. Bold and italic fonts indicate the smallest and second smallest values in each row, respectively. The RMS errors for SS (1.0), WF (0.1) PSM, and WF (0.2) PSM were the smallest for the GEDI (weight), and the RMS error for WF (0.0) PSM was the smallest for the mr-GEDI. The second smallest RMS errors were located in either the GEDI (weight) or the mr-GEDI, with the one exception of the STOI for SS (1.0).
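The two comparison measures can be written down directly; a minimal Python sketch (illustrative only, function names are ours):

```python
import numpy as np

def rms_error(human, predicted):
    """RMS error between human and predicted intelligibility scores (%)."""
    diff = np.asarray(human) - np.asarray(predicted)
    return float(np.sqrt(np.mean(diff ** 2)))

def mean_difference(unprocessed, enhanced):
    """Mean difference between the enhanced and unprocessed score curves.
    Positive values mean the algorithm improves (predicted) intelligibility."""
    return float(np.mean(np.asarray(enhanced) - np.asarray(unprocessed)))
```

The RMS error measures the overall distance of a prediction curve from the human curve, while the mean difference keeps the sign, so it shows whether an OIM over- or under-estimates the effect of an enhancement algorithm.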
The mean difference of the speech intelligibility values between the unprocessed condition and the individual OIMs was calculated to quantify the relative locations of the curves shown in Figs. 4 and 5. This is a good measure to clarify whether the prediction results are close to the human results. Table 3 shows the results. For SS (1.0), the mean difference for the STOI was closest to that for the human results, and the GEDI (weight) was the second closest. For WF (0.0) PSM, the mr-GEDI was first, whereas the GEDI (weight) was second. For WF (0.1) PSM and WF (0.2) PSM, the GEDI (weight) was first. The results from the RMS error and mean difference imply that the GEDI (weight) and the mr-GEDI outperform the STOI and HASPI under pink noise conditions. The mr-GEDI is promising because it works in almost the same manner as the GEDI (weight) under stationary noise conditions, as described previously.

Babble noise conditions

Table 4 shows a comparison of the OIMs in terms of the RMS error under babble noise conditions. For SS (1.0), the RMS error was smallest for the STOI and second smallest for the mr-GEDI. For WF (0.0) PSM, the RMS error was smallest for the HASPI and second smallest for the mr-GEDI. For WF (0.2) PSM, the RMS error was smallest for the STOI and second smallest for the HASPI. Table 5 shows the mean differences of the speech intelligibility values under babble noise conditions. For SS (1.0), WF (0.0) PSM, and WF (0.2) PSM, the mean differences were closest to those for the human results for the STOI, HASPI, and STOI, respectively. The mr-GEDI was always in second place.
One problem was observed for the STOI. The mean difference of the STOI for WF (0.0) PSM was positive although the human result was negative; it is completely inconsistent. The STOI did not properly predict the intelligibility of speech enhanced by the state-of-the-art system.
The results imply that the mr-GEDI, STOI, and HASPI were very competitive in the predictions under babble noise conditions when the prediction performance was evaluated in terms of the RMS errors and mean differences. However, it is clear that the mr-GEDI was successfully extended from the original GEDI and GEDI (weight).
Speech reception thresholds
The SRTs were calculated to analyze the difference between the human and predicted results. The SRT is defined as the SNR value at which the intelligibility curve crosses the 50%-score line (dotted horizontal line in Figs. 4 and 5). The values of the SRTs were calculated by fitting the prediction results to the human results using a cumulative Gaussian function.
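The SRT estimation can be sketched as follows in illustrative Python. The cumulative-Gaussian fit mirrors the description above; the SNRs and scores below are hypothetical placeholders, not data from this study:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(snr, mu, sigma):
    """Cumulative-Gaussian intelligibility curve in percent.
    abs() guards against a negative scale during fitting."""
    return 100.0 * norm.cdf(snr, loc=mu, scale=abs(sigma))

def estimate_srt(snrs, scores):
    """SRT = SNR at the 50% crossing, i.e. the fitted mean mu."""
    (mu, sigma), _ = curve_fit(psychometric, snrs, scores, p0=(0.0, 3.0))
    return mu

# Hypothetical intelligibility scores (%) at several SNRs (dB).
snrs = np.array([-9.0, -6.0, -3.0, 0.0, 3.0])
scores = np.array([8.0, 24.0, 52.0, 79.0, 94.0])
srt = estimate_srt(snrs, scores)
```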
Moreover, ∆SRT is defined to clarify the difference between the human result and the predictions by the OIMs. For example, the ∆SRT for the unprocessed condition is defined as

∆SRT_unpr.,OIM = SRT_unpr.,OIM − SRT_unpr.,Human .    (16)

∆SRT is positive when the curve for the prediction of the OIM is located on the right of the curve for the human result in Figs. 4 and 5. A positive ∆SRT means the prediction was an under-estimation, and a negative ∆SRT means the prediction was an over-estimation.

Pink noise conditions

Figure 7(a) summarizes the SRTs for the human results (squares), GEDI (△), GEDI (weight, ♦), mr-GEDI, STOI (+), and HASPI (*) under pink noise conditions. The error bars represent the standard deviations across the subjects; the other markers are the averages of the SRTs predicted by each OIM. For SS (1.0), the average SRTs of the mr-GEDI, GEDI (weight), and STOI were within the standard deviation of the human SRT. For WF (0.0) PSM, the SRT of the mr-GEDI was closest to the human SRT; the SRTs of the GEDI and GEDI (weight) were much greater than the human SRT, while the SRTs of the STOI and HASPI were much smaller. For WF (0.1) PSM, the SRTs of the STOI and HASPI were again much smaller than the human SRT. For WF (0.2) PSM, the differences between the OIMs were smaller.

Figure 7(b) shows the ∆SRTs to clarify the difference between the SRT of each OIM and the human SRT shown in Fig. 7(a). The data were submitted to a multiple-comparison analysis (Tukey-Kramer HSD test, α = 0.05). The asterisks in Fig. 7(b) show the conditions that were significantly different from the corresponding unprocessed conditions, where the ∆SRTs were virtually zero. The ∆SRTs for the GEDI and GEDI (weight) were significantly positive for WF (0.0) PSM, while those of the STOI and HASPI for WF (0.0) PSM and WF (0.1) PSM were significantly negative. In contrast, the ∆SRTs for the mr-GEDI were not significantly different from zero in all speech enhancement conditions. The results show that the mr-GEDI is better than the STOI and HASPI under pink noise conditions.

Babble noise conditions

Figure 8(b) shows the ∆SRTs under babble noise conditions. The data were submitted to a multiple-comparison analysis (Tukey-Kramer HSD test, α = 0.05). The asterisks in Fig. 8(b) show the conditions with significant differences. There were a few conditions that were not significantly different: the STOI for SS (1.0) and the WF PSM conditions. The ∆SRTs of the STOI were negative; this implies that the STOI tends to over-estimate speech intelligibility.

Summary of the SRT results
Under pink noise conditions, the mr-GEDI predicted the human result better than the other OIMs. Under babble noise conditions, the mr-GEDI, STOI, and HASPI were competitive. In total, the mr-GEDI is advantageous.
Goodness of the model
It is essential to determine the parameter values of the OIMs in advance to predict the intelligibility of enhanced speech, as described in section 4.2. In this study, the parameter values were derived by the LSE method to minimize the prediction error for the unprocessed conditions.
In modelling studies, the goodness of a model can also be measured by two factors: 1) the number of parameters, or the degrees of freedom, and 2) the predictability or stability of the parameter values across various conditions. Increasing the number of parameters improves the applicability to various kinds of data and the goodness of fit to the existing data. However, it does not necessarily improve the performance on unknown data. A model with stable parameters is more useful in practical situations, which are more complex than laboratory conditions. The GEDI family and the STOI require two parameters, as defined in Eqs. 8 and 10, while the HASPI requires at least three parameters, as defined in Eq. 12. Removing one parameter from the HASPI would result in a serious degradation, although it is not allowed by its definition. Table 1 shows the parameter values used in the evaluation. In the GEDI family, the k values were between 1.1 and 1.5, and the σ_S values were approximately 1.6 for pink noise and approximately 0.6 for babble noise. The variability is relatively small. This implies that the GEDI family could be applied to other noise conditions with only a small change or tuning of the parameter values. It would also be possible to use intermediate values without fine-tuning for new noise conditions, although this could degrade the prediction slightly. In the STOI, the parameters a and b had similar values for the pink and babble noise conditions. In contrast, the parameter values of the HASPI were completely different for the pink and babble noise conditions. The current values cannot be applied to a new noise condition. Moreover, it is not possible to predict or determine likely parameter values in advance. This also implies that the prediction by the HASPI may not be robust for noise conditions whose characteristics vary dynamically, as in everyday situations.
Conclusion
In this study, we proposed the GEDI, which is based on the signal-to-distortion ratio in the auditory envelope domain, called SDR env . The main idea behind the proposed algorithm is to calculate the distortion between the temporal envelopes of the enhanced and clean speech from the output of an auditory filterbank. Moreover, the GEDI was extended with a multi-resolution analysis (mr-GEDI) to improve the predictions under non-stationary noise conditions.
The GEDI and the mr-GEDI were evaluated in comparison with the well-known OIMs: the STOI and HASPI. The predictability of human speech intelligibility scores was evaluated for speech sounds enhanced by a simple spectral subtraction method and a state-of-the-art Wiener filtering method. Additive pink and babble noise at various SNRs was used to produce the input noisy sounds. According to the evaluation results, the mr-GEDI predicted the human speech intelligibility scores better than the STOI and HASPI.
The software for the GEDI and mr-GEDI is available online: https://github.com/AMLAB-Wakayama/GEDI.
Figure 1: Input signals for speech intelligibility prediction by the sEPSM and the dcGC-sEPSM (a), and by other major models, the proposed GEDI and mr-GEDI (b).
Figure 2: Block diagram of the GEDI.
Figure 3: Example of the temporal envelopes (e S and eŜ) and the envelope distortion (e D) calculated from input sounds.
                 Pink noise                              Babble noise
GEDI             k = 1.17, σ_S = 1.62                    k = 1.25, σ_S = 0.50
GEDI (weight)    k = 1.25, σ_S = 1.71                    k = 1.27, σ_S = 0.58
mr-GEDI          k = 1.50, σ_S = 1.64                    k = 1.50, σ_S = 0.64
STOI             a = −6.44, b = 4.56                     a = −8.91, b = 5.84
HASPI            B = −10.88, C = −4.04, A_high = 13.32   B = −61.36, C = −22.15, A_high = 93.87
Figure 4: Results of (a) the subjective listening experiments, and the objective predictions obtained via (b) the original GEDI, (c) the GEDI with the weighting function (GEDI (weight)), (d) the mr-GEDI (see section 6), (e) the STOI, and (f) the HASPI for the tests under pink noise conditions. The values and error bars represent the averages and standard deviations across the sets of speech materials. The percent correct values are averaged across the nine noisy speech sets that were used for both the subjective experiments with the nine listeners and the objective predictions.
Figure 5: Results of (a) the human results, and the objective predictions obtained via (b) the GEDI, (c) the GEDI with the weighting function (GEDI (weight)), (d) the mr-GEDI (see section 6), (e) the STOI, and (f) the HASPI for the tests under babble noise conditions. The values and error bars represent the averages and standard deviations across the sets of speech materials.
Figure 6: Block diagram of the mr-GEDI.
Figure 7: (a) SRTs under pink noise conditions. The markers and error bars show the means and standard deviations across the subjects or across the predictions. (b) ∆SRTs calculated from the SRTs. An asterisk (*) indicates a significant difference (p < 0.05) between the ∆SRTs for the speech enhancement and unprocessed conditions.
Figure 8: (a) SRTs under babble noise conditions. The markers and error bars show the means and standard deviations across the subjects or across the predictions. (b) ∆SRTs calculated from the SRTs. An asterisk (*) indicates a significant difference (p < 0.05) between the ∆SRTs for the speech enhancement and unprocessed conditions.
Figure 8(a) shows the SRTs under babble noise conditions. The SRTs of all the OIMs were virtually located outside the standard deviation of the human SRTs. This means that the predictions were not very successful for any of the OIMs.
Table 2: RMS error between the human results and the predicted results in percentages under pink noise conditions, as shown in Fig. 4. Bold and italic fonts indicate the smallest and second smallest values in each row, respectively.

              GEDI   GEDI (weight)   mr-GEDI   STOI   HASPI
SS (1.0)      14.2   10.7            12.0      10.9   12.1
WF (0.0) PSM  19.9   18.6            14.6      20.0   19.5
WF (0.1) PSM  12.2   11.3            11.4      17.8   16.5
WF (0.2) PSM  11.4   10.9            11.0      12.8   11.1
Table 3: Mean difference between the unprocessed and enhanced speech curves in percent points under pink noise conditions, as shown in Fig. 4. Positive (negative) values imply that the speech enhancement algorithm improves (degrades) speech intelligibility. Bold and italic fonts indicate the closest and second closest conditions to the human results.

              Human   GEDI    GEDI (weight)   mr-GEDI   STOI    HASPI
SS (1.0)      −13.1   −20.9   −11.8           −8.6      −13.0   −4.8
WF (0.0) PSM  −3.3    −13.0   −11.5           −7.2      10.7    12.0
WF (0.1) PSM  −3.5    −0.3    −2.9            0.0       10.7    9.8
WF (0.2) PSM  2.2     6.5     −2.6            5.1       10.2    8.6
Table 4: RMS error between the human results and the predicted results in percentages under babble noise conditions, as shown in Fig. 5. Bold and italic fonts indicate the smallest and second smallest values in each row, respectively.

              GEDI   GEDI (weight)   mr-GEDI   STOI   HASPI
SS (1.0)      32.3   25.6            16.8      13.4   34.7
WF (0.0) PSM  21.3   25.8            17.0      18.9   15.7
WF (0.2) PSM  14.5   20.3            14.0      12.0   13.9
Table 5: Mean difference between the unprocessed and enhanced speech curves in percent points under babble noise conditions, as shown in Fig. 5. Bold and italic fonts indicate the closest and second closest conditions to the human results.

              Human   GEDI    GEDI (weight)   mr-GEDI   STOI    HASPI
SS (1.0)      −12.3   −36.6   −30.7           −23.2     −15.3   −38.4
WF (0.0) PSM  −4.9    −20.9   −24.2           −15.8     9.9     −12.9
WF (0.2) PSM  0.3     −9.1    −14.4           −7.2      7.7     −8.5
MATLAB code for the dcGC-FB is available in the GitHub repository.
Acknowledgements

This research was partially supported by JSPS KAKENHI Grant Numbers JP25280063, JP16H01734, and JP16K12464.
ANSI S3.5 (1997). Methods for the calculation of the speech intelligibility index. Washington, DC: American National Standards Institute.
Berouti, M., Schwartz, R., & Makhoul, J. (1979). Enhancement of speech corrupted by acoustic noise. In IEEE International Conference on Acoustics, Speech, and Signal Processing (Vol. 4, pp. 208-211). doi:10.1109/ICASSP.1979.1170788.
Chabot-Leclerc, A., Jørgensen, S., & Dau, T. (2014). The role of auditory spectro-temporal modulation filtering and the decision metric for speech intelligibility prediction. The Journal of the Acoustical Society of America, 135, 3502-3512. doi:10.1121/1.4873517.
Chi, T., Gao, Y., Guyton, M. C., Ru, P., & Shamma, S. (1999). Spectro-temporal modulation transfer functions and speech intelligibility. The Journal of the Acoustical Society of America, 106, 2719-2732. doi:10.1121/1.428100.
Dubbelboer, F., & Houtgast, T. (2008). The concept of signal-to-noise ratio in the modulation domain and speech intelligibility. The Journal of the Acoustical Society of America, 124, 3937-3946. doi:10.1121/1.3001713.
Falk, T. H., Parsa, V., Santos, J. F., Arehart, K., Hazrati, O., Huber, R., Kates, J. M., & Scollie, S. (2015). Objective quality and intelligibility prediction for users of assistive listening devices: Advantages and limitations of existing tools. IEEE Signal Processing Magazine, 32, 114-124. doi:10.1109/MSP.2014.2358871.
Fujimoto, M., Ishizuka, K., & Nakatani, T. (2009). A study of mutual front-end processing method based on statistical model for noise robust speech recognition. In Proceedings of Interspeech 2009 (pp. 1235-1238).
Fujimoto, M., Watanabe, S., & Nakatani, T. (2012). Noise suppression with unsupervised joint speaker adaptation and noise mixture model estimation. In IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 4713-4716). doi:10.1109/ICASSP.2012.6288971.
. S Furui, K Maekawa, H Isahara, Furui, S., Maekawa, K., & Isahara, H. (2000).
A Japanese National Project on Spontaneous Speech Corpus and Processing Technology. ASR2000 -Automatic Speech Recognition: Challenges for the new Millenium. A Japanese Na- tional Project on Spontaneous Speech Corpus and Processing Tech- nology. In ASR2000 -Automatic Speech Recognition: Challenges for the new Millenium (pp. 244-248).
. France Paris, Paris, France. URL: https://www.isca-speech.org/archive_open/asr2000/asr0_244.html.
The Effect of Vocabulary Size on Articulation Score. D M Green, T G Birdsall, Signal Detection and Recognition by Human Observers: Contemporary Readings. J. A. SwetsPeninsula PublishingGreen, D. M., & Birdsall, T. G. (1988). The Effect of Vocabulary Size on Ar- ticulation Score. In J. A. Swets (Ed.), Signal Detection and Recognition by Human Observers: Contemporary Readings (pp. 609-619). Peninsula Pub- lishing. URL: https://books.google.co.jp/books?id=-YMmPQAACAAJ.
A Dynamic Compressive Gammachirp Auditory Filterbank. T Irino, R D Patterson, 10.1109/TASL.2006.874669IEEE transactions on audio, speech, and language processing. 14Irino, T., & Patterson, R. D. (2006). A Dynamic Compres- sive Gammachirp Auditory Filterbank. IEEE transactions on audio, speech, and language processing, 14 , 2222-2232. URL: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2661063&tool=pmcentrez&rendertyp doi:10.1109/TASL.2006.874669.
T Irino, K Yamamoto, gammachirp-filterbank. Irino, T., & Yamamoto, K. (2019). gammachirp-filterbank. URL: https://github.com/AMLAB-Wakayama/gammachirp-filterbank.
. ISO. 9921ISO 9921 (2003).
. Ergonomics -Assessment of speech communication. Ergonomics -Assessment of speech communication.
International Organization for Standardization. Switzerland Genève, Genève, Switzerland: International Organization for Standardization. URL: https://www.iso.org/standard/33589.html.
Predicting speech intelligibility based on the signal-to-noise envelope power ratio after modulation-frequency selective processing. S Jørgensen, T Dau, 10.1121/1.3621502The Journal of the Acoustical Society of America. 130Jørgensen, S., & Dau, T. (2011). Predicting speech intelligibility based on the signal-to-noise envelope power ratio after modulation-frequency se- lective processing. The Journal of the Acoustical Society of America, 130 , 1475-1487. URL: http://www.ncbi.nlm.nih.gov/pubmed/21895088. doi:10.1121/1.3621502.
A multiresolution envelope-power based model for speech intelligibility. S Jørgensen, S D Ewert, T Dau, 10.1121/1.4807563The Journal of the Acoustical Society of America. 134Jørgensen, S., Ewert, S. D., & Dau, T. (2013). A multi- resolution envelope-power based model for speech intelligibil- ity. The Journal of the Acoustical Society of America, 134 , 436-46. URL: http://www.ncbi.nlm.nih.gov/pubmed/23862819. doi:10.1121/1.4807563.
The Hearing Aid Speech Perception Index (HASPI). J Kates, K Arehart, 10.1016/j.specom.2014.06.002Speech Communication. 65Kates, J., & Arehart, K. (2014). The Hearing Aid Speech Percep- tion Index (HASPI). Speech Communication, 65 , 75-93. URL: http://www.sciencedirect.com/science/article/pii/S0167639314000545?via%3Dihub. doi:https://doi.org/10.1016/j.specom.2014.06.002.
Coherence and the speech intelligibility index. J M Kates, K H Arehart, 10.1121/1.1862575The Journal of the Acoustical Society of America. 117Kates, J. M., & Arehart, K. H. (2005). Coherence and the speech intelligibility index. The Journal of the Acoustical Society of America, 117 , 2224-2237. doi:10.1121/1.1862575.
Comparing the information conveyed by envelope modulation for speech intelligibility, speech quality, and music quality. J M Kates, K H Arehart, 10.1121/1.4931899The Journal of the Acoustical Society of America. 138Kates, J. M., & Arehart, K. H. (2015). Comparing the information con- veyed by envelope modulation for speech intelligibility, speech quality, and music quality. The Journal of the Acoustical Society of America, 138 , 2470-2482. URL: http://asa.scitation.org/doi/10.1121/1.4931899. doi:10.1121/1.4931899.
Role of mask pattern in intelligibility of ideal binary-masked noisy speech. U Kjems, J B Boldt, M S Pedersen, T Lunner, D Wang, 10.1121/1.3179673The Journal of the Acoustical Society of America. 126Kjems, U., Boldt, J. B., Pedersen, M. S., Lunner, T., & Wang, D. (2009). Role of mask pattern in intelligibility of ideal binary-masked noisy speech. The Journal of the Acoustical Society of America, 126 , 1415-1426. doi:10.1121/1.3179673.
NTT-Tohoku University Familiarity-Controlled Word Lists. K Kondo, S Amano, Y Suzuki, S Sakamoto, Kondo, K., Amano, S., Suzuki, Y., & Sakamoto, S. (2007). NTT- Tohoku University Familiarity-Controlled Word Lists 2007 (FW07). URL: http://research.nii.ac.jp/src/en/FW07.html.
Corpus of Spontaneous Japanese: Its design and evaluation. K Maekawa, ISCA and IEEE Workshop on Spontaneous Speech Processing and Recognition (SSPR2003). Tokyo, JapanMaekawa, K. (2003). Corpus of Spontaneous Japanese: Its design and evaluation. In ISCA and IEEE Workshop on Spontaneous Speech Pro- cessing and Recognition (SSPR2003) (pp. 7-12). Tokyo, Japan. URL: https://www.isca-speech.org/archive_open/sspr2003/index.html.
A direct test of the unequal-variance signal detection model of recognition memory. L Mickes, J Wixted, P Wais, 10.3758/BF03194112doi:10.3758/BF03194112Psychonomic Bulletin & Review. 14Mickes, L., Wixted, J., & Wais, P. (2007). A direct test of the unequal-variance signal detection model of recognition mem- ory. Psychonomic Bulletin & Review , 14 , 858-865. URL: http://dx.doi.org/10.3758/BF03194112%5Cnpapers2://publication/doi/10.3758/BF03194112. doi:10.3758/BF03194112.
. B C J Moore, Moore, B. C. J. (2013).
An Introduction to the Psychology of Hearing. 6th ed.An Introduction to the Psychology of Hearing. (6th ed.).
. The Leiden, Netherlands, BrillLeiden, The Netherlands: Brill. URL: http://www.brill.com/introduction-psychology-hearing-0.
A neural network alternative to non-negative audio models. P Smaragdis, S Venkataramani, 10.1109/ICASSP.2017.7952123doi:10.1109/ICASSP.2017.7952123IEEE International Conference on Acoustics, Speech and Signal Processing. New OrleansIEEESmaragdis, P., & Venkataramani, S. (2017). A neural network alterna- tive to non-negative audio models. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2017 (pp. 86-90). New Orleans: IEEE. URL: https://doi.org/10.1109/ICASSP.2017.7952123. doi:10.1109/ICASSP.2017.7952123.
An algorithm for intelligibility prediction of time-frequency weighted noisy speech. C H Taal, R C Hendriks, R Heusdens, J Jensen, 10.1109/TASL.2011.2114881IEEE Transactions on Audio, Speech and Language Processing. 19Taal, C. H., Hendriks, R. C., Heusdens, R., & Jensen, J. (2011). An algo- rithm for intelligibility prediction of time-frequency weighted noisy speech. IEEE Transactions on Audio, Speech and Language Processing, 19 , 2125- 2136. doi:10.1109/TASL.2011.2114881.
Discriminative NMF and its application to single-channel source separation. F Weninger, J Le Roux, J Hershey, S Watanabe, Interspeech. Weninger, F., Le Roux, J., Hershey, J., & Watanabe, S. (2014). Discriminative NMF and its application to single-channel source separation. In Interspeech 2014 (pp. 865-869).
. K Yamamoto, T Irino, T Matsui, S Araki, K Kinoshita, T Nakatani, Yamamoto, K., Irino, T., Matsui, T., Araki, S., Kinoshita, K., & Nakatani, T. (2017).
Predicting Speech Intelligibility Using a Gammachirp Envelope Distortion Index Based on the Signal-to-Distortion Ratio. 10.21437/Interspeech.2017-170doi:10.21437/Interspeech.2017-170Proceedings of the Interspeech. the InterspeechPredicting Speech Intelligibility Using a Gammachirp Envelope Distortion Index Based on the Signal-to- Distortion Ratio. In Proceedings of the Interspeech 2017 (pp. 2949- 2953). URL: http://dx.doi.org/10.21437/Interspeech.2017-170. doi:10.21437/Interspeech.2017-170.
Speech intelligibility prediction with the dynamic compressive gammachirp filterbank and modulation power spectrum. K Yamamoto, T Irino, T Matsui, S Araki, K Kinoshita, T Nakatani, 10.1250/ast.40.84Acoustical Science and Technology. 40Yamamoto, K., Irino, T., Matsui, T., Araki, S., Kinoshita, K., & Nakatani, T. (2019). Speech intelligibility prediction with the dy- namic compressive gammachirp filterbank and modulation power spectrum. Acoustical Science and Technology, 40 , 84-92. URL: https://www.jstage.jst.go.jp/article/ast/40/2/40_E1785/_article. doi:10.1250/ast.40.84.
. K Yamamoto, T Irino, N Ohashi, S Araki, K Kinoshita, T Nakatani, Yamamoto, K., Irino, T., Ohashi, N., Araki, S., Kinoshita, K., & Nakatani, T. (2018).
Multi-resolution Gammachirp Envelope Distortion Index for Intelligibility Prediction of Noisy Speech. Proc. Interspeech. InterspeechMulti-resolution Gammachirp Envelope Distortion Index for Intelligibility Prediction of Noisy Speech. In Proc. Interspeech 2018 (pp. 1863-1867).
. India Hyderabad, Url, 10.21437/Interspeech.2018-1291doi:10.21437/Interspeech.2018-1291Hyderabad, In- dia. URL: http://dx.doi.org/10.21437/Interspeech.2018-1291. doi:10.21437/Interspeech.2018-1291.
Numerical study of metastability due to tunneling: The quantum string method

Tiezheng Qian (Department of Mathematics, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China); Weiqing Ren (Program in Applied and Computational Mathematics, Princeton University, Princeton, New Jersey 08544, USA); Jing Shi (Department of Mathematics and PACM, Princeton University, Princeton, New Jersey 08544, USA); Weinan E (Department of Physics and Institute of Nano Science and Technology, Princeton University, Princeton, New Jersey 08544, USA); Ping Sheng (Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China)

Abstract: We generalize the string method, originally designed for the study of thermally activated rare events, to the calculation of quantum tunneling rates. This generalization is based on the analogy between quantum mechanics and statistical mechanics in the path-integral formalism. The quantum string method first locates, in the space of imaginary-time trajectories, the minimal action path (MAP) between two minima of the imaginary-time action. From the MAP, the saddle-point ("bounce") action associated with the exponential barrier penetration probability is obtained and the pre-exponential factor (the ratio of determinants) for the tunneling rate evaluated using stochastic simulation. The quantum string method is implemented to calculate the zero-temperature escape rates for the metastable zero-voltage states in the current-biased Josephson tunnel junction model. In the regime close to the critical bias current, direct comparison of the numerical and analytical results yields good agreement. Our calculations indicate that for the nanojunctions encountered in many experiments today, the (absolute) escape rates should be measurable at bias current much below the critical current.

DOI: 10.1016/j.physa.2007.01.005
arXiv: cond-mat/0509076
PDF: https://export.arxiv.org/pdf/cond-mat/0509076v1.pdf
Corpus ID: 28972423
SHA: 7fd407c02bd46ed811bf14743c3fe6fe51837370
Numerical study of metastability due to tunneling: The quantum string method
3 Sep 2005
Tiezheng Qian
Department of Mathematics
Hong Kong University of Science and Technology
Clear Water BayKowloon, Hong KongChina
Weiqing Ren
Program in Applied and Computational Mathematics
Princeton University
08544PrincetonNew JerseyUSA
Jing Shi
Department of Mathematics and PACM
Princeton University
08544PrincetonNew JerseyUSA
Weinan E
Department of Physics and Institute of Nano Science and Technology
Princeton University
08544PrincetonNew JerseyUSA
Ping Sheng
Hong Kong University of Science and Technology
Clear Water BayKowloon, Hong KongChina
PACS numbers: 74.50.+r, 74.20.De, 82.20.Wt, 05.10.-a
I. INTRODUCTION
One of the most important problems in statistical physics involves the rate of decay of a system rendered unstable by thermally activated barrier crossing [1][2][3] and/or quantum barrier tunneling [4][5][6], and functional integrals represent a fundamental tool for studying these transition processes. However, numerical evaluation of the functional integrals has always been a challenge. Recently, the string method has been proposed for the numerical evaluation of thermally activated rare events [7][8][9][10][11]. This method first locates the most probable transition pathway connecting two metastable states in configuration space. The transition rates can then be computed by numerically evaluating the fluctuations around the most probable pathway.
The purpose of this paper is to generalize the string method to the study of quantum metastability caused by barrier tunneling. The theory of the decay rate through barrier tunneling has been formulated using imaginary-time functional integral techniques [4]. Essentially, the saddle point of the imaginary-time action is first located, and the rate of decay is then obtained by evaluating the relevant fluctuations. Within the functional integral formalism, the computational task for a quantum field in $d$-dimensional space is equivalent to that for a classical field in $(d+1)$-dimensional space [4]. Therefore, the quantum version of the string method can be implemented numerically in the same way as the original classical version, applied to a higher-dimensional system.
It should be noted that while in zero-dimensional tunneling problems (a quantum particle is regarded as a zero-dimensional quantum field, as in the example presented below) the quantum string method might not offer any special advantage over, e.g., the well-known WKB method, in higher-dimensional problems or in field-theoretical formulations the usual wave-mechanics approach becomes very difficult to implement. It is precisely in such problems that the present approach offers an efficient numerical tool for finding the path of "least action", and on that basis calculating the relevant tunneling rate(s).
The paper is organized as follows. In Sec. II we outline the string method for the numerical evaluation of thermal activation rates and generalize it to the evaluation of quantum tunneling rates. In Sec. III we apply the quantum string method to study the current-biased Josephson junction [12][13][14][15]. This physical model has long been used to demonstrate the quantum mechanical behavior of a single macroscopic degree of freedom (the phase difference across the tunnel junction) [15]. It has also played an important role in the study of macroscopic quantum tunneling [16]. The escape rate of the junction from its zero-voltage state is numerically evaluated at zero temperature in the absence of dissipation. For bias currents less than but very close to the critical current, the tilted-washboard potential can be approximated by a quadratic-plus-cubic potential, for which the analytic form of the quantum tunneling rate has been obtained [17,18]. Our numerical results for this "solvable" model show good agreement with the previous analytic results, and thus affirm the validity of the quantum string method. In Sec. IV we conclude the paper with a few remarks on the relationship of our results to quantum dissipation.
II. QUANTUM STRING METHOD
A. String method for thermally activated rare events
The string method was originally presented for the numerical study of thermal activation of metastable states [7]. Consider a system governed by the energy potential V (q) in the overdamped regime, where q denotes the generalized coordinates {q i }. The minima of V (q) in the configuration space correspond to the metastable and stable states of the system. Given q A and q C as the two minima of V , the most probable fluctuation which can carry the system from q A to q C (or q C to q A ) corresponds to the lowest intervening saddle point q B between these two minima, with the transition rate given by [1][2][3]
$$\Gamma_T(A \to C) = \frac{|\lambda_B|}{2\pi\eta}\left[\frac{|\det H(q_B)|}{\det H(q_A)}\right]^{-1/2}\exp\left(-\frac{\Delta V}{k_B T}\right), \tag{1}$$
where η is the frictional coefficient, H(q) denotes the Hessian of V (q), λ B is the negative eigenvalue of H(q B ), and ∆V = V (q B ) − V (q A ) is the energy barrier. For the numerical evaluation of Γ T , we define the minimal energy path (MEP) as a smooth curve q ⋆ (s) in configuration space. It connects q A and q C with intrinsic parametrization such as arc length s, satisfying
$$(\nabla V)^\perp(q^\star) = 0, \tag{2}$$
where (∇V ) ⊥ is the component of ∇V normal to the curve q ⋆ (s). Physically, the MEP is the most probable pathway for thermally activated transitions between q A and q C . To numerically locate the MEP in q space, a string q(s, t) (a smooth curve with intrinsic parametrization by s) connecting q A and q C is evolved according to
$$\left(\frac{dq}{dt}\right)^\perp = -(\nabla V)^\perp(q), \tag{3}$$
where $(dq/dt)^\perp$ denotes the velocity normal to the string [7]. To enforce the desired parametrization, e.g., equal arc length, the string is reparametrized every few time steps. The stationary solution of Eq. (3) satisfies Eq. (2), which defines the MEP. Once the MEP is determined, the intervening saddle point $q_B$ is known. The negative eigenvalue $\lambda_B$ and the corresponding eigenvector can be obtained directly from the MEP, and the ratio of the determinants in Eq. (1) can be computed numerically using a stochastic method [9]. The above scheme for the calculation of thermal activation rates has many applications, e.g., the condensation of a supersaturated vapor, the realignment of a magnetic domain [7,8], and the decay of persistent currents in one-dimensional superconductors [19,20,11]. All of these transition processes occur when the system undergoes a fluctuation that is large enough to initiate the transition. Physically, the thermal activation rate $\Gamma_T$ becomes practically unmeasurable when the temperature is sufficiently low ($k_B T \ll \Delta V$). However, even though thermodynamic fluctuations are suppressed at low temperatures, a system can still be rendered unstable by quantum barrier tunneling [4]. The simplest example is a particle that escapes from a potential well: it penetrates a potential barrier and emerges at the escape point with zero kinetic energy, after which it propagates classically. In quantum field theory, a classical false vacuum is rendered unstable by bubbles of the true vacuum, realized through tunneling. Once such a bubble is sufficiently large, it becomes energetically favorable for it to grow.
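As a concrete illustration of this scheme, the following Python sketch applies the string method to a two-dimensional double-well potential. The potential $V(x,y) = (x^2-1)^2 + 2y^2$, the number of images, and the step sizes are illustrative assumptions, not taken from the paper; the simplified variant used here takes a full gradient-descent step and then reparametrizes, which has the same stationary strings as the normal-projection dynamics of Eq. (3).

```python
import numpy as np

def grad_V(q):
    # Double-well test potential V(x, y) = (x**2 - 1)**2 + 2*y**2,
    # with minima at (-1, 0) and (1, 0) and a saddle point at (0, 0).
    x, y = q[..., 0], q[..., 1]
    return np.stack([4.0 * x * (x**2 - 1.0), 4.0 * y], axis=-1)

def reparametrize(q):
    # Redistribute the images along the string to equal arc length.
    seg = np.linalg.norm(np.diff(q, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    s /= s[-1]
    s_new = np.linspace(0.0, 1.0, len(q))
    return np.stack([np.interp(s_new, s, q[:, k]) for k in range(q.shape[1])], axis=1)

def string_method(qa, qb, n_images=21, dt=5e-3, n_steps=2000):
    t = np.linspace(0.0, 1.0, n_images)[:, None]
    q = (1.0 - t) * qa + t * qb          # initial string: straight line
    for _ in range(n_steps):
        q = q - dt * grad_V(q)           # relax the images down the gradient
        q = reparametrize(q)             # enforce the parametrization
    return q

# End states start near, and relax into, the two minima of V.
mep = string_method(np.array([-1.0, 0.3]), np.array([1.0, 0.3]))
saddle = mep[len(mep) // 2]              # middle image converges to the saddle (0, 0)
```

For this potential the MEP is the segment of the $x$ axis joining the two minima, and the middle image converges to the saddle point at the origin, from which $\lambda_B$ and the Hessians entering Eq. (1) can then be evaluated.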
B. Rate of barrier tunneling
The theory for the rate of barrier tunneling has been formulated [4] and generalized [5,6] using imaginary-time functional integral techniques. Here we show that the string method can be generalized into an efficient numerical tool (the quantum string method) for the calculation of tunneling rates. To make the formulation of the approach concrete, below we consider a quantum particle that escapes from a potential well through barrier tunneling. This zero-dimensional case can be extended to a quantum field, where the classical false vacuum is rendered unstable by barrier tunneling [4]. These results show that for a quantum field in $d$-dimensional space, the computational task may be reduced to the calculation of thermodynamic transition rates for a classical field in $(d+1)$-dimensional space.
Consider a particle of mass $m$ moving in a one-dimensional potential $U(q)$ with two minima $q_0$ and $q_1$, one of which, $q_1$, is the absolute minimum (see Fig. 1a). Assume $U(q_0) = 0$ and that $q_1$ is to the right of $q_0$ ($q_1 > q_0$). The normalized harmonic-oscillator ground state $\psi_g(q)$, centered at $q_0$, is $\psi_g(q) = (m\omega_0/\pi\hbar)^{1/4}\exp\left[-m\omega_0(q-q_0)^2/2\hbar\right]$, where $\omega_0 = \sqrt{U''(q_0)/m}$ is the frequency locally defined at $q_0$. The ground-state energy is $\hbar\omega_0/2$. These familiar results can be derived from the imaginary-time propagator:
$$K(q_0, q_0; T) = \langle q_0|e^{-\hat H T/\hbar}|q_0\rangle = \int [Dq(\tau)]\, e^{-S/\hbar}, \tag{4}$$
where $T$ is the imaginary-time duration, $\hat H$ is the Hamiltonian of the system,
$$S[q(\tau)] = \int_{-T/2}^{T/2} d\tau\left[\frac{m}{2}\left(\frac{dq}{d\tau}\right)^2 + U[q(\tau)]\right] \tag{5}$$
is the action, and [Dq(τ )] denotes the integration over functions q(τ ) satisfying q(−T /2) = q(T /2) = q 0 . Note that the imaginary-time (Euclidean) action S[q(τ )] can be obtained from the real-time (Minkowski) action
$$A[q(t)] = \int dt\left[\frac{m}{2}\left(\frac{dq(t)}{dt}\right)^2 - U[q(t)]\right]$$
through the formal substitution $t \to -i\tau$ and $-iA[q(t)] \to S[q(\tau)]$. Thus the equation of motion in imaginary time involves an inverted potential, i.e., $U(q) \to -U(q)$. An expression for $K(q_0, q_0; T)$ in the limit $T \to \infty$ gives both the energy and the wavefunction of the lowest-lying energy eigenstate. In the semiclassical (small-$\hbar$) limit, the functional integral for $K(q_0, q_0; T)$ is dominated by the stationary points of the action, denoted by $\bar q(\tau)$, that satisfy the imaginary-time equation of motion
$$\left.\frac{\delta S}{\delta q}\right|_{\bar q} = -m\frac{d^2\bar q}{d\tau^2} + U'(\bar q) = 0,$$
with the boundary condition $\bar q(-T/2) = \bar q(T/2) = q_0$. There are two solutions: one is $\bar q(\tau) = q_0$, at which the Hessian of $S$, $-m\partial_\tau^2 + U''(q_0)$, has positive eigenvalues only, and the other is the so-called bounce $q_b(\tau)$, at which the Hessian of $S$, $-m\partial_\tau^2 + U''[q_b(\tau)]$, has a zero eigenvalue plus a negative eigenvalue. The zero eigenvalue comes from the time-translation symmetry, and the negative eigenvalue makes $q_b(\tau)$ a saddle point of $S[q(\tau)]$. Following the bounce $q_b(\tau)$ in time, the quantum particle initially stays at $q_0$ for a long time, of order $T$, then makes a brief excursion to the escape point $q_e$ (separated from $q_0$ by a potential barrier, with $U(q_e) = U(q_0)$) in a time of order $1/\omega_0$, and finally returns to $q_0$ and remains there for another duration of order $T$ (see Fig. 1b). This process is called a "bounce". Here $q_e$ is the coordinate at which the tunneling particle leaves the barrier. Physically, $q_b(\tau)$ characterizes a fluctuation large enough to accomplish the penetration. [For more details, see Appendix A.] Using the two stationary points of $S$ in the semiclassical approximation with a proper analytic continuation [4], the propagator in Eq. (4) is obtained as
$$|\psi_g(q_0)|^2\, e^{-E_g T/\hbar} = \left(\frac{m\omega_0}{\pi\hbar}\right)^{1/2} e^{-\omega_0 T/2}\, e^{-i\,\mathrm{Im}E_g\, T/\hbar}, \tag{6}$$
which gives $|\psi_g(q_0)|^2 = (m\omega_0/\pi\hbar)^{1/2}$, the ground-state energy $\hbar\omega_0/2$, and an imaginary part of the energy, $\mathrm{Im}E_g$. Physically, $\mathrm{Im}E_g$ ($< 0$) is responsible for the decay of the metastable "ground state" centered at $q_0$, with the decay rate $\Gamma_Q = -2\,\mathrm{Im}E_g/\hbar$:
$$\Gamma_Q = \omega_0\left(\frac{S_b}{2\pi\hbar}\right)^{1/2}\left[\frac{U''(q_0)\,\left|\det{}'H[q_b(\tau)]\right|}{\det H(q_0)}\right]^{-1/2} e^{-S_b/\hbar}, \tag{7}$$
where $H$ denotes the Hessian of $S$:

$$H[q(\tau)] = -m\partial_\tau^2 + U''[q(\tau)],$$

$S_b \equiv S[q_b(\tau)]$ is the action associated with the bounce $q_b(\tau)$, and $\det{}'$ indicates that the zero eigenvalue is to be omitted in computing the determinant.
FIG. 1. (a) The potential in one-dimensional space. The unstable ground state is centered at $q_0$. After penetrating the barrier, the particle emerges at the escape point $q_e$ and propagates classically. (b) The bounce solution $q_b(\tau)$ of the imaginary-time classical equation of motion. In the inverted potential $-U(q)$, the particle begins at the top of the hill at $q_0$, turns around (i.e., bounces) at the classical turning point $q_e$, and returns to the top of the hill.
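The bounce picture can be checked explicitly on a toy potential. For the cubic potential $U(q) = q^2/2 - q^3/2$ (in units $m = \hbar = 1$; an illustrative choice, not the potential used later in the paper), the zero-Euclidean-energy bounce is known in closed form, $q_b(\tau) = \mathrm{sech}^2(\tau/2)$, with escape point $q_e = 1$ and bounce action $S_b = 8/15$. A short numerical check that this profile solves the imaginary-time equation of motion and reproduces $S_b$:

```python
import numpy as np

# Illustrative cubic potential U(q) = q**2/2 - q**3/2 (units m = hbar = 1).
# Its bounce is q_b(tau) = sech(tau/2)**2: it leaves q0 = 0, turns around at
# the escape point qe = 1 (where U(1) = 0 = U(0)), and returns to q0.
tau = np.linspace(-40.0, 40.0, 80_001)
q_b = np.cosh(tau / 2.0) ** -2

# Residual of the imaginary-time equation of motion  q'' = U'(q) = q - (3/2) q^2.
dq = np.gradient(q_b, tau)
d2q = np.gradient(dq, tau)
residual = np.max(np.abs(d2q - (q_b - 1.5 * q_b**2)))

# Bounce action S_b = integral of [ q'^2/2 + U(q_b) ]; the exact value is 8/15.
f = 0.5 * dq**2 + 0.5 * q_b**2 - 0.5 * q_b**3
S_b = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(tau))  # trapezoid rule
```

The check exploits the fact that the bounce conserves the Euclidean "energy" $\frac{1}{2}\dot q^2 - U(q) = 0$, which is why the trajectory turns around exactly where $U(q_e) = U(q_0)$.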
C. Numerical implementation of quantum string method
To numerically evaluate Γ Q , we generalize the string method to the quantum case. For formal similarity, we write the action S[q(τ )] in Eq. (5) as S(q), where the vector q represents the coordinates in the q(τ )-function space (q space). [Computationally, there are always a large but finite number of these coordinates.] We define the minimal action path (MAP) as a smooth curve q ⋆ (s) in q space. It connects the two minima of S, q 0 and q 1 , with intrinsic parametrization such as arc length s, satisfying
$$(\nabla S)^\perp(q^\star) = 0, \tag{8}$$
where $(\nabla S)^\perp$ is the component of $\nabla S$ normal to the curve $q^\star(s)$. Here $q_0$ and $q_1$ correspond to $\bar q(\tau) = q_0$ and $\bar q(\tau) = q_1$, respectively. [A slightly different choice for $q_1$ is also possible, corresponding to a $\bar q(\tau)$ profile with $\bar q(\tau) = q_1$ in most of the $\tau$-interval $[-T/2, T/2]$ and $\bar q(-T/2) = \bar q(T/2) = q_0$.] The saddle point of $S$ is obtained as the point $q_b$ which has the maximum value of $S$ along the MAP. This corresponds to the bounce $q_b(\tau)$. To numerically locate the MAP in $q$ space, a string $q(s, t)$ connecting $q_0$ and $q_1$ is evolved according to Eq. (3) with $V$ replaced by $S$. The stationary solution is the MAP defined by Eq. (8).
The ratio of determinants in Eq. (7) can be numerically obtained as follows:

(1) From the MAP $q^\star(s)$ parametrized by the arc length $s$, the eigenvector $u_b^{(1)}$ corresponding to the negative eigenvalue $\lambda_b^{(1)}$ of the Hessian $H(q_b)$ can be obtained by evaluating $dq^\star(s)/ds$ at the saddle point $q_b$, followed by a normalization. $\lambda_b^{(1)}$ is then computed from $\lambda_b^{(1)} = [u_b^{(1)}]^T H(q_b)\, u_b^{(1)}$.

(2) The eigenvector $u_b^{(2)}$ corresponding to the zero eigenvalue $\lambda_b^{(2)}$ of the Hessian $H(q_b)$ can be obtained by evaluating $\partial_\tau q_b(\tau)$ from $q_b$, followed by a normalization.

(3) The Hessian $H(q_b)$ is modified to give a positive definite matrix $\tilde H(q_b)$:

$$\tilde H(q_b) = H(q_b) + 2|\lambda_b^{(1)}|\,[u_b^{(1)}][u_b^{(1)}]^T + [u_b^{(2)}][u_b^{(2)}]^T, \tag{9}$$

whose determinant $\det\tilde H(q_b)$ equals $|\det{}'H(q_b)|$.

(4) In order to compute the ratio of determinants $\det\tilde H(q_b)/\det H(q_0)$, a harmonic potential parametrized by $\alpha$ ($0 \le \alpha \le 1$) is constructed as
$$U_\alpha(q) = \frac{1}{2}\, q^T\left[(1-\alpha)\tilde H(q_b) + \alpha H(q_0)\right] q \tag{10}$$
in the $N$-dimensional $q$ space. It can be shown [9] that
$$\frac{\det\tilde H(q_b)}{\det H(q_0)} = \exp\left[2\int_0^1 Q(\alpha)\, d\alpha\right], \tag{11}$$
where Q(α) is the expectation value
$$Q(\alpha) = \frac{1}{Z(\alpha)}\int dq\, \frac{1}{2}\, q^T\left[\tilde H(q_b) - H(q_0)\right] q\, \exp\left[-U_\alpha(q)\right], \tag{12}$$
in which Z(α) is the partition function
$$Z(\alpha) = \int dq\, \exp\left[-U_\alpha(q)\right]. \tag{13}$$
A stochastic process can be generated to measure Q(α) according to Eq. (12) [9].
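A minimal numerical sketch of step (4) — thermodynamic integration of Eqs. (11)-(13) — with two illustrative $2\times 2$ positive-definite matrices standing in for $\tilde H(q_b)$ and $H(q_0)$. For this Gaussian toy problem the expectation in Eq. (12) can be sampled directly, instead of generating the stochastic process of Ref. [9]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for the modified Hessian at the bounce, Htilde(q_b),
# and the Hessian at the metastable minimum, H(q_0).
Hb = np.array([[2.0, 0.3], [0.3, 1.5]])
H0 = np.array([[1.0, -0.2], [-0.2, 3.0]])

def Q_of_alpha(alpha, n_samples=100_000):
    # Eq. (12): average of (1/2) q^T (Hb - H0) q over q ~ exp(-U_alpha), with
    # U_alpha = (1/2) q^T [(1 - alpha) Hb + alpha H0] q as in Eq. (10).
    C = (1.0 - alpha) * Hb + alpha * H0
    q = rng.multivariate_normal(np.zeros(2), np.linalg.inv(C), size=n_samples)
    return 0.5 * np.einsum('ni,ij,nj->n', q, Hb - H0, q).mean()

# Eq. (11): det(Hb)/det(H0) = exp(2 * integral of Q(alpha) over [0, 1]).
alphas = np.linspace(0.0, 1.0, 21)
Qs = np.array([Q_of_alpha(a) for a in alphas])
integral = np.sum(0.5 * (Qs[1:] + Qs[:-1]) * np.diff(alphas))  # trapezoid rule
ratio_mc = np.exp(2.0 * integral)
ratio_exact = np.linalg.det(Hb) / np.linalg.det(H0)
```

With these sample sizes the Monte Carlo estimate typically reproduces the exact ratio of determinants to within a few percent; in the actual method direct Gaussian sampling is replaced by a stochastic process because the matrices are far too large for explicit inversion.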
III. CURRENT-BIASED JOSEPHSON TUNNEL JUNCTION
A. Model formulation
Superconducting devices based on the Josephson effect have been widely used to investigate macroscopic quantum tunneling [16]. We consider the resistively and capacitively shunted junction [12,14,15] for which the classical equation of motion is
$$C\left(\frac{\hbar}{2e}\right)^2\frac{d^2\phi}{dt^2} + \frac{1}{R}\left(\frac{\hbar}{2e}\right)^2\frac{d\phi}{dt} + \frac{\partial}{\partial\phi}\left[-\frac{I_c\hbar}{2e}\cos\phi - \frac{I\hbar}{2e}\phi\right] = \frac{\hbar}{2e}\, I_N(t), \tag{14}$$
where $\phi$ is the difference in the phases of the order parameters on the two sides of the junction, $C$ is the capacitance of the junction, $R$ is the resistance of the junction, $I_c$ is the Josephson critical current, $I$ is the bias current, and $I_N(t)$ is the fluctuating noise current generated by $R$. Equation (14) is the same as that of a particle of mass $C(\hbar/2e)^2$ moving along the $\phi$ axis in an effective potential (the so-called "tilted washboard" potential, see Fig. 2)
$$U(\phi) = -\frac{I_c\hbar}{2e}\left[\cos\phi + \frac{I}{I_c}\,\phi\right].$$
According to this mechanical analog, for $I < I_c$ the zero-voltage state is given by $\phi_0 = \arcsin(I/I_c)$, which is a minimum of $U(\phi)$. In the classical limit, the escape from the zero-voltage state is induced by the noise current, which activates the system over the potential barrier [12]. At sufficiently low temperatures, although thermodynamic fluctuations are suppressed, the junction can still escape from the zero-voltage state through quantum barrier tunneling [14,15].

FIG. 2. The tilted washboard potential $U(\phi)$. Here the unstable ground state is centered at $\phi_0$, $\phi_e$ is the escape point, and $\phi_1 = \phi_0 + 2\pi$ is the next minimum.
We consider the zero-temperature behavior of the junction in the low-damping limit $\left((2eI_c/\hbar C)^{1/2}\left[1-(I/I_c)^2\right]^{1/4} RC \gg 1\right)$. The state of the junction is represented by the wave function $\Psi(\phi, t)$, governed by the Schrödinger equation $i\hbar\,\partial\Psi/\partial t = \hat H\Psi$, in which the Hamiltonian is of the form
$$\hat H = -\frac{(2e)^2}{2C}\frac{\partial^2}{\partial\phi^2} + U(\phi).$$
We introduce three parameters: the dimensionless bias current $x = I/I_c$, the plasma frequency $\omega_p = (2eI_c/\hbar C)^{1/2}$, and $I_c/2e\omega_p = \sqrt{E_J/E_C}$, where $E_J = I_c\hbar/2e$ is the Josephson coupling energy and $E_C = (2e)^2/C$ is the charging energy. The imaginary-time action
$$S[\phi(\tau)] = \int d\tau\left[\frac{1}{2}C\left(\frac{\hbar}{2e}\right)^2\left(\frac{d\phi}{d\tau}\right)^2 + U(\phi)\right], \tag{15}$$
can be written as $S[\phi(\tau)] = \hbar\sqrt{E_J/E_C}\,\bar S[\phi(\bar\tau)]$, where $\bar\tau = \omega_p\tau$ is a dimensionless time variable and $\bar S$ is the dimensionless action
$$\bar S[\phi(\bar\tau)] = \int d\bar\tau\left[\frac{1}{2}\left(\frac{d\phi}{d\bar\tau}\right)^2 + \left(-\cos\phi - x\phi\right)\right]. \tag{16}$$
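The numerical procedure represents $\phi(\bar\tau)$ by an $N$-component vector, so that $\bar S$ becomes an ordinary function whose gradient drives the string evolution. The following sketch discretizes Eq. (16) with the grid quoted later in the text ($N = 200$, $\bar T = 20$) and checks the analytic gradient against centered finite differences; the bias value $x = 0.5$ and the test profile are illustrative assumptions:

```python
import numpy as np

x, N, T = 0.5, 200, 20.0
tau = np.linspace(-T / 2, T / 2, N)
h = tau[1] - tau[0]

def S_bar(phi):
    # Finite-difference discretization of the dimensionless action, Eq. (16).
    kinetic = 0.5 * np.sum(np.diff(phi) ** 2) / h
    potential = h * np.sum(-np.cos(phi) - x * phi)
    return kinetic + potential

def grad_S_bar(phi):
    # dS/dphi_j = -(phi_{j+1} - 2 phi_j + phi_{j-1})/h + h (sin phi_j - x);
    # the two end entries are held fixed (gradient set to zero there).
    g = np.zeros_like(phi)
    g[1:-1] = (-(phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / h
               + h * (np.sin(phi[1:-1]) - x))
    return g

# The uniform profile phi = arcsin(x) is a stationary point of S_bar.
stationary = np.max(np.abs(grad_S_bar(np.full(N, np.arcsin(x)))))

# Check grad_S_bar against centered finite differences at a few interior nodes.
phi = np.arcsin(x) + np.exp(-tau ** 2)
eps, err = 1e-6, 0.0
for j in (50, 100, 150):
    e = np.zeros(N)
    e[j] = eps
    fd = (S_bar(phi + e) - S_bar(phi - e)) / (2.0 * eps)
    err = max(err, abs(fd - grad_S_bar(phi)[j]))
```

The analytic gradient is what the string evolution of Sec. III B descends; the stationarity of the uniform profile $\phi = \arcsin x$ reflects the fact that $\phi_0$ is a minimum of the washboard potential.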
B. Numerical results
Based on the actions (15) and (16), the tunneling rate is given by
$$\Gamma_Q = \omega_p\sqrt{\cos\phi_0}\left(\frac{E_J}{E_C}\right)^{1/4}\left(\frac{\bar S_b}{2\pi\cos\phi_0}\right)^{1/2}\left[\frac{\left|\det{}'H[\phi_b(\bar\tau)]\right|}{\det H(\phi_0)}\right]^{-1/2}\exp\left(-\sqrt{\frac{E_J}{E_C}}\,\bar S_b\right), \tag{17}$$
according to the general expression (7). Here $\omega_p\sqrt{\cos\phi_0}$ is the frequency locally defined at $\phi_0$, $\bar S_b \equiv \bar S[\phi_b(\bar\tau)]$ is the dimensionless action of the bounce $\phi_b(\bar\tau)$, which satisfies $\phi_b(\pm\infty) = \phi_0$, and $H$ is the Hessian of $\bar S[\phi(\bar\tau)]$, given by $H[\phi(\bar\tau)] = -\partial^2/\partial\bar\tau^2 + \cos[\phi(\bar\tau)]$. Numerical calculations based on the action in Eq. (16) have been carried out to evaluate the bounce $\phi_b(\bar\tau)$, the bounce action $\bar S_b$, and the ratio of determinants in the general expression (17). Note that these dimensionless properties are uniquely determined by the parameter $x$. Once they are evaluated, the tunneling rate $\Gamma_Q$ can be readily obtained using the other two parameters $\omega_p$ and $E_J/E_C$. These parameters should be easily measurable experimentally, since they are directly determined by the capacitance of the junction, the critical current, and the bias current.
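The bounce action entering Eq. (17) can be cross-checked independently of the string construction. Along the bounce the Euclidean "energy" is conserved and equals its value at $\phi_0$, so that $\frac{1}{2}(d\phi/d\bar\tau)^2 = \bar U(\phi) - \bar U(\phi_0)$ with $\bar U(\phi) = -\cos\phi - x\phi$, and the bounce action measured relative to the trivial solution reduces to the quadrature $\bar S_b = 2\int_{\phi_0}^{\phi_e}\sqrt{2[\bar U(\phi) - \bar U(\phi_0)]}\,d\phi$. The sketch below evaluates this quadrature and compares it with the quadratic-plus-cubic (small $1-x$) limit $\bar S_b \approx (24/5)(1-x^2)^{5/4}/x^2$; both the quadrature form and the cubic-limit formula are standard results restated here, not quoted from the paper:

```python
import numpy as np

def Ubar(phi, x):
    # dimensionless tilted washboard potential of Eq. (16)
    return -np.cos(phi) - x * phi

def bounce_action(x, n=200_000):
    # S_b = 2 * integral from phi0 to phie of sqrt(2 [Ubar - Ubar(phi0)]) dphi
    phi0 = np.arcsin(x)
    dU = lambda p: Ubar(p, x) - Ubar(phi0, x)
    lo, hi = np.pi - phi0, phi0 + 2.0 * np.pi   # dU(lo) > 0 > dU(hi)
    for _ in range(100):                         # bisection for the escape point
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if dU(mid) > 0.0 else (lo, mid)
    phie = 0.5 * (lo + hi)
    phi = phi0 + (np.arange(n) + 0.5) * (phie - phi0) / n  # midpoint rule
    return 2.0 * np.sum(np.sqrt(2.0 * np.maximum(dU(phi), 0.0))) * (phie - phi0) / n

x = 0.99
Sb_num = bounce_action(x)
Sb_cubic = (24.0 / 5.0) * (1.0 - x ** 2) ** 1.25 / x ** 2
```

Close to the critical current the two values agree to within a few percent, consistent with the use of the cubic approximation for the analytic comparison in this regime.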
Numerical calculations have been carried out according to the following procedure.
(i) We first locate the MAP in the $\phi(\bar\tau)$-function space. In the calculation, $\phi(\bar\tau)$ is represented by a column vector $\phi$ of $N = 200$ entries, with the $\bar\tau$-interval $[-\bar T/2, \bar T/2]$ discretized by a uniform mesh of $N$ points. We use $\bar T = 20$, large enough for the computation of zero-temperature properties. [Here $\bar T = \hbar\omega_p/k_B T$, where $k_B$ is the Boltzmann constant and $T$ the temperature, and $\bar T \gg 1$ means $\hbar\omega_p \gg k_B T$.] The string $\phi(s)$ connecting $\phi(0) = \phi_0$ and $\phi(1) = \phi_1$ is discretized by $M = 200$ points in the $\phi$ space. As to the two fixed ends of the string, $\phi_0$ corresponds to $\bar\phi(\bar\tau) = \phi_0$ and $\phi_1$ to a $\bar\phi(\bar\tau)$ profile with $\bar\phi(\bar\tau) = \phi_1$ in most of the $\bar\tau$-interval and $\bar\phi(-\bar T/2) = \bar\phi(\bar T/2) = \phi_0$. Here $\phi_0 = \arcsin x$ and $\phi_1 = \phi_0 + 2\pi$ are two neighboring minima of $-(\cos\phi + x\phi)$ (see Fig. 2). Note that $\phi_1$ is obtained as a local minimum of $\bar S$ in Eq. (16). The string evolution is generated by
(dφ/dt)_⊥ = −∇S̄_⊥(φ),
with the initial string taken from a linear interpolation between φ_0 and φ_1. The MAP φ⋆(s) is reached by the evolving string φ(s, t) as its stationary solution, defined by ∇S̄_⊥(φ⋆) = 0, with φ⋆(0) = φ_0 and φ⋆(1) = φ_1. The bounce φ_b(τ) is obtained from the vector φ_b which yields the maximum value of S̄ along the MAP. In Fig. 3 a sequence of the φ(τ) profiles along the MAP is shown for x = 0.1, and in Fig. 4 the bounce profile φ_b(τ) is shown for a few selected values of x. In Fig. 5 the variation of the action S̄ along the MAP is shown for a few selected values of x, and in Fig. 6 the bounce action S̄_b is plotted as a function of x.

(ii) We then modify the Hessian H[φ_b(τ)], represented by the N × N matrix H(φ_b), to give a positive definite matrix H̃(φ_b), given by

H̃(φ_b) = H(φ_b) + 2|λ^(1)_b| [u^(1)_b][u^(1)_b]^T + cos φ_0 [u^(2)_b][u^(2)_b]^T.

Here u^(1)_b is the eigenvector corresponding to the negative eigenvalue λ^(1)_b, u^(2)_b is the eigenvector corresponding to the zero eigenvalue of the same Hessian, and det H̃(φ_b) = cos φ_0 |det′ H(φ_b)|. Note that u^(1)_b, λ^(1)_b, and u^(2)_b can be readily obtained once the MAP is determined, as outlined in Sec. II C. In Fig. 7 the unstable direction u^(1)_b(τ) along the MAP at the saddle point is shown for a few selected values of x. The curves are displaced vertically for clarity, whereas the original ones all start and end at 0. Each eigenfunction represents the unstable direction at the saddle point along a particular MAP in the φ(τ)-function space. It is noted that u^(1)_b(τ) obtained for x = 0.1 is qualitatively different from that for x = 0.9. For x close to 0, the bounce φ_b(τ) has the height φ_e (the escape point) close to the next (lower) minimum φ_1, and the growth of the φ(τ) bubble along the MAP is characterized by the movement of the two (left and right) domain walls. Consequently, the unstable direction u^(1)_b(τ) shows two peaks. On the other hand, for x > 0.3, the bounce φ_b(τ) has the height φ_e far from the next minimum φ_1, and the growth of the φ(τ) bubble along the MAP is characterized by overall dilation. Consequently, the unstable direction u^(1)_b(τ) shows a single peak.

(iii) We calculate the ratio of determinants

γ = cos φ_0 |det′ H[φ_b(τ)]| / det H(φ_0) = cos φ_0 |det′ H(φ_b)| / det H(φ_0),  (18)

where H(φ_0) is the N × N matrix representation of H(φ_0). This is done by evaluating det H̃(φ_b)/det H(φ_0) according to the stochastic method outlined in Sec. II C. Numerical results of this part are shown in Figs. 8 and 9, for the stochastically measured Q(α) in Eqs. (11) and (12) and the ratio of determinants γ in Eq. (18). From S̄_b and γ, the dimensionless prefactor

√(cos φ_0) (S̄_b/2π)^{1/2} [cos φ_0 |det′ H[φ_b(τ)]| / det H(φ_0)]^{−1/2}

in Eq. (17) is readily obtained and plotted in Fig. 10. Using the numerical results for S̄_b and γ, the mean escape rate out of the zero-voltage state can be readily obtained from Eq. (17), once the values of ω_p and √(E_J/E_C) are given. From I_c = 9.489 µA and C = 6.35 pF reported in an early experiment [15], we have ω_p = 67.4 GHz and √(E_J/E_C) = 440. The largeness of √(E_J/E_C) implies that quantum tunneling becomes observable only if x → 1 and S̄_b → 0. For the experiment reported in Ref. [15], x ≈ 0.99, at which S̄_b ≈ 0.037, e^{−√(E_J/E_C) S̄_b} ∼ 10^{−7}, and the escape rate is approximately 2.7 × 10^4 sec^{−1}. In a recent experiment on quantum superposition of macroscopic persistent-current states, a superconducting loop is constructed with three Josephson junctions [21]. We find that the junction parameters in that experiment allow quantum tunneling to be observable in a range of x much wider than that in Ref. [15]. From I_c = 570 nA and C = 2.6 fF [21], we have ω_p = 816 GHz and √(E_J/E_C) = 2.18. The smallness of √(E_J/E_C) then allows S̄_b and hence x to vary in a wide range. For x decreasing from 0.8 to 0.2, S̄_b roughly increases from 2 to 10, and consequently, e^{−√(E_J/E_C) S̄_b} changes from ∼ 10^{−2} to ∼ 10^{−10}. Using Eq. (17) with the numerical results for S̄_b and γ, we obtain the escape rates Γ_Q = 1.2 × 10^{11} sec^{−1}, 7.4 × 10^7 sec^{−1}, and 1.5 × 10^3 sec^{−1} for x = 0.8, 0.5, and 0.2, respectively. Note that, typically, the measured escape rates are in the range from ∼ 10^1 to ∼ 10^6 sec^{−1}. Therefore, our numerical results indicate that the absolute escape rates for today's nanojunctions should be measurable at bias currents much below the critical current. This is because the junction capacitance has been significantly reduced, and thus the action scale √(E_J/E_C) can be made small enough to allow a relatively large dimensionless action S̄_b in the exponential factor e^{−√(E_J/E_C) S̄_b} [22].
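The relaxation of step (i) can be sketched in a few dozen lines. The following miniature is our illustration, not the authors' code: it uses much coarser meshes (N = 41, M = 21 instead of 200), relaxes the φ_1 end by plain gradient descent, and omits the string reparametrization of the full method.

```python
import math

# Miniature string-method relaxation for the washboard potential (a sketch,
# not the authors' code).  Each image of the string is a discretized phi(tau)
# profile; interior images evolve by the component of -grad(S) perpendicular
# to the string, per (dphi/dt)_perp = -grad(S)_perp.
N, M, T, x = 41, 21, 20.0, 0.5
dtau = T / (N - 1)
phi0 = math.asin(x)            # metastable minimum of -(cos phi + x*phi)
phi1 = phi0 + 2.0 * math.pi    # next (lower) minimum

def action(p):
    """Discrete dimensionless action, shifted so that the phi0 profile gives 0."""
    u0 = -math.cos(phi0) - x * phi0
    s = 0.0
    for j in range(N - 1):
        dp = (p[j + 1] - p[j]) / dtau
        s += (0.5 * dp * dp + (-math.cos(p[j]) - x * p[j] - u0)) * dtau
    return s

def grad(p):
    """Gradient of the discrete action (profile endpoints held fixed)."""
    g = [0.0] * N
    for j in range(1, N - 1):
        g[j] = (2.0 * p[j] - p[j - 1] - p[j + 1]) / dtau \
               + (math.sin(p[j]) - x) * dtau
    return g

def tangent(string, a):
    """Unit tangent to the string at image a (central difference)."""
    t = [string[a + 1][j] - string[a - 1][j] for j in range(N)]
    n = math.sqrt(sum(v * v for v in t)) or 1.0
    return [v / n for v in t]

# End images: phi(tau) = phi0, and a phi1 plateau with phi(+-T/2) = phi0.
end0 = [phi0] * N
end1 = [phi0 if j in (0, N - 1) else phi1 for j in range(N)]
string = [[(1.0 - a / (M - 1)) * end0[j] + (a / (M - 1)) * end1[j]
           for j in range(N)] for a in range(M)]

def step(string, dt=0.05):
    for a in range(1, M - 1):
        g, t = grad(string[a]), tangent(string, a)
        gt = sum(gi * ti for gi, ti in zip(g, t))
        for j in range(1, N - 1):
            string[a][j] -= dt * (g[j] - gt * t[j])   # perpendicular descent
    # Relax the phi1 end toward its local minimum by plain descent
    # (the phi0 end is already a local minimum and stays fixed).
    g = grad(string[M - 1])
    for j in range(1, N - 1):
        string[M - 1][j] -= dt * g[j]

for _ in range(200):
    step(string)

acts = [action(p) for p in string]
print(max(acts))   # crude estimate of the saddle-point (bounce) action
```

By construction the projected force is orthogonal to the string tangent, and after relaxation the maximum of the action along the string sits strictly above both end images, as expected for a mountain-pass configuration.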
In this regard a numerical scheme as presented in this paper is essential to the evaluation of the absolute escape rates.

C. Quadratic-plus-cubic potential

For bias currents less than but very close to I_c (x close to 1), the potential barrier ΔU to be penetrated is low, the distance between the minimum φ_0 and the escape point φ_e is small, and hence the potential U(φ) in the classically forbidden region can be approximated by the quadratic-plus-cubic potential. That is, the dimensionless potential −cos φ − xφ in the action (16) can be expressed in a Taylor expansion form
u(φ) = −cos φ − xφ ≈ (−cos φ_0 − xφ_0) + (1/2) cos φ_0 (φ − φ_0)² [1 − (φ − φ_0)/(φ_e − φ_0)],  (19)
where φ_e − φ_0 = 3 cot φ_0 approaches zero as x → 1 and φ_0 → π/2. From Eq. (19), the dimensionless potential barrier can be easily found to be Δu = 2 cos φ_0 (φ_e − φ_0)²/27 and the dimensionless bounce action to be S̄_b = (8/15) √(cos φ_0) (φ_e − φ_0)². In addition, the Hessian of S̄[φ(τ)] becomes H[φ(τ)] = −∂²/∂τ² + cos φ_0 [1 − 3(φ − φ_0)/(φ_e − φ_0)], for which the analytic result
cos φ_0 |det′ H[φ_b(τ)]| / det H(φ_0) = 1/60
has been obtained for the determinant ratio γ in Eq. (18) [17,18]. These exact results for the quadratic-plus-cubic potential allow an analytic form for the tunneling rate Γ_Q in Eq. (17). In order to demonstrate the validity and precision of the quantum string method, numerical calculations have been carried out to reproduce the bounce action and determinant ratio for the quadratic-plus-cubic potential, an important model potential for the study of quantum metastability [18]. For simplicity, we work with the scaled action functional
S[q(τ)] = ∫ dτ [ (1/2)(dq/dτ)² + (1/2) q²(1 − q) ].  (20)
The potential q²(1 − q)/2 in action (20) has q_0 = 0 as the metastable minimum and q_e = 1 as the escape point. For computational purposes, this potential is distorted in the region q ≫ q_e to generate another (lower) minimum at q_1 (≫ q_e). Numerically, the two potential minima q_0 and q_1 are used to fix the ends of the evolving string in the q(τ)-function space. The stationary solution of the string evolution equation is the MAP, from which the saddle point of S[q(τ)], i.e., the bounce q_b(τ), can be obtained. Since the potential profile is untouched in the classically forbidden region (q_0 ≤ q ≤ q_e), the bounce so obtained is not affected by the potential distortion far away. The bounce action S_b ≡ S[q_b(τ)] is obtained to be 0.5337, very close to the exact result 8/15. The determinant ratio
|det′ H[q_b(τ)]| / det H(q_0) = |det′(−∂²/∂τ² + 1 − 3q_b(τ))| / det(−∂²/∂τ² + 1) = 1/60  (21)
is obtained to be 0.0142, close to 1/60. These results are obtained for N = 200, M = 200, and the total imaginary time duration T = 20. Better agreement with the exact results can certainly be achieved by using longer imaginary time duration, vector space of higher dimensionality, and finer resolution in discretizing the string.
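These benchmark numbers can be cross-checked against the WKB reduction of the bounce action derived in Appendix A, S_b = 2 ∫ √(2mU(q)) dq. The following one-function sketch (ours, not part of the paper's method) evaluates it for U(q) = q²(1 − q)/2 with m = 1:

```python
import math

# WKB cross-check (illustrative, not the authors' code) of the bounce action
# for the quadratic-plus-cubic potential U(q) = q^2 (1 - q) / 2 with m = 1:
# S_b = 2 * Integral_{0}^{1} sqrt(2 U(q)) dq = 2 * Integral q * sqrt(1 - q) dq.
def bounce_action_wkb(n=200_000):
    s, h = 0.0, 1.0 / n
    for k in range(n):
        q = (k + 0.5) * h              # midpoint rule on [q0, qe] = [0, 1]
        s += math.sqrt(2.0 * (q * q * (1.0 - q) / 2.0)) * h
    return 2.0 * s

print(bounce_action_wkb())   # ~ 0.53333 = 8/15
```

The quadrature reproduces the exact value 8/15 quoted above, confirming that the string-method result 0.5337 is accurate to a fraction of a percent.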
IV. CONCLUDING REMARKS
Quantum tunneling in macroscopic systems is intimately related to the important issue of quantum dissipation, which arises from the coupling to environmental variables. This coupling can modify the tunneling itself, as many prior works have shown [5,17,23,24]. However, it should be noted that regardless of the particulars of the quantum dissipation model, the net result is to decrease the escape rate. Hence the rate calculated for nondissipative quantum tunneling may be regarded as an upper bound on the rate(s) with nonzero dissipation.

In this regard it should also be noted that, as a tool for the numerical evaluation of tunneling rates in the path integral formalism, the quantum string method is directly generalizable to field theoretic problems, requiring only additional computational resources.

APPENDIX A: THE BOUNCE

The bounce q_b(τ) is a solution of the imaginary-time classical equation of motion,

m d²q/dτ² = U′(q),

subject to the boundary condition q(−T/2) = q(T/2) = q_0 for T → ∞. The qualitative behavior of q_b(τ) is suggested by the analogy with the equation of motion for a particle of mass m in the inverted potential −U(q), in which q_0 now corresponds to the top of the hill and q_e to the classical turning point. The particle would spend most of its time at q_0 (due to zero velocity), but, in the course of an arbitrarily long interval of time, it would make a brief excursion to the point q_e and then return to q_0 (see Fig. 1b). Note that

(m/2)(dq/dτ)² − U[q(τ)] = −U(q_0) = 0

is a constant of motion for q_b(τ). This means dq_b/dτ vanishes at q_0 and q_e. The bounce q_b(τ) shown in Fig. 1b is centered at τ_c = 0 along the τ axis. Because of the time-translation invariance of the action, the bounce solutions are also given by q_b(τ − τ_c), where τ_c is an arbitrary center of the bounce. This symmetry property leads to a zero eigenvalue for the Hessian of S at q_b(τ), −m∂²_τ + U″[q_b(τ)]. The corresponding eigenfunction is given by
u^(2)_b(τ) = √(m/S_b) (dq_b/dτ),
where √(m/S_b) is the normalization factor derived from the action of the bounce, S_b = ∫ m (dq_b/dτ)² dτ. Note that dq_b/dτ has a zero at the center of the bounce. Therefore, u^(2)_b(τ) has a node and cannot be the eigenfunction with the lowest eigenvalue: there must be a nodeless eigenfunction, u^(1)_b(τ), with a negative eigenvalue. This implies that the bounce is not a minimum of the action but a saddle point. The negative eigenvalue requires a proper analytical continuation in evaluating the functional integral in (4). This leads to a complex energy in (6).
Using the constant of motion, the bounce action can be reduced to the form
S_b = ∫_{−T/2}^{T/2} m (dq_b/dτ)² dτ = 2 ∫_{q_0}^{q_e} √(2mU(q)) dq.
It is seen that e^{−S_b/2ħ} is the familiar WKB exponential factor for the amplitude of the tunneling wave. Accordingly, e^{−S_b/ħ} is the exponential factor for the current density of the tunneling wave, which is directly related to the rate of decay. This is reflected in Eq. (7). We want to remark that for one-dimensional quantum mechanics, the tunneling rate (7) derived from the functional integral agrees with that obtained by the standard WKB method of wave mechanics [4].
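The same reduction applies to the dimensionless washboard action (16), which has unit mass in the scaled time variable. The following sketch (ours, for illustration only) evaluates S̄_b(x) = 2 ∫_{φ_0}^{φ_e} √(2[u(φ) − u(φ_0)]) dφ numerically and reproduces the trend that the bounce action shrinks as the bias x → 1:

```python
import math

# Numerical WKB check (illustrative, not the authors' code): the dimensionless
# bounce action for the washboard potential u(phi) = -cos(phi) - x*phi is
# S_b(x) = 2 * Integral_{phi0}^{phie} sqrt(2 * [u(phi) - u(phi0)]) dphi.
def u(phi, x):
    return -math.cos(phi) - x * phi

def escape_point(x):
    """Solve u(phi_e) = u(phi_0) on the far side of the barrier by bisection."""
    phi0 = math.asin(x)
    lo, hi = math.pi - phi0, phi0 + 2.0 * math.pi   # barrier top .. next minimum
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if u(mid, x) > u(phi0, x):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def bounce_action(x, n=100_000):
    phi0, phie = math.asin(x), escape_point(x)
    h = (phie - phi0) / n
    s = 0.0
    for k in range(n):
        phi = phi0 + (k + 0.5) * h                  # midpoint rule
        s += math.sqrt(2.0 * max(0.0, u(phi, x) - u(phi0, x))) * h
    return 2.0 * s

for x in (0.2, 0.5, 0.8):
    print(x, bounce_action(x))
```

The computed values grow monotonically as x decreases, consistent with the rough range S̄_b ~ 2 to 10 quoted in the text for x between 0.8 and 0.2.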
FIG. 2. The tilted washboard potential U(φ) for I/I_c = 0.3.

FIG. 3. A sequence of the φ(τ) profiles along the MAP for x = 0.1. From bottom to top, the first curve denotes φ(τ) = φ_0, which is a local minimum of S̄[φ(τ)], the third curve denotes the bounce φ_b(τ), and the last curve denotes another local minimum of S̄[φ(τ)] with φ(0) = φ_1 and φ(−T̄/2) = φ(T̄/2) = φ_0.

FIG. 4. The bounce φ_b(τ), plotted for a few selected values of x.

FIG. 5. The dimensionless action along a segment of the MAP starting from φ(τ) = φ_0 and ending at φ_b(τ). Here S̄ is plotted as a function of the arc length s in the φ(τ)-function space for a few selected values of x. The profile of φ(τ) = φ_0 is taken as the reference point s = 0, at which S̄ has been set to be zero by a constant shift of the potential.

FIG. 6. The bounce action S̄_b plotted as a function of x.

FIG. 7. The eigenfunction u^(1)_b(τ) of the Hessian H[φ_b(τ)] with the negative eigenvalue, plotted for a few selected values of x.

FIG. 8. Stochastically measured Q(α), plotted for a few selected values of x.

FIG. 9. The determinant ratio γ in Eq. (18) plotted as a function of x.

FIG. 10. The dimensionless prefactor in Eq. (17), plotted as a function of x. Note the large variation of a factor > 10 as a function of x.

The numerical results give S̄_b/[√(cos φ_0)(φ_e − φ_0)²] = 0.469, approaching 8/15, and the determinant ratio γ = 0.0162, approaching 1/60, though φ_e − φ_0 = 1.45 is still large.
ACKNOWLEDGMENTS

T. Q. was supported by the Hong Kong RGC under grant 602805. J. S. and W. E. were supported by DOE under grant DE-FG02-03ER25587. P. S. was supported by the Hong Kong RGC under grants HKUST6073/02P and CA04/05.SC02.
[1] H. A. Kramers, Physica 7, 284 (1940).
[2] R. Landauer and J. A. Swanson, Phys. Rev. 121, 1668 (1961).
[3] J. S. Langer, Ann. Phys. (N.Y.) 41, 108 (1967); J. S. Langer, Phys. Rev. Lett. 21, 973 (1968).
[4] S. Coleman, Phys. Rev. D 15, 2929 (1977); C. G. Callan, Jr. and S. Coleman, Phys. Rev. D 16, 1762 (1977).
[5] A. O. Caldeira and A. J. Leggett, Phys. Rev. Lett. 46, 211 (1981).
[6] I. Affleck, Phys. Rev. Lett. 46, 388 (1981).
[7] W. E, W. Ren, and E. Vanden-Eijnden, Phys. Rev. B 66, 052301 (2002).
[8] W. E, W. Ren, and E. Vanden-Eijnden, J. Appl. Phys. 93, 2275 (2003).
[9] W. E, W. Ren, and E. Vanden-Eijnden, Zero-temperature string method for the study of rare events, unpublished.
[10] W. Ren, Comm. Math. Sci. 1, 377 (2003).
[11] T. Qian, W. Ren, and P. Sheng, Phys. Rev. B 72, 014512 (2005).
[12] V. Ambegaokar and B. I. Halperin, Phys. Rev. Lett. 22, 1364 (1969).
[13] T. A. Fulton and L. N. Dunkleberger, Phys. Rev. B 9, 4760 (1974).
[14] R. F. Voss and R. A. Webb, Phys. Rev. Lett. 47, 265 (1981).
[15] J. M. Martinis, M. H. Devoret, and J. Clarke, Phys. Rev. B 35, 4682 (1987).
[16] A. J. Leggett, Prog. Theor. Phys. (Suppl.) 69, 80 (1980); A. J. Leggett, J. Phys.: Condens. Matter 14, R415 (2002).
[17] A. O. Caldeira and A. J. Leggett, Ann. Phys. (N.Y.) 149, 374 (1983).
[18] M. Ueda and A. J. Leggett, Phys. Rev. Lett. 80, 1576 (1998).
[19] J. S. Langer and V. Ambegaokar, Phys. Rev. 164, 498 (1967).
[20] D. E. McCumber and B. I. Halperin, Phys. Rev. B 1, 1054 (1970).
[21] C. H. van der Wal et al., Science 290, 773 (2000).
[22] The scale of the imaginary-time action, √(E_J/E_C), can be written as E_J/ħω_p. For small bias current I ≪ I_c (x ≪ 1), the ground-state energy (≈ ħω_p/2) must be small compared to the barrier height ΔU (≈ 2E_J). Otherwise, the system loses metastability. For √(E_J/E_C) = 2.18 in our calculations, the barrier height is about 9 times the ground-state energy and metastability can be safely assumed. Further reduction of the junction capacitance would eventually destroy this metastability.
[23] E. Freidkin, P. S. Riseborough, and P. Hänggi, Z. Phys. B 64, 237 (1986).
[24] U. Weiss, Quantum Dissipative Systems (World Scientific, Singapore, 1993).
SYMMETRIES OF TRANSVERSELY PROJECTIVE FOLIATIONS

F. Lo Bianco, E. Rousseau, F. Touzet

17 Jan 2019 (arXiv:1901.05656)

Given a (singular, codimension 1) holomorphic foliation F on a complex projective manifold X, we study the group PsAut(X, F) of pseudo-automorphisms of X which preserve F; more precisely, we seek sufficient conditions for a finite index subgroup of PsAut(X, F) to fix all leaves of F. It turns out that if F admits a (possibly degenerate) transverse hyperbolic structure, then the property is satisfied; furthermore, in this setting we prove that all entire curves are algebraically degenerate. We prove the same result in the more general setting of transversely projective foliations, under the additional assumptions of non-negative Kodaira dimension and that for no generically finite morphism f : X′ → X the foliation f*F is defined by a closed rational 1-form.
Introduction
In this article we study the symmetries of holomorphic foliations, i.e. automorphisms (or birational transformations) of the ambient manifold which send each leaf to another leaf; we denote by Aut(X, F ) the group of such automorphisms. In particular, we focus on the following question: Question 1. Under which conditions does a finite index subgroup of Aut(X, F ) preserve each leaf of F ?
If the above condition is satisfied, we will say that the transverse action of Aut(X, F ) (on F ) is finite.
Example 1. Let F be a linear foliation on a compact complex torus X = C n /Λ. Then the group Aut(X, F ) contains the group of translations of X, and in particular its transverse action is infinite.
Example 2. Since the group of automorphisms of a projective variety of general type X is finite, so is the transverse action of Aut(X, F ) for any foliation F on X. By taking the pull-back foliation on a product X × Y (Y being for example a compact torus) one obtains a foliation with an infinite group of symmetries which has finite transverse action.
1.1. Main results. From now on we suppose that X is a complex projective manifold and that F is a (possibly singular) foliation of codimension 1. Recall that a birational transformation f : X ⇢ X is called a pseudo-automorphism if f induces an isomorphism U ≅ V between two Zariski-open sets such that codim(X \ U), codim(X \ V) ≥ 2; or, equivalently, if f and f^{−1} do not contract any hypersurface.
We say that F admits a transverse hyperbolic structure if, roughly speaking, outside a degeneracy divisor H ⊂ X the foliation admits local first integrals F i : U i → D which are uniquely defined up to left composition with automorphisms of D; see Definition 2.1. The third-named author showed in [Tou13] that F admits a transverse hyperbolic structure if the conormal bundle N * F is pseudo-effective and the positive part of its Zariski decomposition is non-trivial; see Remark 2.2. We denote by PsAut(X, F ) the group of pseudo-automorphisms of X which preserve F .
In this context, we prove that the foliation is essentially the pull-back of a foliation on a projective variety of general type, which implies the transverse finiteness of the action of PsAut(X, F ); furthermore, we obtain a result on entire curves on X:
Theorem A. Let X be a projective manifold and let F be a transversely hyperbolic codimension 1 foliation. Then
• there exists a generically finite morphism π : X ′ → X, a morphism ψ : X ′ → B onto a projective variety B of general type and a foliation G on B such that π * F = ψ * G; • the transverse action of PsAut(X, F ) is finite;
• any entire curve f : C → X is algebraically degenerate i.e. f (C) is not Zariski dense.
For a proof, see Theorem 3.1 and Theorem 3.2. This result should be seen as a generalization of well-known properties of hyperbolic curves. It is also important to remark that such a statement is wrong in the non-Kähler setting as we will see in the striking example of Inoue surfaces.
Transversely hyperbolic foliations are a special case of transversely projective foliations: in this case, roughly speaking, the distinguished first integrals have values in P 1 and they are uniquely defined up to left composition with automorphisms of P 1 (see Definition 2.3). In this context the description is less precise, and we are forced to introduce a dichotomy:
Theorem B. Let X be a projective manifold with κ(X) ≥ 0 and F be a transversely projective (possibly singular) foliation of codimension 1 on X. Then
• either there exists a generically finite morphism π : X ′ → X such that π * F is defined by a closed rational 1-form; • or the transverse action of PsAut(X, F ) is finite.
Remark that the first alternative contains the case of algebraically integrable foliations.
The proofs of the results follow the same overall strategy, although in the general case of transversely projective foliations one needs to address some additional technical difficulties:
• we apply a result of Corlette-Simpson [CS08] which allows to factor (see Definition 2.5) the monodromy of the structure either through a curve or through a quotient of the polydisk D N /Γ (the transverse hyperbolic and transverse projective cases are treated in detail in [Tou16] and [LPT16] respectively); • the case of curves can be treated almost by hand (in the case of a projective structure, we use a classification result of Cantat and Favre [CF03]); • for the case of quotients of the polydisk, we apply a result of Brunebarbe [Bru16], which ensures that the image of the morphism ψ : X D N /Γ is of (log-)general type, and in particular its group of pseudo-automorphisms is finite; • one shows that ψ is essentially equal to the Shafarevich morphism of the monodromy representation, hence it is invariant by PsAut(X, F ); then one can restrict to fibres (in the transverse projective case, one needs to apply [LB], hence the assumption on the Kodaira dimension).
1.2. A conjecture. In the context of fibrations (i.e. algebraically integrable foliations), Question 1 was studied by the first-named author in [LB]. Theorem A in loc-cit. suggests the following conjecture:
Conjecture 2. Let X be a projective manifold such that κ(X) ≥ 0, F be a foliation on X and L be a line bundle on X. Suppose that L admits a singular hermitian metric whose curvature form defines, up to sign, a transverse hermitian metric on F . Then a birational transformation of X preserving F and L has transversely finite action.
Here, by a transverse hermitian metric we mean a (semi-)positive closed (1, 1)-current which is invariant by the holonomy of F and which induces a smooth hermitian metric on the normal bundle N_F in codimension 1 (indeed, outside the singular locus of F); this is also Mok's definition of a semi-kähler structure [Mok00, Definition 1.2.1]. The third-named author showed in [Tou15] that, if F has codimension 1 and is regular, the existence of a closed positive and holonomy invariant current without atomic part implies the existence of such a transverse hermitian metric. Moreover, the latter can be chosen to be homogeneous (i.e. hyperbolic, euclidean or spherical, depending on the sign of the curvature tensor).
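As a toy illustration of such a transverse metric (our example, in the spirit of Example 1, not taken from the text): for the linear foliation F = {dz_1 = 0} on a torus X = C²/Λ, one may take

```latex
% Toy example (not from the paper): a flat transverse metric on a torus.
% For F = \{dz_1 = 0\} on X = \mathbb{C}^2/\Lambda, set
T \;=\; \tfrac{i}{2}\, dz_1 \wedge d\bar z_1 .
% Then dz_1 \wedge T = 0 (holonomy invariance), T \ge 0 is closed and smooth,
% and, since \psi is constant,
\Theta_T \;=\; -\tfrac{i}{\pi}\,\partial\bar\partial\psi \;=\; 0 ,
% so T is a transverse euclidean metric with degeneracy divisor N = 0.
```

Every translation of X preserves T while permuting the leaves, matching the infinite transverse action noted in Example 1.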
Remark 1.1. By the results proven in the forthcoming Section 3, it seems rather natural to state the same conjecture under more general assumptions on the transverse metric inherited from the curvature current of L (allowing for instance weaker regularity and additional degenaracies along invariant hypersurfaces).
1.3. Structure of the text. In Section 2 we present the formal definitions of transversely hyperbolic and projective structures, and give the interpretation of these definitions in terms of developing maps and monodromy; we also briefly recall some of the properties of Shimura modular orbifolds which will be used later, as well as the definition of factorization of a representation and a result of lifting of pseudoautomorphisms to finite étale covers. In Section 3 and 4 we prove Theorem A and B respectively; we also show that Theorem A cannot be extended to the general (non-Kähler) compact case. Finally, in Section 5 we describe the symmetries of codimension 1 foliations on compact complex tori; in particular, we show that Conjecture 2 is (trivially) satisfied in this case.
Preliminaries
2.1. Transverse structures on codimension 1 foliations. Throughout this section, we denote by X a complex (projective) manifold and by F a codimension 1 (possibly singular) foliation. By a (smooth) transverse structure on F we mean, roughly speaking, a geometric structure (in a broad sense: metric, homogeneous structure...) defined on the normal bundle N F which is invariant by the holonomy of F . Of course we need to specify the behavior at singular points of F ; furthermore, we will consider more generally singular transverse structure, which may degenerate (in a prescribed way) along an F -invariant hypersurface H.
2.1.1. Definitions. Let us start with the formal definition of transverse hyperbolic structure, see [Tou15]. Caution! We use a different notation than [Tou15], where the metric and the associated curvature current are denoted by η T and −T respectively.
Definition 2.1. A (branched) transverse hyperbolic structure on F is the datum of a non-trivial positive closed (1, 1)-current T such that:
• T is invariant by the holonomy of F (or simply F -invariant), meaning that,
if ω is a local holomorphic 1-form defining F , we have ω ∧ T = 0; • T induces a singular hermitian metric on N F (in the sense of Demailly, see [Dem92]); • if Θ T denotes the curvature current associated to T , we have
Θ T = −(T + [N ]), where [N ]
denotes the current of integration along a Q-effective divisor N .
The hypersurface H := Supp(N ) is the degeneracy locus of the transverse hyperbolic structure.
A closed non-trivial semi-positive current satisfying the first and second conditions is called a singular transverse metric of F. It can then be locally written as T = i e^{2ψ} ω ∧ ω̄, where ω is a local closed one-form defining F and ψ is L¹_loc. If x ∈ X is a regular point of F, we can describe the foliation by a local coordinate dz = 0, so that locally

T = i e^{2ψ(z)} dz ∧ dz̄.

The associated curvature current is then locally defined as

Θ_T = −(i/π) ∂∂̄ψ.
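As a consistency check of Definition 2.1 (our computation, not carried out in the text), the Poincaré metric on the disk satisfies the defining identity with N = 0 in a local transverse coordinate z:

```latex
% Sanity check (not from the paper): the Poincar\'e metric fits Definition 2.1
% with N = 0.  Take
T \;=\; \frac{i}{\pi}\,\frac{dz\wedge d\bar z}{(1-|z|^2)^2}
  \;=\; i\,e^{2\psi}\, dz\wedge d\bar z,
\qquad
\psi \;=\; -\log\!\bigl(1-|z|^2\bigr)-\tfrac12\log\pi .
% Since
\partial\bar\partial\psi \;=\; \frac{dz\wedge d\bar z}{(1-|z|^2)^2},
% one gets
\Theta_T \;=\; -\frac{i}{\pi}\,\partial\bar\partial\psi \;=\; -T ,
% i.e. \Theta_T = -(T + [N]) with N = 0.
```

This is the local model that the distinguished first integrals F_i : U_i → D of the next subsection pull back.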
A transversely (branched) euclidean (respectively, spherical) structure is defined in an analogous way by imposing that
Θ T = −[N ] (respectively, Θ T = T − [N ]).
Remark 2.2. If a foliation F admits a transverse hyperbolic structure, then N*_F is pseudo-effective. Indeed, a positive, holonomy invariant current T defining the hyperbolic structure defines a singular hermitian metric on N_F; its curvature form, which is equal to −T − [N], represents the class c_1(N_F). Therefore, the class c_1(N*_F) = −c_1(N_F) is pseudo-effective. Moreover N can be chosen to coincide with the negative part of the Zariski decomposition of c_1(N*_F) (see [Bou04]). In this situation, the first part of the alternative exactly occurs when the positive part is non-trivial, and the structures are then unique.
Instead of considering local first integrals with values in D, one can more generally pick first integrals with values in P¹, well-defined up to automorphisms of P¹. In order to define a projective structure (see [LP07, LPT16]), one imposes the following conditions on the singular locus (i.e. the hypersurface where the structure degenerates):
Definition 2.3. A transverse projective structure on F is the data of a triple (E, ∇, σ) where
• E is a rank 2 vector bundle;
• ∇ is a flat meromorphic connection on E;
• σ : X ⇢ P(E) is a meromorphic section of P(E) → X such that, if R denotes the Riccati foliation on P(E) determined by (the projectivization of) ∇, F = σ*R. Such triples are considered modulo a natural relation of birational equivalence (see [LPT16]).
As explained in [Tou16, §6.1], transversely hyperbolic foliations are a special case of transversely projective foliations.
2.1.2. Distinguished first integrals and monodromy representation. Let F be a transversely hyperbolic foliation on a manifold X; denote by H ⊂ X the polar hypersurface of T. Remark that, locally at points of X_0 := X \ H, F can be defined by local first integrals F_i : U_i → D which are uniquely defined modulo composition to the left by elements of Isom(D) = Aut(D) = PSL_2(R). Following such distinguished first integrals along closed paths yields a developing map

dev : X̃_0 → D,

where X̃_0 denotes the universal cover of X_0, and a monodromy representation

ρ : π_1(X_0) → PSL_2(R)

such that ρ(γ) ∘ dev = dev ∘ γ for all γ ∈ π_1(X_0). Here, we identify π_1(X_0) with the group of deck transformations of the universal cover X̃_0 → X_0.
Similarly, if F is a transversely projective foliation and H ⊂ X denotes the polar hypersurface of the connection ∇, locally at points of X_0 := X \ H the foliation F admits distinguished (meromorphic) first integrals

F_i : U_i ⇢ P¹

which are uniquely defined modulo composition to the left by elements of Aut(P¹) = PSL_2(C). Following such distinguished first integrals along closed paths yields a (meromorphic) developing map

dev : X̃_0 ⇢ P¹,

where X̃_0 denotes the universal cover of X_0, and a monodromy representation

ρ : π_1(X_0) → PSL_2(C)

such that ρ(γ) ∘ dev = dev ∘ γ for all γ ∈ π_1(X_0).
2.1.3. Singularities of transverse projective structures. Let us conclude the introduction to transverse projective (or hyperbolic) structures by a brief discussion on the singular locus H introduced above.
Definition 2.4. We say that a transversely projective foliation has regular singularities if the corresponding connection has at worst regular singularities in the sense of [Del70].
As remarked in [Tou16, §6.1], transversely hyperbolic foliations have regular singularities when considered as transversely projective foliations.
For the purposes of this article, one can simply keep in mind the following property: if a transversely projective foliation has regular singularities and the monodromy (of a small loop) around an irreducible hypersurface D ⊂ H is trivial, then a distinguished first integral defined in a neighborhood of D extends meromorphically through D.
Shimura modular orbifolds.
Recall that an orbifold is a Hausdorff topological space which is locally modelled on finite quotients of C^n. One defines an orbifold cover as a map f : X → Y between orbifolds which is locally conjugated to a quotient map

C^n/Γ_0 → C^n/Γ_1,   Γ_0 ≤ Γ_1.
Then one can see that given an orbifold X there exists a universal orbifold cover π : X → X; the orbifold fundamental group π orb 1 (X) is then defined as the group of deck transformations of π. For example, if U is a simply connected complex manifold and G ≤ Aut(U ) is a discrete subgroup such that the stabilizer of each point of U is finite, then the quotient X = U/G admits a natural orbifold structure such that π orb 1 (X) = G. Following Corlette and Simpson [CS08] (see also [LPT16] and references therein), a polydisk Shimura modular orbifold is a quotient H of a polydisk D n by a group of the form U (P, Φ) where P is a projective module of rank two over the ring of integers O L of a totally imaginary quadratic extension L of a totally real number field F ; Φ is a skew hermitian form on P L = P ⊗ OL L; and U (P, Φ) is the subgroup of the Φ-unitary group U (P L , Φ) consisting of elements which preserve P . This group acts naturally on D n where n is half the number of embeddings σ : L → C such that the quadratic form
√ −1Φ(v, v) is indefinite.
The aforementioned action is explained in detail in [CS08, §9]. Note that there is one tautological representation
π orb 1 (D n /U(P, Φ)) ≃ SU(P, Φ)/{± Id} ↪ PSL 2 (C),
which induces for each embedding σ : L → C one tautological representation π orb 1 (D n /U(P, Φ)) → PSL 2 (C). The quotients D n /U(P, Φ) are always quasiprojective orbifolds, and when [L : Q] > 2n they are projective (i.e. proper/compact) orbifolds. The archetypical examples satisfying [L : Q] = 2n are the Hilbert modular orbifolds, which are quasiprojective but not projective.
Representations and factorization.
A crucial point in the proofs of our results consists in applying some results on the factorization of representations of fundamental groups.
Definition 2.5. Let X be a (complex) manifold and let ρ : π 1 (X) → G be a representation. We say that ρ factors through a map f : X → Y towards a manifold Y if there exists a representation ρ̃ : π 1 (Y ) → G such that ρ = ρ̃ • f * . Similarly, we say that ρ factors through a map f : X → Y towards an orbifold Y if there exists a representation ρ̃ : π orb 1 (Y ) → G such that ρ = ρ̃ • f * .
A classical question about the representations of fundamental groups of manifolds is the existence of a "universal factor", in the sense of the following definition. Note that the classical definition of Shafarevich morphism deals with (images in X of) proper normal complex spaces instead of algebraic subvarieties.
Definition 2.6. Let X be a smooth quasi-projective variety and let ρ : π 1 (X) → G be a representation which factors through an algebraic morphism f : X → Y . We say that f is the Shafarevich morphism associated to ρ if, for any normal connected algebraic subvariety Z ⊂ X, we have the equivalence
ρ(π 1 (Z)) = {e} ⇔ f (Z) = {pt.}.
Remark that, if it exists, the Shafarevich morphism associated to a representation is unique.
2.4. Lifting pseudo-automorphisms. In order to get rid of orbifold points in factorizations of representations, we will need some results on lifting pseudo-automorphisms to finite étale covers.
Lemma 2.7. Let G be a finitely generated group and H ≤ G a finite index subgroup. Then there exists a finite index subgroup
H ′ ≤ H such that φ(H ′ ) = H ′ for all φ ∈ Aut(G).
Proof. Denote by i H = [G : H] the index of H in G. Let K = ⋂_{g ∈ G} gHg −1 be the normal core of H. Then K is a normal, finite index subgroup of G contained in H; more precisely, its index satisfies i K = [G : K] ≤ i H^{i H}. Remark that, for a fixed i, there are only finitely many normal subgroups of G with index ≤ i: indeed, such a subgroup can be identified with the kernel of a morphism G → G i , where G i is a finite group of cardinality ≤ i. Since there exist only finitely many such groups G i and since G is finitely generated, there exist only finitely many equivalence classes of such morphisms, hence only finitely many normal subgroups of G with index ≤ i.
Fix i = i H^{i H}, let S i denote the finite set of normal subgroups of G with index ≤ i, and let H ′ = ⋂_{G ′ ∈ S i} G ′ .
Then H ′ is a normal subgroup of G with finite index, H ′ ⊆ H, and, since every automorphism φ of G fixes the set S i , a fortiori we have φ(H ′ ) = H ′ .
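A toy example (ours, not from the source) illustrating the phenomenon behind Lemma 2.7: a finite index subgroup H need not be preserved by automorphisms of G, but it contains a characteristic finite index subgroup, as the lemma provides.

```latex
% Take G = Z^2 and the index-2 subgroup H = Z (+) 2Z.
G=\mathbb{Z}^2,\qquad H=\mathbb{Z}\oplus 2\mathbb{Z},\qquad [G:H]=2.
% The coordinate swap is an automorphism of G which does not preserve H:
\varphi(m,n)=(n,m)\ \Longrightarrow\ \varphi(H)=2\mathbb{Z}\oplus\mathbb{Z}\neq H.
% But the finite index subgroup
H'=2\mathbb{Z}^2=\ker\bigl(G\twoheadrightarrow(\mathbb{Z}/2)^2\bigr)\subseteq H
% is characteristic: \varphi(H')=H' for every \varphi\in\operatorname{Aut}(G),
% since every automorphism maps 2G=\{2v : v\in G\} onto itself.
```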
Corollary 2.8. Let X be a quasi-projective complex manifold and ν : X ′ → X be a finite étale cover. Then there exists a finite étale cover η : X ′′ → X ′ of X ′ such that every pseudo-automorphism of X lifts to a pseudo-automorphism of X ′′ .
Proof. Let G = π 1 (X) and H = π 1 (X ′ ); since X is quasi-projective, G is finitely generated. The injection ν * : H → G allows us to identify H with a finite index subgroup of G; by Lemma 2.7, we can find a finite index subgroup H ′ ≤ H which is stable under all automorphisms of G. Let η : X ′′ → X ′ be the finite étale cover corresponding to the inclusion H ′ ≤ H and let
π = ν • η : X ′′ → X. Now let f : X ⇢ X be a pseudo-automorphism; let U = dom(f ) be the domain of f and X ′′ U = π −1 (U ) be the inverse image of U . Then the composition f • π : X ′′ U → X lifts to a (rational) morphism X ′′ U → X ′′ if and only if (f • π) * π 1 (X ′′ U ) ⊂ π * π 1 (X ′′ ) = H ′ . Remark that, if W ⊂ Y is an open subset of a complex manifold Y whose complement Y \ W is an analytic subset of codimension ≥ 2, then π 1 (Y ) ≅ π 1 (W ). More accurately, if q ∈ W , the inclusion i : W ↪ Y induces an isomorphism of fundamental groups i * : π 1 (W, q) ≃ π 1 (Y, q).
This implies that the inclusion induces an isomorphism
π 1 (U ) ∼ = π 1 (X). Similarly, if V ⊂ X denotes the domain of f −1 , the inclusion induces an isomorphism π 1 (V ) ∼ = π 1 (X).
Now, it is not hard to see that the composition
π 1 (X) ≅ π 1 (U ) → π 1 (X) ≅ π 1 (V ) → π 1 (X),
where the first arrow is f * and the second is (f −1 ) * ,
is the identity morphism; this means that f induces an automorphism of π 1 (X).
Therefore,
(f • π) * π 1 (X ′′ U ) = f * H ′ ⊂ π * π 1 (X ′′ ) = H ′ , which concludes the proof.
Corollary 2.9. Let F be a codimension 1 foliation on a smooth projective manifold X; assume that F admits a transverse hyperbolic or projective structure and let G ≤ PsAut(X, F ) be the subgroup of pseudo-automorphisms of X which preserve F and its transverse structure. Let X 0 = X \ H be the smooth locus of the structure and let X ′ 0 → X 0 be a finite étale cover. Then, after possibly replacing X ′ 0 by a finite étale cover, all elements of G lift to pseudo-automorphisms of X ′ 0 .
Proof. By Corollary 2.8, we only need to show that a pseudo-automorphism of X which preserves F and its transverse structure restricts to a pseudo-automorphism of X 0 . In order to prove this, one needs to check that if D ⊂ X is a hypersurface which is not contained in H, then the strict transform f (D) (which is a hypersurface because f does not contract any divisor) is not contained in H. Indeed, if p ∈ D denotes a point where f is well-defined and a local isomorphism, the push-forward by f defines a transverse projective structure for F in a neighborhood of f (p); since we assumed that the transverse structure is preserved, this implies that f (p) ∉ H.
Remark that the uniqueness assumption is automatically satisfied in the hyperbolic case (see Remark 2.2); in the general case, one needs to impose that F doesn't come from a foliation defined by a closed rational form (see Lemma 4.6).
The transversely hyperbolic case
Throughout this section, we denote by F a foliation admitting a (branched) transverse hyperbolic structure, by H ⊂ X the hypersurface along which the structure degenerates, namely the support of the negative part of c 1 (N * F ), and by X 0 := X \ H the regular locus of the structure. The monodromy of the structure is a homomorphism ρ : π 1 (X 0 ) → PSL 2 (R).
3.1. Finiteness of the transverse action. The goal of this section is to prove the following:
Theorem 3.1. Let X be a projective manifold and let F be a transversely hyperbolic codimension 1 foliation. Then
• there exists a generically finite morphism f : X ′ → X (which is finite étale over X 0 ) and a fibration π : X ′ → B onto a projective variety B of general type such that f * F is the pull-back of a foliation on B;
• the transverse action of PsAut(X, F ) is finite.
Proof. As we saw in Remark 2.2, the existence of a hyperbolic structure on F implies that the conormal bundle N * F is pseudo-effective. Therefore, by [Tou16] (one needs to combine Theorem 1, Proposition 4.6 and Theorem 4 of loc.cit. and remark that we are in the case ε = 1) we have two (non-mutually exclusive) possibilities:
(1) either F is algebraically integrable;
(2) or there exists a morphism
Ψ : X → H = D N /Γ such that F = Ψ * G i ,
where G i denotes one of the modular foliations on H. First, we may assume that the image by ρ of π 1 (X 0 ) is torsion-free. Indeed, by Selberg's lemma this is true for a finite index subgroup G of π 1 (X 0 ); replace X 0 by its finite étale cover X ′ 0 → X 0 corresponding to G. The pull-back foliation F ′ on X ′ 0 is naturally endowed with a transverse hyperbolic structure, whose monodromy identifies with the restriction of ρ to G. By [Tou13, Theorem 1] the transverse hyperbolic structure is unique, so that in particular it is preserved by PsAut(X, F ). Therefore, by Corollary 2.9, after possibly taking another finite étale cover, all elements of PsAut(X, F ) lift to pseudo-automorphisms of X ′ 0 ; of course, the lifts preserve F ′ and its transverse hyperbolic structure. Let X ′ be a smooth compactification of X ′ 0 ; in order to show the claim for the pair (X, F ), it suffices to show it for the pair (X ′ , F ′ ). Therefore, from now on we will suppose that the monodromy of the structure is torsion-free.
Let T be a current which defines the transverse hyperbolic structure and let Θ T = −T − [N ] be the associated curvature current. Then −Θ T is an F -invariant closed positive current (which represents c 1 (N * F )), and by [Tou13, Proposition 2.10(vi)] the negative part {N } ∈ H 1,1 (X, R) is rational. This implies that the monodromy of the structure around the components of H is finite, hence trivial since we assumed that the monodromy is torsion-free. Therefore, by the Riemann extension theorem, a distinguished first integral defined in a small open set in the complement of H extends through H, meaning that the representation ρ actually factors through π 1 (X).
Let us treat first the case where F is algebraically integrable or, equivalently, the monodromy Γ = ρ(π 1 (X)) ≤ PSL 2 (R) of the transverse structure is discrete and cocompact (see [Tou16, Proposition 4.6]). Let X̃ → X be the universal cover and π̃ : X̃ → D be the developing map. By [Tou16, Theorem 3.2 and §3.2] π̃ is surjective and has connected fibres. The fibration obtained by quotient π : X → C := D/Γ defines the foliation F . The curve C is uniformized by the disk, therefore it is of general type. In order to conclude, it suffices to remark that elements of PsAut(X, F ) preserve π by definition, and the action on C is identified with the transverse action on F ; since the group of automorphisms of a curve of general type is finite, the claim is proved. From now on suppose that we are in the second case: there exists a morphism
Ψ : X → H = D N /Γ such that F = Ψ * G i ,
where G i denotes one of the modular foliations on H. The monodromy ρ factors through Ψ| X0 . As before, we may assume that Γ is torsion-free: indeed, by Selberg's lemma there exists a finite index subgroup Γ ′ ≤ Γ which is torsion-free. Replace X 0 by its finite étale cover e : X ′ 0 → X 0 corresponding to the finite index subgroup Ψ −1 * (Γ ′ ) ≤ π 1 (X 0 ). By Corollary 2.9, up to taking another finite étale cover all elements of PsAut(X, F ) lift to pseudo-automorphisms of X ′ 0 . If X ′ denotes a smooth compactification of X ′ 0 such that e extends through X ′ \ X ′ 0 , we can reason on the pull-back foliation e * F on X ′ .
If we denote by
X π → B → Z = Im(Ψ) ⊂ D N /Γ
the Stein factorization of Ψ, then π is PsAut(X, F )-equivariant: indeed, if F ⊂ X is a fibre of π and f ∈ PsAut(X, F ), the algebraic subvariety Ψ(f (F )) ⊂ D N /Γ is G i -invariant, hence reduced to a point (the leaves of the modular foliations are Zariski dense), so that f maps fibres of π into fibres of π. Since the quotient D N /Γ is smooth and the subvariety Z ⊂ D N /Γ is compact, by [Bru18] Z has big cotangent bundle; therefore, by [CP15a], Z is a projective variety of general type, hence so is B by pull-back of canonical forms. Since the group of birational transformations of a variety of general type is finite, a finite index subgroup G ≤ PsAut(X, F ) fixes each fibre of π.
Let G be the pull-back foliation of G i | Z on B; we have shown that F = π * G and that B is of general type. Furthermore, the finite index subgroup G ≤ PsAut(X, F ) preserves each fibre of π, hence in particular each leaf of F . This concludes the proof.
3.2. Entire curves and special manifolds.
Theorem 3.2. Let X be a projective manifold and F a transversely hyperbolic foliation of codimension 1 on X. Then any entire curve f : C → X is algebraically degenerate, i.e. f (C) is not Zariski dense.
Remark 3.3. If X is not projective, the statement is false as we will see below in the example of Inoue surfaces which always admit Zariski dense entire curves.
We shall start with a lemma.
Lemma 3.4. Let X be a projective manifold and F a transversely hyperbolic foliation of codimension 1 on X. Then any entire curve f : C → X is tangent to F .
Proof. Let us denote by h the transverse metric, which is a smooth transverse metric of constant curvature −1 on X \ (Sing(F ) ∪ H) (where H is the degeneracy locus of the metric). Suppose f : C → X is not tangent to F . In particular, f (C) ⊄ Sing(F ) ∪ H. Therefore f * h induces a non-zero singular metric γ(t) = γ 0 (t) i dt ∧ dt̄ on C, where log γ 0 is subharmonic and Ric γ ≥ γ in the sense of currents. But the Ahlfors-Schwarz lemma (see [Dem97]) implies that γ ≡ 0, a contradiction. Now, we can prove the theorem.
Proof. From the preceding lemma, we can suppose that f : C → X is tangent to F . From the study of transversely hyperbolic singular foliations [Tou16], we have two cases: either F is a fibration and all leaves are algebraic, in which case f (C) is contained in an algebraic leaf and we are done; or F is obtained as the pull-back Ψ * G, where Ψ is a morphism of analytic varieties between X and the quotient H = D n /Γ of a polydisk by an irreducible lattice Γ ⊂ (Aut D) n and G is one of the tautological foliations. In the latter case Ψ • f : C → H is tangent to G and is constant thanks to the hyperbolicity of the leaves on H [RT18]; hence f (C) is contained in a fibre of Ψ, which is an algebraic subvariety. This concludes the proof.
It seems interesting to relate the above Theorem 3.2 to the theory of special manifolds introduced by Campana (see [Cam04] for the definition of special manifolds and the surrounding conjectures).
Campana has conjectured that special manifolds correspond to projective varieties admitting a Zariski dense entire curve. In particular, Theorem 3.2 suggests the following question.
Question 3. Let X be a projective manifold and F a transversely hyperbolic (singular) foliation of codimension 1 on X. Prove that X is not special.
The above results also suggest characterizing special manifolds in terms of an exceptional locus, in the spirit of Lang's conjectures for varieties of general type [Lan86].
Let Exc(X) ⊂ X denote the Zariski closure of the union of the images of all non-constant holomorphic maps C → X.
Conjecture 4 (Lang). Let X be a complex projective manifold. Then X is of general type if and only if Exc(X) ≠ X.
Let X be a projective manifold and consider X 1 := P(T X ) the projectivized tangent bundle. All entire curves f : C → X can be lifted as entire curves f [1] : C → X 1 . Now, we define an exceptional locus in X 1 as: Exc 1 (X) ⊂ X 1 is the Zariski closure of the union of all the images of lifted entire curves f [1] (C).
We propose the following conjecture which generalizes Lang's conjecture to the setting of special manifolds.
Conjecture 5. Let X be a projective manifold. Then X is not special if and only if Exc 1 (X) ≠ X 1 .
This suggests the following question.
Question 6. Let X be a projective manifold and F a holomorphic foliation on X such that all entire curves are tangent to F . Is it true that all entire curves in X are algebraically degenerate and that X is not special?
More generally, one may inductively consider jet spaces π 0,k : X k → X (see [Dem97]) and the corresponding exceptional loci Exc k (X), obtained as the Zariski closure of the union of all the images of lifted entire curves f [k] (C) ⊂ X k . Then we ask the following question.
Question 7. Let X be a projective manifold. Suppose there is an integer k ≥ 0 such that Exc k (X) ≠ X k . Is it true that all entire curves in X are algebraically degenerate and that X is not special?
This question is also motivated by recent results of Demailly [Dem11] and Campana-Păun [CP15b], which imply the following weaker statement: X is of general type if and only if there is an integer k ≥ 1 such that Bs(O X k (m) ⊗ π 0,k * A −1 ) ≠ X k , where A is an ample line bundle on X. The relationship with the previous question is clear from the now classical fact that Exc k (X) ⊂ Bs(O X k (m) ⊗ π 0,k * A −1 ) (see [Dem97]).
3.3. A transversely hyperbolic foliation with infinite transverse action. In this section, we will see that if the Kähler assumption is dropped, one can construct transversely hyperbolic foliations with non-finite transverse action and Zariski dense entire curves.
More precisely, let us consider Inoue surfaces [Ino74], which are quotients of H × C, where H is the upper half-plane, by certain infinite discrete subgroups. They are equipped with a natural transversely hyperbolic foliation. There are three types of Inoue surfaces, distinguished by the type of their fundamental group: S M , S (+) and S (−) .
Let us describe the Inoue surfaces of type S M . Let M = (m i,j ) ∈ SL(3, Z) be a unimodular matrix with eigenvalues α, β, β̄ such that α > 1 and β ≠ β̄. We choose a real eigenvector (a 1 , a 2 , a 3 ) and an eigenvector (b 1 , b 2 , b 3 ) of M corresponding to α and β, respectively. Let G M be the group of analytic automorphisms of H × C generated by
• g 0 : (w, z) → (αw, βz),
• g i : (w, z) → (w + a i , z + b i ) for i = 1, 2, 3.
S M is defined to be the quotient surface H × C/G M .
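To see why S M carries a natural transversely hyperbolic foliation, note that the foliation {dw = 0} on H × C is preserved by G M , and (a routine verification, not spelled out in the source) the Poincaré metric on the factor H is invariant under the generators, since α > 0 and the a i are real:

```latex
g_0^*\!\left(\frac{|dw|^2}{(\operatorname{Im}w)^2}\right)
 = \frac{|\alpha\,dw|^2}{(\operatorname{Im}\alpha w)^2}
 = \frac{\alpha^2|dw|^2}{\alpha^2(\operatorname{Im}w)^2}
 = \frac{|dw|^2}{(\operatorname{Im}w)^2},
\qquad
g_i^*\!\left(\frac{|dw|^2}{(\operatorname{Im}w)^2}\right)
 = \frac{|dw|^2}{(\operatorname{Im}(w+a_i))^2}
 = \frac{|dw|^2}{(\operatorname{Im}w)^2}.
```

The metric therefore descends to a transverse hyperbolic metric for the induced foliation on S M .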
Consider the automorphisms (w, z) → (nw, nz) of H × C. They induce automorphisms h n of S M which have infinite transverse action provided n ≠ α l/p for all integers l, p. One should also remark that in such surfaces all (non-constant) entire curves are tangent to the foliation and are Zariski dense (their topological closure is a real torus of dimension 3).
Here, the representation ρ F : π 1 (S M ) → PSL(2, R) associated to the transverse hyperbolic structure takes values in the affine subgroup Aff(2, R) of PSL(2, R), and its linear part ρ 1 F : π 1 (S M ) → (R >0 , ×) has non-trivial image. It is worth noticing that this situation cannot occur in the Kähler realm. Indeed, suppose that X is a compact Kähler manifold carrying a transversely hyperbolic codimension one foliation which is also transversely affine and such that the linear part ρ 1 F : π 1 (X) → (R >0 , ×) has non-trivial image. To wit, there exists on X an open cover (U i ) such that for every i, F is defined by dw i = 0, where w i : U i → H is submersive on U i − Sing F , and such that the glueing conditions w i = ϕ ij • w j are defined by locally constant elements ϕ ij of Aff(2, R). In particular there exist locally constant cocycles a ij ∈ R >0 such that
(3.1) dw i = a ij dw j ,
and the normal bundle N F is thus numerically trivial. On the other hand, the existence of a transverse hyperbolic structure directly implies that N F is equipped with a metric whose curvature is a non-trivial semi-negative form. This shows that c 1 (N F ) ≠ 0, a contradiction.
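One way to make the numerical triviality of N F explicit (a sketch; by (3.1) the transition cocycle of N F is, up to inversion, the locally constant cocycle (a ij )): a cocycle with values in the constant sheaf R >0 lifts through the sheaf isomorphism exp : R → R >0 , so its image in H 1 (X, O X *) comes from H 1 (X, O X ), and exactness of the exponential sequence kills the Chern class:

```latex
H^1(X,\mathbb{R}_{>0}) \xrightarrow{\ \sim\ } H^1(X,\mathbb{R})
 \longrightarrow H^1(X,\mathcal{O}_X)
 \longrightarrow H^1(X,\mathcal{O}_X^{*})
 \xrightarrow{\ \delta\ } H^2(X,\mathbb{Z}),
\qquad
c_1(N_{\mathcal{F}})=\delta\,[(a_{ij})]=0,
```

since δ vanishes on classes coming from H 1 (X, O X ).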
The transversely projective case
Throughout this section we let X be a projective manifold, F be a transversely projective foliation of codimension 1 on X and PsAut(X, F ) ≤ PsAut(X) be the group of pseudo-automorphisms which preserve F .
Denote by E → X any rank two vector bundle such that the given projective structure on F is defined by a Riccati foliation R on P(E). The foliation R is defined by a (non-unique) flat meromorphic connection ∇ on E, which induces a monodromy representation
ρ ∇ : π 1 (X \ (∇) ∞ ) → SL 2 (C),
where (∇) ∞ denotes the divisor of poles of ∇. The monodromy representation of the projective structure is a representation
ρ : π 1 (X 0 ) → PSL 2 (C),
where X 0 := X \ H and H ⊂ (∇) ∞ denotes the divisor along which the projective structure degenerates. Such a representation is induced by ρ ∇ upon projectivization.
Proof of Theorem B. By [LPT16,Theorem D], at least one of the following is true:
(1) there exists a generically finite morphism π : X ′ → X such that π * F is defined by a closed rational 1-form;
(2) there exists a dominant rational map η : X ⇢ S to a ruled surface π : S → C and a Riccati foliation G defined on S (i.e. over the curve C) such that F = η * G;
(3) there exists a polydisk Shimura modular orbifold H = D N /Γ and an algebraic map ψ : X 0 → H such that the monodromy representation ρ factors through one of the tautological representations of π orb 1 (H) (up to a field automorphism of C). Furthermore the singularities of the transverse projective structure are regular.
In the first case, we fall into the first alternative of the statement; the second and the third cases are settled by Proposition 4.1 and Proposition 4.4 respectively whose proofs are given below.
4.1. The case of surfaces. In this section we treat the case of birational symmetries of foliations of surfaces. Most of the key results are contained in [CF03]. We want to prove the following:
Proposition 4.1. Suppose that there exist a dominant rational map η : X ⇢ S towards a surface S and a foliation G on S such that F = η * G. Then
• either there exists π : X ′ → X, where π is generically finite, such that π * F is defined by a closed rational 1-form;
• or the transverse action of PsAut(X, F ) is finite.
We start by the following lemma, which is well-known to specialists.
Lemma 4.2. Let G be a codimension one foliation on a complex manifold Y . Suppose that G is invariant by the flow of a vector field v on Y which is not everywhere tangent to G; then G is defined by a closed rational 1-form.
Proof. Let ω be a rational form on Y defining G; for example, one can take a form with values in N * G which defines G, and divide it by any non-zero meromorphic section of N * G. Let us show that the rational form ω̃ := ω/ω(v) is closed. It is enough to check that dω̃ = 0 at smooth points of G such that v is locally transverse to G; in a neighborhood of such a point we can find local coordinates (z, w 1 , . . . , w n ) = (z, w) such that
ω = a(z, w) dz, v = b(z, w) ∂ ∂z .
The condition that the flow of v preserves G means that b(z, w) = b(z) does not actually depend on w. Therefore ω̃ = dz/b(z) is closed, which concludes the proof.
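Explicitly, in the local coordinates above the computation reads:

```latex
\widetilde{\omega}
 = \frac{\omega}{\omega(v)}
 = \frac{a(z,w)\,dz}{a(z,w)\,b(z)}
 = \frac{dz}{b(z)},
\qquad
d\widetilde{\omega}
 = d\!\left(\frac{1}{b(z)}\right)\wedge dz
 = -\frac{b'(z)}{b(z)^{2}}\,dz\wedge dz
 = 0.
```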
The proof of the following lemma is essentially contained in [CP14], but we prove it again for the convenience of the reader. Lemma 4.3. Let F be a transversely projective foliation with abelian monodromy and at worst logarithmic singularities. Then F is defined by a closed rational 1-form.
Proof. Remark that abelian subgroups of PSL 2 (C) are conjugated to subgroups of (C, +) or of (C * , ×). Therefore, we may assume that the monodromy is either additive or multiplicative.
If the monodromy is additive, then the local distinguished first integrals F i can be chosen so that the local meromorphic forms dF i glue to a closed rational form which is defined outside the singularities of the structure and which defines F . Similarly, if the monodromy is multiplicative, then the F i -s can be chosen so that the local meromorphic forms dF i /F i glue to a closed rational form which is defined outside the singularities of the structure and which defines F .
The assumption on the singularities ensures that such forms can be extended meromorphically through them. By GAGA, the meromorphic forms obtained in this way are rational.
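Concretely, if F i and F j are distinguished first integrals on overlapping charts, the transitions are F i = F j + c ij in the additive case and F i = λ ij F j in the multiplicative case, with constants c ij ∈ C and λ ij ∈ C*, so that

```latex
dF_i = d(F_j+c_{ij}) = dF_j,
\qquad\text{respectively}\qquad
\frac{dF_i}{F_i} = \frac{\lambda_{ij}\,dF_j}{\lambda_{ij}\,F_j} = \frac{dF_j}{F_j},
```

and the local forms patch to the global closed rational form used in the proof.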
We are ready to prove Proposition 4.1.
Proof of Proposition 4.1. First, remark that we can replace S by the Stein factorization π : S ′ → S of η and G by π * G. Therefore, we may suppose that the fibres of η are connected.
Let us prove that
• either the action of PsAut(X, F ) on X preserves η,
• or F is algebraically integrable (and in particular it is defined by a closed rational form).
Let us fix f ∈ PsAut(X, F ) and suppose that for a fibre Y of η we have η(f (Y )) ≠ {pt.}. Since Y is F -invariant and f preserves the foliation F , f (Y ) is also F -invariant, thus η(f (Y )) is G-invariant; in particular, since η(f (Y )) has dimension ≥ 1, this means that η(f (Y )) is a G-invariant algebraic curve.
Remark that, if η(f (Y )) ≠ {pt.}, then the same is true for nearby fibres. However, by [Jou79, Ghy00], if there is an infinite number of G-invariant curves then G is algebraically integrable, and therefore so is F . This shows the alternative.
From now on, we suppose that the action of PsAut(X, F ) preserves η, meaning that η induces a group homomorphism φ : PsAut(X, F ) → Bir(S, G).
In order to show that the transverse action of PsAut(X, F ) on F is finite, it is enough to show that the transverse action of Bir(S, G) on G is finite. Suppose that this is not the case, so that in particular Bir(S, G) is infinite.
By [CF03, Theorem 1.3], if for every birational model (S ′ , G ′ ) of (S, G) we have Aut(S ′ , G ′ ) ≠ Bir(S ′ , G ′ ), then either G is algebraically integrable or (S, G) is birationally equivalent to one of the two situations in Example 1.3 of loc.cit.: after possibly pulling back by a generically finite morphism, S = P 1 × P 1 and G is defined by a form written as αy dx + βx dy, whose multiple α dx/x + β dy/y is a closed rational form defining G. Therefore, we may assume that Aut(S, G) = Bir(S, G) is an infinite group.
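For the record, the relation between the two forms mentioned above is the following elementary computation (our verification):

```latex
\frac{\alpha y\,dx+\beta x\,dy}{xy}
 = \alpha\,\frac{dx}{x}+\beta\,\frac{dy}{y},
\qquad
d\!\left(\alpha\,\frac{dx}{x}+\beta\,\frac{dy}{y}\right)
 = -\alpha\,\frac{dx\wedge dx}{x^{2}}-\beta\,\frac{dy\wedge dy}{y^{2}}
 = 0,
```

so the multiple is closed and defines the same foliation away from xy = 0.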
By [CF03,Proposition 3.9] at least one of the following is verified:
(1) Aut(S, G) contains an element of infinite order f whose action on H 1,1 (S, R) satisfies ||(f n ) * || → +∞ as n → +∞;
(2) Aut(S, G) contains the flow of a vector field v on S.
In the first case, by [CF03, Theorem 3.1, Theorem 3.5] there exists a generically finite morphism ν : S ′ → S such that ν * G is either a linear foliation on an abelian surface or an elliptic fibration; in both cases, ν * G is defined by a closed rational 1-form, therefore so is F after pull-back by a generically finite morphism π : X ′ → X induced by ν.
In the second case, by [CF03, Proposition 3.8] we are in one of the following situations:
• v is tangent to an elliptic fibration and G is either a turbulent foliation (so that we can apply Lemma 4.2) or the fibration itself. In both cases, G is defined by a closed rational 1-form.
• G is a linear foliation on a torus, thus it is defined by a closed regular 1-form.
• v is tangent to a fibration in rational curves and G is either a Riccati foliation (so that we can apply Lemma 4.2) or the fibration itself. In both cases, G is defined by a closed rational 1-form.
• S is a P 1 -bundle over an elliptic curve E, v projects onto a vector field on E and G is either obtained by suspension or it coincides with the P 1 -bundle (so that in particular it is algebraically integrable). In the first case G is smooth and the P 1 -bundle induces a transverse projective structure without poles, whose monodromy factors through E; in particular the monodromy is abelian, which implies that G is defined by a closed rational form by Lemma 4.3. Therefore, in both cases G is defined by a closed rational form.
• Up to a change of birational model, G is a linear foliation on P 1 x × P 1 y , which means that it is defined by a closed rational form of type α dx/x + β dy/y.
This proves that if a foliation on a surface is preserved by a holomorphic vector field, then it is defined by a closed rational 1-form, which concludes the proof.
4.2. Factorization through a Shimura modular orbifold. Throughout this section, suppose that there exists an algebraic quotient of the polydisk H = D N /Γ and an algebraic map ψ : X 0 → H such that the monodromy representation ρ factors through one of the tautological representations of π orb 1 (H) (up to a field automorphism of C). Our goal is to prove the following:
Proposition 4.4. Under the assumption above, at least one of the following is verified:
• either there exists a generically finite morphism π : X ′ → X such that π * F is defined by a closed rational 1-form; • or the transverse action of PsAut(X, F ) is finite.
Lemma 4.5. Suppose that D N /Γ is smooth. Then the Stein factorization of ψ : X 0 → Im(ψ) is the Shafarevich morphism of the monodromy representation ρ (in the sense of Definition 2.6).
Proof. Let ψ 0 : X 0 → B be the Stein factorization of ψ : X 0 → Im(ψ). We already know that the monodromy is trivial along fibres; what is left to prove is that ψ 0 is "maximal" among such morphisms, i.e. that, if i : Y ↪ X 0 is a normal connected algebraic subvariety such that the image of ρ • i * : π 1 (Y ) → PSL 2 (C) is finite, then ψ 0 (Y ) is reduced to a point.
Suppose by contradiction that ψ(Y ) is not reduced to a point and let Z = ψ(Y ) ⊂ H; by [RT18, Proposition 3.4], Z is not tangent to any of the F i .
The leaves of any one of the F i are locally given by first integrals with values in D; since up to composing with an element of Gal(C/Q) the representation ρ factors through the monodromy of the natural transverse hyperbolic structure on F i , the latter is finite along Z.
Consider the finite étale cover Z ′ → Z corresponding to the finite index subgroup ker(ρ • i * ) ⊂ π 1 (Y ). Then the pull-back of F i to (the smooth part of) Z ′ is given by a global first integral Z ′ sm → D; by the Riemann extension theorem, such first integral extends holomorphically to a smooth projective model of Z ′ , hence it is constant, a contradiction. This shows that the factorization of ψ is indeed the Shafarevich morphism of ρ.
The following lemma follows from the proof of [CLNL + 07, Lemma 2.20]; one can actually see that the morphism π in the statement can be taken to have topological degree ≤ 2.
Lemma 4.6. If F admits more than one transverse projective structure, then there exists π : X ′ → X, where π is generically finite, such that π * F is defined by a closed rational 1-form.
We are ready to prove Proposition 4.4.
Proof of Proposition 4.4. Suppose that there exists no generically finite morphism π : X ′ → X such that π * F is defined by a closed rational 1-form; by Lemma 4.6 this implies that the transverse projective structure of F is unique, so that in particular it is preserved by PsAut(X, F ).
First, we may assume that Γ is torsion-free: indeed, by Selberg's lemma there exists a finite index subgroup Γ ′ ≤ Γ which is torsion-free. Replace X 0 by its finite étale cover X ′ 0 → X 0 corresponding to the finite index subgroup ψ −1 * (Γ ′ ) ≤ π 1 (X 0 ). By Corollary 2.9, up to taking another finite étale cover, all elements of PsAut(X, F ) lift to pseudo-automorphisms of X ′ 0 , and we can reason on the pull-back foliation on X ′ 0 . Remark that, if X ′ is a smooth compactification of X ′ 0 and X ′ → X denotes a generically finite morphism which restricts to the étale cover X ′ 0 → X 0 , we have κ(X ′ ) ≥ κ(X) ≥ 0; therefore, it suffices to prove the claim for the pull-back of F on X ′ .
If we denote by
X 0 π → B → Z ⊂ D N /Γ
the Stein factorization of ψ, π is PsAut(X, F )-equivariant: indeed, by Lemma 4.5 it is identified with the Shafarevich morphism of the monodromy ρ, and therefore it only depends on the transverse projective structure. By uniqueness, it is canonically associated to F . Furthermore, as we saw in the proof of Corollary 2.9, elements of PsAut(X, F ) restrict to pseudo-automorphisms of X 0 . This proves that PsAut(X, F ) acts by pseudo-automorphisms on B.
Since D N /Γ is smooth, by [Bru18] the subvariety Z := ψ(X 0 ) ⊂ D N /Γ has big logarithmic cotangent bundle; therefore by [CP15a] it is of log-general type (see [Iit82]), hence so is B by pull-back of log-canonical forms. By [Iit82,Theorem 11.12], the group of strictly birational transformations of B is finite, and in particular so is its group of pseudo-automorphisms. Therefore, a finite index subgroup G ≤ PsAut(X, F ) fixes each fibre of π.
If the fibres of π are F -invariant, the proof is finished: F is the pull-back of a foliation G on B, and the finite index subgroup G ≤ PsAut(X, F ) fixes each leaf of G, thus each leaf of F . Suppose now that generic fibres of π are not F -invariant. The restriction of F to such fibres is a codimension 1 foliation endowed with a natural transverse projective structure, which is preserved by the restriction of the action of G to fibres. Since the monodromy of the structure is trivial in small neighborhoods of fibres of π (in X 0 ), in such a neighborhood U one can define a first integral
F U : U ⇢ P 1
which defines the transverse structure. Since the action of G preserves such structure, for g ∈ G one has
F U • g = φ g • F U for some φ g ∈ PSL 2 (C).
Let us show that the action of G on U is transversely finite; this is equivalent to showing that the group morphism Φ : G → PSL 2 (C) defined by g → φ g has finite image. First remark that, since the singularities of the structure are regular by [LPT16], the first integral F U extends meromorphically to the closure of U in X. In particular, if we denote by Y the Zariski-closure in X of a fibre of π contained in U , the restriction of F to Y is given by a meromorphic first integral
F_Y : Y ⇢ P^1.
By the easy addition formula (see e.g. [Iit82, §10]), for a general fibre F of π we have κ(X) ≤ κ(F) + dim(B), which implies that κ(F) ≥ 0. Therefore, by [LB], the transverse action of G on Y (i.e. the action of G on the first integral F_Y) is torsion; since this action coincides with the action of G on P^1 defined above, this implies that the image of Φ is torsion. Suppose by contradiction that Φ(G) has infinite order; torsion subgroups of Lie groups are virtually abelian (see e.g. [Lee76]), and abelian torsion subgroups of PSL_2(C) are conjugated to rational angle rotation subgroups. Since we supposed the image of Φ to be infinite, the conjugation which puts a finite index subgroup in the form of rotations is unique up to composition with the involution i : z ↦ z^{-1}; therefore, up to composition with i, the first integral F_U can be chosen canonically. As remarked above, the morphism Φ coincides with the transverse action of G on any fibre in U (endowed with the restricted foliation); in particular, if one chooses another small neighborhood U′ of a fibre as above such that U ∩ U′ = ∅, the transverse action of G will also be infinite. As a consequence, on U′ one can also define a canonical (up to the involution i) first integral
F_{U′} : U′ ⇢ P^1.
Since, on U ∩ U′, F_U and F_{U′} differ at most by composition with i, the local rational forms dF_U/F_U glue to a global rational form (on X_0) which defines F; by the regularity of singularities of the structure, such a form extends to H, and we obtain a contradiction with the assumption at the beginning of the proof. This shows that the transverse action of G on U is finite. Now, consider the foliation G obtained by intersecting (local) leaves of F with fibres of π. Since the restriction of F to fibres of π is algebraically integrable, G is also algebraically integrable; let η : X ⇢ B_G be a rational fibration defining G.
Remark that the action of G preserves η. The fact that G has transversely finite action on U can be rephrased by saying that some finite index subgroup G′ of G acts as the identity on η(U). Since birational transformations which act as the identity on a euclidean neighborhood are the identity, this implies that G′ fixes all fibres of η, and in particular all leaves of F. This concludes the proof.
The case of codimension 1 foliations on tori
Conjecture 2 is only meaningful for manifolds with a rich group of automorphisms; the first example one should study is therefore that of homogeneous (compact Kähler) manifolds. By a result of Borel and Remmert (see for example [Ghy96, Theorem 2.5]), such a manifold can be decomposed as a product T × R, where T = C^n/Λ is a complex torus and R is a rational homogeneous manifold (a generalized flag manifold); in particular, in order for a homogeneous manifold X to have non-negative Kodaira dimension, X needs to be a torus.
5.1. Classification. Codimension 1 foliations on complex tori have been classified by Brunella:
Theorem 5.1 ([Bru10]). Let F be a (singular) codimension one foliation on a complex torus X = C^n/Λ. Then exactly one of the following holds:
(1) F is a linear foliation;
(2) F is a turbulent foliation: there exists a linear projection π : X → Y onto a complex torus Y = C^m/Λ′, 0 < m < n, a closed meromorphic one-form η_0 on Y and a holomorphic (linear) form η_1 on X, which doesn't vanish on the fibres of π, such that F is defined by the meromorphic one-form η := π^*η_0 + η_1;
(3) the normal bundle N_F is ample.
Remark that N_F is automatically effective: indeed, the image of a generic vector field on X through the natural projection TX → N_F yields a non-trivial section. One can then construct the normal reduction of F (see [Bru10]), i.e. a linear projection π : X → Y onto a complex torus Y = C^m/Λ′ such that N_F = π^*L for some ample line bundle L on Y. Cases (1) and (3) in Theorem 5.1 correspond to the extremal cases m = 0 and m = n respectively.
Smooth foliations, which had been previously classified by Ghys [Ghy96], arise exactly in the cases m = 0 and m = 1.
Symmetries.
Recall that any meromorphic map X ⇢ T from a compact complex manifold to a compact complex torus is actually holomorphic (see [Fuj78, Lemma 3.3]); therefore, when studying Conjecture 2 on complex tori one can simply study automorphisms preserving the foliation.
Proposition 5.2. Let F be a (singular) codimension one foliation on a complex torus X = C^n/Λ and let G = Aut(X, F) be the group of symmetries of F.
(1) If F is a linear foliation, then G contains the group of translations of X, which has transversely infinite action on F.
(2) If F is a turbulent foliation, then, with the notation of Theorem 5.1, G preserves π and its action on Y is finite; G contains the subgroup of translations of X preserving π, which has transversely infinite action on F.
(3) If the normal bundle N_F is ample, then G is a finite group.
Remark that in cases (1) and (2) the group G can actually be much bigger (for example, it may contain elements with infinite linear action): this depends on resonance conditions between the lattice defining X (respectively, the fibres of π) and the holomorphic form defining F (respectively, the form η_1).
Proof. Suppose first that F is a linear foliation. Then clearly G contains the group of translations, and we only need to prove that the latter has transversely infinite action on F. If this weren't the case, then X would be covered by a finite union of leaves of F, which contradicts the fact that leaves have zero Lebesgue measure. Now consider the case of a turbulent foliation defined by a meromorphic one-form η = π^*η_0 + η_1,
where the same notation as in Theorem 5.1 is used. Since the projection π is canonically associated to the normal bundle N_F, hence to the foliation F, G preserves it. Furthermore, the action of G on the base Y preserves the ample line bundle L; by [Mum08, Application 1, section 6], such an action is finite. It is clear that any translation of X along fibres of π preserves η, hence F. The same argument as in the case of linear foliations allows us to conclude that the action of such translations, hence that of G, is transversely infinite. Finally, suppose that N_F is ample. The same argument as above allows us to conclude that Aut(X, N_F) is finite, hence so is G.
Remark 5.3. Proposition 5.2 does not contradict Conjecture 2. Indeed, if a foliation F on an abelian variety admits a transverse invariant metric which is constructed as the curvature of a hermitian metric on a line bundle L, then the leaves of F are the fibres of a linear projection π : X → E onto an elliptic curve E, which is nothing but the normal reduction of L; in particular, the action of Aut(X, F, L) = Aut(X, L) on the space of leaves identifies with the action of Aut(E, L_E) on E, where L_E is an ample line bundle such that L = π^*L_E. Such an action is finite by [Mum08, Application 1, section 6].
Let us prove the above claim. Suppose that F admits an invariant transverse metric θ which can be constructed as the curvature form Θ_h of a hermitian metric on a line bundle L on X. In particular, this implies that L is pseudo-effective; by [Bau98, Lemma 1.1], L is numerically equivalent to an effective line bundle L′. Let π : X → E be the normal reduction of L′, and let A be an ample line bundle on E such that L′ = π^*A. Remark that, since 0 = [θ]^2 = π^*c_1(A)^2, E is one-dimensional. Now, since [θ] = π^*c_1(A), the integral of θ along any closed curve contained in a fibre of π is equal to 0. This implies that the fibres of π coincide with the leaves of F, which concludes the proof.
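A one-line justification of the vanishing [θ]^2 used above (my gloss; the original leaves it implicit): θ is an invariant transverse metric for a codimension-one foliation, so in a distinguished chart with transverse coordinate w it is a (1,1)-form pulled back from the one-dimensional transversal, and such forms square to zero:

```latex
\theta \;=\; i\, h(w,\bar w)\, dw \wedge d\bar w
\quad\Longrightarrow\quad
\theta \wedge \theta \;=\; 0
\quad\Longrightarrow\quad
[\theta]^2 \;=\; \pi^* c_1(A)^2 \;=\; 0 .
```

Since A is ample on E, c_1(A)^{dim E} ≠ 0, so the vanishing of c_1(A)^2 forces dim E = 1.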
References

Thomas Bauer. On the cone of curves of an abelian variety. Amer. J. Math., 120(5):997-1006, 1998.

Sébastien Boucksom. Divisorial Zariski decompositions on compact complex manifolds. Annales Scientifiques de l'École Normale Supérieure. Quatrième Série, 37(1):45-76, 2004.

Marco Brunella. Codimension one foliations on complex tori. Ann. Fac. Sci. Toulouse Math. (6), 19(2):405-418, 2010.

Yohan Brunebarbe. A strong hyperbolicity property of locally symmetric varieties. arXiv preprint arXiv:1606.03972, 2016.

Yohan Brunebarbe. Symmetric differentials and variations of Hodge structures. J. Reine Angew. Math., 743:133-161, 2018.

Frédéric Campana. Orbifolds, special varieties and classification theory. Ann. Inst. Fourier (Grenoble), 54(3):499-630, 2004.

Serge Cantat and Charles Favre. Symétries birationnelles des surfaces feuilletées. J. Reine Angew. Math., 561:199-235, 2003.

Dominique Cerveau, Alcides Lins-Neto, Frank Loray, Jorge Vitório Pereira, and Frédéric Touzet. Complex codimension one singular foliations and Godbillon-Vey sequences. Mosc. Math. J., 7(1):21-54, 166, 2007.

Gaël Cousin and Jorge Vitório Pereira. Transversely affine foliations on projective manifolds. Math. Res. Lett., 21(5):985-1014, 2014.

Frédéric Campana and Mihai Păun. Orbifold generic semi-positivity: an application to families of canonically polarized manifolds. Ann. Inst. Fourier (Grenoble), 65(2):835-861, 2015.

Kevin Corlette and Carlos Simpson. On the classification of rank-two representations of quasiprojective fundamental groups. Compos. Math., 144(5):1271-1331, 2008.

Pierre Deligne. Équations différentielles à points singuliers réguliers. Lecture Notes in Mathematics, Vol. 163. Springer-Verlag, Berlin-New York, 1970.

Jean-Pierre Demailly. Singular Hermitian metrics on positive line bundles. In Complex algebraic varieties (Bayreuth, 1990), volume 1507 of Lecture Notes in Math., pages 87-104. Springer, Berlin, 1992.

Jean-Pierre Demailly. Algebraic criteria for Kobayashi hyperbolic projective varieties and jet differentials. In Algebraic geometry - Santa Cruz 1995, volume 62 of Proc. Sympos. Pure Math., pages 285-360. Amer. Math. Soc., Providence, RI, 1997.

Jean-Pierre Demailly. Holomorphic Morse inequalities and the Green-Griffiths-Lang conjecture. Pure Appl. Math. Q., 7(4, Special Issue: In memory of Eckart Viehweg):1165-1207, 2011.

Akira Fujiki. On automorphism groups of compact Kähler manifolds. Invent. Math., 44(3):225-258, 1978.

Étienne Ghys. Feuilletages holomorphes de codimension un sur les espaces homogènes complexes. Ann. Fac. Sci. Toulouse Math. (6), 5(3):493-519, 1996.

Étienne Ghys. À propos d'un théorème de J.-P. Jouanolou concernant les feuilles fermées des feuilletages holomorphes. Rend. Circ. Mat. Palermo (2), 49(1):175-180, 2000.

Shigeru Iitaka. Algebraic geometry, volume 76 of Graduate Texts in Mathematics. Springer-Verlag, New York-Berlin, 1982. An introduction to birational geometry of algebraic varieties, North-Holland Mathematical Library, 24.

Masahisa Inoue. On surfaces of Class VII_0. Invent. Math., 24:269-310, 1974.

J. P. Jouanolou. Feuilles compactes des feuilletages algébriques. Math. Ann., 241(1):69-72, 1979.

Serge Lang. Hyperbolic and Diophantine analysis. Bull. Amer. Math. Soc. (N.S.), 14(2):159-205, 1986.

Federico Lo Bianco. An application of p-adic integration to the dynamics of a birational transformation preserving a fibration. arXiv preprint arXiv:1707.09534.

Dong Hoon Lee. On torsion subgroups of Lie groups. Proc. Amer. Math. Soc., 55(2):424-426, 1976.

Frank Loray and Jorge Vitório Pereira. Transversely projective foliations on surfaces: existence of minimal form and prescription of monodromy. Internat. J. Math., 18(6):723-747, 2007.

Frank Loray, Jorge Vitório Pereira, and Frédéric Touzet. Representations of quasiprojective groups, flat connections and transversely projective foliations. J. Éc. polytech. Math., 3:263-308, 2016.

Ngaiming Mok. Fibrations of compact Kähler manifolds in terms of cohomological properties of their fundamental groups. Annales de l'Institut Fourier, 50(2):633-675, 2000.

David Mumford. Abelian varieties, volume 5 of Tata Institute of Fundamental Research Studies in Mathematics. Published for the Tata Institute of Fundamental Research, Bombay; by Hindustan Book Agency, New Delhi, 2008. With appendices by C. P. Ramanujam and Yuri Manin; corrected reprint of the second (1974) edition.

Erwan Rousseau and Frédéric Touzet. Curves in Hilbert modular varieties. Asian J. Math., 22(4):673-690, 2018.

Frédéric Touzet. Uniformisation de l'espace des feuilles de certains feuilletages de codimension un. Bull. Braz. Math. Soc. (N.S.), 44(3):351-391, 2013.

Frédéric Touzet. Feuilletages holomorphes admettant une mesure transverse invariante. Ann. Fac. Sci. Toulouse Math. (6), 24(3):523-541, 2015.

Frédéric Touzet. On the structure of codimension 1 foliations with pseudoeffective conormal bundle. In Foliation theory in algebraic geometry, Simons Symp., pages 157-216. Springer, Cham, 2016.
DOI: 10.1111/j.1365-2966.2010.17529.x
arXiv: 1005.1217 (https://arxiv.org/pdf/1005.1217v2.pdf)
Extending the domain of validity of the Lagrangian approximation

14 Sep 2010

Sharvari Nadkarni-Ghosh
Department of Physics, Cornell University, Ithaca, NY 14853, USA

David F. Chernoff
Department of Astronomy, Cornell University, Ithaca, NY 14853, USA

Mon. Not. R. Astron. Soc. 000 (2010). Printed 15 September 2010 (MN LaTeX style file v2.2).
Keywords: cosmology: theory - large-scale structure of Universe
We investigate convergence of Lagrangian Perturbation Theory (LPT) by analysing the model problem of a spherical homogeneous top-hat in an Einstein-deSitter background cosmology. We derive the formal structure of the LPT series expansion, working to arbitrary order in the initial perturbation amplitude. The factors that regulate LPT convergence are identified by studying the exact, analytic solution expanded according to this formal structure. The key methodology is to complexify the exact solution, demonstrate that it is analytic and apply well-known convergence criteria for power series expansions of analytic functions. The "radius of convergence" and the "time of validity" for the LPT expansion are of great practical interest. The former describes the range of initial perturbation amplitudes which converge over some fixed, future time interval. The latter describes the extent in time for convergence of a given initial amplitude. We determine the radius of convergence and time of validity for a full sampling of initial density and velocity perturbations. This analysis fully explains the previously reported observation that LPT fails to predict the evolution of an underdense, open region beyond a certain time. It also implies the existence of other examples, including overdense, closed regions, for which LPT predictions should also fail. We show that this is indeed the case by numerically computing the LPT expansion in these problematic cases. The formal limitations to the validity of LPT expansion are considerably more complicated than simply the first occurrence of orbit crossings as is often assumed. Evolution to a future time generically requires re-expanding the solution in overlapping domains that ultimately link the initial and final times, each domain subject to its own time of validity criterion.
We demonstrate that it is possible to handle all the problematic cases by taking multiple steps (LPT re-expansion). A relatively small number (∼ 10) of re-expansion steps suffices to satisfy the time of validity constraints for calculating the evolution of a non-collapsed, recombination-era perturbation up to the current epoch. If it were possible to work to infinite Lagrangian order then the result would be exact. Instead, a finite expansion has finite errors. We characterise how the leading order numerical error for a solution generated by LPT re-expansion varies with the choice of Lagrangian order and of time step size. Convergence occurs when the Lagrangian order increases and/or the time step size decreases in a simple, well-defined manner. We develop a recipe for time step control for LPT re-expansion based on these results.
INTRODUCTION
Understanding the non-linear growth of structure in an expanding universe has been an active area of research for nearly four decades. Simulations have been instrumental in illustrating exactly what happens to an initial power spectrum of small fluctuations but analytic methods remain essential for elucidating the physical basis of the numerical results. Perturbation theory, in particular, is an invaluable tool for achieving a sophisticated understanding.
The Eulerian and Lagrangian frameworks are the two principal modes of description of a fluid. The fundamental dependent variables in the Eulerian treatment are the density ρ(x, t) and velocity v(x, t) expressed as functions of the grid coordinates x and time t, the independent variables. In perturbation theory the dependent functions are expanded in powers of a small parameter. For cosmology that parameter typically encodes a characteristic small spatial variation of density and/or velocity with respect to a homogeneous cosmology at the initial time. As a practical matter, the first-order perturbation theory becomes inaccurate when the perturbation grows to order unity. Subsequently one must work to higher order to handle the development of non-linearity (see Bernardeau et al. 2002 for a review) or adopt an alternative method of expansion.
In the Lagrangian framework, the fundamental dependent variable is the physical position of a fluid element or particle (terms used interchangeably here). The independent variables are a set of labels X, each of which follows a fluid element, and the time. Usually X is taken as the position of the element at some initial time but other choices are possible. In any case, the physical position and velocity of a fluid element are r = r(X, t) andṙ(X, t), respectively. Knowledge of the motion of each fluid element permits the full reconstruction of the Eulerian density and velocity fields. In cosmological applications of Lagrangian perturbation theory (LPT), just like Eulerian perturbation theory, the dependent variables are expanded in terms of initial deviations with respect to a homogeneous background. The crucial difference is that the basis for the expansion is the variation in the initial position and position-derivative not the variation in the initial fluid density and velocity. The Eulerian density and velocity may be reconstructed from knowledge of the Lagrangian position using exact non-perturbative definitions. A linear approximation to the displacement field results in a non-linear expression for the density contrast. The Lagrangian description is well-suited to smooth, well-ordered initial conditions; a single fluid treatment breaks down once particle crossings begin, caustics form and the density formally diverges.
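To make the last point concrete, here is a minimal one-dimensional sketch (my illustration, not taken from the paper; the displacement field ψ and its amplitude A are invented): for the map x = X + ψ(X, t), mass conservation ρ dx = ρ̄ dX gives 1 + δ = 1/(1 + ∂ψ/∂X), so even a first-order (Zeldovich) displacement produces a fully non-linear density contrast.

```python
def density_contrast(dpsi_dX):
    """Eulerian density contrast implied by mass conservation for the 1D
    Lagrangian map x = X + psi(X, t): 1 + delta = 1 / (1 + dpsi/dX)."""
    return 1.0 / (1.0 + dpsi_dX) - 1.0

# Hypothetical displacement psi(X) = -A sin(X), so dpsi/dX = -A cos(X).
A = 0.1
print(density_contrast(-A))  # at X = 0: converging flow, delta = 1/0.9 - 1 > A
print(density_contrast(+A))  # at X = pi: diverging flow, |delta| < A

# The single-fluid description breaks down when 1 + dpsi/dX -> 0:
# delta diverges as particle trajectories cross (a caustic forms).
```

Note the asymmetry between over- and underdensities, already non-linear at a displacement amplitude of only 0.1.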
First-order LPT was originally introduced by Zel'dovich (1970) to study the formation of non-linear structure in cosmology. In his treatment the initial density field was taken to be linearly proportional to the initial displacement field (the "Zeldovich approximation"). These results were extended by many authors (Moutarde et al. 1991; Buchert 1992; Bouchet et al. 1992; Buchert & Ehlers 1993; Buchert 1994; Munshi et al. 1994; Catelan 1995; Buchert 1995; Bouchet et al. 1995; Bouchet 1996; Ehlers & Buchert 1997). The work pioneered by Bouchet focused on Zeldovich initial conditions and established the link between LPT variables and statistical observables. The work by Buchert as well as the paper by Ehlers & Buchert (1997) formalised the structure of the Newtonian perturbative series for arbitrary initial conditions. A general relativistic version of the Zeldovich approximation was developed by Kasai (1995) and other relativistic descriptions of the fluid in its rest frame were investigated by Matarrese & Terranova (1996) and Matarrese, Pantano & Saez (1993, 1994). LPT has been used for many applications including, recently, the construction of non-linear halo mass functions by Monaco (1997) and Scoccimarro & Sheth (2002).
Not much has been written about the convergence of LPT although LPT expansions are routinely employed. Sahni & Shandarin (1996) pointed out that the formal series solution for the simplest problem, the spherical top-hat, did not converge for the evolution of homogeneous voids. Figure 1 illustrates the conundrum that the LPT approximations diverge from the exact solution in a manner that worsens as the order of the approximation increases. The details will be described in the next section.
This paper explores LPT convergence for the spherical top-hat and identifies the root cause for the lack of convergence. The analysis naturally suggests a means of extending the range of validity of LPT. This generalisation of LPT guarantees convergence to the exact solution of the model problem at all times prior to the occurrence of the first caustic. Tatekawa (2007) attempted to treat the divergence by applying the Shanks transformation to the LPT series. Although non-linear transformations can sum a divergent series, the correct answer is not guaranteed; comparison of several different methods is usually necessary to yield trustworthy results. Other approaches include the Shifted-Time-Approximation (STA) and Frozen-Time-Approximation (FTA) which have been investigated by Karakatsanis, Buchert & Melott (1997). These schemes modify lower order terms to mimic the behavior of higher order terms and/or extend the range of applicability in time. None of these techniques are considered here.
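For context, the Shanks transformation that Tatekawa (2007) applied is the standard sequence-acceleration formula S(A_n) = (A_{n+1}A_{n-1} − A_n²)/(A_{n+1} − 2A_n + A_{n-1}). A generic sketch on a toy alternating series (my example, not the LPT series itself) shows why it is tempting: one step gains several digits on a slowly converging sum, yet nothing in the formula certifies the limit when the input series actually diverges.

```python
import math

def shanks(a_prev, a_cur, a_next):
    """One Shanks step: S(A_n) = (A_{n+1} A_{n-1} - A_n^2) / (A_{n+1} - 2 A_n + A_{n-1})."""
    return (a_next * a_prev - a_cur * a_cur) / (a_next - 2.0 * a_cur + a_prev)

# Partial sums of the slowly converging series ln 2 = 1 - 1/2 + 1/3 - ...
partial, s = [], 0.0
for k in range(1, 8):
    s += (-1.0) ** (k + 1) / k
    partial.append(s)

raw_err = abs(partial[-1] - math.log(2.0))
acc_err = abs(shanks(partial[-3], partial[-2], partial[-1]) - math.log(2.0))
print(raw_err, acc_err)  # the transformed value is far closer to ln 2
```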
The organisation follows: §2 sketches the model problem, the evolution of a uniform sphere in a background homogeneous Einstein-deSitter cosmology. The LPT equations, the structure of the formal series and the term-by-term solution are outlined.
Figure 1. The time-dependent scale factor b of an initial spherical top-hat perturbation is plotted as a function of the background scale factor a. The perturbation is a pure growing mode, i.e. the density and velocity perturbations vanish at t = 0. The black dotted line is the exact solution. The smooth blue lines are the LPT results obtained by working successively to higher and higher order. Series with even (odd) final order lie below (above) the exact solution. Roughly speaking, LPT converges only for a ≲ 0.2. Beyond that point the higher order approximations deviate from the exact solution more than lower order ones.

§3 discusses the complexification of the LPT solution and convergence of the series. This section introduces the "radius of convergence" and the "time of validity" for LPT. §4 outlines the real and complex forms of the parametric solution and sets forth the equations that must be solved to locate the poles which govern the convergence. §5 presents numerical results for the time of validity and radius of convergence for a full range of possible initial conditions for the top-hat. The notion of mirror model symmetry is introduced and used to explain a connection in the convergence for open and closed models. §6 shows that the time of validity may be extended by re-expanding the solution in overlapping domains that ultimately link the initial and final times, each domain subject to an individual time of validity criterion. The feasibility of this method is demonstrated in some examples. §7 summarises the work.
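The mechanism explored in §3 can be previewed with a toy function of my own choosing (not the paper's solution): f(t) = 1/(1 + t²) is smooth for all real t, yet its Taylor series about t = 0 converges only for |t| < 1 because of the poles at t = ±i. Exactly as in Figure 1, outside that radius higher-order partial sums do worse, not better.

```python
def partial_sum(t, order):
    """Partial sums of 1/(1 + t^2) = sum_k (-1)^k t^(2k); radius of convergence is 1."""
    return sum((-1.0) ** k * t ** (2 * k) for k in range(order + 1))

def errors(t, orders=(2, 5, 10)):
    exact = 1.0 / (1.0 + t * t)
    return [abs(partial_sum(t, n) - exact) for n in orders]

e_in = errors(0.5)    # inside the radius: errors shrink as the order grows
e_out = errors(1.5)   # outside: errors grow with order, as in Figure 1
print(e_in[0] > e_in[1] > e_in[2], e_out[0] < e_out[1] < e_out[2])  # True True
```

Re-expanding about a new point (the strategy of §6) shifts the disc of convergence and is the series analogue of taking a new LPT step.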
THE MODEL PROBLEM AND FORMAL SERIES SOLUTION
This section describes the governing equations, the initial physical conditions, the formal structure of the LPT series solution and the order-by-order solution.
Newtonian treatment
Consider evolution on sub-horizon scales after recombination in a matter-dominated universe. A Newtonian treatment of gravity based on solving Poisson's equation for the scalar potential and on evaluating the force in terms of the gradient of the potential gives an excellent approximation for non-relativistic dynamics. When there are no significant additional forces on the fluid element (e.g. pressure forces) then it is straightforward to eliminate the gradient of the potential in favour of r̈, the acceleration. The governing equations are

∇x · r̈ = −4πGρ(x, t)    (1)

∇x × r̈ = 0    (2)
where ρ(x, t) is the background plus perturbation density, G is Newton's gravitational constant and ∇x is the Eulerian gradient operator. In the Lagrangian treatment, the independent variables are transformed (x, t) → (X, t) and the particle position r = r(X, t) is adopted as the fundamental dependent quantity. For clarity, note that x refers to a fixed Eulerian grid, not a comoving coordinate.
2.2 Spherical top-hat
The starting physical configuration is a compensated spherical perturbation in a homogeneous background cosmology. The perturbation encompasses a constant density sphere about the centre of symmetry and a compensating spherical shell. The shell that surrounds the sphere may include vacuum regions plus regions of varying density. Unperturbed background extends beyond the outer edge of the shell. Physical distances are measured with respect to the centre of symmetry. At initial time t0 the background and the innermost perturbed spherical region (hereafter, "the sphere") have Hubble constants H0 and Hp0, and densities ρ0 and ρp0, respectively. Let r b,0 (rp,0) be the physical distance from the centre of symmetry to the inner edge of the background (to the outer edge of the sphere) at the initial time. Let a0, b0 be the initial scale factors for the background and the sphere respectively. Two sets of Lagrangian coordinates Y = r b,0 /a0 and X = rp,0/b0 are defined. A gauge choice sets a0 = b0. Appendix A provides a figure and gives a somewhat more detailed chain of reasoning that clarifies the construction of the physical and Lagrangian coordinate systems. The initial perturbation is characterised by the independent parameters
$$\delta = \frac{\rho_{p0}}{\rho_0} - 1, \qquad \delta_v = \frac{H_{p0}}{H_0} - 1. \tag{3}$$

Finally, assume that the background cosmology is critical, Ω0 = 1. The perturbed sphere has

$$\Omega_{p0} = \frac{1+\delta}{(1+\delta_v)^2}. \tag{4}$$
The physical problem of interest here is the future evolution of an arbitrary initial state unconstrained by the past history. In general, the background and the perturbation can have different big bang times. Initial conditions with equal big bang times will be analysed as a special case of interest and imply an additional relationship between δ and δv.
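Eq. (4) packages the two perturbation parameters into a single density parameter for the sphere. A minimal numeric sketch (`omega_p0` is a hypothetical helper name; the closed/open labels anticipate the energy classification of §4):

```python
# Sketch of eq. (4): Omega_p0 = (1 + delta) / (1 + delta_v)^2.
# Omega_p0 > 1 marks a bound (closed) perturbation, Omega_p0 < 1 an unbound (open) one.
def omega_p0(delta, delta_v):
    return (1.0 + delta) / (1.0 + delta_v)**2

print(omega_p0(0.2, 0.0))   # overdense sphere, unperturbed velocity: closed
print(omega_p0(-0.3, 0.1))  # underdense sphere expanding faster: open
```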
While the previous paragraphs summarise the set up, they eschew the complications in modelling an inhomogeneous system in terms of separate inner and outer homogeneous universes. For example, matter motions within the perturbed inner region may overtake the outer homogeneous region so that there are problem-specific limits on how long solutions for the scale factors a(t) and b(t) remain valid. The appendix shows that there exist inhomogeneous initial configurations for which the limitations arising from the convergence of the LPT series are completely independent of the limitations associated with collisions or crossings of inner and outer matter-filled regions. A basic premise of this paper is that it is useful to explore the limitations of the LPT series independent of the additional complications that inhomogeneity entails.
2.3 Equation governing scale factors
During the time that the spherical perturbation evolves as an independent homogeneous universe it may be fully described in terms of the motion of its outer edge rp. Write
$$r_p(t) = b(t)\,X \tag{5}$$
where b(t) is the scale factor and X is the Lagrangian coordinate of the edge. The initial matter density of the homogeneous sphere is ρ(X, t0) = ρp0 = ρ0(1 + δ). The physical density of the perturbation at time t is

$$\rho(X,t) = \rho(X,t_0)\,\frac{J(X,t_0)}{J(X,t)} \tag{6}$$

where the Jacobian of the transformation relating the Lagrangian and physical spaces is

$$J(X,t) = \det\frac{\partial\,{\bf r}}{\partial\,{\bf X}}. \tag{7}$$

Since eq. (5) implies J(X, t) = b(t)³ and the choice a0 = b0 implies J(X, t0) = a0³, the perturbation matter density at later times is

$$\rho_p(t) = \frac{\rho_0(1+\delta)\,a_0^3}{b(t)^3}. \tag{8}$$
Substituting for ρp and rp in eq. (1) gives

$$\frac{\ddot b}{b} = -\frac{1}{2}\,\frac{H_0^2 a_0^3 (1+\delta)}{b^3} \tag{9}$$

with initial conditions $b(t_0) = a_0$ and $\dot b(t_0) = \dot a_0 (1+\delta_v)$. The curl of the acceleration (i.e. eq. (2)) vanishes by spherical symmetry. The corresponding equation for the background scale factor is

$$\frac{\ddot a}{a} = -\frac{1}{2}\,\frac{H_0^2 a_0^3}{a^3} \tag{10}$$

with initial conditions $a(t_0) = a_0$ and $\dot a(t_0) = \dot a_0 = a_0 H_0$. The solution for b(t) will be expressed in terms of its deviations from a(t).
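Both initial value problems are easy to integrate numerically. The following sketch (assumed units H0 = a0 = 1, so the background age is t0 = 2/3; fixed-step RK4, standard library only) integrates eq. (9) and recovers the Einstein-de Sitter background a ∝ t^{2/3} in the unperturbed case:

```python
def accel(b, delta=0.0, H0=1.0, a0=1.0):
    """Right-hand side of eq. (9): deceleration of the sphere's edge."""
    return -0.5 * H0**2 * a0**3 * (1.0 + delta) / b**2

def evolve(delta=0.0, delta_v=0.0, t0=2.0 / 3.0, t1=4.0, n=4000):
    """Fixed-step RK4 for (b, bdot) from t0 to t1; returns b(t1)."""
    b, v = 1.0, 1.0 + delta_v   # b(t0) = a0, bdot(t0) = a0 H0 (1 + delta_v)
    h = (t1 - t0) / n
    for _ in range(n):
        k1b, k1v = v, accel(b, delta)
        k2b, k2v = v + 0.5 * h * k1v, accel(b + 0.5 * h * k1b, delta)
        k3b, k3v = v + 0.5 * h * k2v, accel(b + 0.5 * h * k2b, delta)
        k4b, k4v = v + h * k3v, accel(b + h * k3b, delta)
        b += h * (k1b + 2 * k2b + 2 * k3b + k4b) / 6.0
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return b

# The unperturbed run reproduces the background a(t) = (3t/2)^(2/3) of eq. (20).
a_exact = (3.0 * 4.0 / 2.0) ** (2.0 / 3.0)
print(abs(evolve() - a_exact) < 1e-6)
```

With delta > 0 the same routine shows the sphere lagging the background, which is the behaviour the perturbative treatment below expands in powers of the perturbation amplitude.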
In summary, the physical setup is an Ω0 = 1 background model and a compensated spherical top-hat (over- or underdense). The properties of interest are the relative scale factors a(t)/a0 and b(t)/a0 (the choice of a0 is arbitrary and b0 = a0). The evolution of the relative scale factors is fully specified by H0, Hp0 and Ωp0 at time t0. The perturbed physical quantities, Hp0 and Ωp0, may be equivalently specified by a choice of δ and δv. Appendix A contains a systematic description and enumerates degrees of freedom, parameters, constraints, etc.

Figure 2. Phase diagram of density and velocity perturbations (δ, δv). Physical initial conditions require −1 < δ < ∞ and −∞ < δv < ∞. The left panel highlights the qualitatively different initial conditions. The shaded (unshaded) region corresponds to closed (open) models with negative (positive) total energy. For small ∆, models with $\theta_c^- < \theta < \theta_c^+$ are closed. Initially expanding and contracting models are separated by the dashed horizontal line (δv = −1). The right panel shows the evolution of δ and δv. The solid blue line corresponds to the "Zeldovich" condition, i.e. no perturbation at t = 0. The points (−1, −1), (−1, 0.5) and (0, 0) are unstable, stable and saddle fixed points of the phase space flow. The flow lines (indicated by the blue vectors) converge along the Zeldovich curve either to the stable fixed point at (−1, 0.5) or move parallel to the Zeldovich curve to a future density singularity. Further discussion follows in §5.4 and §6.2.
2.4 Perturbations in phase space
The initial density and velocity perturbations are taken to be of the same order in the formalism developed by Buchert (1992, 1994), Buchert & Ehlers (1993) and Ehlers & Buchert (1997). We assume the same ordering here. Write the initial perturbation (δ, δv) in terms of magnitude ∆ and angle θ
$$\Delta = \sqrt{\delta^2 + \delta_v^2} \tag{11}$$

so that

$$\delta = \Delta\cos\theta \tag{12}$$

$$\delta_v = \Delta\sin\theta. \tag{13}$$
To map physical perturbations (δ, δv) in a unique manner to (∆, θ) adopt the ranges ∆ ≥ 0 and −π < θ ≤ π. Figure 2 (left panel) shows the phase space of initial perturbations. Since density is non-negative, the regime of physical interest is δ ≥ −1. Open (closed) models with positive (negative) total energy are the regions that are unshaded (shaded). Initially expanding models, 1 + δv > 0, lie above the horizontal dashed line. The right panel of figure 2 summarises the overall evolution of the system. The initial choice of δ and δv dictates the trajectory in the plane. Cosmologically relevant initial conditions generally assume there to be no perturbation at t = 0. We adopt the name "Zeldovich" initial conditions for models that satisfy this condition. This establishes a specific relation between δ and δv which is indicated by the solid blue line. The exact mathematical relationship is given in §5.4. Starting from a general initial point (δ, δv), the system traces out a curve in phase space as it evolves, indicated by the blue arrows. There are three fixed points visible. The origin (δ, δv) ≡ (0, 0), which corresponds to an unperturbed background model, is a saddle point. The vacuum static model at the point (−1, −1) is an unstable node and the vacuum, expanding model at (−1, 0.5) is a degenerate attracting node. Far to the right and below the dashed line the models collapse to a future singularity. The phase portrait illustrates that the trajectories either converge to the vacuum, expanding model or to the singular, collapsing model. The equations that govern the flow and the further relevance of the Zeldovich solution are discussed in §6.2 and §5.4.
2.5 Generating the Lagrangian series solution
The scale factor is formally expanded
$$b(t) = \sum_{n=0}^{\infty} b^{(n)}(t)\,\Delta^n \tag{14}$$
where b (n) denotes an n-th order term. The initial conditions are
$$b(t_0) = a(t_0) \tag{15}$$

$$\dot b(t_0) = \dot a_0 (1+\delta_v) = \dot a_0 (1+\Delta\sin\theta). \tag{16}$$
Substitute the expansion for b(t) into eq. (9) and equate orders of ∆ to give, at zeroth order,

$$\ddot b^{(0)} + \frac{1}{2}\,\frac{H_0^2 a_0^3}{\big(b^{(0)}\big)^2} = 0 \tag{17}$$
which is identical in form to eq. (10) for the unperturbed background scale factor. The initial conditions at zeroth order are

$$b^{(0)}(t_0) = a_0 \tag{18}$$

$$\dot b^{(0)}(t_0) = \dot a_0. \tag{19}$$
The equation and initial conditions for b (0) (t) simply reproduce the background scale factor evolution b (0) (t) = a(t). Without loss of generality assume that the background model has big bang time t = 0 so that
$$a(t) = a_0\left(\frac{t}{t_0}\right)^{2/3} = a_0\left(\frac{3 H_0 t}{2}\right)^{2/3}. \tag{20}$$
At first order
$$\ddot b^{(1)} - \frac{H_0^2 a_0^3\, b^{(1)}}{a^3} = -\frac{1}{2}\,\frac{H_0^2 a_0^3 \cos\theta}{a^2} \tag{21}$$
and, in general,
$$\ddot b^{(n)} - \frac{H_0^2 a_0^3\, b^{(n)}}{a^3} = S^{(n)} \tag{22}$$
where S (n) depends upon lower order approximations (b (0) , b (1) . . . b (n−1) ) as well as θ. The first few are:
$$S^{(2)} = -\frac{1}{2}\,\frac{H_0^2 a_0^3}{a^4}\; b^{(1)}\left(3 b^{(1)} - 2a\cos\theta\right) \tag{23}$$

$$S^{(3)} = -\frac{1}{2}\,\frac{H_0^2 a_0^3}{a^5}\left[ b^{(1)}\left(-4 \big(b^{(1)}\big)^2 + 6 a b^{(2)} + 3 a b^{(1)}\cos\theta\right) - 2 a^2 b^{(2)}\cos\theta \right] \tag{24}$$

$$S^{(4)} = -\frac{1}{2}\,\frac{H_0^2 a_0^3}{a^6}\left[ \big(b^{(1)}\big)^2\left(5 \big(b^{(1)}\big)^2 - 12 a b^{(2)} - 4 a b^{(1)}\cos\theta\right) + 6 a^2 b^{(1)}\left(b^{(3)} + b^{(2)}\cos\theta\right) + 3 a^2 \big(b^{(2)}\big)^2 - 2 a^3 b^{(3)}\cos\theta \right]. \tag{25}$$
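The quadratic source term can be cross-checked numerically with a small truncated-power-series helper (a pure Python sketch; `mul` and `inv` are hypothetical names, not from the paper):

```python
# Check that the Delta^2 coefficient of (1 + Delta cos(theta))/b^2, with
# b = a + Delta b1 + Delta^2 b2, reproduces the bracket of S^(2) in eq. (23)
# (up to the common prefactor -H0^2 a0^3 / 2).
import math

def mul(p, q, order=3):
    """Product of two Taylor series in Delta, truncated at Delta^order."""
    r = [0.0] * order
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j < order:
                r[i + j] += pi * qj
    return r

def inv(p, order=3):
    """Reciprocal series 1/p, computed term by term."""
    r = [1.0 / p[0]]
    for k in range(1, order):
        r.append(-sum(p[j] * r[k - j] for j in range(1, k + 1)) / p[0])
    return r

# Arbitrary numeric values standing in for a(t), b^(1), b^(2) and theta.
a, b1, b2, theta = 1.7, 0.4, -0.3, 2.82
c = math.cos(theta)
b = [a, b1, b2]                                  # b = a + D b1 + D^2 b2
rhs = mul([1.0, c, 0.0], mul(inv(b), inv(b)))    # (1 + D c) / b^2

# Delta^2 coefficient with the linear b2 piece moved to the left-hand side:
lhs_coeff = rhs[2] + 2.0 * b2 / a**3
expected = b1 * (3.0 * b1 - 2.0 * a * c) / a**4
print(abs(lhs_coeff - expected) < 1e-12)
```

Carried to higher truncation order, the same machinery can be used to check the cubic and quartic brackets as well.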
These terms can be easily generated by symbolic manipulation software. The initial conditions are
$$b^{(1)}(t_0) = 0 \tag{26}$$

$$\dot b^{(1)}(t_0) = \dot a_0 \sin\theta \tag{27}$$
and for n > 1
$$b^{(n)}(t_0) = 0 \tag{28}$$

$$\dot b^{(n)}(t_0) = 0. \tag{29}$$
The ordinary differential equations for b^(n) may be solved order-by-order. To summarise, the structure of the hierarchy and the simplicity of the initial conditions allow the solution at any given order to be evaluated in terms of the lower-order solutions. This yields a formal expansion for the scale factor of the sphere
$$b = \sum_{n=0}^{\infty} b^{(n)}(t)\,\Delta^n \tag{30}$$
which encapsulates the Lagrangian perturbation treatment. The right hand side explicitly depends upon the size of the perturbation and time, and implicitly upon a0, H0, and θ. This hierarchy of equations is identical to that generated by the full formalism developed by Buchert and collaborators when it is applied to the top-hat problem. The convergence properties in time and in ∆ are distinct; a simple illustrative example of this phenomenon is presented in Appendix B.
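As a numerical sanity check on the hierarchy, one can integrate the first-order eq. (21) alongside the full eq. (9) and confirm that the error of the first-order truncation scales as ∆². A sketch in assumed units H0 = a0 = 1 (so t0 = 2/3), using the angle θ = 2.82 of figure 1:

```python
import math

def rk4_step(y, t, h, deriv):
    """One classical Runge-Kutta step for a first-order system."""
    k1 = deriv(y, t)
    k2 = deriv([yi + 0.5 * h * ki for yi, ki in zip(y, k1)], t + 0.5 * h)
    k3 = deriv([yi + 0.5 * h * ki for yi, ki in zip(y, k2)], t + 0.5 * h)
    k4 = deriv([yi + h * ki for yi, ki in zip(y, k3)], t + h)
    return [yi + h * (a + 2 * b + 2 * c + d) / 6.0
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def run(Delta, theta=2.82, t0=2.0 / 3.0, t1=1.0, n=2000):
    """Integrate exact eq. (9) and first-order eq. (21) together; return the
    residual b_exact - (a + Delta * b1) at t1."""
    c, s = math.cos(theta), math.sin(theta)
    def deriv(y, t):
        a = (1.5 * t) ** (2.0 / 3.0)           # background, eq. (20)
        b, bd, b1, b1d = y
        return [bd, -0.5 * (1.0 + Delta * c) / b**2,
                b1d, b1 / a**3 - 0.5 * c / a**2]
    y = [1.0, 1.0 + Delta * s, 0.0, s]         # eqs. (15), (16), (26), (27)
    h = (t1 - t0) / n
    t = t0
    for _ in range(n):
        y = rk4_step(y, t, h, deriv)
        t += h
    a1 = (1.5 * t1) ** (2.0 / 3.0)
    return y[0] - (a1 + Delta * y[2])

# Halving Delta shrinks the residual roughly 4x: the truncation error is O(Delta^2).
print(run(1e-2) / run(5e-3))
```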
3 CONVERGENCE PROPERTIES OF THE LPT SERIES SOLUTION
The series solution outlined in the previous section does not converge at all times. Figure 1 is a practical demonstration of this non-convergence for the case of an expanding void. An understanding of the convergence of the LPT series is achieved by extending the domain of the expansion variable ∆ from the real positive axis to the complex plane.
3.1 Complexification
The differential eq. (9) and initial conditions for the physical system are

$$\ddot b(t) = -\frac{1}{2}\,\frac{H_0^2 a_0^3 (1+\Delta\cos\theta)}{b(t)^2}, \qquad b(t_0) = a_0, \qquad \dot b(t_0) = \dot a_0 (1+\Delta\sin\theta) \tag{31}$$

where t, b(t), ∆ and all zero-subscripted quantities are real. This set may be extended by allowing ∆ and b to become complex quantities, denoted hereafter $\tilde\Delta$ and $\tilde b$, while the rest of the variables remain real. The complex set is

$$\ddot{\tilde b}(t) = -\frac{1}{2}\,\frac{H_0^2 a_0^3 (1+\tilde\Delta\cos\theta)}{\tilde b(t)^2}, \qquad \tilde b(t_0) = a_0, \qquad \dot{\tilde b}(t_0) = \dot a_0 (1+\tilde\Delta\sin\theta). \tag{32}$$
The theory of differential equations (for example, Chicone 2006) guarantees that the solution to a real initial value problem is unique and smooth in the initial conditions and parameters of the equation, and can be extended in time as long as there are no singularities in the differential equation (hereafter, the maximum extension of the solution). First, note that each complex quantity in eq. (32) may be represented by a real pair, i.e. $\tilde b = u + iv$ with $\{u, v\} = \{\Re\,\tilde b, \Im\,\tilde b\}$ and $\tilde\Delta = x + iy$ with $\{x, y\} = \{\Re\,\tilde\Delta, \Im\,\tilde\Delta\}$. The basic theory implies continuity and smoothness of the solution components u and v with respect to initial conditions and parameters x and y. Second, observe that the Cauchy-Riemann conditions ux = vy and uy = −vx are preserved by the form of the ordinary differential equation. Since the initial conditions and parameter dependence are holomorphic functions of ∆ it follows that b(t, ∆) is a holomorphic function of ∆ at times t within the maximum extension of the solution.
Inspection shows that the differential equation is singular only at b = 0. For a particular value ∆ = ∆′, the solution to the initial value problem can be extended to a maximum time tmx such that b(∆′, tmx) = 0, or to infinity. The existence of a finite tmx signals that a pole of the complex analytic function b(∆, t) forms at ∆ = ∆′ and t = tmx. For times t such that t0 ≤ t < tmx, the solution b(∆, t) is analytic in a small neighbourhood around the point ∆′. Of course, there may be poles elsewhere in the complex ∆ plane.
The relationship between the original, real-valued physical problem and the complexified system is the following. In the original problem ∆ is a real, positive quantity at t0. LPT is a power series expansion in ∆ about the origin (the point ∆ = 0). LPT's convergence at any time t can be understood by study of the complexified system. Consider the complex disk D centred on the origin with radius ∆. At t0 each point in D determines a trajectory b(∆, t) for the complexified system, extending to infinity or limited to finite time t = tmx(∆) because of the occurrence of a pole. The time of validity is defined as T(∆) = min_D tmx, i.e. the minimum tmx over the disk. Since there are no poles in D at t0, the time of validity is the span of time during which D remains clear of any singularities. If a function of a complex variable is analytic throughout an open disk centred on a given point in the complex plane then the series expansion of the function around that point is convergent (Brown & Churchill 1996). The LPT expansion for the original problem converges for times less than the time of validity because the complex extension b(∆, t) is analytic throughout D for t < T(∆). If ∆1 < ∆2 then, in an obvious notation, the disks are nested, D(∆1) ⊂ D(∆2), and the times of validity are ordered T(∆1) ≥ T(∆2).
This idea is shown in figure 3. No singularities are present for the initial conditions at t0; at t1 a singularity is present outside the disk but it does not prevent the convergence of the LPT expansion with ∆ equal to the disk radius shown; at t2 a singularity is present in the disk or on its boundary and it may interfere with convergence.
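The disk criterion at work here is the familiar behaviour of power series: an off-axis pole limits convergence even where the function is perfectly smooth on the real axis. A toy illustration (not from the paper):

```python
# f(x) = 1/(1 + x^2) is smooth for all real x, yet its Taylor series about 0
# converges only for |x| < 1 because of the poles at x = +/- i.
def partial_sum(x, n):
    # Taylor series: sum_k (-1)^k x^(2k)
    return sum((-1)**k * x**(2 * k) for k in range(n))

inside  = abs(partial_sum(0.5, 50) - 1.0 / (1.0 + 0.5**2))   # converges
outside = abs(partial_sum(1.5, 50) - 1.0 / (1.0 + 1.5**2))   # diverges
print(inside < 1e-12, outside > 1e10)
```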
A distinct but related concept is the maximum amplitude perturbation for which the LPT expansion converges at the initial time and at all intermediate times up to a given time. The radius of convergence R∆(t) is the maximum disk radius ∆ for which t > T(∆). Because the disks are nested, if t1 < t2 then R∆(t1) ≥ R∆(t2).

Figure 3. This figure is a schematic illustration of how the time of validity is determined. The initial conditions imply a specific, real ∆ at time t0. The LPT series is an expansion about ∆ = 0, convergent until a pole appears at some later time within the disk of radius ∆ (shown in cyan) in the complex ∆ plane. Typically, the pole's position forms a curve (blue dashed) in the three dimensional space (ℜ[∆], ℑ[∆], t). The black dots mark the pole at times t1 and t2. At t1 the pole does not interfere with the convergence of the LPT series; at t2 it does. The time of validity may be determined by a pole that appears within the disk without moving through the boundary (not illustrated).
The time of validity and the radius of convergence are inverse functions of each other. If the initial perturbation is specified, i.e. ∆ is fixed, and the question to be answered is "how far into the future does LPT work?" then the time of validity gives the answer. However, if the question is "how big an initial perturbation will be properly approximated by LPT over a given time interval?" then the radius of convergence provides the answer.
Finally, note that one can trivially extend this formalism to deal with time intervals in the past.
3.2 Calculating radius of convergence and time of validity
The following recipe shows how to calculate the radius of convergence R∆(t) and the time of validity T (∆) efficiently. Fix a0, H0, t0 and θ; these are all real constants set by the initial conditions. Assume that it is possible to find b(∆, t) for complex ∆ and real t by solving eq. (32). There exist explicit expressions for b as will be shown later. Start with t = t0 and R∆(t) = ∞. The iteration below maps out R∆(t) by making small increments in time δt.
• Store old time tprevious = t, choose increment δt and form new time of interest t = tprevious + δt.
• Locate all the ∆ which solve b(∆, t) = 0. The roots correspond to poles in the complex function. Find the root closest to the origin and denote its distance as |∆near|.
• The radius of convergence is R∆(t) = min(|∆near|, R∆(tprevious)).
• Continue.
Since R∆ is decreasing, the inversion to form T (∆) is straightforward. Figure 4 shows a schematic cartoon of the construction process.
Figure 4. Schematic of the construction. Left: roots of b(∆, t) = 0 in the complex ∆ plane at successive times t1 and t2. Right: the root plot of |∆| against t, from which R∆(t) and T(∆) are read off.
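The bookkeeping of the recipe reduces to a running minimum over the nearest-pole distance, followed by an inversion. A sketch with made-up pole-distance data (a real calculation would supply |∆near|(t) from the parametric solution of §4):

```python
def radius_of_convergence(nearest):
    """Running minimum of the nearest-pole distance: R_Delta(t)."""
    R, current = [], float('inf')
    for d in nearest:
        current = min(current, d)
        R.append(current)
    return R

def time_of_validity(Delta, times, R):
    """First time the disk of radius Delta contains a pole."""
    for t, r in zip(times, R):
        if r <= Delta:
            return t
    return float('inf')

times   = [1, 2, 3, 4, 5]
nearest = [9.0, 4.0, 6.0, 2.0, 3.0]        # made-up |Delta_near| at each time
R = radius_of_convergence(nearest)          # [9.0, 4.0, 4.0, 2.0, 2.0]
print(R, time_of_validity(3.0, times, R))   # T(3.0) = 4
```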
4 EXPLICIT SOLUTIONS
The usual parametric representation provides an efficient method to construct an explicit complex representation for b(∆, t).
4.1 Real (physical) solutions
The original system eq. (31) depends upon a0, H0, θ and ∆. The assumed Einstein-de Sitter background has a0 > 0 and ȧ0 > 0; as defined, the perturbation amplitude ∆ ≥ 0 and the relative density and velocity components are determined by phase angle θ with −π < θ ≤ π. The quantity (1 + ∆ cos θ) is proportional to the total density and must be non-negative. The sign of ḃ0 is the sign of 1 + ∆ sin θ and encodes expanding and contracting initial conditions.
Briefly reviewing the usual physical solution, the integrated form is

$$\dot b^2 = H_0^2 a_0^3 \left[\frac{1+\Delta\cos\theta}{b} + \frac{(1+\Delta\sin\theta)^2 - (1+\Delta\cos\theta)}{a_0}\right]. \tag{33}$$
The combination
$$E(\Delta, \theta) = (1+\Delta\sin\theta)^2 - (1+\Delta\cos\theta) \tag{34}$$
is proportional to the total energy of the system. If E > 0 the model is open and if E < 0 it is closed and will re-collapse eventually. There are four types of initial conditions (positive and negative E, positive and negative ḃ0) and four types of solutions, shown schematically in figure 5. The solutions have well-known parametric forms involving trigonometric functions of angle η or iη (see Appendix C). The convention adopted here is that the singularity nearest the initial time t0 coincides with η = 0 and is denoted $t^+_{\rm bang}$ ($t^-_{\rm bang}$) for initially expanding (contracting) solutions (see figure 5). The time interval between the singularity and t0 is $t_{\rm age} = |t_0 - t^\pm_{\rm bang}| \geq 0$.

Figure 5. The left (right) panel illustrates initially expanding (contracting) models. $t^\pm_{\rm bang}$ corresponds to η = 0; $t_{\rm coll}$ to η = 2π. For expanding solutions $t_{\rm age} = t_0 - t^+_{\rm bang}$ is the time interval since the initial singularity and $t_{\rm coll}$ is the future singularity for closed models. For contracting solutions $t_{\rm age} = t^-_{\rm bang} - t_0$ is the time until the final singularity and $t_{\rm coll}$ is the past singularity for closed models.
The parametric solution for the models can be written as
$$b(\eta, \Delta, \theta) = \frac{a_0}{2}\,\frac{1+\Delta\cos\theta}{[-E(\Delta,\theta)]}\,(1-\cos\eta)$$

$$t(\eta, \Delta, \theta) = t_0 \pm \left\{\frac{1}{2H_0}\,\frac{1+\Delta\cos\theta}{[-E(\Delta,\theta)]^{3/2}}\,(\eta - \sin\eta) - t_{\rm age}(\Delta,\theta)\right\}. \tag{35}$$
The plus and minus signs give the solution for initially expanding and initially contracting models respectively. Parameter η is purely real for closed solutions and purely imaginary for open solutions. The distance to the nearest singularity is
$$t_{\rm age} = \int_{b=0}^{b=a_0} \frac{db}{[\dot b^2]^{1/2}} = \frac{1}{H_0}\int_{y=0}^{y=1} \frac{dy}{\left[(1+\Delta\cos\theta)\,y^{-1} + E(\Delta,\theta)\right]^{1/2}}. \tag{36}$$
The second equality uses eq. (33) and the substitution y = b/a0.
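Eq. (36) is immediate to evaluate by quadrature. A sketch in units H0 = 1 (midpoint rule; the integrable √y behaviour at y = 0 is harmless) that recovers the Einstein-de Sitter age 2/(3H0) when ∆ = 0:

```python
import math

def t_age(Delta, theta, n=20000):
    """Midpoint-rule evaluation of eq. (36) in units H0 = 1."""
    c = 1.0 + Delta * math.cos(theta)
    E = (1.0 + Delta * math.sin(theta))**2 - c
    h = 1.0 / n
    return sum(h / math.sqrt(c / ((i + 0.5) * h) + E) for i in range(n))

print(abs(t_age(0.0, 0.0) - 2.0 / 3.0) < 1e-4)   # unperturbed age = 2/(3 H0)
print(t_age(0.1, 0.0) < 2.0 / 3.0)               # denser sphere reaches a0 sooner
```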
4.2 Complex extension
To extend the above parametric solution to the complex plane, one might guess the substitution ∆ → ∆e^{iφ} with −π < φ ≤ π in eq. (35) and eq. (36). The physical limit is φ = 0. However, this leads to two problems. First, the integral for tage can have multiple extensions that agree for physical φ = 0 but differ elsewhere, including on the negative real axis. This is tied to the fact that the operations of integration and substitution ∆ → ∆e^{iφ} do not commute because of the presence of the square root in the expression for tage. A second, related problem is the presence of multiple square roots in the parametric form for t. These give rise to discontinuities along branch cuts, such that one parametric form need not be valid for the entire range of φ; instead the solution may switch between different forms. Directly extending the parametric solution is cumbersome. However, the original differential eq. (32) is manifestly single-valued. The equation can be integrated forward or backward numerically to obtain the correct solution for complex ∆. One can then match the numerical solution to the above parametric forms to select the correct branch cuts. This procedure was implemented to obtain the form for all ∆ and θ. The main result is that the solution space for all θ and ∆ is completely spanned by complex extensions of the two real parametric forms which describe initially expanding and contracting solutions. The expressions for tage and further details are given in Appendix C3.
The traditional textbook treatment relating physical cosmological models with real Ω > 1 and Ω < 1 typically invokes a discrete transformation η → iη in the parametric forms and one verifies that this exchanges closed and open solutions. However, starting from the second order differential equation it is straightforward to use the same type of reasoning as above to construct an explicit analytic continuation from one physical regime to the other.
In addition, note that the differential equation and its solution remain unchanged under the simultaneous transformations ∆ → −∆ and θ → θ + π. Every complex solution with −π < θ 0 can be mapped to a complex solution with 0 < θ π and vice-versa. For determining the radius of convergence and the time of validity the whole disk of radius |∆| is searched for poles so it suffices to consider a restricted range of θ to handle all physical initial conditions.
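The stated invariance is elementary to confirm, e.g.:

```python
import math

# The mirror map (Delta, theta) -> (-Delta, theta + pi) leaves the physical
# perturbations delta = Delta cos(theta), delta_v = Delta sin(theta) unchanged.
D, th = 0.7, 0.44
assert math.isclose(D * math.cos(th), -D * math.cos(th + math.pi))
assert math.isclose(D * math.sin(th), -D * math.sin(th + math.pi))
print("mirror map preserves (delta, delta_v)")
```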
4.3 Poles
The condition b = 0 signals the presence of a pole. Inspection of the parametric form shows that this condition can occur only when η = 0 or η = 2π. The corresponding time
$$t(\Delta, \theta) = \begin{cases} t_0 \pm \left[\dfrac{\pi\,(1+\Delta\cos\theta)}{H_0\,[-E(\Delta)]^{3/2}} - t_{\rm age}(\Delta)\right] & (\eta = 2\pi)\\[1.5ex] t_0 \mp t_{\rm age}(\Delta) & (\eta = 0) \end{cases} \tag{37}$$
is immediately inferred. Since the independent variable t is real, the transcendental equation

$$\Im\, t(\Delta, \theta) = 0 \tag{38}$$

must be solved.
must be solved. It is straightforward to scan the complex ∆ plane and calculate t to locate solutions. Each solution gives a root of b = 0 and also implies the existence of a pole at the corresponding ∆. Note that relying upon the parametric solutions is a far more efficient method for finding the poles than integrating the complex differential equations numerically. We have verified that both methods produce the same results.
In practice, we fix θ, scan a large area of the complex ∆ plane, locate all purely real t and save the {∆, t} pairs. These are used to create a scatter plot of |∆| as a function of time (hereafter the "root plot"). Generally, the location of the poles varies smoothly with t and continuous loci of roots are readily apparent. Finding R∆ and T (∆) follows as indicated in figure 4.
5 RESULTS FROM THE COMPLEX ANALYSIS
Root plots were calculated for a range of angles 0 ≤ θ ≤ π. Since the root plots depend upon |∆| they are invariant under θ → θ − π and this coverage suffices for all possible top-hat models. For the results of the full survey in θ see Appendix D. The theoretical radius of convergence R∆(t) and time of validity T(∆) follow directly.
This section analyses the theoretical convergence for specific open and closed models derived from the root plots. These estimates are compared to the time of validity inferred by numerical evaluation of the LPT series. The range of models with limited LPT convergence is characterised. The concept of mirror models is introduced to elucidate a number of interconnections between open and closed convergence. The physical interpretation of roots introduced by the complexification of the equations but lying outside the physical range is discussed. Finally, the special case where the background and the perturbation have the same big bang time is analysed.

5.1 Open models

Figure 6 shows R∆(t) for θ = 2.82 and initial scale factor a0 = 10⁻³. All ∆ yield expanding open models for this θ; one choice corresponds to the model whose LPT series appeared in figure 1 (∆ = 0.01, θ = 2.82, a0 = 10⁻³). The x-axis is log a and is equivalent to a measure of time. The y-axis is log |∆|, i.e. the distance from the origin to poles in the complex ∆ plane. In principle, future evolution may be limited by real or complex roots. The blue solid line and the red dotted line indicate real and complex roots of η = 2π respectively. The cyan dashed and pink dot-dashed lines indicate the real and complex roots of η = 0 respectively. Future evolution is constrained by real roots (blue and cyan) in this example.
The time of validity is the first instance when a singularity appears within the disk of radius ∆ in the complex ∆ plane. For the specific case, starting at ordinate ∆ = 10 −2 , one moves horizontally to the right to intersect the blue line and then vertically down to read off the scale factor av = a[T (∆)] = 0.179. The time of validity inferred from the root plot agrees quantitatively with the numerical results in figure 1.
Appendix D presents a comprehensive set of results. The time of validity is finite for any open model. As expected, smaller amplitudes imply longer times of validity. The poles do not correspond to collapse singularities reached in the course of normal physical evolution, since the open models do not have any real future singularities. A hint of an explanation is already present, however. The green dashed line is δv = 1 (or ∆ = 1/sin θ), at which point the root switches from η = 2π below to η = 0 above. Such a switch might occur if varying the initial velocity transposes an expanding closed model into a contracting closed model. But it is expected to occur at δv = −1, not 1. The open models are apparently sensitive to past and future singularities in closed models with initial conditions that are transformed in a particular manner. §5.3 explores this interpretation in detail.

Figure 7. For closed models the scale factor at the time of collapse is ac. The blue solid line and red small dashed line denote real and complex roots of η = 2π, respectively. The cyan dashed line denotes the real roots of η = 0. When the first singularity encountered is real, av = ac, the time of validity is the future time of collapse. However, when the singularity is complex the time of validity is less than the actual collapse time. In the range ∆rc < ∆ < ∆E=0, there are closed models with av < ac.
5.2 Closed models
Figure 7 presents R∆(t) for models with θ = 0.44 and a0 = 10⁻³. There are several new features. Over the angular range $\theta_c^- < \theta < \theta_c^+$ the cosmology is closed for small ∆ (see the shaded region in figure 2 near ∆ = 0). Conversely, a straight line drawn from ∆ = 0 within this angular range must eventually cross the parabola E = 0, except for the special case θ = 0. Since the velocity contribution to the energy scales as E ∝ ∆² while the density contribution scales as −∆, it is clear that eventually E > 0 as ∆ increases. The critical value, ∆E=0, is a function of θ. Below the brown horizontal dot-dashed line in figure 7 the models are closed, above they are open (line labelled ∆ = ∆E=0).
The root plot has, as before, blue solid and red dotted lines denoting the distance to real and complex ∆ poles, respectively, for η = 2π. The cyan dashed line denotes real roots for η = 0 and does not restrict future evolution.
For small ∆ real roots determine the time of validity. These roots correspond exactly to the model's collapse time. In other words, the time of validity is determined by the future singularity. For example, for ∆ = 0.01, the root plot predicts that a series expansion should be valid until the collapse at a = 5.5, denoted by "av = ac" on the x-axis. This prediction is confirmed in the left hand panel of figure 8. The root diagram is consistent with the qualitative expectation that small overdensities should have long times of validity because collapse times are long: lim∆→0 T(∆) → ∞.
As ∆ increases from very small values, i.e. successively larger initial density perturbations, the collapse time decreases. Eventually the velocity perturbation becomes important so that at ∆ = ∆rc a minimum in the collapse time is reached. For ∆E=0 > ∆ > ∆rc the collapse time increases while the model remains closed. As ∆ → ∆E=0 the collapse time becomes infinite and the model becomes critical. All models with ∆ > ∆E=0 are open. The root diagram shows that for ∆ > ∆rc, the time of validity is determined by complex not real ∆ for η = 2π. Closed models with ∆rc < ∆ < ∆E=0 have a time of validity less than the model collapse time. For example, for ∆ = 0.2, the collapse occurs at a = 0.94 but convergence is limited to a ≲ 0.38. This prediction is verified in the right panel of figure 8.
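Both collapse predictions quoted above can be checked directly from eqs. (36) and (37) (a sketch in units H0 = 1 with a0 = 10⁻³, initially expanding branch; `t_age` and `a_collapse` are hypothetical helper names):

```python
import math

def t_age(Delta, theta, n=20000):
    # Eq. (36) by midpoint rule, units H0 = 1.
    c = 1.0 + Delta * math.cos(theta)
    E = (1.0 + Delta * math.sin(theta))**2 - c
    h = 1.0 / n
    return sum(h / math.sqrt(c / ((i + 0.5) * h) + E) for i in range(n))

def a_collapse(Delta, theta, a0=1e-3):
    """Background scale factor at the eta = 2pi (future) singularity of an
    initially expanding closed model, from eqs. (37) and (20)."""
    c = 1.0 + Delta * math.cos(theta)
    E = (1.0 + Delta * math.sin(theta))**2 - c     # must be < 0 (closed)
    t_coll = 2.0 / 3.0 + math.pi * c / (-E)**1.5 - t_age(Delta, theta)
    return a0 * (1.5 * t_coll) ** (2.0 / 3.0)

# Delta = 0.01 collapses near a = 5.5; Delta = 0.2 near a = 0.94 (cf. figure 7).
print(a_collapse(0.01, 0.44), a_collapse(0.2, 0.44))
```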
The convergence of LPT expansions for some closed models is limited to times well before the future singularity. This general behaviour is observed for $\theta_c^- < \theta < \theta_c^+$ and ∆rc < ∆ < ∆E=0, where both ∆rc and ∆E=0 are functions of θ. Appendix D provides additional details.
5.3 Mirror models, real and complex roots
The parametrization of the perturbation in terms of ∆ > 0 and −π < θ ≤ π and the complexification of ∆ can give rise to poles anywhere in the complex ∆ space. When R∆ is determined by a pole along the real positive axis, a clear interpretation is possible: the future singularity of the real physical model exerts a dominant influence on convergence. LPT expansions for closed models with ∆ < ∆rc are limited by the future collapse of the model and are straightforward to interpret.
The meaning of real roots for open models is less clear cut. The roots determining R∆ at large t are negative real and small in magnitude. Negative ∆ lies outside the parameter range for physical perturbations taken to be ∆ > 0. Nonetheless the mapping (∆, θ) → (−∆, θ ± π) preserves (δ, δv) and the original equations of motion. The poles of the models with parameters (∆, θ) and (∆, θ ± π) are negatives of each other. Let us call these "mirror models" of each other.
For infinitesimal ∆ if the original model is open then the mirror model is closed. Figure 2 shows that the ∆E=0 line has some curvature (in fact, it is a parabola) whereas the mirror mapping is an exact inversion through ∆ = 0. The notion of mirror models explains other features of the root diagrams. The time of validity of open models was previously discussed using figure 6 (θ = 2.82). The blue solid line indicated real roots. Such roots are the future singularities of closed mirror models lying in the fourth quadrant along θ = 2.82 − π = −0.32. As ∆ increases the sequence of mirror models crosses the δv = −1 line (the horizontal dashed line) to become initially contracting cosmologies and, in our labelling, the future singularity switches from η = 2π to η = 0. This explains the switch in root label from blue solid to cyan dashed seen in figure 6, which occurs at δv = 1 in the original model.
The symmetry of the mirroring is not limited to cases when ∆ is real. It applies for complex ∆, too. For example, the models in the right panels of figures 8 and 9 are mirrors of each other. Their time of validity is the same and determined by complex roots which are negatives of each other. These singularities are non-physical and have no interpretation in terms of the collapse of any model yet they limit the LPT convergence in the same way. Figure 10 shows the areas of phase space where complex roots determine the time of validity in light red. The area within the parabola (light blue) contains closed models. Most of the light blue region has a time of validity determined by real roots, i.e. the time to the future singularity. The area with both light blue and red shading encompasses closed models with the unexpected feature that the time of validity is less than the time to collapse.
The area outside the parabola contains open models. The time of validity of the unshaded region is determined by real roots. The original observation of LPT's non-convergence for an underdensity (Sahni & Shandarin 1996) is an example that falls in this region. For small amplitude perturbations the time of validity is simply related by mirror symmetry to the occurrence of future singularities of closed models. The right hand plot in figure 9 is an example of an open model with time of validity controlled by complex roots (red shading outside the parabola).
Finally, some open models (especially those with large ∆) have mirrors that are open models. Figure 11 shows mirror models (∆ = 2, θ = 17π/36) and (∆ = 2, θ = 17π/36−π). These are initially expanding and contracting solutions respectively. The root plot in figure 12 predicts that the series is valid until av = 0.0016. The real root with η = 0 (cyan line) sets the time of validity and corresponds to the bang time (the future singularity) of the initially contracting model.
In all cases, the analysis correctly predicts the convergence of the LPT series.
Zeldovich and equal bang time models
The large expanse of phase space shaded light red in figure 10 suggests that complex roots should play a ubiquitous role in LPT applications but the situation is somewhat more subtle. For good physical reasons purely gravitational cosmological calculations often start with expanding, small amplitude, growing modes at a finite time after the big bang. The absence of decaying modes implies that the linearized perturbations decrease in the past¹. A non-linear version of this condition is that the perturbation amplitude is exactly zero at t = 0. The same condition can be formulated as "the background and the perturbation have the same big bang time" or "the ages of the perturbation and the background are identical." The condition is
\frac{1}{H_0}\int_{y=0}^{y=1} \frac{dy}{\left[(1 + \Delta\cos\theta)\,y^{-1} + E(\Delta,\theta)\right]^{1/2}} = \frac{2}{3H_0}. \tag{39}
This is a nonlinear relationship between the two initial parameters ∆ and θ which is shown by a thick blue line on the phase space diagram in figure 10. We have adopted the name "Zeldovich" initial conditions for the top-hat models that satisfy the equal bang time relation. There are a variety of definitions for Zeldovich initial conditions given in the literature. Generally, these agree at linear order. This one has the virtue that it is simple and easy to interpret. Note that the blue curve does not intersect the region of phase space where complex roots occur except, possibly, near ∆ = 0.
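As a numerical sketch, the equal-age condition of eq. (39) can be solved directly for θ at small ∆. The code below assumes (as a reconstruction, not stated explicitly in this section) the parametrization δ = ∆ cos θ, δv = ∆ sin θ and the energy function E(∆, θ) = (1 + δv)² − (1 + δ), which reproduces the E = 0 parabola; the function names are illustrative. The root should land near θZ+ ≈ 2.82.

```python
import math

def equal_age_residual(theta, delta_amp, n=400):
    """Residual of eq. (39): the integral minus 2/3.

    Assumes (hypothetically) delta = D*cos(theta), delta_v = D*sin(theta),
    and E = (1 + delta_v)**2 - (1 + delta)."""
    A = 1.0 + delta_amp * math.cos(theta)           # coefficient of y**-1
    E = (1.0 + delta_amp * math.sin(theta))**2 - A  # energy-like term
    # substitute y = u**2 to remove the integrable 1/sqrt(y) singularity;
    # the integrand becomes 2 u**2 / sqrt(A + E u**2) on u in [0, 1]
    f = lambda u: 2.0 * u * u / math.sqrt(A + E * u * u)
    # composite Simpson's rule with n (even) panels
    h = 1.0 / n
    s = f(0.0) + f(1.0)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3.0 - 2.0 / 3.0

def solve_zeldovich_theta(delta_amp, lo=2.5, hi=3.1, tol=1e-10):
    """Bisect the equal-bang-time condition for theta in the second quadrant."""
    flo = equal_age_residual(lo, delta_amp)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = equal_age_residual(mid, delta_amp)
        if flo * fmid <= 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

theta_Z = solve_zeldovich_theta(0.01)
print(theta_Z)  # close to pi - arctan(1/3) ~ 2.82
```

At ∆ = 0 the integral equals 2/3 for every θ, so the condition only selects θ at finite amplitude; for ∆ = 0.01 the root sits within O(∆) of the small-amplitude solution.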
In the limit of small ∆ eq. (39) becomes

\Delta\,(3\sin\theta + \cos\theta) = 0. \tag{40}

The solutions are θ = θZ± where θZ+ = 2.82 and θZ− = θZ+ − π = −0.32. The second quadrant solution θZ+ corresponds to open models while its mirror in the fourth quadrant θZ− to closed models. Only when ∆ → 0 can complex roots approach the loci of Zeldovich initial conditions, and they intersect only in the degenerate limit.
In the next section, we will show that points starting close to the Zeldovich curve continue to stay near it as they move through phase space. Such models have real, not complex, roots. This implies that closed systems along the curve always have a convergent series solution. Hitherto, LPT convergence has been studied only for initial conditions close to the Zeldovich curve. This is why problems have been noted only in the case of voids. The existence of the complex roots is a new finding. All of the above is based on the spherical top-hat model which has a uniform density.
As emphasized above, there are good physical motivations for adopting Zeldovich-type initial conditions. The fact that cosmological initial conditions must also be inhomogeneous (i.e. Gaussian random fluctuations) is not captured by the top-hat model. One can imagine two extreme limiting cases for how the simple picture of top-hat evolution is modified. If each point in space evolves independently as a spherical perturbation then at any given time one expects to find a distribution of points along the Zeldovich curve. As time progresses this distribution moves such that the underdense points cluster around the attracting point (−1, 0.5) and overdense points move towards collapse. The distribution of initial density and velocity perturbations yields a cloud of points in phase space but complex roots never play a role because nothing displaces individual points from the Zeldovich curve. Each moves at its own pace but stays near the curve. Alternatively, it is well known that tidal forces couple the collapse of nearby points. These interactions amplify the initial inhomogeneities leading to the formation of pancakes and filaments. As time progresses motions transverse to the Zeldovich curve will grow. If these deviations are sufficient they may push some points into areas with complex roots. In a subsequent paper, we will explore these issues for general inhomogeneous initial conditions.
LPT RE-EXPANSION
To overcome the constraints above, an iterative stepping scheme that respects the time of validity is developed for LPT. The initial parameters at the first step determine the solution for some finite step size. The output at the end of the first step determines the input parameter values for the next step and so on.
The Algorithm
Choose the background (a0, H0, Ω0 = 1, Y0) and the perturbation (b0 = a0, Hp0, Ωp0, X0) at initial time t0. The perturbed model is fully characterised by Hp0 and Ωp0 or by δ0 = ρp0/ρ0 − 1 and δv,0 = Hp0/H0 − 1 or by ∆0 and θ0. Extra subscripts have been added to label steps.
LPT converges for times t < T(∆0, θ0). Use LPT to move forward to time t_* satisfying t0 < t_* < T(∆0, θ0). At t_*, the background and perturbed scale factors and time derivatives are a_*, b_*, ȧ_*, and ḃ_*. The fractional density and velocity perturbations with respect to the background are
\delta_* = (1 + \delta_0)\left(\frac{a_*}{b_*}\right)^{3} - 1 \tag{41}

\delta_{v,*} = \frac{\dot b_*/b_*}{\dot a_*/a_*} - 1. \tag{42}
Re-expand the perturbation around the background model as follows. First, let the time and Lagrangian coordinate for the background (inner edge of the unperturbed sphere) be continuous: t_1 = t_* and Y_1 = Y_0. These imply a_1 = a_* and ȧ_1 = ȧ_*, i.e. the scale factor and Hubble constant for the background are continuous. At the beginning of the first step we assumed a_0 = b_0. This is no longer true at the end of the first step. Define a new Lagrangian coordinate X_1 = X_0 b_*/a_*, a new scale factor b_1 = a_*, and a new scale factor derivative ḃ_1 = ḃ_* a_*/b_*. These definitions leave the physical edge of the sphere and its velocity unaltered:
r_{\mathrm{physical},*} = b_* X_0 = b_1 X_1 \tag{43}

\dot r_{\mathrm{physical},*} = \dot b_* X_0 = \dot b_1 X_1. \tag{44}
The re-definitions relabel the fluid elements with a new set of Lagrangian coordinates and re-scale the scale factor. The perturbation parameters are unchanged δ1 = δ * and δv,1 = δv, * because physical quantities are unmodified. Consequently, ∆1 = ∆ * and θ1 = θ * .
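The bookkeeping of eqs (41)-(44) can be collected into a single re-expansion step. The sketch below is a minimal implementation; the function name and sample numbers are illustrative, not from the paper.

```python
def reexpand(a_s, b_s, adot_s, bdot_s, delta0, X0):
    """One LPT re-expansion step: compute the perturbations at t_* (eqs 41-42)
    and relabel the Lagrangian coordinate and scale factor (eqs 43-44).
    Starred inputs are the outputs of the previous LPT step."""
    delta_s  = (1.0 + delta0) * (a_s / b_s)**3 - 1.0       # eq. (41)
    deltav_s = (bdot_s / b_s) / (adot_s / a_s) - 1.0       # eq. (42)
    X1    = X0 * b_s / a_s        # new Lagrangian coordinate
    b1    = a_s                   # new perturbed scale factor
    bdot1 = bdot_s * a_s / b_s    # new scale factor derivative
    return delta_s, deltav_s, b1, bdot1, X1

# the physical edge and its velocity are unchanged by the relabelling:
a_s, b_s, adot_s, bdot_s, delta0, X0 = 2.0, 1.8, 0.7, 0.5, 0.1, 1.0
d1, dv1, b1, bdot1, X1 = reexpand(a_s, b_s, adot_s, bdot_s, delta0, X0)
print(b_s * X0, b1 * X1)        # equal, eq. (43)
print(bdot_s * X0, bdot1 * X1)  # equal, eq. (44)
```

Because only labels change, δ1 = δ_* and δv,1 = δv,* follow automatically from the unchanged physical quantities.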
Flow dynamics in the phase space
To examine how Lagrangian re-expansion works consider how the Lagrangian parameters ∆ and θ would vary if they were evaluated at successive times over the course of a specific cosmological history. Let δ(t) and δv(t) be defined via eq. (3) and apply the second-order equations of motion eqs. (9) and (10) to derive the coupled first-order system
\frac{d\delta}{dt} = -\frac{2}{t}\,\delta_v\,(1+\delta) \tag{45}

\frac{d\delta_v}{dt} = \frac{1}{3t}\left\{(1+\delta_v)(1-2\delta_v) - (1+\delta)\right\} \tag{46}
where all occurrences of δ and δv are functions of time. From δ(t) and δv(t) one infers the parameters ∆(t) and θ(t). These have the following simple interpretation: a Lagrangian treatment starting at time t′ has ∆ = ∆(t′) and θ = θ(t′) in the LPT series.
Since the system is autonomous it reduces to a simple flow in phase space. The flow has three fixed points at (δ, δv) = (0, 0), the unperturbed, background model, (−1, −1), a vacuum static model, and (−1, 0.5), a vacuum expanding model. Linearizing around (0, 0) shows it is a saddle fixed point. The tangent to the E = 0 curve at the origin is the attracting direction and the tangent to the equal big bang curve is the repelling direction. The fixed point at (−1, 0.5) is a degenerate attracting node and that at (−1, −1) is an unstable node. The flow vectors are plotted in the left panel of figure 13. The blue shaded region indicates closed models and red shaded region indicates models where complex roots limit the time of validity for LPT.
Note that the flow lines smoothly cover the whole phase space. The interpretation is that the continuous relabelling of Lagrangian coordinates and re-scaling of the scale factor has the potential to overcome the convergence limitations discussed thus far. Otherwise one might have seen ill-defined or incomplete flows or flows that were confined to a given region.
Asymptotic limits of open and closed models
The right panel of figure 13 zooms in on the area near the origin. Initial points that correspond to open models starting near the origin approach the Zeldovich curve and asymptotically converge to the strong attractor at (δ, δv) = (−1, 0.5).
Closed models collapse and the density δ → ∞. In the asymptotic limit, the solution to (46) is given by δ ∼ δv² + K with integration constant K. From figure 13, the flow lines of closed models that start in the vicinity of the origin trace a parabolic path that is parallel and essentially equivalent to the Zeldovich curve.
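Since the right-hand sides of eqs (45)-(46) scale as 1/t, the flow is autonomous in s = ln t. A short integration sketch (RK4; names illustrative) confirms that an open model started on the growing mode δv = −δ/3 runs along the Zeldovich curve to the attracting node (δ, δv) = (−1, 0.5):

```python
def flow_rhs(delta, deltav):
    """Right-hand sides of eqs (45)-(46), rewritten with s = ln t."""
    ddelta  = -2.0 * deltav * (1.0 + delta)
    ddeltav = ((1.0 + deltav) * (1.0 - 2.0 * deltav) - (1.0 + delta)) / 3.0
    return ddelta, ddeltav

def integrate_flow(delta, deltav, s_max=40.0, h=0.01):
    """Classical fourth-order Runge-Kutta in s = ln t."""
    n = int(s_max / h)
    for _ in range(n):
        k1 = flow_rhs(delta, deltav)
        k2 = flow_rhs(delta + 0.5 * h * k1[0], deltav + 0.5 * h * k1[1])
        k3 = flow_rhs(delta + 0.5 * h * k2[0], deltav + 0.5 * h * k2[1])
        k4 = flow_rhs(delta + h * k3[0], deltav + h * k3[1])
        delta  += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        deltav += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return delta, deltav

# an open (underdense) model near the Zeldovich curve: delta_v = -delta/3
d, dv = integrate_flow(-0.1, 0.1 / 3.0)
print(d, dv)  # approaches the attracting node (-1, 0.5)
```

Linearizing the same system at the origin gives eigenvalues 2/3 and −1 in s, with the growing-mode eigenvector δv = −δ/3, consistent with the saddle structure described above.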
The flow shows where re-expansion is needed. Closed model flow lines that start near the origin never pass through the red shaded region where complex roots play a role; the time of validity equals the time to collapse and no re-expansion is needed. However, closed models that originate in the red region must be re-expanded. The flow suggests that they eventually move into the blue region. So even though a closed model may initially have an LPT series with limited convergence, re-expansion makes it possible to move into the part of phase space where a single step suffices to reach collapse.
Finite steps and feasibility
This section and the next examine the feasibility of extending a solution from recombination to today. The results will be applied to fully inhomogeneous evolution in a future paper.
Let the asymptotic time of validity for an open model be expressed in dimensionless form χ = lim_{t→∞} H(t) T(∆(t), θ(t)). Here ∆ → 5/4, θ → tan⁻¹(−1/2) = 2.677, and T(∆, θ) is determined by the future time to collapse of the closed mirror model. An example shows that the basic effect can be seen even before the asymptotic regime is achieved. Figure 14 sketches the first two steps where the assumed model parameters at the first step are (∆0, θ0) = (0.01, 2.82). The scale factor at the time of validity is a = 0.179. A step with half the allowed increment in time is taken and the system is re-initialised. The re-initialisation implies (∆1, θ1) = (0.91, 2.68) or (δ1, δv,1) = (−0.82, 0.4). Afterwards the new time of validity is larger in this example.
The feasibility of the re-expansion scheme can be examined by evaluating the ratio of the time of validity before (T) and after (T′) a step,

\alpha = \frac{T'}{T}. \tag{47}

Figure 15 shows α evaluated along the continuous flow as a function of scale factor for three different starting initial conditions. Since α > 3 at all times, starting at initial time t_i the time after N steps is roughly t ∼ α^N t_i > 3^N t_i. Let t_f (t_i) be the final (initial) time of interest, where t_f/t_i ∼ (a_f/a_i)^{3/2} ∼ 10^{4.5}. Estimating α = 3 implies N ∼ log_3 10^{4.5} ∼ 10 steps are needed. This numerical result for N is an overestimate and one can do better. It is important to recall that it is based on an arbitrarily high order expansion which achieves an exact solution. If one is limited to calculations of finite Lagrangian order and imposes a maximum numerical error at the end of the calculation then more than N steps may be required. At least N steps are needed for series convergence and more than N steps may be needed for error control.
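The geometric-progression estimate above amounts to a one-line calculation; the helper below (name illustrative) reproduces the ∼10-step figure for α ∼ 3 and a recombination-to-today time ratio of 10^4.5:

```python
import math

def n_steps(alpha, t_ratio):
    """Minimum number of re-expansion steps if each step multiplies
    the reachable time by alpha (a geometric progression)."""
    return math.ceil(math.log(t_ratio) / math.log(alpha))

# recombination to today: t_f / t_i ~ 10**4.5, with alpha ~ 3
print(n_steps(3.0, 10**4.5))  # 10
```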
One can extend any open model to an arbitrary future time while respecting the time of validity of the LPT series. The number of steps is governed by a geometric progression.
One can also extend any closed model to the future singularity while respecting the time of validity of the LPT series. Only a single step is needed for a closed model when the root is real (blue shaded region of figure 13). When it is complex (the region shaded both blue and red) the model flows first toward the node at (0, 0) (∆ decreases) and ultimately reaches the region of real roots. Multiple steps will generally be necessary to escape the region of complex roots. An approximate fit (eq. (D2)) shows that χ ∼ T(∆, θ)H(t) ∝ ∆^β for small ∆ where β < −2.5. Both χ and the time of validity increase as the node is approached. Time advances at least as quickly as a geometric progression and this is analogous to the manner in which the open model steps towards its limit point. However, unlike the open case, once the trajectory crosses into the blue region (assuming it does not lie exactly on the unstable attracting trajectory) a single final step is needed. The specific number of steps will depend upon the starting initial conditions but will be small because of the property of geometric progression.
Demonstrative examples
LPT re-expansion can solve the problematic convergence in previously analysed open and closed models.
Open models have asymptotic values of ∆ and θ and simple evolution. The first section below includes numerical results that provide a practical demonstration of the success of LPT re-expansion in this case. Convergence as Lagrangian order increases and/or time step size decreases is observed qualitatively.
Closed models have a somewhat more complex behaviour (before and after turnaround). The second section provides both a qualitative and quantitative discussion of convergence. The scaling of the leading order error and the time step control which are derived are of general applicability.

Open model

The first example is an open model whose time of validity is a = 0.179. The left panel shows an attempt to take a single step to a = 1 using successively higher LPT series orders. As expected, higher order terms do not improve the accuracy of the description because the time of validity is violated. The middle panel employs three steps to reach a = 1, each respecting the time of validity. Now the LPT series with higher order improves the accuracy just as one desires. The right panel employs six steps to reach a = 1, each respecting the time of validity. Again, higher order improves the description. Note that more frequent re-expansion, i.e. smaller steps in time, improves the errors at fixed LPT order.

Closed model

Figure 17 investigates the closed model introduced in figure 8 (∆ = 0.2, θ = 0.44, a0 = 10⁻³). The time of validity is determined by a complex root. The first panel shows that the series begins to diverge at a = 0.38, well before the collapse singularity is reached at a = 0.94.
A single time step less than the time of validity is guaranteed to converge as the order of the Lagrangian expansion increases. LPT re-expansion utilises a set of such time steps each of which is likewise guaranteed to converge. However, since a calculation of infinite order is never achieved in practice, it is worth characterising how convergence depends upon two calculational choices one has at hand, the time step and the order of the Lagrangian expansion.
A single small step beginning at t = t0 and ending at t_f has leading order error for the m-th order Lagrangian approximation² ∝ (t_f/t0 − 1)^{m+2} ∆^{m+1}, where ∆ is the value at the initial time. If the same small interval is covered in N smaller steps, the error after N steps scales as N^{−m}(t_f/t0 − 1)^{m+2} ∆^{m+1} (see Appendix E for details). If the step size increases in a geometric sequence such that δt/t is a constant for each intermediate step, then t_f = t0(1 + δt/t)^N and the error after N steps scales as N(t_f/t0 − 1)(δt/t)^{m+1} ∆^{m+1}. This leads to the interpretation that the error per intermediate step scales as (δt/t)^{m+1} ∆^{m+1}. Define ε = (δt/t)∆. The leading order error scales as ε^{m+1}, which is numerically small if ε < 1. The sum of all the missing higher order terms is finite if δt < T, i.e. it respects the time of validity.
In a practical application, the initial and final times are not close. A reasonable time step criterion is to choose ε < 1 fixed throughout the evolution and to infer δt for a given ∆. Other choices are possible but δt must always be less than the time of validity. If ε is held fixed throughout the evolution, then the net error after N steps for the m-th order approximation ∝ ε^{m+1} N.
The number of steps required to go from the initial to the final time can be estimated. As a special case assume that ∆ is constant. The time step criterion implies that the number of steps to move from the initial time t = t0 to the final time t_f for given ε is N = log(t_f/t0)/log(1 + (ε/∆)). For limited total intervals (t_f − t0 ≪ t0) and small steps (ε/∆ ≪ 1) the exact answer reduces to N ∼ (t_f − t0)∆/ε = (t_f − t0)/δt. Here δt = tε/∆ does not grow appreciably over the interval so the estimate for N is a maximum. In this limit, the net error ∝ ε^m ∆. The leading order error for the m-th order Lagrangian scheme decreases at least as quickly as ε^m. In more general situations the value of ∆ varies. Once the closed model turns around ∆ increases without bound. For fixed ε the step size δt decreases monotonically to zero as t → t_coll where t_coll is the time of the future singularity. At any order it would take infinitely many steps to follow the solution up until collapse. Consider the problem of tracking the solution up to a large, finite value of ∆ = ∆_f. This moment corresponds to a fixed time t_f ≲ t_coll in the exact solution. The number of steps N < N_max ∼ t_f/δt_f where δt_f is the step size for the system near ∆_f; δt_f ∝ ε/∆_f. The leading order error after N steps at the m-th Lagrangian order ∝ ε^{m+1} N < ε^{m+1} N_max ∼ ε^m ∆_f. This method of step control forces the leading order error at fixed time t_f < t_coll to decrease as the Lagrangian order m increases and/or the control parameter ε decreases.
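The step-count formula and its small-step limit can be checked directly. The sketch below (function name illustrative) evaluates N = log(t_f/t0)/log(1 + ε/∆) for constant ∆ and compares it with the limiting estimate N ∼ (t_f − t0)∆/ε:

```python
import math

def step_count(t0, tf, eps, Delta):
    """Number of steps with fixed control parameter eps = (dt/t)*Delta,
    i.e. each step takes dt = t*eps/Delta (Delta assumed constant)."""
    return math.log(tf / t0) / math.log(1.0 + eps / Delta)

# small-step / short-interval limit: N ~ (tf - t0)*Delta/eps
t0, tf, eps, Delta = 1.0, 1.01, 1e-4, 0.2
exact  = step_count(t0, tf, eps, Delta)
approx = (tf - t0) * Delta / eps
print(exact, approx)  # nearly equal in this limit
```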
The second and third panels in figure 17 show the runs with ε = 0.5 and ε = 0.2 respectively. The Lagrangian orders are colour-coded; dots show time steps determined by the above criterion. At each order the solution was terminated when the numerically determined ∆ > 100 so as to avoid the infinite step regime. This required 32 steps for the ε = 0.5 run and 77 steps for the ε = 0.2 run. As the red solid line illustrates, the first order solution turns around before all other solutions. This explains why its step size begins to shrink near the midpoint of the graph. By contrast, all the step sizes for higher order solutions are very similar up to that point.
The numerical errors may be analysed from two points of view.
(i) A comparison of different coloured lines (different Lagrangian orders) in a single panel shows that error decreases as m increases. This is true in a quantitative as well as qualitative sense. For example, in the second panel at a = 0.64 a plot of the log of the absolute error is approximately linear in m, as expected.
(ii) A comparison of the same coloured lines in the middle and right panels shows that smaller ε implies better accuracy. Again, this is true in a quantitative as well as qualitative sense. For example, the observed ratio of errors at a = 0.64 for the 9-th order calculations is 5 × 10⁻⁴. To evolve up to this time with ε = 0.5 (middle panel) takes 10 steps; with ε = 0.2 (right panel) it takes 22 steps. The expected ratio of errors is (0.2/0.5)^{9+1}(22/10) ∼ 2 × 10⁻⁴, the same order of magnitude as the observed ratio.
These comparisons lead to the important conclusion that the leading order error for LPT re-expansion varies with Lagrangian order and time step as theoretically expected.
It is clear that considerable benefit accrues not only from implementing higher order Lagrangian schemes but also by limiting time step size (which must always be less than the time of validity). For simple examples like the top-hat it is feasible to work to very high Lagrangian order but this is not likely to be true in the context of more complicated, inhomogeneous problems. On the other hand, marching forward by many small time steps using LPT re-expansion is generally feasible. In the example above the initial perturbation is ∆ = 0.2 whereas a practical calculation starting at recombination would start with ∆ ∼ 10⁻⁵. For the same ε the practical application requires more steps for the phase before turnaround but the net increase is only a modest logarithmic factor. In fact, most of the steps in the example were taken after turnaround and the total number varies with the depth of the collapse. This will continue to be true for the practical calculation. The choice of step size and order for such applications will be the subject of a forthcoming paper.
CONCLUSION
We have investigated the time of validity of Lagrangian perturbation theory for spherical top-hat cosmologies with general initial conditions. Using techniques from complex analysis we showed that the time of validity is always limited for open models. We also discovered a class of closed models whose time of validity is less than their time to collapse. We introduced the concept of the mirror model and derived a symmetry principle for the time of validity of mirror models. For small initial perturbations the time of validity of LPT series expansion of an open model corresponds to the collapse time of a closed mirror model.
A qualitative analogy is useful. A single LPT series expansion is similar to a single step in a finite difference approximation for advancing a hyperbolic partial differential equation like the wave equation. The time of validity of the LPT expansion is analogous to the Courant condition which guarantees stability. In LPT the constraint is an acceleration-related time-scale; in the wave equation it is a sound-crossing time-scale.
We developed the method of LPT re-expansion which overcomes the limitations intrinsic to a single expansion. We demonstrated how to iteratively re-expand the solution so as to link convergent series expressions that extend from initial to final times. The time of validity of the expansions set the minimum number of re-expansion steps (∼ 10) necessary for cosmological simulations starting at recombination and proceeding to the present epoch. Finite as opposed to infinite order Lagrangian expansions required extra steps to achieve given error bounds. We characterised how the leading order numerical error for a solution generated by LPT re-expansion varied with the choice of Lagrangian order and of time step size. We provided a recipe for time step control for LPT re-expansion based on these results.
Our long-term goal and motivation for this study is to develop a numerical implementation of LPT re-expansion for fully inhomogeneous cosmological simulation. Top-hats with Zeldovich initial conditions have special properties with respect to LPT convergence. We found that all underdense models must be treated by re-expansion while none of the overdense ones need be. However, during the course of an inhomogeneous simulation the density and irrotational velocity perturbations (with respect to a homogeneous background cosmology) at an arbitrary point will generally not fall on the top-hat's Zeldovich curve. Hence, the convergence of LPT in inhomogeneous applications must be guided by the analysis of more general models. Top-hats with arbitrary initial conditions are the simplest possibility and constitute the main focus in this paper. The limitations on LPT convergence which we have elucidated in this generic case are considerably more complicated than in the top-hat with Zeldovich initial conditions. Our plan is to use the generic time of validity criterion to determine the time-stepping for inhomogeneous evolution. This should allow us to develop high-precision simulations with well-defined control of errors. The practical impact of a refined treatment of LPT convergence is not yet clear.
The convergence issues we have dealt with should not be confused with the breakdown when orbit crossing takes place and the Jacobian of the transformation from Lagrangian to physical coordinates becomes singular. At that time the flow becomes multi-streamed and much of the simplicity and advantage of the Lagrangian approach vanishes. The aim of the current work is to make sure it is possible to reach the epoch of multi-streamed flow but offers nothing new on how to proceed beyond it. In fact, it may be necessary to include an effective pressure term in the equations to account for the velocity dispersion induced by orbit crossing (Adler & Buchert 1999; Buchert, Dominguez & Perez-Mercader 1999) or to adopt alternative approximations for the basic dynamics (such as the adhesion approximation; see Sahni & Coles 1995 for a review and references therein) to make progress.
APPENDIX A: FORMAL SET-UP OF THE SPHERICAL TOP-HAT
We intend to study an inhomogeneous universe. It contains a single, compensated spherical perturbation evolving in a background cosmology. To describe two spatially distinct pieces of the inhomogeneous universe (the background and the central perturbation) we invoke the language of homogeneous cosmology.
A1 Description of the background
The origin of the coordinate system is the centre of the sphere. The background system at the initial time t0 is set by the physical size of the inner edge r b,0 , the velocityṙ b,0 and density parameter Ω0. The Lagrangian coordinate system is extended linearly throughout space once the Lagrangian coordinate of the inner edge is fixed. Let the Lagrangian coordinate of the inner edge be
Y = \frac{r_{b,0}}{a_0}. \tag{A1}
Either choose the initial background scale factor a0 and determine the coordinate system or, alternatively, fix Y and infer the background scale factor. In either case, the scale factor embodies the gauge freedom associated with the radial coordinate system. The future evolution of the inner edge of the background is given by r_b(t) = a(t)Y. The velocity at the initial time satisfies ṙ_{b,0} = ȧ_0 Y. The density at any later time is
\rho_b(t) = \rho_{b0}\,\frac{a_0^3}{a^3}, \tag{A2}
and the Hubble parameter for the background is
H_0 = \frac{\dot r_{b,0}}{r_{b,0}} = \frac{\dot a_0}{a_0}. \tag{A3}
The evolution of the scale factor is
\frac{\ddot a}{a} = -\frac{4\pi G \rho_{b0}}{3}\,\frac{a_0^3}{a^3} = -\frac{1}{2}\,\frac{H_0^2\,\Omega_0\,a_0^3}{a^3}. \tag{A4}
The quantities r_{b,0}, ṙ_{b,0}, Ω0 and t0, along with the choice of the coordinate system, completely specify the background universe.
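As a sanity check of eq. (A4), for Ω0 = 1 the exact solution is the Einstein-de Sitter power law a ∝ t^{2/3} with H0 t0 = 2/3. A minimal integration sketch (function name illustrative):

```python
def evolve_background(a0, adot0, t0, t_end, H0, Omega0, n=20000):
    """RK4 integration of eq. (A4): a_ddot = -(1/2) H0^2 Omega0 a0^3 / a^2."""
    h = (t_end - t0) / n
    a, adot = a0, adot0
    accel = lambda a: -0.5 * H0**2 * Omega0 * a0**3 / a**2
    for _ in range(n):
        k1a, k1v = adot, accel(a)
        k2a, k2v = adot + 0.5 * h * k1v, accel(a + 0.5 * h * k1a)
        k3a, k3v = adot + 0.5 * h * k2v, accel(a + 0.5 * h * k2a)
        k4a, k4v = adot + h * k3v, accel(a + h * k3a)
        a    += h * (k1a + 2 * k2a + 2 * k3a + k4a) / 6.0
        adot += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return a, adot

# Einstein-de Sitter check: a0 = 1, H0 = 1, Omega0 = 1 gives t0 = 2/(3 H0),
# and evolving to 8*t0 should give a = 8**(2/3) = 4
t0 = 2.0 / 3.0
a, adot = evolve_background(1.0, 1.0, t0, 8.0 * t0, 1.0, 1.0)
print(a, adot)  # close to (4, 0.5)
```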
A2 Description of the innermost perturbation
The perturbation can be described by four physical quantities: the physical position r_{p,0} and velocity ṙ_{p,0} of the edge (or the ratio H_{0p} = ṙ_{p,0}/r_{p,0}) and the density parameter Ω_{p0}, all at the initial time t0. The Lagrangian coordinate system for the perturbation is
X = \frac{r_{p,0}}{b(t_0)}. \tag{A5}
It can be linearly extended throughout space. Like a0, b(t0) embodies the gauge freedom associated with the choice of the coordinate system. Without loss of generality, one can pick this gauge to satisfy

b(t_0) = a_0. \tag{A6}
Note that the Lagrangian coordinate systems for the background and perturbation are different. Let ρ0 and ρp,0 denote the densities of the background and perturbation respectively. Define the perturbation parameters
\delta = \frac{\rho_{p0}}{\rho_{b0}} - 1 \tag{A7}

\delta_v = \frac{H_{0p}}{H_0} - 1 \tag{A8}

giving

\Omega_{0p} = \frac{1+\delta}{(1+\delta_v)^2}. \tag{A9}
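The maps (A7)-(A9) between the physical description (H_{p0}, Ω_{p0}) and the perturbation variables (δ, δv) are easily inverted. A minimal sketch, assuming a critical background (Ω0 = 1, as adopted in the stepping algorithm above); function names are illustrative:

```python
def to_perturbation_params(Hp0, Omega_p0, H0=1.0):
    """Convert (H_p0, Omega_p0) to (delta, delta_v) per eqs (A7)-(A9),
    assuming a critical background (Omega_0 = 1)."""
    delta_v = Hp0 / H0 - 1.0                        # eq. (A8)
    delta = Omega_p0 * (1.0 + delta_v)**2 - 1.0     # inverted eq. (A9)
    return delta, delta_v

def to_physical_params(delta, delta_v, H0=1.0):
    """Inverse map: recover (H_p0, Omega_p0) from (delta, delta_v)."""
    Hp0 = H0 * (1.0 + delta_v)
    Omega_p0 = (1.0 + delta) / (1.0 + delta_v)**2   # eq. (A9)
    return Hp0, Omega_p0

# round trip: the two maps are inverses of one another
Hp0, Op0 = to_physical_params(0.2, -0.05)
print(to_perturbation_params(Hp0, Op0))  # recovers (0.2, -0.05)
```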
A3 Inhomogeneous model

Figure A1 shows how an overdense and underdense innermost sphere may be embedded with compensation in a homogeneous background universe. The assumption that the background cosmology evolves like a homogeneous model, fully described in terms of its Hubble constant and density, imposes consistency conditions. At the initial instant the "inner edge" of the unperturbed background distribution is at physical distance r_{b,0} from the centre of the sphere. The region with r > r_{b,0} will evolve like an unperturbed homogeneous cosmology as long as (i) the mass within equals the mass that an unperturbed sphere would contain; (ii) matter motions within the perturbed region do not overtake the inner edge of the homogeneous region.
These conditions which are obvious in the Newtonian context have general relativistic analogues (Landau & Lifshitz 1975). Next, consider the innermost perturbed spherical region. At the initial time let rp,0 be the "outer edge" of this region. The physical properties and evolution of the innermost region are fully described in terms of its Hubble constant and density as long as its outer edge does not overtake matter in surrounding shells. While this is obvious in a Newtonian context there exists a relativistic analogue (Tolman 1934;Landau & Lifshitz 1975).
The inhomogeneous model is incomplete without specification of the transition region between the innermost sphere and the background. For the background to evolve in an unperturbed fashion the mass within r_{b,0} must be exactly 4πρ_0 r_{b,0}^3/3. There are many ways to satisfy this requirement. For example, when δ > 0 a simple choice is to place an empty (vacuum) shell for r_{p,0} < r < r_{b,0} so that (ρ_{p0}/ρ_0) = (r_{b,0}/r_{p,0})^3 = (Y/X)^3. The evolution of each matter-filled region proceeds independently as long as the trajectories of the inner and outer edges do not cross. When δ < 0, a more complicated transition is required. For example, one choice is to nest sphere, empty shell and dense shell (see figure A1) so that the mass within r_{b,0} matches that of the unperturbed background. In this case ρ_{p0} r_{p,0}^3 = f ρ_0 r_{b,0}^3 for some f < 1 (the remaining fraction 1 − f is placed in the dense shell). Varying the specifics of the compensation region while keeping the properties of the sphere fixed leaves δ and δv, as defined above, invariant.
For fixed δ and δv the solution b(t) is independent of the details of the transition. Nonetheless, variations in f, r_{b,0}/r_{p,0} and Y/X all go hand-in-hand. Hence, the extent of time that the sphere's evolution may be treated as independent of the matter-filled outer regions also varies. A basic premise of this paper is that it is meaningful to determine the limitations arising from the convergence of the LPT series independently of limitations associated with crossing of separate matter-filled regions. For given δ and δv this separation can be achieved for specific constructions by choosing the radius and (hence velocity) of the inner sphere and the energy of the compensating region appropriately.
A4 Number of degrees of freedom for the innermost sphere
If the innermost sphere corresponds to an overdensity then the compensating region can be a vacuum as shown in figure A1.
Having picked the co-ordinate system, having selected equal initial times for the background and perturbation (not equal bang times but equal times at which we give the background and perturbation values), and required the correct amount of mass, only two degrees of freedom remain: δ and δv.
To reiterate, the background and the perturbation can have different big bang times. Setting them equal would imply a relationship between δ and δv and leave a single free parameter.
If the innermost sphere corresponds to an underdensity then the compensating region is not vacuum but a spherical shell. In this case, in addition to δ and δv, one must specify f or, equivalently, rp,0. But the solution for b(t) is independent of the size of the innermost sphere so, again, only two degrees of freedom remain.
A5 Preventing shell crossing
There are two sorts of limitations for the solution of b(t). One is the calculation-dependent limitation arising from the convergence properties of the Lagrangian series expansion. It involves the scale factors only. The other is a physical limitation arising from collisions of the innermost region with surrounding non-vacuum regions (either the background or a compensating shell). We show that it is possible to delay the epoch of collisions indefinitely without altering the evolution of the innermost region.
Fix H0, Hp,0, ρ0 and ρp,0. This implies that the expansion parameters in LPT, δ and δv, and the time of validity of the LPT solution are all fixed. Consider the case of an overdensity surrounded by vacuum. To stave off the collision of the outer edge of the innermost region with the inner edge of the homogeneous background, hold r_{b,0} fixed and reduce r_{p,0}. The velocity ṙ_{p,0} = H_{0p} r_{p,0} becomes arbitrarily small. The time for the edge to reach any fixed physical distance increases without bound. Shell crossings may be put off indefinitely. However, we have altered the mass within the innermost edge of the background so we add back a thin, dense shell just inside r_{b,0} and set it on a critical trajectory outward. This accomplishes our goal.
The case of the underdensity surrounded by a compensating shell is identical. First, we must make sure that the compensating shell does not overrun the homogeneous model. Choose the shell to be thin, fix its initial physical distance from the centre and adjust its velocity (based on how the interior mass changes) to give a critical solution. The two power laws, one for the compensating shell and one for the innermost boundary of the homogeneous model, cannot cross in the future. Second, as above, note that reducing rp,0 reduces the outward velocity of the edge so that it takes more time to reach the initial position of the compensating shell. The time can be made arbitrarily long.
The limitations in LPT convergence are completely distinct from those associated with physical collisions in the inhomogeneous model.
APPENDIX B: SERIES EXPANSIONS FOR A FUNCTION OF TWO VARIABLES
In this section we elucidate by example some qualitative features of the expansion of b(t, ∆), the central quantity in the Lagrangian treatment of the top-hat. We assume a very simple form denoted f (t, ∆) and look at convergence with respect to expansions in t and ∆. Let
f(t, ∆) = t^{2/3} (1/t + ∆)^{1/3}. (B1)
The series expansion of this function around ∆ = 0 at fixed t is
f ∼ t^{1/3} + (1/3) t^{4/3} ∆ − (1/9) t^{7/3} ∆^2 + (5/81) t^{10/3} ∆^3 − (10/243) t^{13/3} ∆^4 + (22/729) t^{16/3} ∆^5 + O(∆^6) (B2)
which is supposed to mimic the Lagrangian expansion in ∆. One can also expand the function as a series in t around t = ti
f ∼ (∆ + 1/ti)^{1/3} ti^{2/3} + (2∆ti + 1)(t − ti)/[3 (∆ + 1/ti)^{2/3} ti^{4/3}] − (∆^2 ti^2 + ∆ti + 1)(t − ti)^2/[9 (∆ + 1/ti)^{2/3} ti^{7/3} (∆ti + 1)] + (4∆^3 ti^3 + 6∆^2 ti^2 + 12∆ti + 5)(t − ti)^3/[81 (∆ + 1/ti)^{2/3} ti^{10/3} (∆ti + 1)^2] + O((t − ti)^4). (B3)
Both expansions involve the complex power z^{1/3}. There are two branch cuts which extend to z = 0, so at ∆ = −1/t the function is not analytic. Additionally, the expansion in t is not analytic at t = 0. The efficacy of the various expansions is illustrated in figure B1. In all the plots the black dotted line indicates the exact function. The top left panel shows successively higher order series approximations in ∆ as a function of t for the specific case ∆ = 1/10. The question here is whether the pole at a given time lies within a disk of radius 1/10. The location of the pole is ∆ = −1/t, so the answer is "yes" when t > 10. This pole interferes with the convergence of the series expansion for ∆ = 1/10. The figure demonstrates that the (future) time of validity is t < 10.
The top right panel shows the series in ∆ at a fixed t = 1/10. The question here is how big a perturbation will converge at t = 1/10? Since the location of the pole is ∆ = −1/t the radius of convergence at the indicated time is 10. Perturbations with |∆| > 10 are not expected to converge and the figure shows that this is indeed the case.
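This pole-limited convergence is easy to reproduce numerically. The sketch below is our own illustrative check (not code from the paper); the function names and test points are our choices. It builds the ∆-series from the binomial coefficients of (1 + x)^{1/3} and compares partial sums against the exact function on either side of t = 10 for ∆ = 1/10:

```python
def f_exact(t, d):
    """Exact f(t, Delta) = t^(2/3) (1/t + Delta)^(1/3), valid for 1/t + d > 0."""
    return t ** (2.0 / 3.0) * (1.0 / t + d) ** (1.0 / 3.0)

def f_series(t, d, order):
    """Partial sum of the Delta-expansion: f = t^(1/3) (1 + t d)^(1/3),
    built from the binomial coefficients of (1 + x)^(1/3)."""
    x = t * d
    coeff, total = 1.0, 1.0
    for n in range(1, order + 1):
        coeff *= (1.0 / 3.0 - (n - 1)) / n   # binomial-coefficient recurrence
        total += coeff * x ** n
    return t ** (1.0 / 3.0) * total

# Delta = 1/10: the pole at Delta = -1/t enters the disk |Delta| < 1/10
# only for t > 10, so the series converges at t = 5 but not at t = 20.
err_lo = abs(f_series(5.0, 0.1, 2) - f_exact(5.0, 0.1))
err_hi = abs(f_series(5.0, 0.1, 8) - f_exact(5.0, 0.1))
err_lo_bad = abs(f_series(20.0, 0.1, 2) - f_exact(20.0, 0.1))
err_hi_bad = abs(f_series(20.0, 0.1, 8) - f_exact(20.0, 0.1))
```

Raising the truncation order shrinks the error at t = 5 but enlarges it at t = 20, exactly as in the top left panel of figure B1.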
The bottom left panel shows the series in t expanded around ti = 2 for fixed ∆ = 1/10. The poles are at t = −10 and t = 0 in the complex t plane. The expected radius of convergence is min(|2 − 0|, |2 − (−10)|) = 2, or ti − 2 < t < ti + 2. As seen in the plot, the series converges only in the expected range (0, 4).
The bottom right panel shows the series in t expanded around ti = 2 for ∆ = −1/3. The poles are at t = 3 and t = 0 in the complex t plane. The expected radius of convergence is min(|2 − 0|, |2 − 3|) = 1 or ti − 1 < t < ti + 1. As seen in the plot, the series converges only in the expected range (1, 3).
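The same radius-of-convergence behaviour can be checked numerically. The sketch below is our own construction: since f^3 = t + ∆t^2, the Taylor coefficients about t = ti follow from a power-series cube root, with no symbolic differentiation; all function names are ours.

```python
import math

def cbrt(x):
    """Real cube root of a real number."""
    return math.copysign(abs(x) ** (1.0 / 3.0), x)

def taylor_coeffs(ti, d, order):
    """Coefficients a_k of f = sum_k a_k (t - ti)^k, found from
    (sum_k a_k s^k)^3 = u(s) = (ti + d ti^2) + (1 + 2 d ti) s + d s^2."""
    u = [ti + d * ti ** 2, 1.0 + 2.0 * d * ti, d] + [0.0] * (order - 2)
    a = [cbrt(u[0])]
    for k in range(1, order + 1):
        s = 0.0                      # ordered triples with i + j + l = k, all < k
        for i in range(k):
            for j in range(k - i + 1):
                l = k - i - j
                if j < k and l < k:
                    s += a[i] * a[j] * a[l]
        a.append((u[k] - s) / (3.0 * a[0] ** 2))   # from 3 a0^2 a_k + s = u_k
    return a

def f_taylor(t, ti, d, order):
    a = taylor_coeffs(ti, d, order)
    return sum(a[k] * (t - ti) ** k for k in range(order + 1))

def f_exact(t, d):
    return t ** (2.0 / 3.0) * cbrt(1.0 / t + d)

# Delta = -1/3, ti = 2: singularities at t = 0 and t = 3, so the radius is 1.
ti, d = 2.0, -1.0 / 3.0
err4_in, err12_in = (abs(f_taylor(2.5, ti, d, m) - f_exact(2.5, d)) for m in (4, 12))
err4_out, err12_out = (abs(f_taylor(3.5, ti, d, m) - f_exact(3.5, d)) for m in (4, 12))
```

Inside the radius (t = 2.5) higher order improves the answer; outside it (t = 3.5) the partial sums drift away, matching the bottom right panel.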
APPENDIX C: PARAMETRIC SOLUTION
The background model has scale factor a0 and Hubble constant H0 = ȧ0/a0. The model, perturbed in density and velocity, is parameterized by ∆ and θ and has scale factor b(t). For the choice of coordinate system given in the text the second order equation for b is
b̈/b = −(1/2) H0^2 a0^3 (1 + ∆ cos θ)/b^3 (C1)
with the initial conditions that at t = t0, b(t0) = a0 and ḃ(t0) = ȧ0(1 + ∆ sin θ). The scale factor a0 and the velocity of the background ȧ0 at the initial time t0 are positive. The parametrization of ḃ(t0) allows either positive or negative values, where ∆ is non-negative and −π < θ ≤ π. The quantity (1 + ∆ cos θ), proportional to the total density, is non-negative. Integrating this equation once gives
ḃ^2 = H0^2 a0^3 [(1 + ∆ cos θ)/b + ((1 + ∆ sin θ)^2 − (1 + ∆ cos θ))/a0]. (C2)
The combination
E(∆, θ) = (1 + ∆ sin θ)^2 − (1 + ∆ cos θ) (C3)
is proportional to the total energy and determines the fate of the system. If E(∆, θ) > 0 the model is open; if E(∆, θ) < 0 the model is closed and will eventually re-collapse. Four cases (positive and negative E, positive and negative ḃ0) are shown in figure 5.
C1 Initially Expanding Solutions
The expanding case with ḃ0 > 0 for open models (E > 0) has solution

b(η, ∆, θ) = (a0/2) (1 + ∆ cos θ)/E(∆, θ) (cosh η − 1) (C4)
t(η, ∆, θ) = (1/(2H0)) (1 + ∆ cos θ)/E(∆, θ)^{3/2} (sinh η − η) + t+bang(∆, θ) (C5)
and the singularity b = 0 occurs at η = 0. For closed models (E < 0) the solution is

b(η, ∆, θ) = (a0/2) (1 + ∆ cos θ)/|E(∆, θ)| (1 − cos η) (C6)
t(η, ∆, θ) = (1/(2H0)) (1 + ∆ cos θ)/|E(∆, θ)|^{3/2} (η − sin η) + t+bang(∆, θ). (C7)
For closed models, the convention adopted sets η = 0 at the singularity nearest in time to t0. For both models, the time at η = 0 is denoted t+bang. For closed models the time at η = 2π is denoted t+coll. At the initial time the solutions (both open and closed) satisfy b(t0) = a0, ḃ(t0) = ȧ0(1 + ∆ sin θ) and t = t0. The condition b(t0) = a0 sets the value of the parameter at the initial time, η0. The velocity condition is then manifestly satisfied from the form of eq. (C2). The condition t = t0 at η = η0 sets the value of the bang time
t+bang = t0 − { (1/(2H0)) (1 + ∆ cos θ)/|E(∆, θ)|^{3/2} (η0 − sin η0)   E < 0
             { (1/(2H0)) (1 + ∆ cos θ)/E(∆, θ)^{3/2} (sinh η0 − η0)    E > 0. (C8)
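As a sanity check of the closed branch, the sketch below (our own minimal check, with units a0 = H0 = 1 and an EdS background so that H0t0 = 2/3) verifies that the parametric solution reproduces the initial conditions b(t0) = a0 and ḃ(t0) = ȧ0(1 + ∆ sin θ), and that the bang time from eq. (C8) lies between 0 and t0:

```python
import math

a0, H0, t0 = 1.0, 1.0, 2.0 / 3.0             # unit choices (ours): EdS background
Delta, theta = 0.5, 0.0                       # a pure density perturbation
j = 1.0 + Delta * math.cos(theta)
E = (1.0 + Delta * math.sin(theta)) ** 2 - j  # E = -0.5 here: closed model
absE = abs(E)

eta0 = math.acos(1.0 - 2.0 * absE / j)        # from b(eta0) = a0, expanding branch

def b_of(eta):
    # closed-model scale factor, eq. (C6)
    return 0.5 * a0 * (j / absE) * (1.0 - math.cos(eta))

def bdot_of(eta):
    # bdot = (db/deta) / (dt/deta) for the closed solution (C6)-(C7)
    return a0 * H0 * math.sqrt(absE) * math.sin(eta) / (1.0 - math.cos(eta))

b0, bdot0 = b_of(eta0), bdot_of(eta0)
bang = t0 - (0.5 / H0) * (j / absE ** 1.5) * (eta0 - math.sin(eta0))   # eq. (C8)
```

For this model the bang time comes out small but non-zero, illustrating that the perturbed model and the background generally have different big bang times.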
The bang time for the model can also be written as
t+bang = t0 − ∫_{b=0}^{b=a0} db/(ḃ^2)^{1/2}, (C9)
where ḃ^2 is given by eq. (C2) with the positive sign for the square root. The age of the model since its birth is
tage(∆, θ) = ∫_{b=0}^{b=a0} db/(ḃ^2)^{1/2} = ∫_{η=0}^{η=η0} (db/dη) dη/(ḃ^2(η))^{1/2}. (C10)
Inserting the appropriate parametric solution, one can verify that the bang times obtained from (C8) and (C9) are identical. Generally t+bang ≠ 0. The velocity at the initial time is

ḃ0 = ȧ0 |E|^{1/2} { sin η0/(1 − cos η0)     E < 0
                  { sinh η0/(cosh η0 − 1)   E > 0. (C11)
First, ḃ0 > 0 implies η0 > 0. Second, if the age of the model increases, η increases. For the open solution, as η varies from 0 to ∞, time increases from t+bang to ∞. For a single cycle of the closed solutions, η increases from 0 to 2π and time increases from t+bang to t+coll. In summary, the parametric solutions solve eq. (C1) and eq. (C2) for the specified initial conditions. As a final useful step, rewrite eq. (C9) by defining y = b/a0:
t+bang = t0 − (1/H0) ∫_{y=0}^{y=1} dy/[(1 + ∆ cos θ) y^{−1} + E(∆, θ)]^{1/2}, (C12)
which follows from eq. (C2) and uses the same positive square root convention.
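The equivalence of the parametric bang time (C8) and the integral form (C12) is easy to verify numerically. The sketch below is our own check, with the same unit conventions as above (a0 = H0 = 1, H0t0 = 2/3); it compares the two expressions for a closed model using a simple midpoint-rule quadrature:

```python
import math

H0, t0 = 1.0, 2.0 / 3.0
Delta, theta = 0.5, 0.0
j = 1.0 + Delta * math.cos(theta)
E = (1.0 + Delta * math.sin(theta)) ** 2 - j          # E < 0: closed model

# eq. (C8): bang time from the parametric solution
eta0 = math.acos(1.0 - 2.0 * abs(E) / j)
t_bang_param = t0 - (0.5 / H0) * (j / abs(E) ** 1.5) * (eta0 - math.sin(eta0))

# eq. (C12): midpoint-rule quadrature of the age integral
n = 200000
age = sum(1.0 / math.sqrt(j / ((k + 0.5) / n) + E) for k in range(n)) / (n * H0)
t_bang_quad = t0 - age
```

The integrand vanishes like y^{1/2} at the lower limit, so the midpoint rule converges without any special treatment of the endpoint.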
C2 Initially Contracting Solutions
Next, consider the case ḃ0 < 0. The parametric solution for E > 0 is

b(η, ∆, θ) = (a0/2) (1 + ∆ cos θ)/E(∆, θ) (cosh η − 1) (C13)
t(η, ∆, θ) = (1/(2H0)) (1 + ∆ cos θ)/E(∆, θ)^{3/2} (−sinh η + η) + t−bang(∆, θ) (C14)

and for E < 0 is

b(η, ∆, θ) = (a0/2) (1 + ∆ cos θ)/|E(∆, θ)| (1 − cos η) (C15)
t(η, ∆, θ) = (1/(2H0)) (1 + ∆ cos θ)/|E(∆, θ)|^{3/2} (−η + sin η) + t−bang(∆, θ). (C16)
Again, for closed models, the convention adopted is that the singularity nearest to t0 corresponds to η = 0. The time at η = 0 is t−bang and the collapse time for closed models is t−coll. The parametric form of the solutions satisfies eq. (C1) and eq. (C2). Just as in the previous case, the initial conditions set η0 and t−bang. Since the singularity at η = 0 lies to the future of t0,
t−bang = t0 + ∫_{b=0}^{b=a0} db/(ḃ^2)^{1/2}, (C17)
where ḃ^2 is given by eq. (C2). The sign of the square root is chosen to be positive and the integral is a positive quantity which is added to t0. For closed models the singularity at η = 2π lies to the past of t0 at t−coll. In this case (see figure 5) the labelling implies t−coll < t0 < t−bang. Although this might seem backwards, it facilitates combining the open and closed models into one complex function, as was done in the positive ḃ0 case. The initial velocity is
ḃ0 = ȧ0 |E|^{1/2} { sinh η0/(1 − cosh η0)   E > 0
                  { sin η0/(cos η0 − 1)     E < 0. (C18)
The initial velocity ḃ0 < 0 implies η0 > 0. For the age of the model to increase, η must decrease. Conversely, if η increases, the time in the open model decreases from t0 to −∞ and the time in the closed model decreases from t0 to t−coll. A table summarising the properties of the physical solutions, with η = |η|ζ = ηζ, follows.

          Closed   Open    If η increases             If t increases            tbang − t0
ḃ0 > 0    ζ = 1    ζ = i   t increases from t+bang    η increases to ∞ or 2π    < 0
ḃ0 < 0    ζ = 1    ζ = i   t decreases from t−bang    η decreases to 0          > 0
C3 Analytic Extension of the exact solution in parametric form
The differential eq. (32) was solved numerically over the range 0 ≤ θ ≤ π, 0 < δ < 100 and −π < φ ≤ π, where ∆ = ∆e^{iφ}. For each value of (∆, φ, θ), the numerical solution matched one of the two possible parametric forms.
Omitting the explicit functional dependence on ∆ and θ the following abbreviations are useful
j = 1 + ∆ cos θ (C19)
h = (1 + ∆ sin θ)^2/j (C20)
E = (h − 1) j. (C21)
The two possible parametric forms that agree with the numerical solution are
b(η) = (a0/2) (j/[−E]) (1 − cos η) (C22)
t(η) = t0 ± (1/(2H0)) (j/[−E]^{3/2}) (η − sin η) − tage, (C23)

where

tage = (1/H0) [ (jh)^{1/2}/E − (j/E^{3/2}) sinh^{−1} (E/j)^{1/2} ]. (C24)
The branch cut lies along the negative real axis for all fractional powers, and from −i∞ to −i and from +i to i∞ for the inverse sinh function. The prescription determines the choice of the ± sign in eq. (C23) for t, the two forms being denoted t+ and t−. The correct form depends upon θ, φ, arg[h] (the arg is defined to be between −π and π) and the (real) value of j when φ = 0 or π. Figure C1 shows the upper half plane for the perturbation partitioned into areas where the complex extension of the solution has one of two forms. The lower half plane has the same structure inverted through the origin. The horizontal red dashed line denotes ∆ sin θ = 1 and the vertical red dashed lines denote ∆ cos θ = ±1. In some areas a single form applies as marked, but in the central area both occur. The detailed prescription is:

0 ≤ θ ≤ π/4: t− if φ = π, |∆| sin θ < 1 and j < 0; otherwise t+.
π/4 < θ ≤ π: t+ if 0 < φ < π and arg h > 0; t− if −π < φ < 0.

The initial conditions are parameterized by ∆ > 0 and −π < θ ≤ π. The transformation θ → π ± θ and ∆ → −∆ leaves the solution unchanged. At any time the roots for θ and π ± θ are negatives of each other. The root plot depends only upon the absolute value of the root, so the plots for θ and π ± θ are identical. It is sufficient to consider the upper half plane. For a given θ the algorithm to map out R∆(t) is the following. Vary ∆ from 0 to an arbitrarily large value (∼ 100) in small increments. For each ∆ select ∆ = ∆e^{iφ} by varying the phase angle φ over the range 0 to 2π. For each ∆ evaluate t(η) at η = 0 and η = 2π according to eq. (C25). Finally, hunt for solutions that set the imaginary part of t to 0. This last step involves one-dimensional root-finding in φ at fixed ∆. A solution leads to a specific pair (t, ∆) that is a pole in the function b(∆, t).
Roots with t > t0 limit future evolution; those with t < t0 limit backwards evolution. Both sets are shown in the results. Roots are classified based on whether they are real or complex. For closed models the real roots can represent a singularity that is nearby (η = 0) or far away (η = 2π) from t0. This classification at the initial time is independent of whether the singularity is in the past or future and is independent of whether the model is expanding or contracting. For open models the real roots are always considered nearby (η = 0).
In what follows the numerical answers are first described in qualitative terms; in the next section simple analytic estimates for the time of validity are developed. Figure D1 shows the root plots on a log-log scale. Sixteen panels, each with a particular value of θ listed at the top, are displayed. The x-axis is log10 H0t and the y-axis is the log of the distance of the singularity from the origin in the complex ∆ plane. The initial time, H0t0 = 2/3, is marked by the vertical black dashed line.
For each θ, the shaded region indicates the range of ∆ that gives rise to closed models. Figure 2 shows that closed models occur only for θ < θ + c = 0.463 in the upper half plane so only some of the root plots have shading and then only at smaller ∆. The colour coding of the dots indicates four types of roots: real and complex roots where η = 2π are in blue and red, respectively; real and complex roots with η = 0 are in cyan and pink, respectively. The radius of convergence at the initial time t0 is infinite, i.e. the Lagrangian series is exact at the initial time by construction. At times very close to the initial time the root loci lie off the plot. Only the roots to the right of H0t0 are relevant for forward evolution and, conversely, only those to the left are relevant for backwards evolution. The discussion is focused on the case of forward evolution but it is straightforward to consider the restrictions on marching backwards in time.
The phase of the root (of smallest magnitude) appears in figure D3. The green, purple and blue dashed lines mark |δv| = 1, |δ| = 1 and the transition between one and two complex forms, respectively. For each θ the lines mark the implied, special value of ∆. These dashed lines also appear with the same colour coding in figure D2.

Figure D1. Root plots for θ in the range 0 ≤ θ ≤ π. In each plot the abscissa is log10 H0t and the ordinate is the logarithm of the magnitude of the root. The vertical black dashed line marks the initial time. The shaded area corresponds to closed models. The blue and red points show real and complex roots with η = 2π, respectively. The cyan and pink show real and complex roots with η = 0, respectively. The green and purple dashed lines are |δv| = 1 (∆ = |sin θ|^{−1}) and |δ| = 1 (∆ = |cos θ|^{−1}), respectively. The blue dashed line indicates the switch between two forms and a single form of the parametric solution at ∆ = |2 sec θ − csc θ|. Coloured version online.
The roots in figure D1 will be analysed in the ranges 0 < θ ≤ π/4, π/4 < θ ≤ π/2 and π/2 < θ ≤ π.
D1.1 0 ≤ θ ≤ π/4
The top left panel in figure D1 has θ = 0; the blue dots indicate real roots with η = 2π; the blue shading indicates a closed model; the phase is positive (top left panel in figure D3). Only a single branch is evident. In sum, each root is the collapse time of a closed, pure density perturbation. For an expanding model, η = 2π implies that the root is the future singularity. That θ = 0 is a special case can be seen by consulting figure D2: a ray starting at ∆ = 0 with θ = 0 never intersects any of the other lines of the diagram. In general, each time a ray crosses one of the lines there is a qualitative change in the properties of the roots.

Figure D2. Several conditions determine the nature of the roots in phase space. The most significant are schematically illustrated here. The green horizontal lines are |δv| = ∆|sin θ| = 1; the purple vertical lines are |δ| = ∆|cos θ| = 1; the black curved line is the E = 0 critical solution. The red lines ∆rc mark where real roots associated with closed models (or closed mirror models) transform to complex roots. The blue dashed lines mark the division between one and two complex forms (see also figure C1). Physical models lie to the right of δ = −1. Expanding models lie above δv = −1. The intersection δ = δv = 1 occurs at θ = π/4. The point P near θ = 0.84 is the meeting of δv = 1 and ∆rc. Coloured version online.
For 0 < θ ≤ π/4 a great deal more complexity is evident in figure D1. First, consider a ray emanating at a small angle 0 < θ < θ+c in figure D2 (tan θ+c = 1/2 is the slope of the E = 0 line at the origin). Eventually such a line will cross the black line, which is the E = 0 critical solution labelled ∆E=0. For small ∆ the models are closed; for larger ∆ they are open. In figure D1 this distinction corresponds to the blue shading (closed models) at small ∆ versus the unshaded (open) models at large ∆.
Within the shaded region note that two branches of real roots are present beyond a given time; at large t (asymptotically) the lower branch is ∆ → 0 and the upper branch is ∆ → ∆E=0. The lower branch sets the time of validity for small ∆. Each root is the collapse time of a closed model which has both density and velocity perturbations at the initial time.
As ∆ increases the time of validity inferred from the lower branch decreases. At the critical point ∆ = ∆rc, the two real branches merge and connect to a branch of complex roots (intersection of red and blue points). For ∆ > ∆rc, the complex roots determine the time of validity even though the upper branch provides a real root. The complex roots do not have a direct physical interpretation in terms of future singularities of physical models. On figure D2 the ray emanating from the origin at shallow angle crosses the red dashed line labelled ∆rc at this critical point.
Physically, when ∆ exceeds ∆rc, the velocity perturbation dominates the density perturbation in the sense that the collapse time begins to increase. The real root corresponds to the future singularity of the model. As ∆ increases further, the solution eventually becomes critical (infinite collapse time). The particular value where this occurs is ∆E=0 and it corresponds on figure D2 to the ray crossing the labelled black line. Within the entire range ∆rc < ∆ < ∆E=0 the complex root determines the time of validity. So, even though any model in this range is closed and possesses a real future singularity, the time of validity is determined by the complex root. This gives the sliver on figure 10 which is the overlap of light red and blue shadings.
Both ∆rc and ∆E=0 decrease as θ → θ + c as is evident from figure D2 and both vanish at θ + c . On figure D1 the real roots completely disappear and only the complex roots are present, i.e. the two real branches have been pushed out to infinite times. The panel with θ = 5π/36 = 0.436 is numerically closest to the critical case θ + c = 0.464 and the real branches are just barely visible at the right hand edge.
For the rest of the upper half plane, θ+c < θ ≤ π, the ray no longer intersects any closed models. For θ+c < θ < π/4 the real roots reappear and move back to the left in figure D1 (see the panel with θ = π/6). Now, however, the roots are negative (see figure D3). This is a manifestation of mirror symmetry, which relates the negative real roots of an open model to the positive real roots of a closed model. At large t the two branches have ∆ → 0 and ∆ → ∆E=0 and are completely analogous to the real branches just discussed for closed models. The separation between the two real branches increases as θ increases and the solution loci shift upwards in ∆. And just as before, the two branches join and meet a complex branch. The second red dashed line ∆rc in figure D2 shows the real-to-complex transition of the roots for the open models.
This behaviour might be expected to continue for π/4 < θ < π, but there is an additional complication: the analytic extension involves two forms. As the ray sweeps counterclockwise in figure C1 it crosses δv = 1 (the horizontal dashed line) and the curved blue line. These are also schematically illustrated in figure D2.
D1.2 π/4 < θ < π/2
All physical models in this range are open. Real roots have a straightforward interpretation in terms of the mirror models. Although some of the analysis described for θ < π/4 continues to apply, several additional complications ensue. To understand them it is useful to refer to the phase space picture shown in figure D2. As θ increases, the point where ∆rc meets δv = 1 is labelled P.
For a fixed θ consider increasing ∆ from small values near the origin to ∞. The order in which this ray intersects the green (δv = 1), purple (δ = 1), red (∆rc) and blue (one or two complex forms) curves will correlate with the change in roots.
The roots are negative real for small ∆. They correspond to the collapse time of a closed mirror model. Increase ∆ and ignore ∆rc. When the δv = 1 line is crossed, the sign of the closed mirror model's velocity switches from expanding to contracting. This just means that the labelling of the future singularity switches from further away (η = 2π) to nearer (η = 0). Now recall ∆ < ∆rc implies real roots and, by definition, δv = ∆ sin θ. Hence, ∆rc(θ) > 1/ sin θ implies that the label switch occurs just as outlined. On figure D2 rays counterclockwise of point P belong to this case. This is responsible for the switch from blue (real η = 2π) to cyan (real η = 0) roots at the green line in figure D1 for θ = π/3 and 17π/36. Conversely, if ∆rc(θ) < 1/ sin θ the roots are already complex and the label switch occurs between the corresponding complex roots. There are no pictured examples in figure D1.
In the previous section, with 0 < θ ≤ π/4, the physical interpretation of ∆rc (as ∆ increases) was that the velocity contribution to the perturbation became dominant in the original model if the model was closed, or in the mirror closed model if the original model was open. In the latter case the mirror models were initially expanding. Now, the same idea continues to apply in the regime π/4 < θ < 0.84. Here the transition from real to complex roots occurs before the δv = 1 line is crossed. The significance of ∆rc is that it marks the increasing importance of velocity perturbations in the closed expanding mirror models.
However, for 0.84 < θ < π/2, as ∆ increases the open model crosses δv = 1, the mirror model swaps from η = 2π to 0 and the (real) roots correspond to the real future singularity of a closed, contracting model. As ∆ increases further, first the mirror model becomes critical and then an open model contracting to a future singularity. Once the magnitude of ∆ grows larger than a critical value, the velocity perturbation dominates the mirror model dynamics. When ∆ > ∆rc the roots switch from real to complex. At this point the contracting mirror model can be open, closed or critical.
Note in figure D2 that ∆rc asymptotes to the vertical purple line δ = 1. The corresponding mirror model hits the line δ = −1 in the third quadrant. This is the limiting vacuum solution. Although there are no physical models beyond this point, the analytic extension continues and the roots change from real to complex. All open models with θ ≲ π/2 see a transition to complex roots as the mirror approaches the vacuum solution.
Finally, figure D2 shows as a blue curve the point at which there is a switch in the complex form of the analytic extension. Here the complex roots switch from η = 0 to η = 2π. The roots remain complex; since they have no physical interpretation, it is irrelevant whether they belong to η = 0 or η = 2π.
In figure D1 the panels with θ = π/3 and 17π/36 show these transitions: the blue-to-cyan transition at the green dashed line is the mirror model switch from expanding to contracting; the cyan-to-pink transition at the purple dashed line is the mirror model moving through δ = −1; the pink-to-red transition is the switch from two to one complex roots and from η = 0 to η = 2π.
D1.3 θ = π/2
At θ = π/2, only real roots with η = 0 are present for large ∆. This is a special case in that a ray intersects only one special line, δv = 1, in the upper half plane.
D1.4 π/2 < θ ≤ π

All models in this range also correspond to open models. As in the previous cases, small ∆ have real, negative roots with η = 2π. The mirror models in this case lie in the fourth quadrant. The crossover of real roots from η = 0 to η = 2π occurs at δv = 1; however, unlike in the earlier case, the line δ = −1 is never approached by the mirror models in the fourth quadrant. As a result, there is no switch from real to complex roots and all models have real negative roots. The η = 2π roots for small ∆ are collapse times of initially expanding closed mirror models, and the η = 0 roots are future singularities of initially contracting closed and open mirror models for intermediate and large values of ∆, respectively.
D2 Numerical Results
Here we present numerical formulas that give the time of validity for any initial ∆ and θ. Real roots occur for small ∆ when 0 < θ < π/2; they occur for all ∆ when π/2 ≤ θ ≤ π or θ = 0. Real roots correspond to past or future singularities of physical models and are known exactly. Figure D1 shows that complex roots occur for 0 < θ < π/2. In the range π/4 < θ < π/2, figure D3 shows that the phase of the complex roots is very close to π; we can approximate these roots as real, negative roots. Conversely, figure D3 also shows that in the range 0 < θ ≤ π/4 the phase is not close to 0 or π. These roots are complex only when ∆ > ∆rc. First, we fit ∆rc by

∆rc,app(θ) = 0.41 csc^2 θ (cos θ − 2 sin θ) + 3.57 (cos θ − 2 sin θ)(sin θ)^{4.39}. (D1)
We cannot approximate the time of validity with the results for physical cases but it turns out that the numerically derived time of validity is insensitive to θ in the range 0 < θ < π/4 and may be fit
H0tapp(∆) = 2 3 +
Using these quantities, the table below gives an approximation to the time of validity, Tapp, for all values of θ and ∆. The collapse and bang times follow from eq. (C23) and are reproduced here for convenience:
tcoll(∆, θ) = t0 + (1/(2H0)) (1 + ∆ cos θ)/[−E(∆, θ)]^{3/2} (2π) − tage(∆, θ) (D3)
t−bang(∆, θ) = t0 + tage(∆, θ), (D4)
where

tage(∆, θ) = (1/H0) (1 + ∆ sin θ)/E(∆, θ) − (1/H0) (1 + ∆ cos θ)/[E(∆, θ)]^{3/2} sinh^{−1} [E(∆, θ)/(1 + ∆ cos θ)]^{1/2}, (D5)
E(∆, φ, θ) = (1 + ∆ sin θ)^2 − (1 + ∆ cos θ). (D6)
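The closed-form age (D5) can be checked against direct quadrature of the age integral even for E < 0, where it requires complex intermediates. The sketch below is our own check, with H0 = 1 and the principal branches supplied by Python's cmath; the formula returns a real number for a closed model, matching the midpoint quadrature of tage = (1/H0) ∫_0^1 dy/[(1 + ∆ cos θ)/y + E]^{1/2}:

```python
import cmath
import math

H0 = 1.0
Delta, theta = 0.8, 0.0                                # our test parameters
j = 1.0 + Delta * math.cos(theta)
E = (1.0 + Delta * math.sin(theta)) ** 2 - j           # E = -0.8: closed model

# eq. (D5) evaluated with principal complex branches
Ec = complex(E)
t_age_formula = ((1.0 + Delta * math.sin(theta)) / Ec
                 - (j / Ec ** 1.5) * cmath.asinh(cmath.sqrt(Ec / j))) / H0

# direct midpoint quadrature of the (real) age integral
n = 200000
t_age_quad = sum(1.0 / math.sqrt(j / ((k + 0.5) / n) + E)
                 for k in range(n)) / (n * H0)
```

The imaginary parts of the two terms cancel, leaving only a floating-point residual, which is the behaviour the analytic extension relies on.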
The error in the fit is estimated as
E = (T − Tapp)/T. (D7)
If E > 0 then the approximation is conservative in this sense: the approximate time of validity is less than the true value. Conversely, if E < 0, the approximation overestimates the time of validity. Using the above fits the worst case is E ≈ −0.02. We always use a time step δt which satisfies δt < 0.98Tapp so that the inaccuracy in the approximation is irrelevant.

Approximation to the time of validity, Tapp(∆, θ), for 0 ≤ θ ≤ π. Note that ∆rc,app is an approximation to ∆rc in eq. (D1).
Parameter range → Tapp

0 < θ < π/4:
  0 < ∆ < ∆rc,app, E(∆, θ) < 0 → tcoll(∆, θ)
  0 < ∆ < ∆rc,app, E(∆, θ) > 0 → tcoll(−∆, θ)
  ∆ > ∆rc,app → tapp(∆, θ)

π/4 < θ ≤ π/2:
  0 < ∆ < 1/|sin θ| → tcoll(−∆, θ)
  1/|sin θ| ≤ ∆ ≤ (2 sin θ − cos θ)/(sin θ cos θ) → t−bang(−∆, θ)
  ∆ > (2 sin θ − cos θ)/(sin θ cos θ) → [tcoll(−∆, θ)]

π/2 ≤ θ ≤ π:
  0 < ∆ ≤ 1/|sin θ| → tcoll(−∆, θ)
  ∆ > 1/|sin θ| → t−bang(−∆, θ)
APPENDIX E: ERROR CHARACTERISATION OF THE LAGRANGIAN SERIES
We want to characterise the errors associated with calculating the solution at time tf given some fixed initial conditions at time t0. Errors arise because any real calculation involves truncating the Lagrangian expansion. We want to compare the errors that result from different choices of truncation order and of the number of re-expansion steps, assuming all series expansions are convergent (i.e. all respect the time of validity). Let Nm represent the final physical coordinate generated with an m-th order calculation using N steps. Ultimately, we seek to characterise differences like N′m′ − Nm. The quantity 1∞ is the exact answer.
E0.1 Single step
The Lagrangian series solution for a single step has the form
1∞ = a(t) [1 + Σ_{i=1}^{∞} (b^{(i)}(t)/a(t)) ∆0^i] X0, (E1)

where each b^{(i)} satisfies

b̈^{(i)} − H0^2 a0^3 b^{(i)}/a^3 = S^{(i)} (E2)
and initial conditions are specified at t = t0. The initial conditions at each order and the forms for the first few S^{(i)} are given in the text. If tf − t0 = δt << t0, then the solutions can be expanded in the small parameter δt/t0. The solutions are

b^{(1)}(t)/a(t) ∼ c^{(1)} (δt/t0), (E3)
b^{(i)}(t)/a(t) ∼ c^{(i)} (δt/t0)^{i+1} for i ≥ 2. (E4)
The coefficients c^{(i)} depend on the angle θ and have a weak dependence on the Lagrangian order. The difference between the exact answer and the m-th order approximation for a single step is
1∞ − 1m = Σ_{i=m+1}^{∞} c^{(i)} (δt/t0)^{i+1} ∆0^i X0. (E5)
As long as tf is within the time of validity of LPT, by definition, the LPT series converges and, from the equation above, the leading order error scales as ∼ (δt/t0)^{m+2} ∆^{m+1} (order terms first by powers of ∆ and then by powers of δt/t0).
E0.2 Multiple steps
In general for a practical application one is limited to working at a finite Lagrangian order. In such cases, it becomes necessary to ask if convergence can be achieved by working at a finite Lagrangian order with increasing number of steps. First, we outline the calculation. The initial data is subscripted by "0". For example, let the initial perturbed scale factor be b0 = b(t0), the initial background scale factor a0, the initial density contrast δ0 and the initial velocity perturbation δv0. The Lagrangian expansion parameter ∆0 and angle θ0 follow from the relations δ0 = ∆0 cos θ0 and δv0 = ∆0 sin θ0. The physical coordinate is r0 = b0X0; for given r0 the initial Lagrangian coordinate X0 is fixed by choosing b0 to be equal to a0.
Consider taking N steps from initial to final time with an m-th order Lagrangian expansion. Assume that the final time is within the time of validity of the Lagrangian expansion. For definiteness, let the j-th time be tj = t0 β^{j/N}, where β = tf/t0 (so tN is just the final time tf). This geometric sequence of increasing steps is well-suited for an expanding background with a small growing perturbation. The scalings of differences like Nm − 1∞, (N + 1)m − Nm and Nm+1 − Nm with N and m are all of interest. We expect the same scaling of these differences with N and m for any uniformly refined set of time steps.
A finite order Lagrangian expansion accurate to order m is a truncated representation of the full Lagrangian solution
b(t) = Σ_{i=0}^{m} b^{(i)}(t) ∆0^i. (E6)
At the beginning of the first step the scale factor at t0 is advanced to t1 and written as b(t0 → t1; ∆0, θ0). Note the explicit dependence on the perturbation parameters at t0. Abbreviate the scale factor and its derivative for the truncated expression as b and ḃ. The background scale factor at time t1 is a1. At the end of the first step the Lagrangian coordinate X1 and the new b1 are inferred, as described in the main body of the text, by re-scaling quantities calculated at t1. The new b1 is not b. The net result for the full step t0 → t1 is
X1 = X0 b/a1 (E7)
b1 = a1 (E8)
ḃ1 = ḃ a1/b (E9)
δ1 = (1 + δ0)(a1/b)^3 − 1 (E10)
δv1 = a1ḃ/(ȧ1 b) − 1. (E11)
The newly defined quantities subscripted by "1" will be used to initiate the next step. The updated perturbations imply a new Lagrangian expansion parameter and angle according to
∆1 = (δ1^2 + δv1^2)^{1/2} (E12)
cos θ1 = δ1/∆1 (E13)
sin θ1 = δv1/∆1. (E14)
The new physical position is r1 = b1X1 = bX0. In a numerical calculation the truncated b is exact to floating point precision but contains an error because of the omitted orders; in a symbolic calculation b is known to order ∆0^m. The next step from t1 → t2 involves a similar update with b = b(t1 → t2; ∆1, θ1):
X2 = X1 b/a2 (E15)
b2 = a2 (E16)
ḃ2 = ḃ a2/b (E17)
δ2 = (1 + δ1)(a2/b)^3 − 1 (E18)
δv2 = a2ḃ/(ȧ2 b) − 1. (E19)
The new physical position is r2 = b2X2. This iterative scheme repeats for a total of N steps. It ultimately yields an approximation to the position at the final time, denoted Nm = bN XN. A difference like (N + 1)m − Nm may be calculated numerically for various N and m and the scaling fitted and inferred. In addition, one can approach the problem symbolically. To write Nm we need to expand the final result in powers of ∆0. Note, for example, that ∆1 and θ1 are known as expansions in ∆0 with coefficients that depend upon θ0. Perturbation-related quantities are re-written systematically in terms of initial quantities. For example, b(t1 → t2; ∆1, θ1) may be expanded in powers of ∆1 with coefficients depending upon θ1. Next, all occurrences of ∆1 and θ1 are replaced by expansions in powers of ∆0 with coefficients depending upon θ0. All terms up to and including ∆0^m are retained in the final result. This procedure is systematically repeated until all quantities are expressed in terms of initial data. Finally, the difference (N + 1)m − Nm is calculated symbolically. Similar strategies allow construction of all the differences of interest.
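The re-initialisation bookkeeping can be exercised directly. In the sketch below (our own implementation; all names and tolerances are our choices) the one-step propagator is the exact closed-model parametric solution, so the N-step re-initialised result must agree with the single-step result to root-finding precision, which checks the update algebra of eqs. (E7)-(E11):

```python
import math

t0 = 2.0 / 3.0                       # EdS background with H0 = a0 = 1

def a_of(t):
    return (t / t0) ** (2.0 / 3.0)

def adot(t):
    return (2.0 / 3.0) * a_of(t) / t

def propagate(ts, te, delta, dv):
    """Exact b(te), bdot(te) for eq. (C1) with b(ts) = a(ts) and
    bdot(ts) = adot(ts)(1 + dv); closed (E < 0), expanding case only."""
    a_s, H_s = a_of(ts), 2.0 / (3.0 * ts)
    j = 1.0 + delta
    E = (1.0 + dv) ** 2 - j
    assert E < 0.0
    absE = abs(E)
    eta_s = math.acos(1.0 - 2.0 * absE / j)          # b(eta_s) = a_s
    def t_of(eta):                                   # increasing on (eta_s, pi)
        return ts + (0.5 / H_s) * (j / absE ** 1.5) * (
            (eta - math.sin(eta)) - (eta_s - math.sin(eta_s)))
    lo, hi = eta_s, math.pi
    for _ in range(200):                             # bisect t_of(eta) = te
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if t_of(mid) < te else (lo, mid)
    eta = 0.5 * (lo + hi)
    b = 0.5 * a_s * (j / absE) * (1.0 - math.cos(eta))
    bdot = a_s * H_s * math.sqrt(absE) * math.sin(eta) / (1.0 - math.cos(eta))
    return b, bdot

def evolve(n_steps, tf, delta0, dv0):
    """March from t0 to tf in geometric steps, re-initialising the
    perturbation via eqs. (E7)-(E11); return the physical r = b X."""
    X, delta, dv, t_prev = 1.0, delta0, dv0, t0
    beta = tf / t0
    for k in range(1, n_steps + 1):
        t_next = t0 * beta ** (k / n_steps)
        b, bdot = propagate(t_prev, t_next, delta, dv)
        a_n = a_of(t_next)
        X *= b / a_n                                   # (E7)
        delta = (1.0 + delta) * (a_n / b) ** 3 - 1.0   # (E10)
        dv = bdot * a_n / (adot(t_next) * b) - 1.0     # (E9), (E11)
        t_prev = t_next
    return a_of(tf) * X                                # b_N = a(t_f)

r_one = evolve(1, 2.0 * t0, 0.5, 0.0)
r_four = evolve(4, 2.0 * t0, 0.5, 0.0)
```

Because the propagator here is exact, the agreement tests only the re-initialisation algebra; the stepping errors of a truncated (finite Lagrangian order) propagator are the subject of the analysis that follows.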
To make analytic progress, assume that f_t = β − 1 = (t_f − t0)/t0 ≪ 1 is a small parameter. In a difference like (N + 1)_m − N_m many lower-order terms will coincide. Consider an ordering of terms by powers of ∆0 (first) and by powers of f_t (second). Define the leading-order difference to be the first non-vanishing term proportional to ∆0^p f_t^q, for smallest p and then smallest q. It is straightforward to apply this ordering to simplify differences like (N + 1)_m − N_m. The leading-order differences satisfy the simple equalities given in eq. (E20). These differences can be compared with the numerical differences, for which no expansion in f_t is carried out. We verified the analytical scaling by the following numerical experiment. The parameters of the problem at the starting time t0 are ∆0 = 1/2, θ0 = −π/4. The final time of interest t_f is close to the initial time, so that (t_f − t0)/t0 = 1/4. The m-th order Lagrangian approximation is evaluated at this fixed final time with a successively increasing number of steps. Values of m from 1 to 4 and values of N from 10 to 50 were considered. For geometric time steps, (t_{i+1} − t_i)/t_i = β^{1/N} − 1 is independent of i and denoted δt/t below.
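The geometric spacing used in this experiment is simple to generate. The helper below is an illustrative sketch (the function name is ours): it returns t_i = t0 β^{i/N}, for which every fractional step (t_{i+1} − t_i)/t_i equals β^{1/N} − 1:

```python
import numpy as np

def geometric_times(t0, tf, N):
    """N geometric steps from t0 to tf: t_i = t0 * beta**(i/N), beta = tf/t0."""
    beta = tf / t0
    return t0 * beta ** (np.arange(N + 1) / N)

# The experiment in the text: (tf - t0)/t0 = 1/4, here with N = 10 steps.
t = geometric_times(1.0, 1.25, 10)
dt_over_t = t[1:] / t[:-1] - 1.0   # identical for every i, equal to beta**(1/N) - 1
```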
The results are plotted in figure E1. The points indicate the numerical data and the solid lines indicate the analytical functions defined in eq. (E20). The numerical calculation was done with high enough precision that even small errors of the order of 10^−14 are not contaminated by floating point errors. The agreement between the numerical experiment and the symbolic differences is very good.
Thus, the scaling of the errors implies that for a small total time step, any finite order Lagrangian scheme will yield convergent results upon taking multiple steps. Conversely, for a fixed number of steps, a higher order Lagrangian calculation will give better results.
It is useful to express the scaling in terms of the individual small step size δt/t. Under the assumption that (t_f − t0)/t0 is small, (t_f − t0)/t0 ∼ N δt/t. The scaling |1_∞ − N_m| ∼ N^−m ∆^{m+1} ((t_f − t0)/t0)^{m+2} can be re-written as |1_∞ − N_m| ∼ N ((t_f − t0)/t0) · ∆^{m+1} (δt/t)^{m+1}, which can be interpreted as an error of ((t_f − t0)/t0) ∆^{m+1} (δt/t)^{m+1} per step. In the text, the quantity ε = ∆ δt/t is kept constant. For fixed initial and final times, the error scales ∝ N ε^{m+1}. If ∆ does not change appreciably then the error is ∝ ∆ ε^m. Convergence is attained when ε → 0.

Figure D3. Roots with η = 2π plotted in the complex ∆ plane for 0 < θ ≤ π. These values of θ correspond to those in figure D1. The colour codes the complex phase of the roots (∆ = |∆| e^{iφ}). The real positive (φ = 0) and negative (φ = π) roots are shown in red and cyan respectively. The complex roots can have any colour other than these two and the bottom figure provides the coding. By comparison with figure D1 one sees that all open models with real roots are cyan (negative); likewise all closed models with real roots are red (positive). Note, however, that there exist complex roots for both open and closed models. Coloured version online.

Figure E1. The three panels show the log of the errors |1_∞ − N_m|, |N_{m+1} − N_m| and |(N + 1)_m − N_m| vs. N. The final time t_f is the same for all these comparisons. The dots correspond to the data generated by the numerical experiment and the lines correspond to the analytical formulas given in eq. (E20). The lines from top to bottom correspond to m = 1, 2, 3, 4 respectively for the first and third panels and to m = 1, 2, 3 for the second panel. It is clear that for fixed m, increasing the number of steps improves convergence. Conversely, for fixed N, increasing the Lagrangian order m improves convergence.
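The re-writing of the error scaling quoted above — N^−m ∆^{m+1} T^{m+2} = N T ∆^{m+1} (δt/t)^{m+1} once T ≡ (t_f − t0)/t0 = N δt/t — is a one-line algebraic identity; the illustrative snippet below confirms it numerically:

```python
def error_forms(N, Delta, dt_t, m):
    """Both forms of the |1_inf - N_m| scaling, assuming (tf - t0)/t0 = N * dt_t."""
    T = N * dt_t
    original = N ** (-m) * Delta ** (m + 1) * T ** (m + 2)
    per_step = N * T * Delta ** (m + 1) * dt_t ** (m + 1)   # N steps, each of size per-step error
    return original, per_step
```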
Figure 4. A schematic illustration of the radius of convergence and the time of validity. The left panel shows the location of poles in the complex ∆ plane at times t1 and t2, denoted by orange squares and green dots, respectively. At a fixed time, the pole nearest the origin determines the disk (black circle) within which a series expansion about the origin converges. The right panel shows |∆| for t1 and t2. The black line is R_∆(t), the minimum |∆| calculated for a continuous range of times (where t0, the initial time, lies far to the left). The arrows show how the time of validity is inferred for a given ∆.
Figure 2 shows the parabola E = 0 which separates open and closed regions. For infinitesimal ∆ the line of division has slope tan θ = 1/2. Models with θ ∈ [θ_c^−, θ_c^+] = [−π + tan^−1(1/2), tan^−1(1/2)] = [−2.68, 0.46] are closed while those outside this range are open.
Figure 5. Scale factor as a function of time. The initial conditions (b0 = a0 = 1 and varying ḃ0) are given at time t0 (dashed blue line).
Figure 6. R_∆ for θ = 2.82 and a0 = 10^−3 (vertical dashed line). To determine the time of validity for the LPT expansion with a given ∆, move horizontally to the right of a = a0 following the dashed line with the arrow, locate the first coloured line with ordinate equal to ∆, and then move vertically down to read off the scale factor at the time of validity a_v. The specific case illustrated (∆ = 10^−2) matches that of the model with problematic convergence in figure 1. The time of validity is correctly predicted. The meaning of the colours is discussed in the text. Coloured version of the figure is available online.

Figure 7. R_∆ for θ = 0.44 and a0 = 10^−3. The line ∆_E = 0 separates open and closed models. The scale factor at the time of validity is a_v.
Figure 8. The exact solution (black, dashed) and LPT expansions of successively higher order (blue) for two expanding, closed models with θ = 0.44. The left-hand panel has ∆ = 0.01; LPT converges to the exact solution at all times up to the singularity at a = 5.5. The right-hand panel has ∆ = 0.2; LPT does not converge beyond a = 0.38.
Small-∆ points are mapped between open and closed models; large-∆ points may connect open models to other open models. If the original model is open with a limiting pole that is negative, real and of small magnitude, then it corresponds to a future singularity of the closed mirror model. For example, the closed model with parameters (∆ = 0.01, θ = 0.44) in the left panel of figure 8 and the open model with parameters (∆ = 0.01, θ = 0.44 − π) shown in the left panel of figure 9 are mirrors. The time of validity of the open model equals the time to collapse of its closed mirror.
Figure 9. Mirror models of the closed models of figure 8. Each graph shows the exact solution (black, dashed) and the LPT expansion to successively higher orders (blue) of one mirror model. The original model and the mirror have the same time of validity for the LPT expansion.
Figure 10. The red shaded region denotes the part of phase space where complex roots play a role. The solid blue line represents the initial conditions which correspond to the background and perturbation having the same big bang time. The black solid parabola separates the closed and open models. Coloured version online.

Figure 11. Two open models which are mirrors of each other. Each plot shows the exact solution (black, dashed) and the LPT series expansion to successively higher orders (blue). The left panel is an initially expanding, open model whose convergence is limited to scale factors less than a_v = 0.0016 (arrow). The right panel shows the initially contracting mirror model whose bang time at a_v = 0.0016 is responsible for the limitation.
Figure 12. R_∆ for θ = 17π/36 and a0 = 10^−3. The blue solid and cyan dashed lines denote real roots with η = 2π and η = 0 respectively. For ∆ = 2, the time of validity is set by the root with η = 0, which is the bang time of the mirror model with θ = 17π/36 − π. See figure 11 for the evolution of both models.
Figure 13. The left panel shows streamlines of the flow described by eq. (46). The colour coding of the plot is the same as figure 10. The right panel zooms in on the area near the origin, which is where all models are located at sufficiently early times. At late times, open models move away from the origin towards the attracting fixed point at (δ, δv) = (−1, 0.5). The attraction to the Zeldovich solution is shown for a set of initial conditions (yellow, cyan, green and black lines) that begin near but not on the critical trajectory. Closed models move out to infinity along the fixed big bang time curve. Coloured version online.

Figure 14. Extending the time of validity of LPT. The first step has ∆0 = 10^−2 and θ = 2.82 and implies a scale factor at the time of validity a_v = 0.179. Incrementing by half the allowed step gives initial conditions for the second step (∆1, θ1) = (0.91, 2.68). Note that the new time of validity has increased.

The result is χ = 2.62 (numerical results in Appendix D); the time of validity is proportional to the characteristic age of the background, and individual steps grow larger and larger.
Consider, for example, the number of steps needed to extend an open solution from recombination to today. Let t_f(t_i)

Figure 15. The ratio of successive times of validity (α) vs. a(t). The dashed, dot-dashed and dotted lines indicate three initial starting points (0.5, 0.5), (0, 1), (−0.2, 0.2) respectively. The ratio converges to about 3.6 and the time of validity increases geometrically with N.
Figure 16 investigates the effect of time step and order on the evolution of the open model introduced in figure 1 (∆ = 0.01, θ = 2.82, a0 = 10^−3). The series convergence breaks down at a = 0.
Figure 16. LPT re-expansion of an open model with ∆0 = 0.01 and θ0 = 2.82. The top three figures show the scale factor for the same initial conditions calculated with one step (left), three steps (middle) and five steps (right). The black dots indicate the position of the time steps. In the middle and right panels, the solution was advanced 9/10 and 1/2 of the allowed time of validity, respectively. The bottom figures show the errors for all LPT approximations to b(t), including the unphysical negative ones. The orders of the LPT expansion are colour-coded according to the top left figure. The single-step expansion does not respect the time of validity whereas both the three- and six-step examples do. The original expansion does not converge over the full time range whereas the re-expansions do. Coloured version online.
Figure 17. LPT re-expansion of a closed solution with ∆ = 0.2, θ = 0.44. The top three figures show the scale factor calculated with a single step (left) and multiple steps with ε = 0.5 (middle) and ε = 0.2 (right) (refer to the text for the definition of ε). The bottom figures show the errors for all LPT approximations to b(t), including the unphysical negative ones. The order of the expansion is colour-coded as in the top left figure. The single-step expansion does not respect the time of validity whereas both the other cases do. The black dots indicate the position of the time steps. The original expansion does not converge over the full time range whereas the re-expansions do. Coloured version online.
Figure A1. A cartoon showing the physical set-up of the problem.
Figure B1. Series expansions in t and ∆ for an illustrative function f(t, ∆) (see text). The black dotted line indicates the exact function f and the blue solid lines indicate successive approximations. The top left and right panels are series expansions in ∆ around ∆ = 0, plotted as a function of t (for ∆ = 1/10) and as a function of ∆ (for t = 1/10) respectively. The bottom left and right panels are series expansions in t around t = 2, plotted as functions of t for ∆ = −1/10 and ∆ = −1/3 respectively.
Figure C1. This figure describes one aspect of the analytic extension of the exact solution. For a given real ∆, the complex extension ∆ → ∆e^{iφ} obeys eq. (C25) with two possible forms t+ and t−. The choice depends on φ, ∆, θ. For some (∆, θ) a single form is sufficient for all φ; for other values both forms are needed. This figure illustrates how the upper half plane is partitioned based on this property. Coloured version online.
When closed models have real roots they are positive; when open models have real roots they are negative. However, some open and closed models also possess complex roots. The set of models with complex roots (of smallest magnitude) is evident from the shading in figure 10. The phase of each root of smallest magnitude in figure D1 is indicated by the colour shading in figure D3. There are horizontal dashed lines with colours green, blue and purple in figures D1 and D3 indicating |δv| = 1, |δ| = 1
(N + 1)_m − N_m = K_{N,m} cos θ sin^m θ ∆0^{m+1} f_t^{m+2}

Table D1.
© RAS, MNRAS 000, 1–38
Our analysis is restricted to the case of initially expanding models, i.e. near ∆ = 0. For initially contracting closed models, similar physical arguments motivate a consideration of the behaviour near the initial singularity (not the future bang time). For initially contracting open models the epoch of interest is t → −∞. These models have large ∆ and are not described by the linear limit discussed in the text.
Typically, the numerical coefficient is of order unity and varies with m as well as the particular value of θ. For the purposes of a discussion of the scaling of the error term, we assume the numerical coefficients to be constant as m and θ vary.
ACKNOWLEDGMENTS S. N. thanks Varun Sahni for discussions on convergence of Lagrangian theory at the IUCAA CMB-LSS summer school, Paul Grabowski, Sergei Dyda and Justin Vines for useful conversations and Saul Teukolsky for feedback on the manuscript. The authors would like to thank Thomas Buchert for useful comments on the paper. This material is based upon work supported by the NSF under Grant No. AST-0406635 and by NASA under Grant No. NNG-05GF79G.
Adler S., Buchert T., 1999, A&A, 343, 317
Bernardeau F., Colombi S., Gaztañaga E., Scoccimarro R., 2002, Phys. Rep., 367, 1
Bouchet F. R., 1996, astro-ph/9603013
Bouchet F. R., Colombi S., Hivon E., Juszkiewicz R., 1995, A&A, 296, 575
Bouchet F. R., Juszkiewicz R., Colombi S., Pellat R., 1992, ApJ, 394, L5
Brown J. W., Churchill R., 1996, Complex Variables and Applications. Wiley
Buchert T., 1992, MNRAS, 254, 729
Buchert T., 1994, MNRAS, 267, 811
Buchert T., 1995, astro-ph/9509005
Buchert T., Dominguez A., Perez-Mercader J., 1999, A&A, 349, 343
Buchert T., Ehlers J., 1993, MNRAS, 264, 375
Catelan P., 1995, MNRAS, 276, 115
Chicone C., 2006, Ordinary Differential Equations with Applications (sections 1.1, 1.12), 2nd edn. Texts in Applied Mathematics, Springer-Verlag, New York
Ehlers J., Buchert T., 1997, General Relativity and Gravitation, 29, 733
Karakatsanis G., Buchert T., Melott A., 1997, A&A, 326, 873
Kasai M., 1995, Phys. Rev. D, 52, 5605
Landau L. D., Lifshitz E. M., 1975, The Classical Theory of Fields, 4th edn. Butterworth-Heinemann
Matarrese S., Pantano O., Saez D., 1993, Phys. Rev. D, 47, 1311
Matarrese S., Pantano O., Saez D., 1994, MNRAS, 271, 513
Matarrese S., Terranova D., 1996, MNRAS, 283, 400
Monaco P., 1997, MNRAS, 287, 753
Moutarde F., Alimi J., Bouchet F. R., Pellat R., Ramani A., 1991, ApJ, 382, 377
Munshi D., Sahni V., Starobinsky A. A., 1994, ApJ, 436, 517
Sahni V., Coles P., 1995, Phys. Rep., 262, 1
Sahni V., Shandarin S., 1996, MNRAS, 282, 641
Scoccimarro R., Sheth R. K., 2002, MNRAS, 329, 629
Tatekawa T., 2007, Phys. Rev. D, 75, 44028
Tolman R. C., 1934, Proceedings of the National Academy of Science, 20, 169
Zel'dovich Y. B., 1970, A&A, 5, 84
|
[] |
[
"The Edge of Quantum Chaos",
"The Edge of Quantum Chaos"
] |
[
"Yaakov S Weinstein \nDepartment of Nuclear Engineering\nArbeloff Laboratory for Information Systems and Technology\nMassachusetts Institute of Technology\n02139 ‡CambridgeMassachusetts\n\nDepartment of Mechanical Engineering\nMassachusetts Institute of Technology\n02139CambridgeMassachusetts\n",
"Seth Lloyd ",
"Constantino Tsallis \nCentro Brasileiro de Pesquisas Fisicas\nXavier Sigaud 15022290-180Rio de Janeiro-RJBrazil\n"
] |
[
"Department of Nuclear Engineering\nArbeloff Laboratory for Information Systems and Technology\nMassachusetts Institute of Technology\n02139 ‡CambridgeMassachusetts",
"Department of Mechanical Engineering\nMassachusetts Institute of Technology\n02139CambridgeMassachusetts",
"Centro Brasileiro de Pesquisas Fisicas\nXavier Sigaud 15022290-180Rio de Janeiro-RJBrazil"
] |
[] |
We identify a border between regular and chaotic quantum dynamics. The border is characterized by a power law decrease in the overlap between a state evolved under chaotic dynamics and the same state evolved under a slightly perturbed dynamics. For example, the overlap decay for the quantum kicked top is well fitted with [1 + (q − 1)(t/τ)^2]^{1/(1−q)} (with the nonextensive entropic index q and τ depending on perturbation strength) in the region preceding the emergence of quantum interference effects. This region corresponds to the edge of chaos for the classical map from which the quantum chaotic dynamics is derived. PACS numbers: 03.65.Ta, 05.45.+b
|
10.1103/physrevlett.89.214101
|
[
"https://arxiv.org/pdf/cond-mat/0206039v2.pdf"
] | 119,389,885 |
cond-mat/0206039
|
dc0f1b988202e16d02aaa570534ea2ee3d970e51
|
The Edge of Quantum Chaos
3 Oct 2002
Yaakov S Weinstein
Department of Nuclear Engineering
Arbeloff Laboratory for Information Systems and Technology
Massachusetts Institute of Technology
Cambridge, Massachusetts 02139 ‡
Department of Mechanical Engineering
Massachusetts Institute of Technology
Cambridge, Massachusetts 02139
Seth Lloyd
Constantino Tsallis
Centro Brasileiro de Pesquisas Fisicas
Xavier Sigaud 150, 22290-180, Rio de Janeiro-RJ, Brazil
♯ Author to whom correspondence should be addressed
Classical chaotic dynamics is characterized by strong sensitivity to initial conditions. Two initially close points move apart exponentially rapidly as the chaotic dynamics evolve. The rate of divergence is quantified by the Lyapunov exponent [1]. At the border between chaotic and non-chaotic regions (the 'edge of chaos'), the Lyapunov exponent goes to zero. However, it may be replaced by a generalized Lyapunov coefficient [2] describing power-law, rather than exponential, divergence of classical trajectories.
This paper identifies a characteristic signature for the edge of quantum chaos. Quantum states maintain a constant overlap fidelity, or distance, under all quantum dynamics, regular and chaotic. One way to characterize quantum chaos is to compare the evolution of an initially chosen state under the chaotic dynamics with the same state evolved under a perturbed dynamics [3] [4] [5]. When the initial state is in a regular region of a mixed system, a system with regular and chaotic regions, the overlap remains close to one. When the initial state is in a chaotic zone, the overlap decay is exponential. This paper shows that at the edge of quantum chaos there is a region of polynomial overlap decay.
The Lyapunov exponent description of chaos is as follows [1]. If ∆x_0 is the distance between two initial conditions, we define ξ = lim_{∆x_0→0} (∆x_t/∆x_0) to describe how far apart two initially arbitrarily close points become at time t. Generally, ξ(t) is the solution to the differential equation dξ(t)/dt = λ_1 ξ(t), such that ξ(t) = e^{λ_1 t} (λ_1 is the Lyapunov exponent). When the Lyapunov exponent is positive, the dynamics described by ξ(t) is strongly sensitive to initial conditions and we have chaotic dynamics.
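A standard classical illustration (our example; the paper itself studies the kicked top) is the logistic map x → rx(1 − x), whose Lyapunov exponent λ_1 is the trajectory average of ln|f′(x)| = ln|r(1 − 2x)|. At r = 4 the exact value is λ_1 = ln 2 > 0 (chaos), while at the edge of chaos λ_1 → 0:

```python
import math

def lyapunov_logistic(r, x0=0.1, n=100000, burn=1000):
    """Estimate lambda_1 = <ln|f'(x)|> along an orbit of x -> r*x*(1-x)."""
    x = x0
    for _ in range(burn):              # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n
```

For r on a stable cycle (e.g. r = 3.2) the estimate is negative; near the period-doubling accumulation point r ≈ 3.5699 it vanishes, which is the classical 'edge of chaos' alluded to above.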
This description of chaos works well for classical Newtonian mechanics but it cannot hold true for quantum mechanical wave functions governed by the linear Schrödinger equation. Indeed, like the overlap between Liouville probability densities, the overlap between any two quantum wavefunctions is constant in time. This difficulty has led to the study of 'quantum chaos,' the search for characteristics of quantum dynamics that manifest themselves as chaos in the classical realm [6] [7] [8] [9].
As a possible signature of quantum chaos, Peres [3] [4] proposed comparing the evolution of a state under an unperturbed, H, and perturbed, H + V , Hamiltonian for chaotic and non-chaotic dynamics. The divergence of the states after a time t is measured via the overlap
O(t) = |⟨ψ_u(t)|ψ_p(t)⟩|,   (1)
where ψ_u is the state evolved under the unperturbed system operator and ψ_p is the state evolved under the perturbed operator. Recent insights have sharpened the differences between chaotic and regular dynamics under this approach, and several regimes of overlap decay behavior based on perturbation strength have been identified. The overlap decays for a short time as a quadratic. After this time, for chaotic dynamics with weak perturbation the overlap decay is Gaussian, as expected from first order perturbation theory [10] [11] [12]. For stronger perturbations, where perturbation theory breaks down, the overlap decay is exponential. This occurs when the magnitude of a typical off-diagonal element of V expressed in the ordered eigenbasis of H is greater than the average level spacing of the system, ∆. The regime of exponential overlap decay is called the Fermi Golden Rule (FGR) regime [12] [13]. The rate of the exponential decay will increase with stronger perturbation as the perturbation strength squared until the decay rate reaches a value given by the classical Lyapunov exponent [14] [12] [15] or the bandwidth of H [12]. The crossover regime, from Gaussian to exponential decay, has also been studied [13]. We note that many of the works cited use O^2 as the fidelity. Here, we follow [11] and simply use the overlap, O.
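A minimal numerical illustration of eq. (1) — our sketch, with random symmetric matrices standing in for H and V rather than the kicked top studied in the paper — evolves one initial state under both propagators and records the overlap after each step (ħ = 1):

```python
import numpy as np

rng = np.random.default_rng(1)

def propagator(H, dt):
    """U = exp(-i*H*dt) built from the eigendecomposition of the Hermitian H."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

def overlap_decay(H, Vp, eps, psi0, steps, dt):
    """O(t_k) = |<psi_u(t_k)|psi_p(t_k)>| for H versus H + eps*Vp, eq. (1)."""
    Uu, Up = propagator(H, dt), propagator(H + eps * Vp, dt)
    pu = pp = psi0.astype(complex)
    out = []
    for _ in range(steps + 1):
        out.append(abs(np.vdot(pu, pp)))   # vdot conjugates its first argument
        pu, pp = Uu @ pu, Up @ pp
    return np.array(out)

n = 60
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
H, Vp = (A + A.T) / 2, (B + B.T) / 2       # random symmetric "Hamiltonians"
psi0 = np.zeros(n); psi0[0] = 1.0
O = overlap_decay(H, Vp, 0.5, psi0, 50, 0.1)   # starts at 1 and decays
```

With ε = 0 the two propagators coincide and O(t) stays at unity; for increasing ε the decay steepens before levelling off near a saturation floor set by the Hilbert-space dimension.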
For regular, non-chaotic systems the FGR-regime overlap decay is a Gaussian, faster than the exponential decay of chaotic dynamics. This non-intuitive result is explained using a correlation-function formulation of the overlap by Prosen [11]. In addition, a power-law decay ∝ t^{3/2} has been found for an integrable system [16].
The initial overlap decay behavior continues until some saturation level [11]. For coherent and random pure states the saturation level is 1/N for the exponential decay (in the FGR regime) and 2/N for the Gaussian decay (in the weak perturbation regime), where N is the dimension of the system Hilbert space. However, for eigenstates of the system and mixed random states the saturation level increases with increasing perturbation strength [11].
Here we study a mixed system, a system with both chaotic and regular dynamics. Coherent states within the regular regime are practically eigenstates of the system, and the overlap of these states oscillates close to unity [4] [11]. This is shown in figure 1, where the initial coherent state is centered at a fixed point of order one of the regular map. Coherent states in the chaotic regime of the system show exponential overlap decay in the FGR regime and Gaussian overlap decay for weak perturbations. We show that in both the FGR and weak perturbation regimes states near the chaotic border have a polynomial overlap decay.
In 1988, one of us [17] introduced in statistical mechanics the generalized entropy
S_q = k (1 − Σ_{i=1}^{W} p_i^q) / (q − 1),    (2)
where k is a positive constant, p_i is the probability of finding the system in microscopic state i, and W is the number of possible microscopic states of the system; q is the entropic index, which characterizes the degree of nonextensivity of the system. In the limit q → 1 we recover the usual Boltzmann entropy
S_1 = −k Σ_{i=1}^{W} p_i ln p_i.    (3)
To demonstrate that q characterizes the degree of the system nonextensivity it is useful to examine the S_q entropy addition rule [18]. If A and B are two independent systems, such that the probability p(A + B) = p(A)p(B), the entropy of the total system S_q(A + B) is given by the following equation:
S_q(A + B)/k = S_q(A)/k + S_q(B)/k + (1 − q) S_q(A) S_q(B)/k².    (4)
From the above equation it is realized that q < 1 corresponds to superextensivity and q > 1 to subextensivity.
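As a quick check of definitions (2)-(4), the following sketch (illustrative code, not from the paper; k = 1 and a toy distribution chosen here for convenience) evaluates S_q, confirms the q → 1 Boltzmann limit (3), and verifies the pseudo-additivity rule (4) for independent systems.

```python
import math

def tsallis_entropy(p, q, k=1.0):
    """S_q = k (1 - sum_i p_i^q)/(q - 1); the q -> 1 limit is the Boltzmann entropy."""
    if abs(q - 1.0) < 1e-12:
        return -k * sum(pi * math.log(pi) for pi in p if pi > 0)
    return k * (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.25, 0.125, 0.125]
boltzmann = -sum(pi * math.log(pi) for pi in p)
near_one = tsallis_entropy(p, 1.000001)          # should approach the Boltzmann value

# pseudo-additivity (4) for independent systems A and B, with k = 1:
pa, pb, q = [0.7, 0.3], [0.2, 0.5, 0.3], 1.7
pab = [x * y for x in pa for y in pb]            # p(A+B) = p(A) p(B)
lhs = tsallis_entropy(pab, q)
rhs = (tsallis_entropy(pa, q) + tsallis_entropy(pb, q)
       + (1 - q) * tsallis_entropy(pa, q) * tsallis_entropy(pb, q))
```

The pseudo-additivity check holds exactly (up to rounding), since for product distributions it follows algebraically from definition (2).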
Using this entropy to generalize statistical mechanics and thermodynamics has helped explain many natural phenomena in a wide range of fields. One application of this nonextensive entropy occurs in one dimensional dynamical maps. As explained above, when the Lyapunov exponent is positive, the system dynamics is strongly sensitive to initial conditions and is characterized as chaotic dynamics. When the Lyapunov exponent is zero it has been conjectured [2] (and proven [19] for the logistic map) that the distance between two initially arbitrarily close points is described by dξ/dt = λ_{q_sen} ξ^{q_sen}, leading to ξ = [1 + (1 − q_sen) λ_{q_sen} t]^{1/(1−q_sen)} (sen stands for sensitivity). This requires the introduction of λ_{q_sen} as a generalized Lyapunov coefficient. The Lyapunov coefficient scales inversely with time as a power law instead of the characteristic exponential of a Lyapunov exponent. Thus, there exists a regime, q_sen < 1, λ_1 = 0, λ_{q_sen} > 0, which is weakly sensitive to initial conditions and is characterized by power-law, instead of exponential, mixing. This regime is called the edge of chaos.
Initial states of a mixed system near the chaotic border that exhibit the polynomial overlap decay are at the 'edge of quantum chaos', the border between regular and chaotic quantum dynamics. This region is the quantum parallel of the region characterized classically by the generalized Lyapunov coefficient.
The system studied is the quantum kicked top (QKT) [20] defined by the unitary operator:
U_QKT = e^{−iπJ_y/2ħ} e^{−iαJ_z²/2Jħ}.    (5)
J is the angular momentum of the top and α is the 'kick' strength. We use a QKT with α = 3, whose classical analog has a mixed phase space, with regions of chaotic and regular dynamics. The perturbed operator used is a QKT with a stronger kick strength α′ = 3.015. Hence, V = e^{−iδJ_z²/2Jħ}, where δ = α′ − α. The classical kicked top is a map on the unit sphere, x² + y² + z² = 1:
x′ = z
y′ = x sin(αz) + y cos(αz)
z′ = −x cos(αz) + y sin(αz).    (6)
For α = 3 there are two fixed points of order one at the center of the regular regions of the map. They are located at
x_f = z_f = ±.6294126,  y_f = .4557187.    (7)
The regular regions of the classical phase space are seen clearly in figure 4.
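A few lines suffice to iterate map (6) and check that the point (7) is indeed fixed; this sketch (illustrative, not part of the paper) also confirms that the map preserves x² + y² + z², as a composition of rotations should.

```python
import math

def kicked_top(state, alpha=3.0):
    """One iteration of the classical kicked top map (6)."""
    x, y, z = state
    s, c = math.sin(alpha * z), math.cos(alpha * z)
    return (z, x * s + y * c, -x * c + y * s)

fp = (0.6294126, 0.4557187, 0.6294126)      # fixed point of order one, eq. (7)
residual = max(abs(a - b) for a, b in zip(kicked_top(fp), fp))

s = (0.6294, 0.7424, 0.2294)                # chaotic orbit used in figure 4
n0 = sum(v * v for v in s)
for _ in range(1000):
    s = kicked_top(s)
drift = abs(sum(v * v for v in s) - n0)     # norm is conserved along the orbit
```

The residual at the fixed point is limited only by the quoted seven-digit precision of (7).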
To locate the edge of quantum chaos, we work in the oo (even under a 180° rotation about x and odd under a 180° rotation about y; see page 359 of [4]) symmetry subspace of the QKT with J = 120 (in the oo subspace N = J/2). We set y equal to y_f of the positive fixed point and change z so that the initial state can be systematically moved closer to and further from the fixed point of the map. An initial state with a power-law decrease of overlap is found at z_f − .124. The overlap decay for this state at the edge of quantum chaos is illustrated in figure 2 and is very well fit by the solution of dO/d(t²) = −O^{q_rel}/τ²_{q_rel} (rel stands for relaxation). Although we do not know how to derive this differential equation from first principles, the numerical agreement is remarkable (see also [21]). A time-dependent q-exponential expression analogous to the one shown here has recently been proved for the edge of chaos and other critical points of the classical logistic map [19]. The polynomial overlap decay is the transition between the quadratic and exponential overlap decays. This transitory region does not appear for chaotic states (as shown in figure 1) and is a signature of the 'edge of quantum chaos.' A power law also emerges for the above initial state in the weak perturbation or Gaussian regime. Here we use α′ = 3.0003. The power law in this regime is illustrated in figure 2 and fit with the above equation. The value of q_rel remains constant at 3.8 for small perturbations until the critical perturbation strength, δ_c, when the typical off-diagonal elements of V are larger than ∆. We can approximate δ_c ≃ 2π/N³ = 5.4×10⁻³ [12]. When δ is larger than δ_c, q_rel increases. The behavior of q_rel and τ_{q_rel} versus δ can be seen in figure 3. The edge of chaos in the quantum and classical maps is not observed at the same value due to the size of the angular momentum coherent state.
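The fitting form can be checked directly: the q-exponential O(t) = [1 + (q_rel − 1)(t/τ)²]^{1/(1−q_rel)} solves dO/d(t²) = −O^{q_rel}/τ². A short numerical sketch (illustrative, not from the paper, using the FGR-regime values q_rel = 5.1 and τ = 40 quoted in the text):

```python
def O_q(u, q, tau):
    """q-exponential overlap as a function of u = t^2."""
    return (1.0 + (q - 1.0) * u / tau ** 2) ** (1.0 / (1.0 - q))

def ode_residual(t, q, tau, h=1e-4):
    """Residual of dO/d(t^2) = -O^q / tau^2 at time t (central difference in u = t^2)."""
    u = t * t
    dO_du = (O_q(u + h, q, tau) - O_q(u - h, q, tau)) / (2 * h)
    return dO_du + O_q(u, q, tau) ** q / tau ** 2

worst = max(abs(ode_residual(t, 5.1, 40.0)) for t in (10.0, 40.0, 80.0))
```

The residual vanishes to finite-difference accuracy, consistent with the closed form being an exact solution of the relaxation equation.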
The coherent state grows as J decreases, causing it to 'leak out' into the chaotic region even though it is centered away from the chaotic border. This causes behavior characteristic of the edge of chaos to appear at different values depending on the dimension of the Hilbert space. Figure 4 shows the wavefunctions for two values of J superimposed on the classical phase space. This gives an idea as to how large the wavefunction is compared to the regular region of the map. In the region of J values that we have explored, no significant changes have been detected for q_rel, because δ_c changes only slightly. However, τ_{q_rel} decreases with increasing J.
To conclude, we have located a region on the border of chaotic and non-chaotic quantum dynamics. Quantum states located in this region exhibit a power-law decrease in overlap as opposed to the exponential overlap decay exhibited by fully chaotic quantum dynamics. The classical parallel to this region is the border between regular and chaotic classical dynamics which is characterized by the generalized Lyapunov coefficient.
PACS numbers 03.65.Ta, 03.67.-a, 05.20.-y, 05.45.+b
FIG. 1. Overlap versus time for initial angular momentum coherent states located in the chaotic region and the regular region of the quantum kicked top. The system has a spin 120 and is evolved under the kicked top Hamiltonian with α = 3 and α′ = 3.015. The overlap of the state in the chaotic region decreases exponentially with the number of iterations of the map. The state in the regular region is practically an eigenstate of the system and therefore oscillates close to unity.
FIG. 2. chaotic zones of the QKT of spin 120 and α = 3. This region is called the edge of quantum chaos and shows the expected power-law decrease in overlap. The top figure is for a perturbation strength within the FGR regime, δ = .015, and the bottom figure is for a perturbation strength of δ = .0003, well below the FGR regime. On the log-log plot the power-law decay region, from about 20-70 in the FGR regime and 2000-7000 in the Gaussian regime, is linear. We can fit the decrease in overlap with the expression [1 + (q_rel − 1)(t/τ_{q_rel})²]^{1/(1−q_rel)} where, in the FGR regime, the entropic index q_rel = 5.1 and τ_{q_rel} = 40, and in the Gaussian regime q_rel = 3.8 and τ_{q_rel} = 2500. The insets of both figures show ln_{q_rel} O ≡ (O^{1−q_rel} − 1)/(1 − q_rel) versus t²; since ln_q x is the inverse function of e_q^x ≡ [1 + (1 − q)x]^{1/(1−q)}, this produces a straight line with a slope −1/τ² (also plotted).
FIG. 3. q_rel versus perturbation strength. The value of q_rel remains constant at 3.8 until the critical perturbation, after which it increases. The relationship of τ_{q_rel} versus perturbation strength is shown on a log-log plot in the inset and is well fit by a line with slope −1.06. The location of the edge of quantum chaos for the QKT of spin 120 does not match up with the edge of the classical kicked top, which appears at approximately z_f − .2296. This implies that classically regular regions of the kicked top appear chaotic on the QKT. As J is increased, the top becomes more and more classical and states exhibiting 'edge of quantum chaos' behavior are centered closer to the classical value for the edge of chaos. Hence, for J = 120, 150, 180, 210, 240 the edge is observed at z_f − .124, .139, .151, .160 and .176, respectively.
FIG. 4. Classical phase space of the kicked top with angular momentum coherent wavefunctions: 10000 iterations of a chaotic orbit starting from the point x = .6294, y = .7424, z = .2294. The spherical phase space and the ellipsoidal coherent states are projected onto the x−z plane (only y > 0 shown) by multiplying the x and z coordinates of each point by R/r, where R = √(2(1 − |y|)) and r = √(1 − y²) [4]. The regular regions of the kicked top are clearly visible. Shown is a J = 120 wavefunction (stars) and a J = 240 wavefunction (circles), both at the edge of quantum chaos. Note that the variance of the J = 120 wavefunction is much larger than the variance of the J = 240 wavefunction. Hence, the behavior characteristic of the edge of quantum chaos appears further from the fixed point of the classical map.
The authors would like to acknowledge C. Anteneodo and J. Emerson for useful remarks. This work was supported by DARPA/MTO through ARO grant DAAG55-97-1-0342.
A.J. Lichtenberg and M.A. Lieberman, Regular and Chaotic Dynamics (Springer Verlag, 1992).
C. Tsallis, A.R. Plastino, W.M. Zheng, Chaos, Solitons and Fractals 8, 885 (1997); M.L. Lyra, C. Tsallis, Phys. Rev. Lett. 80, 53 (1998); V. Latora, M. Baranger, A. Rapisarda, C. Tsallis, Phys. Lett. A 273, 97 (2000).
A. Peres, in Quantum Chaos, Quantum Measurement, edited by H. A. Cerdeira, R. Ramaswamy, M. C. Gutzwiller, G. Casati (World Scientific, Singapore, 1991).
A. Peres, Quantum Theory: Concepts and Methods (Kluwer Academic Publishers, 1995).
R. Schack, C.M. Caves, Phys. Rev. Lett. 71, 525 (1993).
M.V. Berry, Proc. R. Soc. London Ser. A 413, 183 (1987).
M.V. Berry, in New Trends in Nuclear Collective Dynamics, edited by Y. Abe, H. Horiuchi, and K. Matsuyanagi (Springer, Berlin, 1992), p. 183.
F. Haake, Quantum Signatures of Chaos (Springer, New York, 1991).
O. Bohigas, M.J. Giannoni, C. Schmit, Phys. Rev. Lett. 52, 1 (1984).
A. Peres, Phys. Rev. A 30, 1610 (1984).
T. Prosen, M. Znidaric, J. Phys. A 35, 1455 (2002).
Ph. Jacquod, P.G. Silvestrov, C.W.J. Beenakker, Phys. Rev. E 64, 055203 (2001).
N.R. Cerruti, S. Tomsovic, Phys. Rev. Lett. 88 (2002).
R.A. Jalabert, H.M. Pastawski, Phys. Rev. Lett. 86, 2490 (2001); F.M. Cucchietti, C.H. Lewenkopf, E.R. Mucciolo, H.M. Pastawski, R.O. Vallejos, Phys. Rev. E 65, 046209 (2002); F. Cucchietti, H.M. Pastawski, D.A. Wisniacki, Phys. Rev. E 65, 045206 (2002).
G. Benenti, G. Casati, Phys. Rev. E 65, 066205 (2002).
Ph. Jacquod, I. Adagideli, C.W.J. Beenakker, quant-ph/0206160.
C. Tsallis, J. Stat. Phys. 52, 479 (1988); E.M.F. Curado, C. Tsallis, J. Phys. A 24, L69 (1991) [Corrigenda: 24, 3187 (1991) and 25, 1019 (1992)].
C. Tsallis, R.S. Mendes, A.R. Plastino, Physica A 261, 534 (1998).
S. Abe and Y. Okamoto, eds., Nonextensive Statistical Mechanics and Its Applications, Series Lecture Notes in Physics (Springer-Verlag, Berlin, 2001); G. Kaniadakis, M. Lissia, A. Rapisarda, eds., Non Extensive Thermodynamics and Physical Applications, Physica A 305 (Elsevier, Amsterdam, 2002); M. Gell Mann, C. Tsallis, eds., Nonextensive Entropy-Interdisciplinary Applications (Oxford University Press, Oxford, 2002), in press. A complete bibliography on the subject can be found at http://tsallis.cat.cbpf.br/biblio.htm.
F. Baldovin, A. Robledo, PRE, in press (2002); F. Baldovin, A. Robledo, Europhys. Lett., in press (2002).
F. Haake, M. Kus, R. Scharf, Z. Phys. B 65, 381 (1987).
E.P. Borges, C. Tsallis, G.F.J. Ananos, P.M.C. Oliveira, cond-mat/0203348.
|
[] |
[
"THE PROPORTION OF TREES THAT ARE LINEAR",
"THE PROPORTION OF TREES THAT ARE LINEAR"
] |
[
"Tanay Wakhare ",
"Eric Wityk ",
"ANDCharles R Johnson "
] |
[] |
[] |
We study several enumeration problems connected to linear trees, a broad class which includes stars, paths, generalized stars, and caterpillars. We provide generating functions for counting the number of linear trees on n vertices, and prove that they form an asymptotically vanishing fraction of all trees on n vertices. MSC(2010): Primary: 05C30; Secondary: 05C05.
|
10.1016/j.disc.2020.112008
|
[
"https://arxiv.org/pdf/1901.08502v2.pdf"
] | 119,141,920 |
1901.08502
|
966782189dcdf64ea538e5ef9189d69925ebb852
|
THE PROPORTION OF TREES THAT ARE LINEAR
24 Jan 2019
Tanay Wakhare
Eric Wityk
ANDCharles R Johnson
THE PROPORTION OF TREES THAT ARE LINEAR
24 Jan 2019
We study several enumeration problems connected to linear trees, a broad class which includes stars, paths, generalized stars, and caterpillars. We provide generating functions for counting the number of linear trees on n vertices, and prove that they form an asymptotically vanishing fraction of all trees on n vertices. MSC(2010): Primary: 05C30; Secondary: 05C05.
Introduction
A high degree vertex (HDV) in a simple undirected graph is one of degree at least 3. A tree is called linear if all of its HDVs lie on a single induced path, and k-linear if there are k HDVs. The linear trees include the familiar classes of paths, stars, generalized stars (g-stars, with exactly one HDV), double g-stars [3], caterpillars [2], etc. They have become important, as all multiplicity lists of eigenvalues occurring among Hermitian matrices whose graph is a given linear tree may be constructed via a linear superposition principle (LSP) that respects the precise structure of the linear tree [3,4]. For other, nonlinear trees, multiplicity lists require different methodology. For a tree to be nonlinear, there must be at least 4 HDVs (and at least 10 vertices altogether). An example of a nonlinear tree and a linear tree, both on 13 vertices, is given in Figure 1.1. Linear trees are a substantial generalization of caterpillars, and the problem of counting the number of nonisomorphic linear trees is significantly harder than for caterpillars. We define a bivariate generating function for the number of k-linear trees on n vertices, which enables the fast computation of these numbers. Additionally, we are able to obtain asymptotic upper bounds which show that the probability that a randomly chosen tree on n vertices will be linear approaches 0 as n → ∞. This shows that while the LSP is a useful characterization, it has limited applicability to studying the spectra of general trees. As n increases, the LSP characterizes the spectra of an asymptotically vanishing proportion of all trees. However, the proportion of linear trees vanishes slowly, so that the LSP is very important, especially for small numbers of vertices. This preliminary investigation suggests many open questions about linear trees; for instance, for fixed n, what value of k maximizes the number of k-linear trees on n vertices?
n    k=1  k=2    k=3    k=4    k=5    k=6  k=7  k=8  k=9  k=10  k=11  Total
10   25   56     22     1      0      0    0    0    0    0     0     105
11   36   114    74     6      0      0    0    0    0    0     0     231
12   50   224    219    37     1      0    0    0    0    0     0     532
13   70   441    576    158    8      0    0    0    0    0     0     1,254
14   94   733    1,394  591    58     1    0    0    0    0     0     2,872
15   127  1,252  3,150  1,896  304    9    0    0    0    0     0     6,739
16   168  2,091  6,733  5,537  1,342  82   1    0    0    0     0     15,955
17   222  3,393  13,
Generating Functions
There are strong links between nonisomorphic linear trees and partitions, which are famously difficult to enumerate. In constructing a generating function for k-linear trees on n vertices, we will rely on the generating function for integer partitions. Let

P(x) := ∏_{i=1}^{∞} 1/(1 − x^i) = Σ_{n=0}^{∞} p(n) x^n
denote the generating function for p(n), the number of unrestricted partitions of n. Let r_{n,k} be the number of reflections of linear trees on n vertices with k HDVs (which counts linearly symmetric trees once and linearly asymmetric trees twice), and let s_{n,k} denote the number of linearly symmetric trees on n vertices with k HDVs. We can conclude that the number of non-isomorphic k-linear trees on n vertices is equal to a_{n,k} = (r_{n,k} + s_{n,k})/2.
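The coefficients p(n) of P(x) are easy to generate by multiplying in one factor 1/(1 − x^i) at a time. A standard dynamic-programming sketch (illustrative code, not from the paper):

```python
def partition_counts(N):
    """p(0), ..., p(N): multiply the series 1/(1 - x^i) into p for i = 1..N."""
    p = [1] + [0] * N
    for i in range(1, N + 1):
        for n in range(i, N + 1):
            p[n] += p[n - i]
    return p

p = partition_counts(20)   # p[4] = 5, p[9] = 30, p[10] = 42
```

Each pass over i adds the contribution of parts of size i, so after the final pass p[n] counts all unrestricted partitions of n.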
The following generating function allows us to compute recurrences for the coefficients which allow for fast computation of a n,k .
Theorem 1. The generating function for k-linear trees on n vertices is

2 Σ_{n=1}^{∞} Σ_{k=0}^{∞} a_{n,k} x^n y^k = (P(x) − 1/(1−x))² x²y² / (1 − x + xy − xyP(x))
  + (1/(1−x)) (P(x²) − 1/(1−x²)) x²y² / (1 − (P(x²) − 1)x²y²/(1−x²))
  + (P(x²) − 1/(1−x²)) ((P(x) − 1)/(1−x²)) x³y³ / (1 − (P(x²) − 1)x²y²/(1−x²)).
Proof. First, we enumerate the nonisomorphic generalized stars on n vertices. Since two g-stars are non-isomorphic if and only if the lengths of their arms differ, we notice a one-to-one correspondence between nonisomorphic generalized stars and partitions. In particular, the number of nonisomorphic g-stars on n vertices is p(n − 1) (the −1 accounting for the designated central vertex), with each partition corresponding to a distinct set of possible arm lengths. Linear trees are formed from generalized stars on ≥ 2 vertices, joined by intermediate paths which can have non-trivial length. Therefore, we will use the generating function for the number of non-isomorphic generalized stars on ≥ 2 vertices, which is x(P(x) − 1).
Let an exterior star be a generalized star at the end of the linear tree. Such stars must have a central vertex of degree ≥ 2, not counting the concatenating path. Therefore, there is a bijection between partitions of n − 1 with ≥ 2 parts and non-isomorphic exterior stars on n vertices. Since there is only a single partition of n with one part, n itself, the generating function for exterior stars is x(P(x) − 1/(1−x)). Additionally, up to isomorphism, there is a unique path of length i, so that the generating function for the number of paths on n vertices has the form 1/(1−x). Therefore, the number of linear trees generated by concatenating an exterior star, k − 2 interior stars, and a trailing exterior star, by k − 1 paths of arbitrary length, is
Σ_{n=1}^{∞} Σ_{k=0}^{∞} r_{n,k} x^n y^k = Σ_{k=2}^{∞} (xP(x) − x/(1−x))² (xP(x) − x)^{k−2} (1/(1−x))^{k−1} y^k
 = (P(x) − 1/(1−x))² Σ_{k=2}^{∞} (P(x) − 1)^{k−2} x^k y^k / (1−x)^{k−1}
 = (P(x) − 1/(1−x))² x²y² / (1 − x + xy − xyP(x)).
We now enumerate s_{n,k}, the number of reflectionally symmetric k-linear trees on n vertices. These have a freely chosen central component, after which one half of the tree completely determines the other half. The component is a path when k is even, and a generalized star when k is odd.
If the central component is a path, it is free to have an arbitrary number of vertices, while every other component on n vertices determines 2n vertices due to reflectional symmetry. Therefore, we count the number of (2k − 2)-linear trees which can be generated by concatenating an exterior star, k − 2 interior stars, a freely chosen central path, and their reflections:
Σ_{k=2}^{∞} (x²P(x²) − x²/(1−x²)) (x²P(x²) − x²)^{k−2} (1/(1−x²))^{k−2} (1/(1−x)) y^{2k−2}
 = (1/(1−x)) (P(x²) − 1/(1−x²)) Σ_{k=2}^{∞} (P(x²) − 1)^{k−2} x^{2k−2} y^{2k−2} / (1−x²)^{k−2}
 = (1/(1−x)) (P(x²) − 1/(1−x²)) x²y² / (1 − (P(x²) − 1)x²y²/(1−x²)).
We can conduct a similar analysis for a (2k − 1)-linear tree, where the central component is instead a generalized star. We obtain the generating function
Σ_{k=2}^{∞} (x²P(x²) − x²/(1−x²)) (x²P(x²) − x²)^{k−2} (1/(1−x²))^{k−1} (xP(x) − x) y^{2k−1}
 = (P(x²) − 1/(1−x²)) (P(x) − 1) Σ_{k=2}^{∞} (P(x²) − 1)^{k−2} x^{2k−1} y^{2k−1} / (1−x²)^{k−1}
 = (P(x²) − 1/(1−x²)) ((P(x) − 1)/(1−x²)) x³y³ / (1 − (P(x²) − 1)x²y²/(1−x²)).
Noting that 2a_{n,k} = r_{n,k} + s_{n,k} and summing all three of these generating functions completes the proof.
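The theorem can be checked by brute-force series expansion. The sketch below (illustrative code, not from the paper) implements truncated bivariate polynomials as coefficient dictionaries, expands the right-hand side of Theorem 1, and halves the coefficients to recover a_{n,k} for k ≥ 2; the results can be compared against the table of k-linear tree counts (e.g. a_{10,2} = 56, a_{10,3} = 22, a_{10,4} = 1).

```python
def partition_counts(N):
    p = [1] + [0] * N
    for i in range(1, N + 1):
        for n in range(i, N + 1):
            p[n] += p[n - i]
    return p

def mul(a, b, N):
    """Product of coefficient dicts {(deg_x, deg_y): c}, truncated at x^N."""
    out = {}
    for (i1, j1), c1 in a.items():
        for (i2, j2), c2 in b.items():
            if i1 + i2 <= N:
                key = (i1 + i2, j1 + j2)
                out[key] = out.get(key, 0) + c1 * c2
    return out

def add(a, b):
    out = dict(a)
    for key, c in b.items():
        out[key] = out.get(key, 0) + c
    return out

def sub(a, b):
    return add(a, {key: -c for key, c in b.items()})

def geom(u, N):
    """1/(1 - u) for a series u with zero constant term."""
    res, term = {(0, 0): 1}, {(0, 0): 1}
    for _ in range(N):
        term = mul(term, u, N)
        if not term:
            break
        res = add(res, term)
    return res

def linear_tree_counts(N):
    """a_{n,k} (k >= 2) from the generating function of Theorem 1."""
    p = partition_counts(N)
    P = {(n, 0): p[n] for n in range(N + 1)}               # P(x)
    P2 = {(2 * n, 0): p[n] for n in range(N // 2 + 1)}     # P(x^2)
    g1 = {(n, 0): 1 for n in range(N + 1)}                 # 1/(1-x)
    g2 = {(2 * n, 0): 1 for n in range(N // 2 + 1)}        # 1/(1-x^2)
    A = sub(P, g1)                                         # P(x) - 1/(1-x)
    B = sub(P2, g2)                                        # P(x^2) - 1/(1-x^2)
    u = add({(1, 0): 1, (1, 1): -1}, mul({(1, 1): 1}, P, N))   # 1-x+xy-xyP(x) = 1-u
    t1 = mul(mul(mul(A, A, N), {(2, 2): 1}, N), geom(u, N), N)
    v = mul(mul(sub(P2, {(0, 0): 1}), {(2, 2): 1}, N), g2, N)
    gv = geom(v, N)
    t2 = mul(mul(mul(g1, B, N), {(2, 2): 1}, N), gv, N)
    t3 = mul(mul(mul(mul(B, sub(P, {(0, 0): 1}), N), g2, N), {(3, 3): 1}, N), gv, N)
    total = add(add(t1, t2), t3)                           # = 2 * sum a_{n,k} x^n y^k
    return {key: c // 2 for key, c in total.items() if c}

a = linear_tree_counts(14)
```

As a minimal sanity check, a_{6,2} = 1 (the unique tree with two adjacent degree-3 vertices, each carrying two leaves) and a_{7,2} = 3, both of which can be confirmed by hand enumeration.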
From this generating function, we can extract the number of k-linear trees on n vertices for small values of n. Table 2 displays this information for 10 ≤ n ≤ 25. Note that for fixed n, the distribution of k-linear trees is not uniform or normal. Instead, the dominant contribution appears at around k ≃ 0.3n. It would be interesting to see what distribution the number of k-linear trees for fixed n assumes as we take n → ∞.
Asymptotics
We wish to show that linear trees form an asymptotically small subset of all trees. Wityk [6] showed that the fraction of k-linear trees on n vertices to the number of trees with k high degree vertices approaches 0 as the number of vertices tends to infinity. However, this was only for a fixed k, and only partial results were shown for the natural extension to account for all linear trees. Heuristically, we expect the proportion of trees that are linear to decrease as the number of vertices increases. Given a large tree, we can color all the high degree vertices. The probability that these HDVs all lie on a single induced path intuitively decreases as the number of vertices increases. The next theorem asymptotically proves this result, and Table 3 shows this phenomenon for small values of n.

Theorem 2 ([6, Conjecture 3.4.7]). The number of nonisomorphic linear trees on n vertices grows at the rate O(2.746^n). Hence, the probability that a randomly chosen tree on n vertices will be linear approaches 0 as n → ∞.
Proof. To begin, fix a finite n. We then wish to consider the number of linear trees on n vertices, up to isomorphism. It is known that the number of nonisomorphic unlabelled trees on n vertices, T_n, is asymptotically given by T_n ∼ C n^{−5/2} α^n [5], where C = 0.5349… and α = 2.9557…. If we can show that the number of linear trees on n vertices has an asymptotically smaller growth rate, we are done.
For fixed n, let the number of linear trees on n vertices with an induced central path of length k be denoted T(k) (note that this is not equivalent to k-linearity). We will analyze an explicit expression for T(k) with some overcounting. However, because the condition for a tree to be linear is so strong, this overcounting does not result in any difficulties. First, consider the following alternate construction of a linear tree from generalized stars:

• construct k generalized stars {G_1, G_2, …, G_k} (where we now allow |G_i| ≥ 1);
• connect the central vertices of {G_1, G_2, …, G_k} by single edges, in that order.

Note that every linear tree, up to isomorphism, is counted by this construction, since |G_i| = 1 corresponds to a path of length longer than 1 between two central vertices. Now how do we count all possible cases of this construction? We consider two cases, based on the value of k.
Case 1: 0 ≤ k ≤ 0.5389n. We again note that we have a correspondence between partitions of n − 1 and nonisomorphic generalized stars on n vertices. Hence, letting n_i denote the size of the i-th g-star in our construction, we wish to estimate the multiple sum

T(k) ≤ Σ_{n_1+n_2+⋯+n_k=n, n_i≥1} p(n_1 − 1) p(n_2 − 1) ⋯ p(n_k − 1)    (3.1)
     = Σ_{n_1+n_2+⋯+n_k=n−k, n_i≥0} p(n_1) p(n_2) ⋯ p(n_k).    (3.2)
We now appeal to the well-known asymptotic expansion for the partition function [1, p. 97]

p(n) ∼ (1/(4√3 n)) exp(π √(2n/3)),    (3.3)
the expansion which led to the introduction of the circle method. Substituting into (3.2) gives

T(k) ≤ Σ_{n_1+⋯+n_k=n−k, n_i≥0} p(n_1) ⋯ p(n_k) ∼ Σ_{n_1+⋯+n_k=n−k, n_i≥0} (1/((4√3)^k ∏_{i=1}^{k} n_i)) exp(π √(2/3) Σ_{i=1}^{k} √n_i).
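The quality of (3.3) is easy to probe. The sketch below (illustrative, not from the paper) compares exact partition numbers, generated by the standard dynamic-programming recurrence, with the leading-order asymptotic; it also checks the composition bound p(n) ≤ 2^{n−1} that is used in Case 2 below.

```python
import math

def partition_counts(N):
    p = [1] + [0] * N
    for i in range(1, N + 1):
        for n in range(i, N + 1):
            p[n] += p[n - i]
    return p

def hardy_ramanujan(n):
    """Leading-order asymptotic (3.3) for p(n)."""
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * math.sqrt(3) * n)

p = partition_counts(1000)
ratio = p[1000] / hardy_ramanujan(1000)      # tends to 1 as n grows
bound_ok = all(p[n] <= 2 ** (n - 1) for n in range(1, 1001))
```

Even at n = 1000 the leading term is accurate to within a few percent, which is more than enough for the exponential growth-rate estimate below.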
Applying the method of Lagrange multipliers to the k-variable function

(1/∏_{i=1}^{k} n_i) exp(π √(2/3) Σ_{i=1}^{k} √n_i)

subject to the constraints n_i ≥ 0 and Σ_{i=1}^{k} n_i = n − k shows that the function achieves its maximum value at n_i = n/k − 1 ≥ 0 (since k < n), and that this maximum value is

(n/k − 1)^{−k} exp(π √(2(n − k)k/3)).
Hence, noting that the Diophantine equation n_1 + n_2 + ⋯ + n_k = n − k has C(n−1, k−1) ≤ C(n, k) solutions in non-negative integers by a stars and bars argument, we have the estimate

T(k) = O( (1/(4√3))^k (n/k − 1)^{−k} exp(π √(2(n − k)k/3)) C(n, k) ).
Note that, if k is a fixed constant, C(n, k) grows polynomially in n, and the result is proven for any fixed finite k. Therefore, now let k = Cn, 0 < C ≤ 0.5389. Then we have to appeal to Stirling's asymptotic expansion n! ∼ (n/e)^n √(2πn) to write

C(n, Cn) ∼ (1/√(2πnC(1 − C))) (1/(1 − C))^n ((1 − C)/C)^{Cn}.
Our asymptotic formula for T(k) then reduces to

T(k) = O( (1/(4√3))^{Cn} (C/(1 − C))^{Cn} exp(π √(2C(1 − C)/3) n) C(n, Cn) )
     = O( (1/(4√3))^{Cn} exp(π √(2C(1 − C)/3) n) (1/(1 − C))^n )
     = O( [ (1/(1 − C)) (1/(4√3))^C exp(π √(2C(1 − C)/3)) ]^n ).

Numerical examination reveals that

(1/(1 − C)) (1/(4√3))^C exp(π √(2C(1 − C)/3)) < 2.744
whenever C ≤ 0.5389. Hence, in this regime T(k) = O(2.745^n).

Case 2: 0.5389n ≤ k ≤ n.
In this regime, we have many extremely small components. For instance, we cannot even have a single component of size n/2, since we have k ≥ n/2 star components. While the asymptotic expansion (3.3) is good for large values of n, it is a poor bound for small n. Therefore, we instead appeal to the easy bound p(n) ≤ 2^{n−1}, which counts the number of compositions of n.
We have the simple estimate

T(k) ≤ Σ_{n_1+⋯+n_k=n, n_i≥1} p(n_1 − 1) ⋯ p(n_k − 1) ≤ Σ_{n_1+⋯+n_k=n, n_i≥1} 2^{n_1−1} ⋯ 2^{n_k−1} = 2^{n−k} Σ_{n_1+⋯+n_k=n, n_i≥1} 1 = 2^{n−k} C(n−1, k−1) ≤ 2^{n−k} C(n, k).
Again writing k = Cn, .5389 ≤ C ≤ 1, and using Stirling's expansion to asymptotically expand the binomial coefficient gives
T(k) = O( [ (2^{1−C}/(1 − C)) ((1 − C)/C)^C ]^n ).
For .5389 ≤ C ≤ 1, numerical examination reveals that

(2^{1−C}/(1 − C)) ((1 − C)/C)^C < 2.745,

which is dominated by T_n ∼ C n^{−5/2} 2.9557^n, completing the proof.
Note that we have still overcounted some classes of linear trees, and that the true asymptotic growth rate of the number of linear trees on n vertices may be even smaller. We count reflectionally symmetric linear trees twice. We also overcount linear trees which correspond to compositions of n with leading or trailing ones, as illustrated in Figure 3.1, since we can regard some vertices both as degenerate star centers or as parts of an arm.
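The two numerical claims in the proof can be reproduced on a grid. The sketch below (illustrative, not from the paper) evaluates both case bounds; the maxima sit near the crossover C ≈ 0.5389 at about 2.745, so we assert the slightly looser bound 2.75, which is still well below the tree growth constant α = 2.9557.

```python
import math

def case1(C):
    """Case 1 base: (1/(1-C)) (4 sqrt 3)^(-C) exp(pi sqrt(2C(1-C)/3))."""
    return (4 * math.sqrt(3)) ** (-C) / (1 - C) * math.exp(
        math.pi * math.sqrt(2 * C * (1 - C) / 3))

def case2(C):
    """Case 2 base: 2^(1-C)/(1-C) * ((1-C)/C)^C."""
    return 2 ** (1 - C) / (1 - C) * ((1 - C) / C) ** C

m1 = max(case1(i / 10000) for i in range(1, 5390))      # 0 < C <= 0.5389
m2 = max(case2(i / 10000) for i in range(5390, 10000))  # 0.5389 <= C < 1
```

Since both bases stay below α, the total linear-tree count O(2.746^n) is indeed a vanishing fraction of T_n.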
Figure 1.1. Nonlinear and 3-linear trees on 13 vertices (HDVs in red)
Figure 2.1. The p(3) = 3 non-isomorphic generalized stars on 4 vertices (star centers in red)
Theorem 2 ([6, Conjecture 3.4.7]). The number of nonisomorphic linear trees on n vertices grows at the rate O(2.746^n).
so that T(k) = O(2.745^n). Combining both cases reveals that the total number of linear trees on n vertices is Σ_{k=1}^{n} T(k) = O(n 2.745^n) = O(2.746^n),
Figure 3.1. Blue vertices lead to overcounting, since they can be regarded either as degenerate star centers or as part of an arm
Table 1. [6, Appendix A] The number of k-linear trees on n vertices
Table 2. [6, Appendix A] The percentage of nonlinear trees on n vertices

n   Nonlinear trees  Linear trees  % Nonlinear  Total
10  1                105           0.9          106
11  4                231           1.7          235
12  19               532           3.4          551
13  47               1,254         3.6          1,301
14  287              2,872         9.1          3,159
15  1,002            6,739         12.9         7,741
16  3,365            15,955        17.4         19,320
17  10,853           37,776        22.3         48,629
18  34,088           89,779        27.5         123,867
19  104,574          213,381       32.9         317,955
20  315,116          507,949       38.3         823,065
21  935,321          1,209,184     43.6         2,144,505
22  2,743,364        2,880,382     48.8         5,623,756
23  7,966,723        6,681,351     53.7         14,828,074
24  22,951,010       16,348,887    58.4         39,299,897
25  65,681,536       38,955,354    62.8         104,636,890
Acknowledgements. Part of this work was carried out by the second author as part of his honors thesis at the College of William and Mary. T.W. would also like to thank Roberto Costas-Santos for being an excellent advisor during the College of William and Mary Matrix REU, where part of this work was completed.
[1] G. Andrews, The Theory of Partitions (Addison-Wesley Publishing Co., Reading, Mass., 1976).
[2] F. Harary and A. J. Schwenk, The number of caterpillars, Discrete Math. 6, 359-365, 1973.
[3] C. R. Johnson and C. M. Saiago, Eigenvalues, Multiplicities and Graphs (Cambridge University Press, Cambridge, 2018).
[4] C. R. Johnson, A. A. Li, and A. J. Walker, Ordered multiplicity lists for eigenvalues of symmetric matrices whose graph is a linear tree, Discrete Math. 333, 39-55, 2014.
[5] R. Otter, The number of trees, Ann. of Math. (2) 49, 583-599, 1948.
[6] E. Wityk, Linear and Nonlinear Trees: Multiplicity Lists of Symmetric Matrices, College of William and Mary, Undergraduate Honors Theses, Paper 113, 1-55, 2014.
|
[] |
[
"Dimensional Regularization and Dispersive Two-Loop Calculations",
"Dimensional Regularization and Dispersive Two-Loop Calculations"
] |
[
"A Aleksejevs \nGrenfell Campus of Memorial University\nCorner BrookNLCanada\n",
"S Barkanova \nGrenfell Campus of Memorial University\nCorner BrookNLCanada\n"
] |
[
"Grenfell Campus of Memorial University\nCorner BrookNLCanada",
"Grenfell Campus of Memorial University\nCorner BrookNLCanada"
] |
[] |
The two-loop contributions are now often required by precision experiments, yet are hard to express analytically while keeping precision. One way to approach this challenging task is via the dispersive approach, which allows one to replace a sub-loop diagram by an effective propagator. This paper builds on our previous work, where we developed a general approach based on the representation of many-point Passarino-Veltman functions in the two-point function basis. In this work, we have extracted the UV-divergent poles of the Passarino-Veltman functions analytically and presented them as dimensionally-regularized and multiply-subtracted dispersive sub-loop insertions, including the self-energy, triangle, box and pentagon types.
| null |
[
"https://arxiv.org/pdf/1905.07936v1.pdf"
] | 159,041,707 |
1905.07936
|
d1c23308b9dd007ef86bc8e7efef4bdf8e67201f
|
Dimensional Regularization and Dispersive Two-Loop Calculations
20 May 2019
A Aleksejevs
Grenfell Campus of Memorial University
Corner BrookNLCanada
S Barkanova
Grenfell Campus of Memorial University
Corner BrookNLCanada
Dimensional Regularization and Dispersive Two-Loop Calculations
20 May 2019
The two-loop contributions are now often required by precision experiments, yet are hard to express analytically while keeping precision. One way to approach this challenging task is via the dispersive approach, which allows one to replace a sub-loop diagram by an effective propagator. This paper builds on our previous work, where we developed a general approach based on the representation of many-point Passarino-Veltman functions in the two-point function basis. In this work, we have extracted the UV-divergent poles of the Passarino-Veltman functions analytically and presented them as dimensionally-regularized and multiply-subtracted dispersive sub-loop insertions, including the self-energy, triangle, box and pentagon types.
I. INTRODUCTION
The electroweak precision searches for physics beyond the Standard Model (BSM) frequently demand a sub-percent level of accuracy from both experiment and theory. For the new-generation precision experiments such as MOLLER [1] and P2 [2], for example, that means evaluating electroweak radiative corrections up to the two-loop level with massive propagators and control of kinematics, which is a highly challenging task. In some cases, it may not be possible to express the final results analytically, so one has to use approximations and/or numerical methods. See, for example, an overview of numerical loop integration techniques in [3], a general case of the two-loop two-point function for arbitrary masses in [4], and a method of calculating scalar propagator and vertex functions based on a double integral representation in [5] and [6]. More recent developments on the analytical evaluation of two-loop self-energies can be found in [7][8][9][10][11][12], and on the numerical evaluation of general n-point two-loop integrals using sector decomposition in [13,14]. The idea of dispersive sub-loop insertions was implemented for the self-energies in [15], [18] and partially for the vertex graphs, with the help of Feynman parametrization, in [19].
A somewhat relevant case of the self-energy dispersive insertions for Bhabha scattering in QED was considered in [20] and [21].
In [22,23], we have developed a general approach to the calculation of two-loop diagrams, based on the representation of many-point Passarino-Veltman (PV) functions in the two-point function basis. As a result, we were able to replace a sub-loop integral by the dispersive representation of the two-point function. In that case, the second loop received an additional propagator, and we were able to use the PV basis for the second-loop integration in the final stage of the calculations. The final results were presented in a compact analytic form suitable for numerical evaluation. Since in the majority of applications such two-loop integrals are either ultraviolet (UV) or infrared (IR) divergent, a regularization scheme is required. In the case of IR divergence, the regularization can be done by introducing a small photon mass, which is later removed by the contribution of a combination of one-photon bremsstrahlung from one-loop and two-photon bremsstrahlung from tree-level diagrams. Since the IR divergence does not impact the convergence of the dispersive sub-loop integral, the photon mass in the insertion can be carried into the second loop without additional complications. If necessary, the dependence on the photon mass can be extracted analytically. For the UV-divergent two-loop diagrams, the regularization of the sub-loop insertion is done by introducing a cut-off parameter for the divergent dispersive integral. The second-loop regularization is done by dimensional regularization, but in this case, when counter terms are added, one set of renormalization constants is evaluated in the dispersive approach with a cut-off parameter, and another set of constants is calculated using dimensional regularization. In this case, the independence of the final results from the regularization parameters can be confirmed only numerically.
That can result in additional complications, since the two-loop integrals can suffer from a number of numerical instabilities. In some simple cases, when sub-loop renormalization is possible (for example, a box diagram with a self-energy insertion), one can represent the sub-loop by doubly-subtracted dispersive integrals and carry out the second-loop integration using the PV-function basis without dealing with additional UV divergences. In this paper, we follow the general approach developed in [22] and extract the UV-divergent parts of the two-loop integrals analytically. For that, we need to represent the UV-divergent dispersive sub-loop insertion using dimensional regularization and extract the UV poles analytically. Since in [22,23] the two-loop integrals were all reduced to the two-point PV-function basis, we start with an outline of how to express the two-point sub-loop insertion with the UV-divergent part written in dimensional regularization and the UV-finite part represented by a multiply-subtracted dispersive integral. Later, we extend this approach to triangle-, box- and pentagon-type insertions.
II. METHODOLOGY
Generally, a two-point function of arbitrary rank can be written in dimensional regularization as:

$$B_{\underbrace{0\ldots0}_{2l}\underbrace{1\ldots1}_{n}}\left(p^2,m_1^2,m_2^2\right) \equiv B_{\{2l,n\}}\left(p^2,m_1^2,m_2^2\right) = \mu^{2\epsilon}\, e^{\gamma_E\epsilon}\, \frac{(-1)^{2+n+l}}{2^{l}}\, \Gamma(\epsilon-l)\, \lim_{\varepsilon\to 0^+}\int_0^1 dx\, x^n \left[p^2 x^2 + m_1^2 + x\left(m_2^2-m_1^2-p^2\right) - i\varepsilon\right]^{-\epsilon+l} \tag{1}$$
Here, ε = (4 − D)/2 is the dimensional-regularization parameter and µ is the mass-scale parameter. The UV-divergent part of Eq. 1 can be expressed as a polynomial in p² multiplied by the term 1/ε + ln(µ²/m₂²). A term linear in ε will give rise to local terms after the second-loop integration, and can be considered part of the finite piece of the two-point functions, which depends on ln(µ²/m₂²).
Hence, the regularized one-loop UV-divergent part has the following form:

$$B^{UV}_{\{2l,n\}}\left(p^2,m_1^2,m_2^2\right) = \left(\frac{1}{\epsilon} + \ln\frac{\mu^2}{m_2^2}\right)\sum_{i=0}^{l} a^{\{2l,n\}}_{i}\, p^{2i}. \tag{2}$$
Here, the coefficients a^{{2l,n}}_i are functions of the masses m²_{1,2}, with the structure provided in Tbl. I. In order to satisfy the definition given in Eq. 1, the UV-divergent pole 1/ε in Eq. 2 should be treated as 1/ε → 1/ε − γ_E + ln(4π).
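As a quick numerical illustration of this pole structure, the sketch below evaluates Eq. 1 for l = n = 0 at small ε and checks that ε·B approaches a^{0,0}_0 = 1. The kinematic values are arbitrary and the normalization is as transcribed above, so treat this as a consistency sketch rather than a definitive implementation:

```python
import math

# Sketch: for l = n = 0, Eq. 1 gives B ~ Gamma(eps) * \int_0^1 dx A^{-eps},
# so eps * B -> a^{0,0}_0 = 1 (Table I).  Kinematics below are arbitrary;
# the overall factor (-1)^{2+n+l} / 2^l equals 1 here.
GAMMA_E = 0.5772156649015329   # Euler-Mascheroni constant

def A(x, p2=0.5, m1sq=1.0, m2sq=2.0):
    # A = p^2 x^2 + m1^2 + x (m2^2 - m1^2 - p^2); positive for these values
    return p2 * x * x + m1sq + x * (m2sq - m1sq - p2)

def B_00(eps, mu2=1.0, steps=4000):
    # midpoint rule for the Feynman-parameter integral
    h = 1.0 / steps
    integral = sum(A((i + 0.5) * h) ** (-eps) for i in range(steps)) * h
    return mu2 ** eps * math.exp(GAMMA_E * eps) * math.gamma(eps) * integral

residue = 1e-4 * B_00(1e-4)    # eps * B as eps -> 0
```

Since ε·Γ(ε) → 1 and the Feynman-parameter integral tends to 1 as ε → 0, the product tends to the unit residue of the pole.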
In the case of a sub-loop insertion, the UV part represented by Eq. 2 can easily be carried into the second-loop integral. Here, the momentum p² may depend on the momentum of the second loop and on the Feynman parameters used in [22]. In order to keep the UV-divergent term presented in Eq. 2 as simple as possible, we will treat the masses as constants. In the case where the masses depend on the Feynman and mass-shift parameters (see [22]), a simple transformation of the ln µ² term, written in terms of an arbitrary constant mass, yields a contribution that is UV-finite and scale-parameter independent, and hence can be moved to the UV-finite part of Eq. 1, for which we will construct a dispersive representation. The UV-finite part can be represented through the dispersive integral:
$$B^{fin}_{\{2l,n\}}\left(p^2,m_1^2,m_2^2\right) = \frac{1}{\pi}\int_{(m_1+m_2)^2}^{\infty} ds\, \frac{\Im B^{fin}_{\{2l,n\}}\left(s,m_1^2,m_2^2\right)}{s-p^2-i\varepsilon}. \tag{3}$$

The coefficients of Tbl. I read:

n = 0:  a^{0,0}_0 = 1;  a^{2,0}_0 = (1/4)(m1² + m2²),  a^{2,0}_1 = −1/12;  a^{4,0}_0 = (1/24)(m1⁴ + m2⁴ + m1²m2²),  a^{4,0}_1 = −(1/48)(m1² + m2²),  a^{4,0}_2 = 1/240
n = 1:  a^{0,1}_0 = −1/2;  a^{2,1}_0 = −(1/12)(m1² + 2m2²),  a^{2,1}_1 = 1/24;  a^{4,1}_0 = −(1/96)(m1⁴ + 3m2⁴ + 2m1²m2²),  a^{4,1}_1 = (1/240)(2m1² + 3m2²),  a^{4,1}_2 = −1/480
n = 2:  a^{0,2}_0 = 1/3;  a^{2,2}_0 = (1/24)(m1² + 3m2²),  a^{2,2}_1 = −1/40;  a^{4,2}_0 = (1/240)(m1⁴ + 6m2⁴ + 3m1²m2²),  a^{4,2}_1 = −(1/240)(m1² + 2m2²),  a^{4,2}_2 = 1/840
n = 3:  a^{0,3}_0 = −1/4;  a^{2,3}_0 = −(1/40)(m1² + 4m2²),  a^{2,3}_1 = 1/60;  a^{4,3}_0 = −(1/480)(m1⁴ + 10m2⁴ + 4m1²m2²),  a^{4,3}_1 = (1/840)(2m1² + 5m2²),  a^{4,3}_2 = −1/1344
n = 4:  a^{0,4}_0 = 1/5;  a^{2,4}_0 = (1/60)(m1² + 5m2²),  a^{2,4}_1 = −1/84;  a^{4,4}_0 = (1/840)(m1⁴ + 15m2⁴ + 5m1²m2²),  a^{4,4}_1 = −(1/672)(m1² + 3m2²),  a^{4,4}_2 = …
Here, B^{fin}_{2l,n}(p², m₁², m₂²) is the UV-finite part of Eq. 1:

$$B_{\{2l,n\}}\left(p^2,m_1^2,m_2^2\right) = B^{UV}_{\{2l,n\}}\left(p^2,m_1^2,m_2^2\right) + B^{fin}_{\{2l,n\}}\left(p^2,m_1^2,m_2^2\right).$$

The function B^{fin}_{2l,n}(p², m₁², m₂²) consists of the finite part of the two-point function, b^{fin}_{2l,n}(p², m₁², m₂²), which is free from any of the regularization parameters, plus additional terms linear in ε, which are also finite. More specifically, we can write:
$$B^{fin}_{\{2l,n\}}\left(p^2,m_1^2,m_2^2\right) = b^{fin}_{\{2l,n\}}\left(1+\epsilon\ln\frac{\mu^2}{m_2^2}\right) + (-1)^n\,\epsilon\left(d_{1l} I_1 + d_{2l} I_2 + d_{3l} I_3 + d_{3l} I_1 \ln^2\frac{\mu^2}{m_2^2}\right), \tag{4}$$

where

$$I_1 = \int_0^1 dx\, x^n A^l\left(p^2,m_1^2,m_2^2\right), \qquad I_3 = \int_0^1 dx\, x^n A^l\left(p^2,m_1^2,m_2^2\right)\ln^2\frac{m_2^2}{A\left(p^2,m_1^2,m_2^2\right)},$$
$$A\left(p^2,m_1^2,m_2^2\right) = p^2 x^2 + m_1^2 + x\left(m_2^2-m_1^2-p^2\right) - i\varepsilon.$$
The integrals in Eq. 4 can be evaluated analytically, but this step can be deferred. The coefficients d_{il} are given in Tbl. II. Eq. 3 is only valid if the Schwartz reflection principle is applicable and the function B^{fin}_{2l,n}(z, m₁², m₂²) (with z ∈ ℂ) converges to zero as 1/z^{n≥2} when z → ∞. These applicability conditions for Eq. 3 often require the use of multiple subtractions at a given pole, which results in the replacement of Eq. 3 by a multiply-subtracted dispersive integral. In our view, the best way to transform Eq. 3 into the multiply-subtracted dispersive integral is to follow the same idea as if we were to remove the UV part of Eq. 1 using a subtractive scheme at an arbitrary scale Λ. Of course, the final result should not depend on any scale, and hence there will be additional terms that remove any such dependence. To remove the UV part of Eq. 1, we can easily generalize this procedure by using the following subtractions:
$$B^{sub}_{\{2l,n\}}\left(p^2,m_1^2,m_2^2,\Lambda^2\right) = B_{\{2l,n\}} - \sum_{i=0}^{l}\frac{1}{i!}\left.\frac{\partial^i B_{\{2l,n\}}}{\partial\left(p^2\right)^i}\right|_{p^2=\Lambda^2}\left(p^2-\Lambda^2\right)^i. \tag{5}$$
Here, B_{2l,n} ≡ B_{2l,n}(p², m₁², m₂²), and B^{sub}_{2l,n}(p², m₁², m₂², Λ²) is the multiply-subtracted Eq. 1. Now, we subtract and add the finite part of the second term of Eq. 5 to Eq. 3, and use the subtracted terms to construct the multiply-subtracted dispersive version of Eq. 3. As a result, we can write the following:
$$B^{fin}_{\{2l,n\}}\left(p^2,m_1^2,m_2^2\right) = \frac{\left(p^2-\Lambda^2\right)^{l+1}}{\pi}\int_{(m_1+m_2)^2}^{\infty} ds\, \frac{\Im B^{fin}_{\{2l,n\}}\left(s,m_1^2,m_2^2\right)}{\left(s-p^2-i\varepsilon\right)\left(s-\Lambda^2-i\varepsilon\right)^{l+1}} + \sum_{i=0}^{l}\frac{1}{i!}\left.\frac{\partial^i B^{fin}_{\{2l,n\}}\left(p^2,m_1^2,m_2^2\right)}{\partial\left(p^2\right)^i}\right|_{p^2=\Lambda^2}\left(p^2-\Lambda^2\right)^i. \tag{6}$$
Eq. 6 has no dependence on the scale Λ, and its second term is finite, with a polynomial structure in p² that can be easily evaluated in the second-loop integration. Finally, we can write the dimensionally regularized sub-loop insertion as:
$$B_{\{2l,n\}}\left(p^2,m_1^2,m_2^2\right) = \sum_{i=0}^{l}\left[\left(\frac{1}{\epsilon}+\ln\frac{\mu^2}{m_2^2}\right) a^{\{2l,n\}}_{i}\, p^{2i} + \frac{1}{i!}\left.\frac{\partial^i B^{fin}_{\{2l,n\}}\left(p^2,m_1^2,m_2^2\right)}{\partial\left(p^2\right)^i}\right|_{p^2=\Lambda^2}\left(p^2-\Lambda^2\right)^i\right] + \frac{\left(p^2-\Lambda^2\right)^{l+1}}{\pi}\int_{(m_1+m_2)^2}^{\infty} ds\, \frac{\Im B^{fin}_{\{2l,n\}}\left(s,m_1^2,m_2^2\right)}{\left(s-p^2-i\varepsilon\right)\left(s-\Lambda^2-i\varepsilon\right)^{l+1}}. \tag{7}$$
The first term of Eq. 7 will contribute to the numerator algebra, and the second term will add an additional propagator, (p² − Λ²)^{l+1}/(s − p² − iε), to the second-loop integral.
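The role of the subtractions in Eq. 6 can be illustrated numerically. The sketch below uses the standard one-loop imaginary part ℑB₀(s) = π κ^{1/2}(s, m₁², m₂²)/s above threshold (masses and momenta here are illustrative choices, not values from the paper) and shows that the once-subtracted integral saturates as the cutoff grows, while the unsubtracted one keeps growing logarithmically:

```python
import math

def kappa(a, b, c):
    # Kallen triangle function
    return a * a + b * b + c * c - 2.0 * (a * b + b * c + a * c)

def im_B0(s, m1sq, m2sq):
    # standard one-loop result above threshold: Im B0 = pi * kappa^(1/2) / s
    return math.pi * math.sqrt(max(kappa(s, m1sq, m2sq), 0.0)) / s

def dispersive(p2, lam2, m1sq, m2sq, s_max, subtracted=True, h=1.0):
    # midpoint-rule dispersive integral with an explicit cutoff s_max
    s0 = (math.sqrt(m1sq) + math.sqrt(m2sq)) ** 2
    steps = int((s_max - s0) / h)
    total = 0.0
    for i in range(steps):
        s = s0 + (i + 0.5) * h
        w = 1.0 / (s - p2)
        if subtracted:
            w *= (p2 - lam2) / (s - lam2)   # one subtraction at s = lam2
        total += im_B0(s, m1sq, m2sq) * w
    return total * h / math.pi

# subtracted version saturates when the cutoff grows tenfold ...
a = dispersive(0.5, -1.0, 1.0, 2.0, 1.0e4)
b = dispersive(0.5, -1.0, 1.0, 2.0, 1.0e5)
# ... while the unsubtracted version grows by roughly ln(10)
u1 = dispersive(0.5, -1.0, 1.0, 2.0, 1.0e4, subtracted=False)
u2 = dispersive(0.5, -1.0, 1.0, 2.0, 1.0e5, subtracted=False)
```

Since ℑB₀(s) tends to a constant at large s, the unsubtracted weight 1/(s − p²) produces a logarithmically divergent tail, while the subtracted weight falls off as 1/s², which is exactly the mechanism behind Eq. 6.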
In the case of the triangle insertion, the three-point PV functions can be written in the form of derivatives of the two-point functions. To begin with, the scalar three-point function is given by:
$$C_0 \equiv C_0\left(p_1^2,p_2^2,(p_1+p_2)^2,m_1^2,m_2^2,m_3^2\right) = \frac{\mu^{4-D}}{i\pi^{D/2}}\int d^D q\, \frac{1}{\left[q^2-m_1^2\right]\left[(q+p_1)^2-m_2^2\right]\left[(q+p_1+p_2)^2-m_3^2\right]}. \tag{8}$$
With Feynman's trick, we can join the first two propagators in Eq.8, and after shifting momentum q = τ − p 1 − p 2 , we can write:
$$C_0 = \frac{\mu^{4-D}}{i\pi^{D/2}}\int_0^1 dx\int d^D\tau\, \frac{1}{\left[\left(\tau-(p_1\bar{x}+p_2)\right)^2-m_{12}^2\right]^2\left[\tau^2-m_3^2\right]}, \qquad m_{12}^2 = m_1^2\,\bar{x} + m_2^2\, x - p_1^2\, x\bar{x}. \tag{9}$$
Here, x̄ = 1 − x, and the momentum p₁ does not enter the second-loop integral and is treated as a combination of the external momenta of the two-loop graph. The term [(τ − (p₁x̄ + p₂))² − m₁₂²]^{−2} can be obtained by shifting the mass m₁₂² by a small parameter φ:
$$\frac{1}{\left[\left(\tau-(p_1\bar{x}+p_2)\right)^2-m_{12}^2\right]^2} = \lim_{\phi\to 0}\frac{\partial}{\partial\phi}\, \frac{1}{\left(\tau-(p_1\bar{x}+p_2)\right)^2-\left(m_{12}^2+\phi\right)}. \tag{10}$$
As a result, Eq. 9 can be represented in the form

$$C_0 = \frac{\mu^{4-D}}{i\pi^{D/2}}\lim_{\phi\to 0}\frac{\partial}{\partial\phi}\int_0^1 dx\int d^D\tau\, \frac{1}{\left[\left(\tau-(p_1\bar{x}+p_2)\right)^2-\left(m_{12}^2+\phi\right)\right]\left[\tau^2-m_3^2\right]} = \lim_{\phi\to 0}\frac{\partial}{\partial\phi}\int_0^1 dx\, B_0\left(\left(p_1\bar{x}+p_2\right)^2,m_3^2,m_{12}^2+\phi\right). \tag{11}$$
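The mass-shift trick of Eq. 10 is simply the φ-derivative of a single propagator. A finite-difference check on a scalar stand-in (all values arbitrary) confirms the identity:

```python
# Finite-difference check of the mass-shift identity in Eq. 10; the scalar
# X stands in for (tau - (p1*xbar + p2))^2 and the values are arbitrary.
def prop(X, m2, phi):
    # single propagator with the mass shifted by phi
    return 1.0 / (X - (m2 + phi))

X, m2 = 7.3, 1.7
h = 1e-6
deriv = (prop(X, m2, h) - prop(X, m2, -h)) / (2.0 * h)  # d/dphi at phi = 0
target = 1.0 / (X - m2) ** 2                            # squared propagator
```

The central difference agrees with the squared propagator up to O(h²), which is why the doubled propagator in Eq. 9 can be traded for a derivative of a B₀ function in Eq. 11.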
Since the C₀ function is UV-finite, its dispersive representation will be given by a singly subtracted integral:
$$C_0 = \lim_{\phi\to 0}\frac{\partial}{\partial\phi}\int_0^1 dx\left[\ln\frac{m_3^2}{m_{12}^2+\phi} + B_0^{fin}\left(\Lambda^2,m_3^2,m_{12}^2+\phi\right) + \frac{\left(p_1\bar{x}+p_2\right)^2-\Lambda^2}{\pi}\int_{\left(m_3+\sqrt{m_{12}^2+\phi}\right)^2}^{\infty} ds\, \frac{\Im B_0^{fin}\left(s,m_3^2,m_{12}^2+\phi\right)}{\left(s-\left(p_1\bar{x}+p_2\right)^2-i\varepsilon\right)\left(s-\Lambda^2-i\varepsilon\right)}\right]. \tag{12}$$
In this representation of the C₀ function, we have the momentum p₂ as a combination of the second-loop and external momenta.
$$B_0^{fin}\left(p^2,m_1^2,m_2^2\right) = 2 + \frac{\kappa^{1/2}\left(p^2,m_1^2,m_2^2\right)}{p^2}\,\ln\frac{\kappa^{1/2}\left(p^2,m_1^2,m_2^2\right)+m_1^2+m_2^2-p^2}{2 m_1 m_2} - \frac{m_1^2-m_2^2+p^2}{2p^2}\,\ln\frac{m_1^2}{m_2^2}. \tag{13}$$
Here, κ(p², m₁², m₂²) is the Källén function, κ(a, b, c) = a² + b² + c² − 2(ab + bc + ac). In the case of the higher-rank three-point tensor coefficient functions, we can represent them through combinations of B_{2l,n} functions, following the prescription of [22]:
$$C_{\underbrace{0\ldots0}_{2l}\underbrace{1\ldots1}_{n}\underbrace{2\ldots2}_{m}} \equiv C_{\{2l,n,m\}} = \lim_{\phi\to 0}\frac{\partial}{\partial\phi}\int_0^1 dx\, x^n \sum_{i=0}^{m} b^{\{m\}}_{i}\, B_{\{2l,i+n\}}. \tag{14}$$
Here, B_{2l,i+n} ≡ B_{2l,i+n}((p₁x̄ + p₂)², m₃², m₁₂² + φ), and the UV-divergent three-point functions have l ≥ 1. The coefficients b^{{m}}_i are given in Tbl. III. Using Eq. 7 in Eq. 14, we can write the generalized three-point function dispersively, with dimensionally regularized UV divergence:

$$C_{\{2l,n,m\}} = \lim_{\phi\to 0}\frac{\partial}{\partial\phi}\int_0^1 dx\, x^n \sum_{i=0}^{m} b^{\{m\}}_{i}\Bigg\{\sum_{j=0}^{l}\left[\left(\frac{1}{\epsilon}+\ln\frac{\mu^2}{m_{12}^2+\phi}\right) a^{\{2l,i+n\}}_{j}\, p_{12x}^{2j} + \frac{1}{j!}\left.\frac{\partial^j B^{fin}_{\{2l,i+n\}}\left(p^2,m_3^2,m_{12}^2+\phi\right)}{\partial\left(p^2\right)^j}\right|_{p^2=\Lambda^2}\left(p_{12x}^2-\Lambda^2\right)^j\right] + \frac{\left(p_{12x}^2-\Lambda^2\right)^{l+1}}{\pi}\int_{\left(m_3+\sqrt{m_{12}^2+\phi}\right)^2}^{\infty} ds\, \frac{\Im B^{fin}_{\{2l,i+n\}}\left(s,m_3^2,m_{12}^2+\phi\right)}{\left(s-p_{12x}^2-i\varepsilon\right)\left(s-\Lambda^2-i\varepsilon\right)^{l+1}}\Bigg\}, \tag{15}$$
where p12x is defined as p12x = p₁x̄ + p₂. As an example, let us consider the expression for C₀₀₁, where the UV-divergent pole is extracted explicitly:
$$C_{001} = -\frac{1}{12}\left(\frac{1}{\epsilon}+\ln\frac{\mu^2}{m_3^2}\right) + \lim_{\phi\to 0}\frac{\partial}{\partial\phi}\int_0^1 dx\, x\Bigg[\frac{1}{12}\left(\frac{1}{2}\,p_{12x}^2 - m_3^2 - 2\left(m_{12}^2+\phi\right)\right)\ln\frac{m_3^2}{m_{12}^2+\phi} + B^{fin}_{001}\left(\Lambda^2,m_3^2,m_{12}^2+\phi\right) + \left.\frac{\partial B^{fin}_{001}\left(p^2,m_3^2,m_{12}^2+\phi\right)}{\partial p^2}\right|_{p^2=\Lambda^2}\left(p_{12x}^2-\Lambda^2\right) + \frac{\left(p_{12x}^2-\Lambda^2\right)^2}{\pi}\int_{\left(m_3+\sqrt{m_{12}^2+\phi}\right)^2}^{\infty} ds\, \frac{\Im B^{fin}_{001}\left(s,m_3^2,m_{12}^2+\phi\right)}{\left(s-p_{12x}^2-i\varepsilon\right)\left(s-\Lambda^2-i\varepsilon\right)^2}\Bigg]. \tag{16}$$
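The Källén function entering Eq. 13 has two properties used implicitly above: it is totally symmetric in its arguments, and it vanishes at the normal and pseudo thresholds, the former setting the lower limit of the dispersive integrals. A minimal check with illustrative masses:

```python
# Two properties of the Kallen function kappa(a, b, c) from Eq. 13: total
# symmetry, and zeros at the normal/pseudo thresholds that bound the cut.
def kallen(a, b, c):
    return a * a + b * b + c * c - 2.0 * (a * b + b * c + a * c)

m1, m2 = 1.0, 2.0                       # illustrative masses
thr = (m1 + m2) ** 2                    # normal threshold (m1 + m2)^2
pthr = (m1 - m2) ** 2                   # pseudo threshold (m1 - m2)^2
zeros = (kallen(thr, m1 ** 2, m2 ** 2), kallen(pthr, m1 ** 2, m2 ** 2))
symmetric = kallen(3.0, 1.0, 4.0) == kallen(4.0, 3.0, 1.0)
```

For equal masses the function reduces to κ(s, m², m²) = s(s − 4m²), which makes the familiar 2m threshold explicit.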
To derive expressions for the four-point PV functions in the two-point function basis, we can use the ideas outlined in Eqns.8-11:
where D_{2l,n,k,m} ≡ D_{2l,n,k,m}(p₁², p₂², p₃², p₄², (p₁+p₂)², (p₂+p₃)², m₁², m₂², m₃², m₄²) and B_{2l,i+n+k} ≡ B_{2l,i+n+k}((p₁(x̄ − y) + p₂ȳ + p₃)², m₄², m₁₂₃² + φ), with m₁₂₃² = m₁²(x̄ − y) + m₂²x + m₃²y − p₁²xx̄ − p₁₂²yȳ + 2xy(p₁·p₁₂) and p₁₂ = p₁ + p₂. As a result, the dispersive generalization can be written as:
III. CONCLUSION
In this work, we have extracted the UV-divergent poles of the Passarino-Veltman functions analytically and presented them as dimensionally-regularized and multiply-subtracted dispersive sub-loop insertions. We have also retained the terms linear in ε, which are required to produce local terms in the second-loop integration. Finally, all sub-loop insertions are conveniently expressed in the two-point function basis, which allows one to carry out the calculations analytically, with numerical integration done only over the Feynman and dispersion parameters. As a result, this approach will allow us to speed up calculations of the two-loop radiative corrections and to better account for experiment-specific kinematics.
When taking the derivative with respect to the mass-shift parameter φ, we use a transformation of the ln µ² term in order to remove the µ-scale dependence from the Feynman integral. The finite part of the B₀ function has a rather simple analytical structure (Eq. 13).
The integrand of the dispersive term contains ℑB^{fin}_{2l,i+n+k}(s, m₄², m₁₂₃² + φ) / [(s − p123xy² − iε)(s − Λ² − iε)^{l+1}]. Here, we have p123xy = p₁(x̄ − y) + p₂ȳ + p₃. Eq. 18 shows that the UV-divergent four-point functions show up at l ≥ 2. The five-point function can also easily be expressed in the two-point function basis.
Table I: Coefficients a^{{2l,n}}_i for B^{UV}_{2l,n}.
Table II: Coefficients d_{il} used in the representation of the term linear in ε in Eq. 4.
Table III: Expansion coefficients b^{{m}}_i for many-point Passarino-Veltman functions.
where the momentum p1234xyz is defined as p1234xyz = p₁(x̄ − y − z) + p₂(ȳ − z) + p₃z̄ + p₄.
Acknowledgments

The authors are grateful to A. Davydychev, H. Spiesberger and M. Vanderhaeghen for the fruitful and exciting discussions. We would also like to express special thanks to the Institut für Kernphysik of Johannes Gutenberg-Universität Mainz for hospitality and support. This work was funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada.

Here, E_{2l,n,k,r,m} ≡ E_{2l,n,k,r,m}(p₁², p₂², p₃², p₄², p₅², p₁₂², p₂₃², p₃₄², p₄₅², p₅₁², m²
D. Becker (U. Mainz, PRISMA & Mainz U., Inst. Kernphys.) et al., DOI: 10.1140/epja/i2018-12611-6 (2018), [nucl-ex/1802.04759].
A. Freitas, Prog. Part. Nucl. Phys. 90, 201-240 (2016).
D. Kreimer, Phys. Lett. B273, 277-281 (1991).
A. Czarnecki et al., Nucl. Phys. B433, 259-275 (1995), [hep-ph/9405423].
A. Frink et al., Nucl. Phys. B488, 426-440 (1997), [hep-ph/9610285].
L. Adams, C. Bogner, S. Weinzierl, J. Math. Phys. 54, 052303 (2013).
L. Adams, C. Bogner, S. Weinzierl, J. Math. Phys. 56, 072303 (2015).
L. Adams, C. Bogner, S. Weinzierl, J. Math. Phys. 57, 032304 (2016).
E. Remiddi, L. Tancredi, Nucl. Phys. B907, 400-444 (2016).
S. Bloch, M. Kerr, P. Vanhove, Compos. Math. 151, 2329-2375 (2015).
S. Bloch, M. Kerr, P. Vanhove, Adv. Theor. Math. Phys. 21, 1373-1453 (2017).
S. Borowka, J. Carter, G. Heinrich, J. Phys. Conf. Ser. 368, 012051 (2012).
S. Borowka, J. Carter, G. Heinrich, Comput. Phys. Commun. 184, 396-408 (2013).
S. Bauberger, M. Böhm, Nucl. Phys. B445, 25-46 (1995).
T. Hahn, Comput. Phys. Commun. 168, 78 (2005), [hep-ph/0404043].
T. Hahn, Comput. Phys. Commun. 207, 341 (2016), [arXiv:1408.6373].
W. Hollik, U. Meier, S. Uccirati, Nucl. Phys. B731, 213-224 (2005).
A. Freitas, W. Hollik, W. Walter, G. Weiglein, Nucl. Phys. B632, 189-218 (2002).
M. Czakon, J. Gluza, T. Riemann, Phys. Rev. D71, 073009 (2005).
S. Actis, M. Czakon, J. Gluza, T. Riemann, Phys. Rev. Lett. 100, 131602 (2008).
A. Aleksejevs, Phys. Rev. D 98, 036021 (2018), [hep-th/1804.08914].
A. Aleksejevs (2018), [hep-th/1809.05592].
G. J. van Oldenborgh, Comput. Phys. Commun. 66, 1-15 (1991).
T. Hahn, M. Perez-Victoria, Comput. Phys. Commun. 118, 153-165 (1999), [hep-ph/9807565].
A. Denner, S. Dittmaier, Nucl. Phys. B 734, 62 (2006).
J. A. M. Vermaseren, Int. J. Mod. Phys. A14, 2037 (1999).
M. Awramik, M. Czakon, A. Freitas, JHEP 0611, 048 (2006), [hep-ph/0608099].
A. Ghinculov, J. J. van der Bij, Nucl. Phys. B 436, 30 (1995).
T. Hahn, Comput. Phys. Commun. 140, 418 (2001), [hep-ph/0012260].
R. J. Eden, P. V. Landshoff, D. I. Olive, J. C. Polkinghorne, Cambridge University Press (1996).
|
[] |
[
"Santa Fe Institute Working Paper 17-09-XXX Local Causal States and Discrete Coherent Structures",
"Santa Fe Institute Working Paper 17-09-XXX Local Causal States and Discrete Coherent Structures"
] |
[
"Adam Rupe *[email protected]†[email protected] \nComplexity Sciences Center Physics Department\nUniversity of California at Davis\nOne Shields Avenue95616DavisCA\n",
"James P Crutchfield \nComplexity Sciences Center Physics Department\nUniversity of California at Davis\nOne Shields Avenue95616DavisCA\n"
] |
[
"Complexity Sciences Center Physics Department\nUniversity of California at Davis\nOne Shields Avenue95616DavisCA",
"Complexity Sciences Center Physics Department\nUniversity of California at Davis\nOne Shields Avenue95616DavisCA"
] |
[] |
Coherent structures form spontaneously in nonlinear spatiotemporal systems and are found at all spatial scales in natural phenomena from laboratory hydrodynamic flows and chemical reactions to ocean, atmosphere, and planetary climate dynamics. Phenomenologically, they appear as key components that organize the macroscopic behaviors in such systems. Despite a century of effort, they have eluded rigorous analysis and empirical prediction, with progress being made only recently. As a step in this, we present a formal theory of coherent structures in fully-discrete dynamical field theories. It builds on the notion of structure introduced by computational mechanics, generalizing it to a local spatiotemporal setting. The analysis' main tool employs the local causal states, which are used to uncover a system's hidden spatiotemporal symmetries and which identify coherent structures as spatially-localized deviations from those symmetries. The approach is behavior-driven in the sense that it does not rely on directly analyzing spatiotemporal equations of motion, rather it considers only the spatiotemporal fields a system generates. As such, it offers an unsupervised approach to discover and describe coherent structures. We illustrate the approach by analyzing coherent structures generated by elementary cellular automata, comparing the results with an earlier, dynamic-invariant-set approach that decomposes fields into domains, particles, and particle interactions. PACS numbers: 05.45.-a 89.75.Kd 89.70.+c 05.45.Tp 02.50.EyPatterns abound in systems far from equilibrium across all spatial scales, from planetary and even galactic structures down to the microscopic scales of snowflakes and bacterial and crystal growth. Most studies of pattern formation, both theory and experiment, focus on particular classes of human-scale pattern-forming system and invoke standard bases to describe pattern organization. 
This becomes particularly problematic when, for example, inhomogeneities give rise to relatively more localized patterns, called coherent structures. Though key to structuring a system's macroscopic behaviors and causal organization, they have remained elusive for decades. We suggest an alternative approach that provides constructive answers to the questions of how to use spacetime fields generated by spatiotemporal systems to extract their emergent patterns and how to describe them in an objective way.
|
10.1063/1.5021130
|
[
"https://arxiv.org/pdf/1801.00515v1.pdf"
] | 6,891,037 |
1801.00515
|
9d1f75880a142b078ad3f2787ce24bc7c6c7c73c
|
Santa Fe Institute Working Paper 17-09-XXX Local Causal States and Discrete Coherent Structures
Adam Rupe *[email protected]†[email protected]
Complexity Sciences Center Physics Department
University of California at Davis
One Shields Avenue95616DavisCA
James P Crutchfield
Complexity Sciences Center Physics Department
University of California at Davis
One Shields Avenue95616DavisCA
Santa Fe Institute Working Paper 17-09-XXX Local Causal States and Discrete Coherent Structures
(Dated: January 3, 2018) coherent structures, spatially extended dynamical systems, emergence, symmetry breaking, cellular automata
Coherent structures form spontaneously in nonlinear spatiotemporal systems and are found at all spatial scales in natural phenomena from laboratory hydrodynamic flows and chemical reactions to ocean, atmosphere, and planetary climate dynamics. Phenomenologically, they appear as key components that organize the macroscopic behaviors in such systems. Despite a century of effort, they have eluded rigorous analysis and empirical prediction, with progress being made only recently. As a step in this, we present a formal theory of coherent structures in fully-discrete dynamical field theories. It builds on the notion of structure introduced by computational mechanics, generalizing it to a local spatiotemporal setting. The analysis' main tool employs the local causal states, which are used to uncover a system's hidden spatiotemporal symmetries and which identify coherent structures as spatially-localized deviations from those symmetries. The approach is behavior-driven in the sense that it does not rely on directly analyzing spatiotemporal equations of motion, rather it considers only the spatiotemporal fields a system generates. As such, it offers an unsupervised approach to discover and describe coherent structures. We illustrate the approach by analyzing coherent structures generated by elementary cellular automata, comparing the results with an earlier, dynamic-invariant-set approach that decomposes fields into domains, particles, and particle interactions. PACS numbers: 05.45.-a 89.75.Kd 89.70.+c 05.45.Tp 02.50.EyPatterns abound in systems far from equilibrium across all spatial scales, from planetary and even galactic structures down to the microscopic scales of snowflakes and bacterial and crystal growth. Most studies of pattern formation, both theory and experiment, focus on particular classes of human-scale pattern-forming system and invoke standard bases to describe pattern organization. 
This becomes particularly problematic when, for example, inhomogeneities give rise to relatively more localized patterns, called coherent structures. Though key to structuring a system's macroscopic behaviors and causal organization, they have remained elusive for decades. We suggest an alternative approach that provides constructive answers to the questions of how to use spacetime fields generated by spatiotemporal systems to extract their emergent patterns and how to describe them in an objective way.
I. INTRODUCTION
Complex patterns are generated by systems in which interactions among their basic elements are amplified, propagated, and stabilized in a complicated manner. These emergent patterns present serious difficulties for traditional mathematical analysis, as one does not know a priori in what representational basis to describe them, let alone predict them. Notably, analogous difficulties of describing and predicting the behavior of highly complex systems had been identified in the early years of computation theory [1] and linguistics [2].
A more familiar and perhaps longer-lived example of complex emergent patterns arises in fluid turbulence [3]. From its earliest systematic studies, complex flow patterns were described as linear combinations of periodic solutions. The maturation of nonlinear dynamical systems theory, though, led to a radically different view: The mechanism generating complex, unpredictable behavior was a relatively low-dimensional strange attractor [4][5][6]. Using behavior-driven "state-space reconstruction" techniques [7,8] this hypothesis was finally demonstrated [9]. The behavior-driven methods were even extended to extracting the equations of motion themselves from time series of observations [10]. Success in this required knowing an appropriate language with which to express the equations of motion. Those successes, however, tantalizingly suggested that behavior-driven methods could let a system's behavior determine the basis for identifying and describing their emergent patterns.
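The delay-embedding idea behind state-space reconstruction can be sketched in a few lines; the observable and the embedding parameters below are illustrative choices, not values taken from the cited experiments:

```python
import math

# Minimal delay-embedding sketch: lift a scalar time series into
# 3-dimensional delay vectors [x(t), x(t + tau), x(t + 2*tau)].
def delay_embed(series, dim, tau):
    n = len(series) - (dim - 1) * tau
    return [[series[i + k * tau] for k in range(dim)] for i in range(n)]

x = [math.sin(0.1 * t) for t in range(500)]   # illustrative observable
vectors = delay_embed(x, dim=3, tau=5)
```

Each delay vector is a point in the reconstructed state space; for suitable dimension and delay, the cloud of such points traces out a geometric image of the underlying attractor.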
To lay the foundations for this and determine what was required for success, a new approach to discovering patterns generated by complex systems, computational mechanics [11][12][13], was developed. It employs mathematical structures analogous to those found in computation theory to build intrinsic representations of temporal behavior. The structure of a system's dynamic, the rules of its temporal evolution, is captured and quantified by the intrinsic representations of computational mechanics, its ε-machines. Before this view was introduced, one was tempted to assume a system's evolution rules were simply its equations of motion. A hallmark of emergent systems, however, arises exactly when this is not the case [14]. While a system's emergent dynamical structure ultimately derives from the governing equations of motion, arriving at the former from the latter is typically unfeasible. Similarly, chemistry cannot be considered simply as "applied physics" nor biology, "applied chemistry" [15].
The use of automata-theoretic constructs lends computational mechanics its name: it extends statistical mechanics beyond statistics to include computation-theoretic mechanisms. Operationally, the rise of computer simulation and numerical analysis as the "third paradigm" for physical sciences provides a research ecosystem that is well-complemented by computational mechanics, as the latter is a theory built to describe behavior (data) and, in this, it focuses relatively less on analyzing governing equations [16]. The need for behavior-driven theory-"datadriven", as some say today-such as computational mechanics becomes especially apparent in high-dimensional, nonlinear systems.
Patterns abound in systems far from equilibrium across all spatial scales [17][18][19], from galactic structures to planetary ones, such as Jupiter's famous Red Spot and similar climatological structures on Earth, down to the microscopic scales of snowflakes [20] and bacterial [21] and crystal growth [22]. For imminently practical reasons, though, most studies of pattern formation, both theory [23,24] and experiment, focus on particular classes of human-scale pattern-forming systems, including Rayleigh-Bénard convection [25][26][27], Taylor-Couette flow [28,29], the Belousov-Zhabotinsky chemical reaction [30,31], and Faraday's crispations [32,33], to mention several. Often studied under the rubric of nonequilibrium phase transitions [34][35][36], these systems are amenable to careful experimental control and systematic mathematical analysis, facilitated by imposing idealized boundary conditions. Nonequilibrium is maintained in these systems via homogeneous fluxes that give rise to cellular patterns described and analyzed through global Fourier modes.
While much progress has been made in understanding the instability mechanisms driving pattern formation and the dynamics of the patterns themselves in idealized systems [23,24,37,38], many challenges remain, especially with wider classes of real-world patterns. In particular, the inescapable inhomogeneities of systems found in nature give rise to relatively more localized patterns, rather than the cellular patterns captured by simple Fourier modes. We refer to these localized patterns as coherent structures. There has been intense interest recently in coherent structures in fluid flows, including structures in geophysical flows [39,40], such as hurricanes [41,42], and in more general turbulent flows [43].
A principled universal description of the organization of such structures does not exist. So, while we can exploit vast computing resources to simulate models of everincreasing mathematical sophistication, analyzing and extracting insights from such simulations becomes highly nontrivial. Indeed, given the size and power of modern computers, analyzing their vast simulation outputs can be as daunting as analyzing any real physical experiment [16]. Finally, there is no unique, agreed-upon approach to analyzing and predicting coherent material structures in fluid flows, for instance [44]. Even today ad hoc thresholding is often used to identify extreme weather events in climate data, such as cyclones and atmospheric rivers [45][46][47]. Developing a principled, but general mathematical description of coherent structures is our focus.
Parallels with contemporary machine learning are worth noting, given the increasing overlap between these technologies and the needs of the physical sciences. Imposing Fourier modes as templates for cellular patterns is the mathematical analog of the technology of (supervised) pattern recognition [48]. Patterns are given as a finite number of classes and learning algorithms are trained to assign inputs into these classes by being fed a large number of labeled training data, which are inputs already assigned to the correct pattern class.
Computational mechanics, in contrast, makes far fewer structural assumptions [13]. As we will see, for discrete spatially extended systems it makes only modest yet reasonable assumptions about the existence and conditional stationarity of lightcones in the orbit space of the system. In so doing, it facilitates identifying representations that are intrinsic to a particular system. This is in contrast with subjectively imposing a descriptional basis, such as Fourier modes, wavelets, or engineered pattern-class labels. We say that our subject here is not simply pattern recognition, but (unsupervised) pattern discovery.
To start to address these challenges, we briefly review a particular spatiotemporal generalization of computational mechanics [49]. We adapt it to detect coherent structures in terms of the underlying constituents from which they emerge, while at the same time providing a principled description of such structures. The development is organized as follows. Section II introduces the local causal states, the main tool of computational mechanics used for coherent structure analysis. We also give an overview of elementary cellular automata (ECAs), the class of pattern-forming mathematical models we use to demonstrate our coherent structure analysis.
Section III introduces the computational mechanics of coherent structures. The dynamical notion of background domains plays a central role since, after transients die away, the fields produced by spatially extended dynamical systems can be decomposed into domain regions and coherent structures embedded in them [50]. Furthermore, the domains' internal symmetries typically dictate how the overall spatiotemporal dynamic organizes itself, including what large-scale patterns may form. More to the point, we formally define coherent structures with respect to a system's domains.
Crutchfield and Hanson introduced a principled analysis of CA domains and coherent structures [11,[50][51][52][53][54][55]. They defined domains as dynamically invariant sets of spatially statistically stationary configurations with finite memory. This led to formal methods for proving that domains were spacetime shift-invariant and so dominant patterns for a given CA. Having identified these significant patterns, they created spatial transducers that decomposed a CA spacetime field into domains and nondomain structures, such as particles and particle interactions [56]. We refer to this analysis of CA structures as the domain-particle-interaction decomposition (DPID). The following extends DPID but, for the first time, uses local causal states to define domains and coherent structures. In this, domains are given by spacetime regions where the associated local causal states have time and space translation symmetries.
Section IV gives detailed examples for the two main classes of CA domains-those with explicit symmetries and those with hidden symmetries. We show empirically that there is a strong correspondence between domains and structures of elementary CAs identified by local causal states and by the DPID approach. For domains, we show that a homogeneous invariant set of spatial configurations (DPID domains) produces a local causal state field with a spacetime symmetry tiling. Since local causal state inference is fully behavior-driven, it applies to a broader class of spatiotemporal systems than the DPID transducers. And so, this correspondence extends both the theory and application of the coherent structure analysis they engender.
Similar approaches using local causal states have been pursued by others [57][58][59][60][61]. However, as will be elaborated upon in future work, these underutilize computational mechanics, developing only a qualitative filtering tool, local statistical complexity, that assists in subjective visual recognition of coherent structures. Moreover, they provide no principled way to describe structures and thus cannot, to take one example, distinguish two distinct types of structures from one another. There have also been other unsupervised approaches to coherent structure discovery in cellular automata using information-theoretic measures [62][63][64][65]. Recent critiques of employing such measures to determine information storage and flow and causal dependency [66,67] indicate that these uses of information theory for CAs are still in early development and have some distance to go to reach the structure-detection performance levels presented here.
II. BACKGROUND
Modern physics evolved to use group theory to formalize the concept of symmetry [68]. The successes in doing so are legion in twentieth-century fundamental physics. When applied to emergent patterns, though, group-theoretic descriptions formally describe only their exact symmetries. This is too restrictive for more general notions-naturally occurring patterns and structures that are an amalgam of strict symmetry and randomness. Thus, one appeals to semigroup theory [69,70] to describe partial symmetries. This use of semigroup algebra is fundamental to automata as developed in early computation theory [71,72]. In this, different classes of automata or "machines" formalize the concept of structure [1]. Through the connection with semigroup theory, structure captured by machines can be seen as a system's generalized symmetries. The variety of computational model classes [73] then becomes an inspiration for understanding emergent natural patterns [71].
To capture structure in complex physical systems, though, computational mechanics had to move beyond computation-theoretic automata to probabilistic representations of behavior. That said, its parallels to semigroups and automata are outlined in Ref. [12,Apps. D and H], for example. Early on, the theory was most thoroughly developed in the temporal setting to analyze structured stochastic processes [74]. It was also applied to continuous-valued chaotic systems using the methods [75] of symbolic dynamics to partition low-dimensional attractors [11]. More recently, it has been directly applied to continuous-time and continuous-value processes [76][77][78][79][80][81][82].
A. Temporal processes, canonical representations
A stochastic process P is the distribution of all a system's allowed behaviors or realizations . . . x_{−2} , x_{−1} , x_0 , x_1 , . . . as specified by their joint probabilities Pr(. . . , X_{−2} , X_{−1} , X_0 , X_1 , . . .). Here, X_t is the random variable for the outcome of the measurement x_t ∈ A at time t, taking values from a finite set A of all possible events. (Uppercase denotes a random variable; lowercase its value.) We denote a contiguous chain of random variables as X_{0:ℓ} = X_0 X_1 · · · X_{ℓ−1} and their realizations as x_{0:ℓ} = x_0 x_1 · · · x_{ℓ−1} . (Left indices are inclusive; right, exclusive.) We suppress indices that are infinite. We will often work with stationary processes for which Pr(X_{t:t+ℓ}) = Pr(X_{0:ℓ}) for all t and ℓ.
The canonical representation for a stochastic process within computational mechanics is the process' ε-machine. This is a type of stochastic state machine, commonly known as a hidden Markov model (HMM), that consists of a set Ξ of causal states and transitions between them. The causal states are constructed for a given process by calculating the classes determined by the causal equivalence relation:
x_{:t} ∼_ε x′_{:t} ⟺ Pr( X_{t:} | X_{:t} = x_{:t} ) = Pr( X_{t:} | X_{:t} = x′_{:t} ) .
Operationally, two pasts x_{:t} and x′_{:t} are causally equivalent, i.e., belong to the same causal state, if and only if they make the same prediction for the future. Equivalent pasts lead to the same conditional future distribution. Behaviorally, the interpretation is that whenever a process generates the same future (a conditional distribution), it is effectively in the same state.
Each causal state ξ ∈ Ξ is an element of the coarsest partition of a process' pasts {x_{:t} : t ∈ Z} such that every x_{:t} ∈ ξ has the same predictive distribution: Pr(X_{t:} | x_{:t}) = Pr(X_{0:} | ·). The associated random variable is Ξ. The ε-function ε(x_{:t}) maps a past to its causal state: ε : x_{:t} → ξ. In this way, it generates the partition defined by the causal equivalence relation ∼_ε. One can show that the causal states are the unique minimal sufficient statistic of the past when predicting the future. Notably, the causal state set Ξ can be finite, countable, or uncountable [14,83,84], even if the original process is stationary, ergodic, and generated by an HMM with a finite set of states. Reference [12] gives a detailed exposition and Refs. [81,82,85] give closed-form calculational tools.
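The causal equivalence relation can be made concrete with a small worked example. The following sketch (our own illustration, not from the text; the process choice and horizon lengths are assumptions) partitions the length-3 pasts of the well-known Golden Mean Process, a binary Markov process that forbids consecutive 1s, by their exact conditional distributions over length-2 futures:

```python
from itertools import product

# Golden Mean Process: binary, no two consecutive 1s.
# After a 1 the next symbol is 0 w.p. 1; after a 0 it is 0 or 1 w.p. 1/2 each.
def step_probs(last):
    return {0: 1.0, 1: 0.0} if last == 1 else {0: 0.5, 1: 0.5}

def future_dist(last, depth):
    """Exact distribution over length-`depth` futures given the last past symbol."""
    if depth == 0:
        return {(): 1.0}
    dist = {}
    for sym, p in step_probs(last).items():
        if p == 0.0:
            continue
        for fut, q in future_dist(sym, depth - 1).items():
            dist[(sym,) + fut] = dist.get((sym,) + fut, 0.0) + p * q
    return dist

# All allowed pasts of length 3 (no "11" substring).
pasts = [p for p in product((0, 1), repeat=3)
         if (1, 1) not in zip(p, p[1:])]

# Causal equivalence: group pasts with identical future distributions.
classes = {}
for past in pasts:
    key = tuple(sorted(future_dist(past[-1], 2).items()))
    classes.setdefault(key, []).append(past)

print(len(classes))  # 2
```

Two equivalence classes emerge, pasts ending in 0 versus pasts ending in 1, which are exactly this process's two recurrent causal states.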
B. Spatiotemporal processes, local causal states
The state x of a spatiotemporal system specifies the values x^r at sites r of a lattice L. Assuming values lie in a set A, a configuration x ∈ A^L is the collection of values over the lattice sites. If the values are generated by random variables X^r, then we have a spatial process Pr(X)-a stochastic process over the random variable field X = {X^r : r ∈ L}.
A spatiotemporal system, in contrast to a purely temporal one, generates a process Pr(. . . , X_{−1} , X_0 , X_1 , . . .) consisting of the series of fields X_t. (Subscripts denote time; superscripts, sites.) A realization of a spatiotemporal process is known as a spacetime field x ∈ A^{L⊗Z}, consisting of a time series x_0 , x_1 , . . . of spatial configurations x_t ∈ A^L. A^{L⊗Z} is the orbit space of the process; that is, time is added onto the system's state space. The associated spacetime field random variable is X. A spacetime point x^r_t ∈ A is the value of the spacetime field at coordinates (r, t)-that is, at location r ∈ L at time t. The associated random variable at that point is X^r_t. Being interested in spatiotemporal systems that exhibit spatial translation symmetries, we narrow consideration to regular spatial lattices with topology L = Z^d. (As needed, the lattice will be infinite or periodic along each dimension.)
Purely temporal computational mechanics views the spatiotemporal process Pr(. . . , X −1 , X 0 , X 1 , . . .) as a time series over events with a very large or even infinite alphabet: the configurations in A^L. In special cases, one can calculate the temporal causal equivalence classes and their causal states and transitions from the time series of spatial configurations, giving the global ε-machine. While formally well defined, determining the global ε-machine is for all practical purposes intractable. Some form of simplification is required to make headway.
Random variable lightcones
To circumvent this we introduce a different, spatially local representation. This respects and leverages the configurations' spatial nature; the otherwise unwieldy configuration alphabet A L has embedded structure. In particular, for systems that evolve under a homogeneous local dynamic and for which information propagates through the system at a finite speed, it is quite natural to use lightcones as spatially local notions of pasts and futures.
Formally, the past lightcone L⁻ of a spacetime random variable X^r_t is the set of all random variables at previous times that could possibly influence it. That is:
L⁻(r, t) ≡ { X^{r′}_{t′} : t′ ≤ t and ||r′ − r|| ≤ c(t − t′) } ,    (1)
where c is the finite speed of information propagation in the system. Similarly, the future lightcone L⁺ is given as all the random variables at subsequent times that could possibly be influenced by X^r_t :
L⁺(r, t) ≡ { X^{r′}_{t′} : t′ > t and ||r′ − r|| ≤ c(t′ − t) } .    (2)
We include the present random variable X^r_t in its past lightcone, but not in its future lightcone. An illustration for one-space-and-time (1+1D) fields on a lattice with nearest-neighbor (or radius-1) interactions is shown in Fig. 1. We use L⁻ to denote the random variable for past lightcones, with realizations ℓ⁻; similarly, L⁺ for future lightcones, with realizations ℓ⁺.
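To make the templates concrete, here is a minimal sketch of extracting finite-horizon lightcones from a discretized 1+1D spacetime field (our own illustrative code; the array layout, horizon parameter h, and periodic spatial boundaries are assumptions):

```python
import numpy as np

def past_lightcone(field, r, t, h, c=1):
    """Collect values in the depth-h past lightcone of (r, t), present included.
    `field` is shaped (time, space) with periodic boundaries in space."""
    L = field.shape[1]
    cone = []
    for dt in range(h + 1):            # dt = t - t'
        for dr in range(-c * dt, c * dt + 1):
            cone.append(field[t - dt, (r + dr) % L])
    return tuple(cone)

def future_lightcone(field, r, t, h, c=1):
    """Collect values in the depth-h future lightcone of (r, t), present excluded."""
    L = field.shape[1]
    cone = []
    for dt in range(1, h + 1):         # dt = t' - t
        for dr in range(-c * dt, c * dt + 1):
            cone.append(field[t + dt, (r + dr) % L])
    return tuple(cone)
```

With c = 1 and horizon h, the past cone gathers 1 + 3 + · · · + (2h+1) values and the future cone 3 + · · · + (2h+1), matching the triangular templates of Eqs. (1) and (2) truncated at depth h.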
The choice of lightcone representations for both local pasts and futures is ultimately a weak-causality argument: influence and information propagate locally through a spacetime site from its past lightcone to its future lightcone. A sequel [86] goes into more depth, exploring this choice and possible variations. For now, we work with the given assumptions.
Using lightcones as local pasts and futures, generalizing the causal equivalence relation to spacetime is now straightforward. Two past lightcones are causally equivalent if they have the same distribution over future lightcones:
ℓ⁻_i ∼_ε ℓ⁻_j ⟺ Pr( L⁺ | ℓ⁻_i ) = Pr( L⁺ | ℓ⁻_j ) .    (3)
This local causal equivalence relation over lightcones implements an intuitive notion of optimal local prediction [49]. At some point x^r_t in spacetime, given knowledge of all past spacetime points that could possibly affect x^r_t -i.e., its past lightcone ℓ⁻(r, t)-what might happen at all subsequent spacetime points that could be affected by x^r_t -i.e., its future lightcone ℓ⁺(r, t)? The equivalence relation induces a set Ξ of local causal states ξ. A functional version of the equivalence relation is helpful, as in the purely temporal setting, as it directly maps a given past lightcone ℓ⁻ to the equivalence class [ℓ⁻] of which it is a member:
ε(ℓ⁻) = [ℓ⁻] = { ℓ⁻′ : ℓ⁻′ ∼_ε ℓ⁻ }
or, even more directly, to the associated local causal state:
ε(ℓ⁻) = ξ_{ℓ⁻} .
Closely tracking the standard development of temporal computational mechanics [12], a set of results for spatiotemporal processes parallels those of temporal causal states [49]. For example, one concludes that local causal states are minimal sufficient statistics for optimal local prediction. Moreover, the particular local prediction uses lightcone-shaped random-variable templates, associated with local causality in the system. Specifically, the future follows the past and information propagates at a finite speed. Thus, local causal states do not detect direct causal relationships-say, as reflected in learning equations of motion from data. Rather, they exploit an intrinsic causality in the system in order to discover emergent spacetime structures.
As an aside, if viewed as a form of data-driven machine learning, our coherent-structure theory, implemented using either DPID or local causal states, allows for unsupervised image-segmentation labeling of spatiotemporal structures. We should emphasize that this is spacetime segmentation and not a general image segmentation algorithm [48], since it works only in systems for which local causality exists and for which lightcone templates are well defined.
Causal state filtering
As in purely temporal computational mechanics, the local causal equivalence relation Eq. (3) induces a partition over the space of (infinite) past lightcones, with the local causal states being the equivalence classes. We will use the same notation for local causal states as was used for temporal causal states above, as there will be no overlap later: Ξ is the set of local causal states defined by the local causal equivalence partition, Ξ denotes the random variable for a local causal state, and ξ a specific realized causal state. The local ε-function ε(ℓ⁻) maps past lightcones to their local causal states, ε : ℓ⁻ → ξ, based on their conditional distribution over future lightcones.
For spatiotemporal systems, a first step to discover emergent patterns applies the local ε-function to an entire spacetime field to produce an associated local causal state field S = ε(x). Each point in the local causal state field is a local causal state S^r_t = ξ ∈ Ξ. The central strategy here is to extract a spatiotemporal process' pattern and structure from the local causal state field. The transformation S = ε(x) of a particular spacetime field realization x is known as causal state filtering and is implemented as follows. For every spacetime coordinate (r, t):
1. At x^r_t determine its past lightcone L⁻(r, t) = ℓ⁻;
2. Form its local predictive distribution Pr(L⁺ | ℓ⁻);
3. Determine the unique local causal state ξ ∈ Ξ to which it leads; and
4. Label the local causal state field at point (r, t) with ξ: S^r_t = ξ.
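As an illustration only (not the authors' inference algorithm), the steps can be sketched by estimating the conditional distributions empirically from a single spacetime field and grouping past lightcones whose empirical future distributions agree exactly; a practical implementation would instead compare distributions with statistical tests:

```python
import numpy as np
from collections import defaultdict

def lightcones(field, r, t, hm, hp, c=1):
    # Past cone includes the present; future cone excludes it (periodic space).
    T, L = field.shape
    past = tuple(field[t - dt, (r + dr) % L]
                 for dt in range(hm + 1) for dr in range(-c * dt, c * dt + 1))
    fut = tuple(field[t + dt, (r + dr) % L]
                for dt in range(1, hp + 1) for dr in range(-c * dt, c * dt + 1))
    return past, fut

def causal_filter(field, hm=1, hp=1):
    T, L = field.shape
    # Steps 1-2: scan the field, building empirical Pr(L+ | l-).
    counts = defaultdict(lambda: defaultdict(int))
    for t in range(hm, T - hp):
        for r in range(L):
            past, fut = lightcones(field, r, t, hm, hp)
            counts[past][fut] += 1
    # Step 3: group past cones with identical conditional future distributions.
    state_of, labels = {}, {}
    for past, futs in counts.items():
        n = sum(futs.values())
        dist = tuple(sorted((f, k / n) for f, k in futs.items()))
        state_of[past] = labels.setdefault(dist, len(labels))
    # Step 4: label the spacetime field with local causal states (-1 = boundary).
    S = -np.ones_like(field)
    for t in range(hm, T - hp):
        for r in range(L):
            past, _ = lightcones(field, r, t, hm, hp)
            S[t, r] = state_of[past]
    return S
```

On a spacetime checkerboard, for instance, this assigns two alternating state labels, one per local phase, while a homogeneous field collapses to a single label.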
Notice the values assigned to S in step 4 are simply the labels for the corresponding local causal states. Thus, the local causal state field is a semantic field, as its values are not measures of any quantity, but rather labels for equivalence classes of local dynamical behaviors as in the measurement semantics introduced in Ref. [87].
In practice, there are inference details involved in causal filtering which we discuss more in Ref. [86]. The main inference parameters are the finite lightcone horizons h − and h + , as well as the speed of information propagation c. For cellular automata c is simply the radius R of local neighborhoods; see below. These parameters determine the shape of the lightcone templates that are extracted from spacetime fields.
Causal state filtering will be used shortly in Sec. III to analyze spacetime domains and coherent structures. For each case we will give the past and future lightcone horizons used. But first we must introduce prototype spatial dynamical systems to study.
C. Cellular automata
The spatiotemporal processes whose structure we will analyze are deterministically generated by cellular automata. A cellular automaton (CA) is a fully discrete, spatially extended dynamical system with a regular spatial lattice in d dimensions, L = Z^d, consisting of local variables taking values from a discrete alphabet A and evolving in discrete time steps according to a local dynamic φ. Time evolution of the value at a site on a CA's lattice depends only on values at sites within a given radius R. The collection of all sites within radius R of a point x^r_t, including x^r_t itself, is known as the point's neighborhood η(x^r_t):
η(x^r_t) = { x^{r′}_t : ||r − r′|| ≤ R, r, r′ ∈ L } .
The neighborhood specification depends on the form of the lattice distance metric chosen. The two most common neighborhoods for regular lattice configurations are the Moore and von Neumann neighborhoods, defined by the Chebyshev and Manhattan distances in L, respectively.
The local evolution of a spacetime point is given by:
x^r_{t+1} = φ( η(x^r_t) ) ,
and the global evolution Φ : A^L → A^L of the spatial field is given by:
x_{t+1} = Φ(x_t) .    (4)
For example, this might apply φ in parallel, simultaneously to all neighborhoods on the lattice. Although, other local update schemes are encountered.
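Eq. (4)'s synchronous global update can be sketched in a few lines (our own illustrative code, assuming a one-dimensional periodic lattice and a rule given as a function of the neighborhood tuple):

```python
import numpy as np

def ca_step(x, phi, R=1):
    """Apply local rule phi simultaneously to every radius-R neighborhood
    of a 1D periodic configuration x; phi maps a (2R+1)-tuple to a value."""
    L = len(x)
    return np.array([phi(tuple(x[(r + d) % L] for d in range(-R, R + 1)))
                     for r in range(L)])
```

For example, under the radius-1 majority-vote rule φ(η) = [Ση ≥ 2] (an assumed example rule), the periodic configuration 01110 is a fixed point of the global dynamic.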
As noted, CAs are fully discrete dynamical systems. They evolve an initial spatial configuration x 0 ∈ A L according to Eq. (4)'s dynamic. This generates an orbit
x_{0:t} = {x_0 , x_1 , . . . , x_{t−1}} ∈ A^{L⊗Z} .
Usefully, dynamical systems theory classifies a number of orbit types. Most basically, a periodic orbit repeats in time:
x_{t+p} = Φ^p(x_t) = x_t ,    (5)
where p is its period-the smallest integer for which this holds. A fixed point has p = 1 and a limit cycle has finite p > 1. An aperiodic orbit has no finite p, a behavior that can occur only on infinite lattices.
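On a finite lattice, the period p of Eq. (5) can be found by brute force: iterate the global map until a configuration repeats (our own illustrative sketch; the shift dynamic used in the example is an arbitrary stand-in for Φ):

```python
def orbit_period(x0, step, max_t=1000):
    """Smallest p with Phi^p(x) = x once the orbit reaches its limit cycle."""
    seen = {}
    x = tuple(x0)
    for t in range(max_t):
        if x in seen:
            return t - seen[x]   # period of the cycle the orbit entered
        seen[x] = t
        x = tuple(step(x))
    return None                  # no recurrence found within max_t steps
```

For instance, with the left-shift dynamic on a ring of four sites, the orbit of 1000 has period 4, while the all-zeros configuration is a fixed point (period 1).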
Since CA states are spatial configurations, an orbit x_{0:t} is a spacetime field. These orbits constitute the spatiotemporal processes of interest in the following.
D. Elementary CAs
The prototype spatial systems we use to demonstrate coherent structure analysis are the elementary cellular automata (ECAs), which have a one-dimensional spatial lattice L = Z and local random variables taking binary values A = {0, 1}. Thus, ECA spatial configurations x_t ∈ A^Z are strings of 0s and 1s. Equation (4)'s time evolution is implemented by simultaneously applying the local dynamic (or lookup table) φ over radius-1 neighborhoods η(x^r_t) = x^{r−1}_t x^r_t x^{r+1}_t :

η:     111  110  101  100  011  010  001  000
φ(η):  O_7  O_6  O_5  O_4  O_3  O_2  O_1  O_0

where each output O_η = φ(η) ∈ A and the ηs are listed in lexicographical order. There are 2^8 = 256 possible lookup tables, as specified by the string of output bits O_7 O_6 O_5 O_4 O_3 O_2 O_1 O_0. A specific ECA lookup table is identified by its rule number: the decimal integer whose eight-bit binary expansion is this output string [88].
Over the years, CAs have been designed as distributed implementations of various kinds of computation. In this, one studies specific combinations of initial conditions and CA rules. For example, over a restricted set of initial configurations ECA 110 is computation universal, a capability it embodies via its coherent structures [89]. Here, though, we are interested in typical spatiotemporal behaviors generated by ECAs. Practically speaking, this means analyzing spacetime fields that are generated from random initial conditions under a given ECA rule. In short, our studies will randomly sample the space of field configurations generated by given ECA rules. It is convenient to consider boundary conditions consistent with spatial translation symmetry. For numerical simulations, as we used here, this means using periodic boundary conditions.
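An ECA's lookup table can be built directly from its rule number, the decimal integer whose binary expansion lists the output bits. The following sketch (our own code, not from the text) constructs the table and iterates it on a periodic lattice to generate a spacetime field:

```python
import numpy as np

def eca_lookup(rule):
    """Lookup table for an ECA: neighborhood (a, b, c) -> output bit O_eta,
    where the rule number's binary expansion gives O_7 ... O_0."""
    bits = [(rule >> i) & 1 for i in range(8)]       # bits[i] = O_i
    return {(a, b, c): bits[(a << 2) | (b << 1) | c]
            for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def eca_run(rule, x0, T):
    """Spacetime field from initial configuration x0 on a periodic lattice."""
    phi = eca_lookup(rule)
    field = [list(x0)]
    L = len(x0)
    for _ in range(T - 1):
        x = field[-1]
        field.append([phi[(x[r - 1], x[r], x[(r + 1) % L])] for r in range(L)])
    return np.array(field)
```

As quick checks, rule 204 has φ(abc) = b and acts as the identity on configurations, while rule 90 computes the XOR of a site's two neighbors.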
To close, we note the relationship between past lightcones and a CA's local dynamic φ. The i-th-order lookup table φ^i maps the radius R = i · c neighborhood of a site to that site's value i time steps in the future. Said another way, a spacetime point x^r_{t+i} is completely determined by the radius R = i · c neighborhood i time steps in the past, according to φ^i( η^{i·c}(x^r_t) ). To fill out the elements of φ^i, apply φ to all points of η^{i·c} to produce η^{(i−1)·c}, and so on until η^0 = x^r_t is reached. This is what we call the lookup table cascade, the elements of which are finite-depth past lightcones.
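The cascade can be checked mechanically. The sketch below (our own code) composes a radius-1 lookup table with itself to obtain φ², the second level of the cascade; for the additive rule 90 (used here as an assumed example) the composed table reduces to the XOR of the two outermost sites, as expected:

```python
from itertools import product

def compose(phi):
    """Second-order lookup table phi^2 for a radius-1 CA: maps radius-2
    neighborhoods (5 sites) to the site value two time steps ahead."""
    phi2 = {}
    for eta in product((0, 1), repeat=5):
        # One application of phi shrinks the radius-2 cone to a radius-1 one.
        mid = tuple(phi[eta[i:i + 3]] for i in range(3))
        phi2[eta] = phi[mid]
    return phi2

# Rule 90: phi(a, b, c) = a XOR c, so phi^2(eta) = eta[0] XOR eta[4].
phi90 = {(a, b, c): a ^ c for a in (0, 1) for b in (0, 1) for c in (0, 1)}
assert all(out == eta[0] ^ eta[4] for eta, out in compose(phi90).items())
```

Each entry of φ² is exactly a depth-2 past lightcone collapsed to its apex value, illustrating how the cascade's elements are finite-depth past lightcones.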
E. Automata-theoretic CA evolution
For cellular automata in one spatial dimension, such as ECAs, configurations x t ∈ A Z are strings over the alphabet A. Rather than study how a CA evolves individual configurations, it is particularly informative to investigate how CAs evolve sets. This is a key step DPID uses in discovering a CA's emergent patterns [50]. We are particularly interested in how the spatial structure in a CA's configurations evolves. To monitor this, we use automata-theoretic representations of sets of spatial configurations.
Sets of strings recognized by finite-state machines are called regular languages. Any regular language L has a unique minimal finite-state machine M (L) that recognizes or generates it [73]. These automata are particularly useful since they give a finite mathematical representation of a typically infinite set of configurations that are regular languages.
To explore how a CA evolves languages we establish a dynamic that evolves machines. This is accomplished via finite-state transducers. Transducers are a particular type of input-output machine that maps strings to strings [90]. This is exactly what the global dynamic of a CA does [91]. As a mapping from a configuration x t at time t to one x t+1 at time t + 1, it is also a map on a configuration set L t from one time to the next L t+1 :
L_{t+1} = Φ(L_t) .    (6)
A CA's global dynamic Φ, though, can be represented as a finite-state transducer T Φ that evolves a set of configurations represented by a finite-state machine. This is the finite machine evolution (FME) operator [50]. Its operation composes the CA transducer T Φ and finite-state machine M (L t ) to get the machine M t+1 = M (L t+1 ) describing the set L t+1 of spatial configurations at the next time step:
M_{t+1} = min( T_Φ ∘ M(L_t) ) .    (7)
Here, min(M ) is the automata-theoretic procedure that minimizes the number of states in machine M . While not entirely necessary for language evolution, the minimization step is helpful when monitoring the complexity of L t . The net result is that Eq. (7) is the automata-theoretic version of Eq. (6)'s set evolution dynamic. Analyzing how the FME operator evolves configuration sets of different kinds is a key tool in understanding CA emergent patterns.
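On a finite periodic lattice, Eq. (6)'s set-evolution dynamic can be emulated by brute force, a crude stand-in for the symbolic FME operator of Eq. (7) (our own illustrative code; rule 90 on a ring of four sites is an assumed example):

```python
from itertools import product

def evolve_set(configs, step):
    """Image of a configuration set under the global dynamic: L_{t+1} = Phi(L_t)."""
    return {tuple(step(x)) for x in configs}

# Rule 90 on a periodic lattice: x'_r = x_{r-1} XOR x_{r+1}
rule90 = lambda x: [x[r - 1] ^ x[(r + 1) % len(x)] for r in range(len(x))]

full = set(product((0, 1), repeat=4))   # L_0 = A^L: all 16 configurations
image = evolve_set(full, rule90)
assert image == {(0, 0, 0, 0), (0, 1, 0, 1), (1, 0, 1, 0), (1, 1, 1, 1)}
```

A single step already contracts the full configuration space onto a small family of period-2 stripes, the kind of set structure the FME operator tracks symbolically, and minimally, for infinite lattices.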
III. DOMAINS AND COHERENT STRUCTURES
The following develops our theory of coherent structures and then demonstrates it by identifying patterns in ECA-generated spacetime fields. The theory builds off the conceptual foundation laid out by DPID in which structures, such as particles and their interactions, are seen as deviations from spacetime shift-invariant domains. The new local causal state formulation differs from DPID in how domains and their deviations are formally defined and identified. The two distinct approaches to the same conceptual objective complement and inform one another, each lending insight into the patterns and regularities captured by the other.
We begin with an overview of DPID CA pattern analysis and then present the new formulation of domains based on local causal states. Generalizing DPID particles, coherent structures are then formally defined as particular deviations from domains. Specifically, coherent structures are defined through semantic filters that use either the local causal state field S = ε(x) or the DPID domain-transducer filter described shortly. CA coherent structures defined via the latter are DPID particles. Defining particles using local causal states, in contrast, extends domain-particle-interaction analysis to a broader class of spatiotemporal systems for which DPID transducers do not exist. Due to this improvement, in the local causal states analysis we adopt the terminology of "coherent structures" over "particles".
A. Domains
The approach to coherent structures begins with what they are not. Generally, structures are seen as deviations from spatially and temporally statistically homogeneous regions of spacetime. These homogeneous regions are generally called domains, alluding to solid state physics. They are the background organizations above which coherent structures are defined.
Structure from breaking symmetries
Structure is often described as arising from broken symmetries [15,18,23,37,[92][93][94][95]. Though key to our development, broken symmetry is a more broadly unifying mechanism in physics. Care, therefore, is required to precisely distinguish the nature of broken symmetries we are interested in. Specifically, our formalism seeks to capture coherent structures as temporally-persistent, spatially-localized broken symmetries.
Drawing contrasts will help delineate this notion of coherent structure from others associated with broken symmetries. Equilibrium phase transitions also arise via broken symmetries. There, the degree of breaking is quantified by an order parameter that vanishes in the symmetric state. A transition occurs when the symmetry is broken and the order parameter is no longer zero [94].
This, however, does not imply the existence of coherent structures. When the order parameter is global and not a function of space, symmetry is broken globally, not locally. And so, the resulting state may still possess additional global symmetries. For example, when liquids freeze into crystalline solids, continuous translational symmetry is replaced by a discrete translational symmetry of the crystal lattice-a global symmetry.
Similarly, the primary bifurcation exhibited in nonequilibrium phase transitions occurs when the translational invariance of an initial homogeneous field breaks [23,37]. It is often the case, though, as in equilibrium, that this is a continuous-to-discrete symmetry breaking, since the cellular patterns that emerge have a discrete lattice symmetry. To be concrete, this occurs in the conduction-convection transition in Rayleigh-Bénard flow. The convection state just above the critical Rayleigh number consists of convection cells patterned in a lattice [25,96]. In the language used here, the above patterns arise as a change of domain structure, not the formation of coherent structures. Coherent structures, such as topological defects [37,97], form at higher Rayleigh numbers when the discrete cellular symmetries are locally broken.
Describing domains, their use as a baseline for coherent structures, and how their own structural alterations arise from global symmetry breaking transitions delineates what our coherent structures are not. To make positive headway, we move on to a direct formulation, starting with how they first appeared in the original DPID and then turning to express them via local causal states.
DPID patterns
Domains of one-dimensional cellular automata were defined in DPID pattern analysis [50][51][52][53][55] as configuration sets that, when evolved under the system's dynamic, produce spacetime fields that are time- and space-shift invariant. Formally, the computational mechanics of spacetime fields was augmented with concepts from dynamical systems-invariant sets, basins, attractors, and the like-adapted to describe organization in the CA infinite-dimensional state space A^∞. There, a domain Λ ⊆ A^∞ of a CA Φ is a set of spatial configurations with:
1. Temporal invariance: Λ is mapped onto itself by the global dynamic Φ:
Φ^p(Λ) = Λ ,    (8)
for some finite time p; and 2. Spatial invariance: Λ is mapped onto itself by the spatial-shift dynamic σ:
σ^s(Λ) = Λ ,    (9)
for some finite distance s.
The smallest p for which the temporal invariance of Eq. (8) holds gives the domain's recurrence time. Similarly, the smallest s is the domain's spatial period. In this way, a domain Λ consists of p temporal phases, each its own spatial language:
Λ = {Λ 1 , Λ 2 , . . . , Λ p }.
In the terminology of symbolic dynamics [75], each temporal phase Λ_i is a shift space X_{Λ_i} ⊆ A^Z (spatial shift invariance) such that the CA dynamic Φ^p is a conjugacy from X_{Λ_i} to itself (temporal invariance).
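On a finite periodic lattice these conditions can be checked by brute force. The sketch below (our own example) tests closure, a finite-lattice surrogate for Eq. (8)'s invariance, for ECA 18's well-known two-phase domain, in which 1s are confined to alternating sites:

```python
from itertools import product

def rule18_step(x):
    # ECA 18: output 1 only for neighborhoods 100 and 001
    L = len(x)
    return tuple(1 if (x[r - 1], x[r], x[(r + 1) % L]) in {(1, 0, 0), (0, 0, 1)}
                 else 0 for r in range(L))

N = 8  # even-length periodic lattice
# Candidate temporal phases: 1s confined to even sites (Lam0) or odd sites (Lam1)
Lam0 = {x for x in product((0, 1), repeat=N) if all(x[r] == 0 for r in range(1, N, 2))}
Lam1 = {x for x in product((0, 1), repeat=N) if all(x[r] == 0 for r in range(0, N, 2))}

# Temporal closure: Phi swaps the phases, so Phi^2 maps the domain into itself (p = 2)
assert all(rule18_step(x) in Lam1 for x in Lam0)
assert all(rule18_step(x) in Lam0 for x in Lam1)
# Spatial invariance: each phase is closed under the shift sigma^2 (s = 2)
assert all(tuple(x[(r + 2) % N] for r in range(N)) in Lam0 for x in Lam0)
```

Note that this checks only containment, not the full set equality of Eq. (8): on a finite ring the image can acquire additional constraints, which is one reason DPID verifies invariance symbolically, via the FME operator, on infinite lattices.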
An ambiguity arises here between Λ's recurrence time p and its temporal period p̂. For a certain class of CA domain (those with explicit symmetries; see Sec. IV A), the domain states x ∈ Λ are periodic orbits of the CA, with orbit period equal to the domain period, x = Φ^p̂(x). More generally, the recurrence time p is the time required for the domain to return to the spatial-language temporal phase it started in. That is, if initially in phase Λ_i, p is the number of time steps required to return to Λ_i. The temporal period p̂ of the domain, in contrast, is the number of time steps required not just to return to Λ_i, but to return to Λ_i in the same spatial phase it started in. Thus p ≤ p̂. Determining p̂ involves examining how φ interacts with Λ, rather than Φ.
Once a domain Λ is found it is straightforward to use φ to construct a DPID spacetime machine that describes Λ's allowed spacetime regions [55]. We refer to a CA domain that is a regular language as a regular domain. Roughly speaking, this captures the notion of a spatial (or a spacetime) region generated by a locally finite-memory process.
How does one find domains for a given CA in the first place? While there are no general analytic solutions to Eq. (8), checking that a candidate language L is invariant under the dynamic is computationally straightforward using Eq. (7)'s FME operator, if potentially compute-intensive. The FME operator is applied p times to construct M (Φ p L) and thus to check symbolically (that is, exactly) whether a candidate language is periodic under the CA dynamic:
M (L) ≅ M (Φ p L),
where we compare up to isomorphism, implemented using automata minimization. Spatial translation invariance then requires checking that M (L) has a single strongly connected set of recurrent states. This is a subtle point, as a corollary to the Curtis-Hedlund-Lyndon theorem [98] states that every image of a cellular automaton is a shift space and thus described by a strongly connected automaton [99]. That result, however, concerns evolving single configurations, whereas the FME operator evolves configuration sets. Thus, a single strongly connected set of recurrent states as output of FME is nontrivial and shows that the set consists of spatially homogeneous configurations.
Using FME, one can "guess and check" candidate domains. This can be automated, since candidate regular domain machines can be exactly enumerated in increasing number of states and transitions [100]. Fortunately, too, not all possible candidates need be considered. Loosely speaking, one may think of domain languages as "spatial ε-machines". Equation (9)'s domain spatial-shift invariance establishes ε-machine properties (e.g., minimality and unifilarity) for candidate languages L. This substantially constrains the space of possible languages, as well as introduces the possibility of using ε-machine inference algorithms [101] when working with empirical spacetime datasets. Additional constraints can further reduce search time, but these details need not concern us here.
Once a CA's domains Λ 0 , Λ 1 , . . . are discovered, they can be used to create a domain transducer τ that identifies which of configuration x's sites are in which domain and which are not in any domain [56]. For a given 1+1 dimension spacetime field x, each of its spatial configurations x t is scanned by the transducer, with output T t = τ (x t ). Although the transducer maps strings to strings, the full spacetime field can be filtered with τ by collecting the outputs of each configuration in time order to produce the domain transducer filter field of x:
T = τ (x).
Sites x r t "participating" in domain Λ i are labeled i in the transducer field. That is:
T r t = τ (x t ) r = i .
Other sites are similarly labeled by the particular way in which they deviate from domains. One or several sites, for example, can indicate transitions from one domain temporal phase or domain type to another. If that happens in a way that is localized across space, one refers to those sites as participating in a CA particle. Particle interactions can also be similarly identified. Reference [50] describes how this is carried out. In general, a stack automaton is needed to perform this domain-filtering task, but it may be efficiently approximated using a finite-state transducer [56].
This filter not only allows us to formally define CA domains; the transducer also allows site-by-site identification of domain regions and thus of sites participating in nondomain patterns. In this way and in a principled manner, one finds localized deviations from domains: these are our candidate coherent structures.
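As a toy illustration of such site-by-site labeling (a crude stand-in for the transducer τ, not the construction of Ref. [56]; the function name and windowing scheme are ours), one can label each site of an ECA 54 configuration by which domain word its local window matches:

```python
def crude_domain_filter(x, words=("0001", "1110")):
    # Label site r by the domain phase whose word the periodic window
    # x[r:r+4] is a rotation of; '.' marks deviations from every phase.
    n = len(x)
    rotations = {w[i:] + w[:i]: label
                 for label, w in zip("AB", words)
                 for i in range(len(w))}
    return "".join(
        rotations.get("".join(x[(r + k) % n] for k in range(4)), ".")
        for r in range(n))

print(crude_domain_filter("00010001"))  # pure domain: every site labeled 'A'
print(crude_domain_filter("00011001"))  # one flipped bit: nondomain '.' sites appear
```

Sites matching neither phase word are exactly the candidate particle sites that the full transducer would flag with richer deviation labels.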
Originally, this was called cellular automata computational mechanics. Since then, other approaches to spatiotemporal computational mechanics developed, such as local causal states. We now refer to the above as DPID pattern analysis.
Local causal state patterns
DPID pattern analysis formulates domains directly in terms of how a system's dynamic evolves spatial configurations. That is, domains are sets of structurally homogeneous spatial configurations that are invariant under Φ. While this is appealing in many ways, it can become cumbersome in more complex spatiotemporal systems.
Let's be clear where such complications arise. On the one hand, estimating a CA's rule φ and so building up Φ is straightforwardly implemented by scanning a spacetime field for neighborhoods and next-site values. This sets up DPID with what it needs. On the other, there are circumstances in which a finite-range rule φ is not available, leaving DPID mute. This can occur even in very simple settings. The simplest with which we are familiar arises in hidden cellular automata, the cellular transducers of Ref. [102]. There, perhaps somewhat surprisingly, ECA evolution observed through other radius-1 rule tables generates spacetime data that no finite-radius CA can generate.
For these reasons and to develop methods for even more complicated spatiotemporal systems where the FME operator cannot be applied, we now develop a companion approach. Just as the causal states help discover structure from a temporal process, we would like to use the local causal states to discover structure, in the more concrete sense of coherent structures, directly from spacetime fields. To do so, we start with a precise formulation of domains in terms of local causal states. Since local causal states apply in arbitrary spatial dimensions, the following addresses general d-dimensional cellular automata. In this, index n ∈ {1, 2, . . . , d} identifies a particular spatial coordinate.
A simple but useful lesson from DPID is that domains are special (invariant) subsets of CA configurations. Since they are deterministically generated, a CA's spacetime field is entirely specified by the rule φ, the initial condition x 0 , and the boundary conditions. Here, in analyzing a CA's behavior, φ is fixed and we only consider periodic boundary conditions. This means that for a given CA rule the spacetime field is entirely determined by x 0 . If it belongs to a domain, x 0 ∈ Λ i , then all subsequent configurations of the spacetime field will, by definition, also be in the domain: x t = Φ t (x 0 ) ∈ Λ i . In this sense a domain Λ ⊆ A L is a subset of a CA's allowed behaviors: Λ ⊆ Φ t (A L ), t = 1, 2, 3, . . ..
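This confinement is easy to check numerically. A small sketch (assuming rule 54 under the standard Wolfram numbering and its two domain phase words from Sec. IV; helper names are ours) evolves a domain initial condition and confirms that every x t stays inside the shift-closed domain set:

```python
def eca_step(x, rule):
    # One synchronous update of an elementary CA on a periodic ring.
    n = len(x)
    return tuple((rule >> (4 * x[(i - 1) % n] + 2 * x[i] + x[(i + 1) % n])) & 1
                 for i in range(n))

def shift_closure(x):
    # All spatial translations of a ring configuration.
    return {x[s:] + x[:s] for s in range(len(x))}

# ECA 54's full domain on an 8-site ring: both phase words, closed under shifts.
domain = shift_closure((0, 0, 0, 1) * 2) | shift_closure((1, 1, 1, 0) * 2)

x = (0, 0, 0, 1) * 2  # x_0 in the domain
trajectory_in_domain = True
for t in range(32):
    trajectory_in_domain &= (x in domain)
    x = eca_step(x, 54)
print(trajectory_in_domain)  # True: x_t = Phi^t(x_0) remains in the domain
```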
Lacking prior knowledge, if one wants to use local causal states to discover a CA's patterns, their reconstruction should be performed on all of a CA's spacetime behavior Φ t (A L ). This gives a complete sampling of spacetime field realizations and so adequate statistics for good local causal state inference. Doing so leaves one with the full set of local causal states associated with a CA. Since domains are a subset of a CA's behavior, they must be described by some special subset of the associated local causal states. What properties of this subset of states mark them as belonging to one or another domain?
The answer is quite natural. The defining properties of local causal states associated with domains are expressed in terms of symmetries. For one-dimensional CAs these are time and space translation symmetries. In general, alternative symmetries may be considered as well, such as rotations, as appropriate to other settings. Such symmetries are directly probed through causal filtering.
Consider a domain Λ, the local causal states Ξ induced by the local causal equivalence relation ∼ over spatiotemporal process X, and the local causal state field S = ε(x) over realization x. Let σ p denote the temporal shift operator that shifts a spacetime field x p steps along the time dimension. This translates a point x r t in the spacetime field as: σ p (x) r t = x r t+p . Similarly, let σ sn denote the spatial shift operator that shifts a spacetime field x by s n steps along the n th spatial dimension. This translates a spacetime point x r t as: σ sn (x) r t = x r′ t , where r′ n = r n + s n and r′ m = r m for m ≠ n.
Definition.
A pure domain field x Λ is a realization such that σ p and the set of spatial shifts {σ sn } applied to S Λ = ε(x Λ ) form a symmetry group. The generators of the symmetry group consist of the following translations:
1. Temporal invariance: For some finite time shift p the domain causal state field is invariant:
σ p (S Λ ) = S Λ ,(10)
and:
2. Spatial invariance: For some finite spatial shift s n in each spatial coordinate n the domain causal state field is invariant:
σ sn (S Λ ) = S Λ .(11)
The symmetry group is completed by including these translations' inverses, compositions, and the identity nullshift σ 0 (x) r t = x r t . The set Ξ Λ ⊆ Ξ is Λ's domain local causal states:
Ξ Λ = {(S Λ ) r t : t ∈ Z, r ∈ L}.
The smallest integer p for which the temporal invariance of Eq. (10) is satisfied is Λ's temporal period. The smallest s n for which Eq. (11)'s spatial invariance holds is Λ's spatial period along the n th spatial coordinate.
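These invariances can be checked directly on a field. For explicit-symmetry domains (Sec. IV A) the raw field x Λ carries the same shift symmetries as S Λ, so a sketch can use x Λ as a stand-in for the causal state field (obtaining S itself requires local causal state reconstruction; helper names are ours):

```python
def eca_field(x0, rule, T):
    # Spacetime field: T successive ring configurations under an ECA rule.
    field = [tuple(x0)]
    for _ in range(T - 1):
        x = field[-1]
        n = len(x)
        field.append(tuple(
            (rule >> (4 * x[(i - 1) % n] + 2 * x[i] + x[(i + 1) % n])) & 1
            for i in range(n)))
    return field

def roll_time(f, p):
    # sigma_p: shift the field p steps along the (periodic) time axis.
    return f[p:] + f[:p]

def roll_space(f, s):
    # sigma_s: shift every configuration s sites along the ring.
    return [x[s:] + x[:s] for x in f]

# Pure domain field of ECA 54 on a 16-site ring, 16 time steps.
f = eca_field((0, 0, 0, 1) * 4, rule=54, T=16)

print(roll_time(f, 4) == f)   # Eq. (10): temporal period p = 4
print(roll_space(f, 4) == f)  # Eq. (11): spatial period s = 4
print(roll_space(roll_time(f, 2), 2) == f)  # a time-2 shift recurs when
                                            # combined with a space-2 shift
```

The last check illustrates that a smaller time shift can still map the field onto itself when combined with a spatial shift, which is the recurrence-time phenomenon discussed next.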
The domain's recurrence time p̂ is the smallest time shift that brings S Λ back to itself when also combined with finite spatial shifts. That is, σ j σ p̂ (S Λ ) = S Λ for some finite space shift σ j . If p̂ > 1, this implies there are distinct tilings of the spatial lattice at intervening times between recurrences. The distinct tilings then correspond to Λ's temporal phases: Λ = {Λ 1 , Λ 2 , . . . , Λ p̂ }. For systems with a single spatial dimension, like the ECAs, the spatial symmetry tilings are simply (S Λ ) t = · · · w w w · · · = w ∞ , where w = (S Λ ) i:i+s t . Each domain phase Λ i corresponds to a unique tiling w i .
For both the DPID and local causal state formulations of domain we use the notation p for temporal period, s for spatial period, and p̂ for recurrence time. While there is as yet no theoretical justification or a priori reason to assume these are the same across the two formulations, we anticipate the empirical correspondence between the two distinct formulations of domain when applied to CAs, as seen below in Sec. IV. This also relieves us and the reader of excess notation.
Consider a contiguous region R Λ ⊂ L ⊗ Z in S = ε(x) for spacetime field x for which all points S r t in the region are domain local causal states: S r t ∈ Ξ Λ , (r, t) ∈ R Λ . The space and time shift operators over the region obey the symmetry groups of pure domain fields. Such regions, over both x and S = ε(x), are domain regions.
Once a CA's local causal states are identified, one can track unit-steps in space and in time over local causal state fields S to construct a spacetime machine (an automaton) consisting of the local causal states and their allowed transitions. That is, if S r t = ξ and S r+1 t = ξ′, then if one moves from (r, t) to (r + 1, t) in the spacetime field, one sees a spatial transition between ξ and ξ′ in the spacetime machine. Similarly, a temporal transition between ξ and ξ′ is seen if S r t = ξ and S r t+1 = ξ′. The symmetry tiling of domain states determines a particular substructure in the full spacetime machine. Specifically, for each state ξ ∈ Ξ Λ there is a transition leading to state ξ′ ∈ Ξ Λ if (S Λ ) r t = ξ and σ(S Λ ) r t = ξ′, where σ generically denotes a unit shift in time or space. This domain submachine is the analog of the DPID domain spacetime machine [55]. In fact, in all known cases the two spacetime domain machines are identical, up to isomorphism.
With this set-up, discovering the domains of a spatiotemporal process is straightforward: find submachines with the symmetry tiling property. Reference [103,Def. 43] attempted a similar approach to define domains using local causal states: the domain temporal phase was defined as a strongly-connected set of states where state transitions correspond to spatial transitions. A domain then was a strongly-connected (in time) set of domain phases. Unfortunately, this can be interpreted either as not allowing for single-phase domains, which are prevalent, or else as allowing for nondomain submachines to be classified as domain. In contrast, the symmetry tiling conditions in the above formulation provide stricter conditions, in accordance with the symmetry group algebra, for submachines to be classified as domain. For example, the simple cyclic symmetry groups for CA domains lead to cyclic domain submachines. Our formulation also allows for a simpler (and more scalable) analysis through causal filtering.
B. Structures as domain deviations
With domain regions and their symmetries established, we now define coherent structures in spatiotemporal systems as spatially localized, temporally persistent broken symmetries. For clarity, the following definition is given for a single spatial dimension, but the generalization to arbitrary spatial dimensions is straightforward.
Definition.
A coherent structure Γ is a contiguous nondomain region R ⊂ L ⊗ Z of a spacetime field x such that R has the following properties in the semantic-filter fields of S = ε(x) or T = τ (x):
1. Spatial locality: Given a spatial configuration x t at time t, Γ occupies the spatial region R t = [i : j] if S i:j t is bounded by domain states on its exterior and contains nondomain states on its interior: S i−1 t ∈ Ξ Λ , S i t ∉ Ξ Λ , S j t ∉ Ξ Λ , and S j+1 t ∈ Ξ Λ .
2. Lagrangian temporal persistence: Given that Γ occupies the localized spatial region R t at time t, Γ persists to the next time step if there is a spatially localized set of nondomain states in S at time t + 1 occupying a contiguous spatial region R t+1 that is within the depth-1 future lightcone of R t . That is, for every pair of coordinates (r, t) ∈ R t and (r′, t + 1) ∈ R t+1 , ‖r′ − r‖ ≤ c.
For simplicity and generality we gave the coherent structure properties in terms of local causal state fields. For CAs, to which the FME operator may be applied, the DPID transducer filter may similarly be used to identify coherent structures. However, the condition for temporal persistence is less strict: the regions R t+1 and R t , when given over T rather than S, must merely have finite overlap. That is, there exists at least one pair of coordinates (r, t) ∈ R t and (r′, t + 1) ∈ R t+1 such that r′ = r. Coherent structures in CAs identified in this way are DPID particles. Both notions of temporal persistence are referred to as Lagrangian since they allow Γ to move through space over time.
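The two persistence criteria can be stated compactly in code (a sketch with our own function names; regions are given as lists of site indices):

```python
def persists_causal(R_t, R_t1, c=1):
    # Local causal state criterion: every pair of sites from consecutive
    # nondomain regions must lie within the speed-c, depth-1 lightcone.
    return all(abs(r1 - r0) <= c for r0 in R_t for r1 in R_t1)

def persists_dpid(R_t, R_t1):
    # DPID transducer criterion: the regions need only share a site.
    return bool(set(R_t) & set(R_t1))

print(persists_causal([4], [5], c=1))        # True: structure drifts one site
print(persists_causal([4, 5], [7, 8], c=1))  # False: outside the lightcone
print(persists_dpid([4, 5], [5, 6]))         # True: overlap at site 5
print(persists_dpid([4, 5], [7, 8]))         # False: no shared site
```

The examples show the DPID criterion accepting a drifting region via a single shared site, while the lightcone criterion rejects a jump larger than the propagation speed.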
Since local causal states are assigned to each point in spacetime, coherent structures of all possible sizes can be described. The smallest scale possible is a single spacetime point and the structure is captured by a single local causal state. Larger structures are given as a set of states localized at the corresponding spatial scale. Such sets may be arbitrarily large and have (almost) arbitrary shape. In this way, the local causal states allow us to discover complex structures, without imposing external templates on the structures they describe. This leaves open the possibility of discovering novel structures that are not readily apparent from a raw spacetime field or do not fit into known shape templates.
IV. CA STRUCTURES
We now apply the theory of domains and coherent structures to discover patterns in the spacetime fields generated by elementary cellular automata. We first classify ECA domain types. For each class we analyze one exemplar ECA in detail, beginning with a description of the ECA's domain(s) and the coherent structures it generates, from both the DPID and local causal state perspectives.
The analysis of domains and structures gives a sense of the correspondence between DPID and the local causal states. The general correspondence, found empirically, between their descriptions of CA domains is as follows. For every known DPID CA domain language, a configuration from the language is used as an initial condition to generate a pure domain field x Λ . We see that the spacetime shift operators over S Λ = ε(x Λ ) form symmetry groups with the same spatial period, temporal period, and recurrence time as the DPID domain language.
Though the CA dynamic Φ is not directly used to infer local causal states, the correspondence between DPID and local causal state domains shows that local causal states incorporate detailed dynamical features and they can be used to discover patterns and structures that can be defined directly from Φ using DPID.
A. CA domains and their classification
ECA domains fall into one of two categories: explicit symmetry or hidden symmetry. In the local causal state formulation, a domain Λ has explicit symmetry if the space and time shift operators, σ p and {σ sn }, which generate the domain symmetry group over S Λ = ε(x Λ ), also generate that same symmetry group over x Λ . That is, σ p (x Λ ) = x Λ and σ sn (x Λ ) = x Λ , for all n. From this, we can see that explicit symmetry domains are periodic orbits of the CA, with the domain period equal to the orbit period. This follows since time shifts of the CA spacetime field are essentially equivalent to applying the CA dynamic Φ: x t+p = σ p (x) t and x t+p = Φ p (x t ). Thus, let x Λ be any spatial configuration of a domain spacetime field,
x Λ = (x Λ ) t , for any t, then Φ p (x Λ ) = x Λ if and only if σ p (x Λ ) = x Λ .
A hidden symmetry domain is one for which the time and space shift operators, which generate the domain symmetry group over S Λ , do not generate a symmetry group over
x Λ : σ p (x Λ ) ≠ x Λ or σ sn (x Λ ) ≠ x Λ , or both.
In the DPID formulation, a domain is classified as having explicit or hidden symmetry based on the algebra of the domain languages. In this, the group elements are the strings of the spatial languages of the domain and the group action is concatenation of the strings. If this algebra for every domain phase Λ i is a proper group, Λ has explicit symmetry. Otherwise, if the algebra is something more general, like a semigroup or monoid, Λ has hidden symmetry. Notably, hidden symmetry domains are associated with a level of stochasticity in the raw spacetime field. We sometimes refer to these as stochastic domains. As the above domain algebra is only used for classification here, we will not give the explicit mathematics; see Refs. [12, App. D] or [70] for those details.
Example domains from each category are shown in Figure 2. ECA 110 is given as the explicit symmetry example; a sample spacetime field x Λ110 of its domain is shown in Figure 2(a). The associated local causal state field S Λ110 is shown in Figure 2(c). Each unique color corresponds to a unique local causal state. The local causal state field clearly displays the domain's translation symmetries. ECA 110's domain has spatial period s = 14 and temporal period p = 7. These are gleaned by direct inspection of the spacetime diagram. Pick any color in S Λ110 and one must pass through 13 other colors moving through space to return to the original color and, likewise, 6 other colors in time before returning. One can also see that at every time step S Λ110 has a single spatial tiling w of the 14 states. Thus, the recurrence time is p̂ = 1. Finally, notice from Figure 2(a) that spatial configurations of x Λ110 are periodic orbits of Φ 110 , with orbit period equal to the domain period, p = 7.
For a prototype hidden symmetry domain, ECA 22 is used. Crutchfield and McTague used DPID analysis to discover this ECA's domain in an unpublished work [104] that we used here to produce the domain spacetime field x Λ22 shown in Figure 2(b). The associated causal state field S Λ22 is shown in Figure 2(d); the domain's symmetries are not apparent from the raw spacetime field. However, the causal state field S Λ22 is immediately revealing. Domain translation symmetries are clear. The domain is period 4 in both space and time: p = s = 4. There are eight unique local causal states in S Λ22 and, as the spatial period is 4, the eight states come in two distinct spatial tilings w 1 and w 2 , each consisting of 4 states. And so, the recurrence time for ECA 22 is p̂ = 2. Shortly, we examine hidden symmetries in more detail to illustrate how the local causal states lend a new semantics that exposes stochastic symmetries.
Having given concrete demonstrations of the new local causal state formulation of domains and their classification in CAs, we move on to more detailed examples that have been thoroughly studied from the DPID perspective. In doing so, we will see the strong correspondence between the two approaches, in terms of both domains as well as the coherent structures which form atop the domains.
B. Explicit symmetries
We start with a detailed look at ECA 54, whose domains and structures were worked out in detail via DPID [55]. ECA 54 was said to support "artificial particle physics" and this emergent "physics" was specified by the complete catalog of all its particles and their interactions. Here, we analyze the domain and structures using local causal states and compare. Since the particles (structures) are defined as deviations from a domain that has explicit symmetries, the resulting higher-level particle dynamics themselves are completely deterministic. As we will see later, this is not the case for hidden symmetry systems; stochastic domains give rise to stochastic structures.
ECA 54's domain
A pure-domain spacetime field x Λ of ECA 54 is shown in Fig. 3(a). As can be seen, it has explicit symmetries and is period 4 in both time and space. From the DPID perspective, though, it consists of two distinct spatial-configuration languages, Λ A = (0001) * and Λ B = (1110) * , that map into each other under Φ 54 ; see Fig. 3(c). This gives a recurrence time of p̂ = 2. The finite-state machines, M (Λ A ) and M (Λ B ), shown there for these languages each have four states, reflecting the period-4 spatial translation symmetry: s = 4. Although the domain's recurrence time is p̂ = 2, the raw states x t are period 4 in time due to a spatial phase slip in the language evolution: p = 4. This is shown explicitly in the spacetime machine given in Ref. [55]. We can see that the machine in Fig. 3(c) fully describes the domain field in Fig. 3(a). At some time t, the system is in (0001) * or (1110) * ; at the next time step t + 1 it switches, then switches back at t + 2, and so on.
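This phase alternation, and the distinction between recurrence time and temporal period, can be reproduced numerically (a sketch with our own helper names, using the standard Wolfram numbering for rule 54):

```python
def eca_step(x, rule=54):
    # One synchronous update of an elementary CA on a periodic ring.
    n = len(x)
    return tuple((rule >> (4 * x[(i - 1) % n] + 2 * x[i] + x[(i + 1) % n])) & 1
                 for i in range(n))

def phase(x):
    # Which phase language (if any) a ring configuration belongs to.
    for label, word in (("A", (0, 0, 0, 1)), ("B", (1, 1, 1, 0))):
        tiling = word * (len(x) // 4)
        if any(x == tiling[s:] + tiling[:s] for s in range(4)):
            return label
    return None

orbit = [(0, 0, 0, 1) * 3]  # a Lambda_A configuration on a 12-site ring
for _ in range(4):
    orbit.append(eca_step(orbit[-1]))

print([phase(x) for x in orbit])  # ['A', 'B', 'A', 'B', 'A']: phases alternate
print(orbit[4] == orbit[0])       # True: temporal period p = 4
print(phase(orbit[2]) == "A" and orbit[2] != orbit[0])  # True: two steps return
# the system to Lambda_A (recurrence), but with a spatial phase slip
```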
Let's compare this with the local causal state analysis.
The corresponding local causal state field S Λ = ε(x Λ ) was generated from the pure domain field x Λ of Fig. 3(a) via causal filtering; see Fig. 3(b). We reiterate here that this reconstruction in no way relies upon the invariant set languages of Λ 54 identified in DPID. Yet we see that the local causal states correspond exactly to M (Λ 54 )'s states. In total there are eight states, and these appear as two distinct tilings in the field. These tilings correspond to the two temporal phases of Λ 54 :
w A = [A, B, C, D] = Λ A and w B = [E, F, G, H] = Λ B .
At any given time t, a spatial configuration is tiled by only one of these temporal phases, each of which consists of 4 states, giving a spatial period s = 4. At the next time t + 1 there are only states from the other tiling, then back to the previous tiling, and so the evolution continues. Thus, we can see the recurrence time is p̂ = 2. In contrast, the actual local causal states are temporally period 4, p = 4, which is also the orbit period of configurations in x Λ , as can be seen in Fig. 3(a). This is in agreement with DPID's invariant set analysis, shown in Fig. 3(c). As noted before and as will be emphasized, there is a strong correspondence between DPID's dynamically invariant sets of spatially homogeneous configurations and the local causal state description, both for coherent structures and the domains from which they are defined.
ECA 54's structures
Let's examine the structures (particles) supported by ECA 54 and their interactions. Rule 54 organizes itself into domains and structures when started with random initial conditions. A sample spacetime field x produced by evolving a random binary configuration under Φ 54 is shown in Fig. 4(a). We first give a qualitative comparison of the structures in this field from both the DPID and local causal state perspectives.
From the DPID side, a simple domain-nondomain filter is used with binary outputs that flag sites in the transducer filter field T = τ (x) as either domain (white) or not domain (black). Applying this filter to the spacetime field of Fig. 4(a) generates the diagram shown in Fig. 4(b). Similarly, a domain-nondomain filter built from local causal states, when applied to Fig. 4(a), gives the output shown in Fig. 4(c). For this filter, the eight domain local causal states in S = ε(x) are in white and all other local causal states in black. While domain-nondomain detections differ site-by-site, we see that in aggregate there is again strong agreement on the structures identified by the two filter types.
There are four types of particles found in ECA 54 [55], which we can now examine in detail. Before doing so, we must comment on the domain transducer τ used by DPID to identify structures. As mentioned, a stack automaton is generally required, but may be well-approximated with a finite-state transducer [56]. A trade-off is made with the transducer, however, since it must choose a direction in which to scan configurations: left-to-right or right-to-left. To best capture the proper spatial extent of a particle, an interpolation may be done by comparing right and left scans. This was done in the domain-nondomain filter of Fig. 4(b). The bidirectional interpolation used does not capture fine details of domain deviations. For the particle analysis that follows, a single-direction (left-to-right) scan is applied to produce each T t = τ (x t ) in T = τ (x). A noticeable side effect of the single-direction scan is that it covers only about half of any given particle's spatial extent. (This scan-direction issue simply does not arise in local causal state filtering.)
The first structure we analyze is the large stationary α particle, shown in Fig. 5. For both diagrams the white and black squares represent the values 0 and 1, respectively, of the underlying ECA field x α . Overlaid blue letters and red numbers are the semantic filter fields. In Fig. 5(a) these come from the DPID domain transducer filtered field T = τ (x α ). In Fig. 5(b) they come from the local causal state field S = ε(x α ).
For the DPID domain transducer filtered field in Fig. 5(a), overlaid blue letters are sites flagged as participating in domain by the transducer τ , with the letter representing the spatial phase of the domain as given by M (Λ 54 ). Red numbers correspond to sites flagged as various deviations from domain [55]. Here, the collection of such deviations outlines the α particle's structure; though, as stated above, the unidirectional transducer only identifies about half of the particle's spatial extent. The main feature to notice is that the particle has a period-4 temporal oscillation. As the α is recognizable by eye from the raw field values, one can see this period-4 structure is intrinsic to the raw spacetime field and not an artifact of the domain transducer. However, the period-4 temporal structure is clearly displayed by the DPID domain transducer description of α. For the local causal state field in Fig. 5(b), overlaid blue letters are the domain local causal states of Fig. 3(b), and all other nondomain states, which outline the α, are red numbers. We see the local causal states fill out the α's full spatial extent. Since the numeric labels for each state are arbitrarily assigned during reconstruction, the α's spatial reflection symmetry that is clearly present does not appear in the local causal state labels. However, the underlying lightcones that populate the equivalence classes of these states do exhibit this symmetry. As with the DPID domain transducer description, though, the local causal states properly capture the α's temporal period of 4.
We emphasize that coherent structures are behaviors of the underlying system and, as such, they exist in the system's spacetime field. The semantic filter fields are formal methods that identify sites in the underlying spacetime field which participate in a particular structure. This is how overlay diagrams, like Fig. 5, derive their utility.
We discuss the three remaining structures of ECA 54 by examining an interaction among them: the left-traveling γ − particle can collide with the right-traveling γ + particle to form the β particle. This interaction is displayed with overlay diagrams in Fig. 6. The values of the underlying field x β are given by white (0) and black (1) squares. The DPID domain transducer filter field T = τ (x β ) is overlaid atop x β in Fig. 6(a) and the local causal state field S = ε(x β ) atop x β in Fig. 6(b).
In both cases, the color scheme is as follows. Sites identified by the semantic filters as participating in a domain are colored blue, with the letters specifying the particular phase of the domain. In Fig. 6(a) the domain phases are specified by T and in Fig. 6(b) they are specified by S. And, as we saw in Fig. 3 and can see here, these specifications of Λ 54 are identical. For both Figs. 6(a) and 6(b), nondomain sites participating in the γ + are flagged with red, those participating in the γ − with yellow, and those uniquely participating in the β with orange. As with the α particle, the local causal state description better covers the particles' spatial extent, but both filters agree on the temporal oscillations of each particle. Both γs are period 2 and β is period 4. Unlike the α and β, the γ particles are not readily identifiable by eye. They arise as a result of a phase slip in the domain. For example, a spatial configuration with a γ present is of the form
Λ A γ Λ B .
Related to this, we point out here an observation about this interaction that illustrates how our methods uncover structures in spatiotemporal systems. At the top of each diagram in Fig. 6 the spatial configurations are of the form Λ A γ + Λ B γ − Λ A . At each subsequent time step, the domains change phase A → B and B → A and the intervening domain region shrinks as the γs move towards each other. The intervening domain disappears when the γs finally collide. Then we have local configurations of the form Λ A β Λ A . However, there is an indication that a phase slip between these domain regions still happens "inside" the β particle. Notice in Fig. 6 there are several spatial configurations (horizontal time slices) in which domain states appear inside the β that are the opposite phase of the bordering domain phases, indicating a phase slip. Also, the states constituting the γs are found as constituents of the β. For the DPID domain transducer τ , each γ consists of just two states, and all four of these states (two for each γ) are found in the β. In the local causal state field S, each γ is described by eight local causal states. Not all of these show up as states of the β, but several do. Those γ states that do show up in the β appear in the same spatiotemporal configurations they have in the γs.
These observations tell us about the underlying ECA's behavior and so can be gleaned from the raw spacetime field itself. That said, the discovery that the β particle is a "bound state" of two γs, that is, γ + Λ γ − → β, and that it contains an internal phase slip of the bordering domain regions is not at all obvious from inspecting raw spacetime fields. Such structural discovery, however, is greatly facilitated by the coherent structure analysis. To emphasize, these insights concern the intrinsic organization embedded in the spacetime fields generated by the ECA. No structural assumptions, beyond the very basic definitions of local causal states, are required.
Let's recapitulate the correspondence between the independent DPID and local causal state descriptions of the ECA 54 domain and structures. From the DPID perspective, the ECA 54 domain Λ 54 consists of two homogeneous spatial phases that are mapped into each other by Φ 54 . In contrast, Λ 54 is described by a set of local causal states with a spacetime translation symmetry tiling. The two descriptions agree completely, giving a spatial period of 4, a temporal period of 4, and a recurrence time of 2. On the one hand, for ECA 54's structures DPID directly uses domain information to construct a transducer filter T = τ (x) that identifies structures as groupings of particular domain deviations. On the other, the local causal states are assigned uniformly to spacetime field sites via causal filtering, S = ε(x). Domains and sites participating in a domain are found by identifying spatiotemporal symmetries in the local causal states. Coherent structures are then localized deviations from these symmetries. Though the agreement is not as exact as for the domain, DPID and the local causal states still agree to a large extent in their descriptions of ECA 54's four particles and their interactions.
ECA 110
As the most complex explicit symmetry ECA, ECA 110 is worth a brief mention. It is the only ECA proven to support universal computation (on a specific subset of initial configurations) and implements this using a subset of the ECA's coherent structures [89]. This was shown by mapping ECA 110's particles and their interactions
C. Hidden stochastic symmetries
Our attention now turns to ECAs with hidden symmetries and stochastic domains. These are the so-called "chaotic" ECAs. Since the structure of an ECA's domain heavily dictates the overall behavior, stochastic domains give rise to stochastic structures and hence, in combination, to an overall stochastic behavior. To be clear, since all ECA dynamics are globally deterministic (the evolution of spatial configurations is deterministic), the stochasticity here refers to local structures rather than global configurations. In contrast to explicit symmetry ECAs whose structures are largely identifiable from the raw spacetime field, the structures found in stochastic-domain ECAs are often not at all apparent. In this case the ability of our methods to facilitate the discovery and description of such hidden structures is all the more important and sometimes even necessary. While the distinction between stochastic and explicit symmetry domains does not make a difference when determining DPID's spacetime invariant sets, local causal state inference is relatively more difficult with stochastic domains, usually requiring large lightcone depths and an involved domain-structure analysis.
Here, we examine ECA 18 in detail, as its stochastic domain is relatively simple and well understood. An empirical domain-structure analysis of ECA 18 was first given in Ref. [105] and then more formally in Refs. [106][107][108][109], which note the domain's temporal invariance. It was not until the FME was introduced in Ref. [50] that this was rigorously proven and shown to follow within the more general DPID framework. The distinguishing feature of ECA 18's domain observed in the early empirical analysis was that the lookup table φ 18 becomes additive when restricted to domain configurations. Specifically, when restricted to domain, φ 18 is equivalent to φ 90 , which is the sum mod 2 of the outer two bits of the local neighborhood:
x^r_{t+1} = φ_90(x^{r−1}_t x^r_t x^{r+1}_t) = x^{r−1}_t + x^{r+1}_t (mod 2).
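This additivity is easy to verify mechanically: neighborhoods containing the pair 11 never occur inside the 0-wildcard domain, and on the remaining neighborhoods φ 18 and φ 90 coincide. A small sketch, with illustrative helper names of ours:

```python
# Check that phi_18 and phi_90 agree on every neighborhood that can occur
# inside ECA 18's 0-wildcard domain (i.e., neighborhoods with no 11 pair).

def phi(rule, left, center, right):
    """Local lookup under Wolfram rule numbering."""
    return (rule >> ((left << 2) | (center << 1) | right)) & 1

domain_neighborhoods = [
    (l, c, r)
    for l in (0, 1) for c in (0, 1) for r in (0, 1)
    if not (l == 1 and c == 1) and not (c == 1 and r == 1)  # exclude 11 pairs
]

# On these five neighborhoods both rules reduce to the sum mod 2 of the outer bits.
for l, c, r in domain_neighborhoods:
    assert phi(18, l, c, r) == phi(90, l, c, r) == (l + r) % 2
```

The excluded neighborhoods (110, 011, 111) are exactly where the two lookup tables disagree, which is why the additivity only holds when restricted to domain configurations.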
ECA 18's structures illustrate additional complications of local causal state analysis with stochastic symmetry systems. Nondomain states of ECA 54 and other explicit symmetry ECAs always indicate a particle or particle interaction, after transients. This is not the case with chaotic ECAs, and our formal definition is needed to identify ECA 18's coherent structures.
ECA 18's domain
A pure domain spacetime field x Λ18 for the ECA 18 domain Λ 18 is shown in Fig. 8(a). White and black cells represent site values 0 and 1, respectively. A symmetry is not apparent in the spacetime field. One noticeable pattern, though, is that 1s (black cells) always appear in isolation, surrounded by 0s on all four sides. This still does not reveal symmetry, since neither time nor space shifts match the original field. When scanning along one dimension, making either timelike or spacelike moves (vertically or horizontally), one sees that every other site is always a 0 and the sites in between are wildcards: they can be either 0 or 1. Making this identification finally reveals the symmetry in the ECA 18 domain [50].
In contrast to this ad hoc description, the 0-wildcard pattern is clearly and immediately identified in the local causal state field S Λ = ε(x Λ ), shown in Figure 8(b). State A occurs on the fixed-0 sites and state B on the wildcard sites. And these states occur in a checkerboard symmetry that tiles the spacetime field. An interesting property of this symmetry group is that it has rotational symmetry, in addition to the time and space translation symmetries. This is a rotation, though, in spacetime. While unintuitive at first, the above discussion shows this spacetime rotational symmetry is not just a coincidence. The 0-wildcard semantics applies for both spacelike and timelike scans through the field.
The DPID invariant-set language for this domain is given in Figure 8(c). Not surprisingly, this is the 0-wildcard language. It is easy to see that φ 18 creates a tiling of 0-wildcard local configurations. Also, note that the transition branching (the wildcard) leaving state A indicates a semigroup algebra. This identifies Λ 18 as a stochastic symmetry domain. We again see a clear correspondence between the local causal state identification of the domain and that of DPID. Both give spatial period s = 2, temporal period p = 2, and recurrence time 1, as there is a single local causal state tiling and a single DPID spatial language, both corresponding to the 0-wildcard pattern.
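The checkerboard tiling and recurrence time 1 can likewise be checked by direct simulation: starting from a configuration with 0s on the even sublattice and arbitrary bits in between, one application of φ 18 places the fixed-0 sites on the odd sublattice, and the next application swaps them back. A minimal sketch, with function names of our own choosing:

```python
# Demonstrate the 0-wildcard checkerboard of the ECA 18 domain: the fixed-0
# sublattice alternates parity on each time step (recurrence time 1).
import random

def eca18_step(config):
    """One synchronous update of ECA rule 18 with periodic boundaries."""
    n = len(config)
    return [
        (18 >> ((config[(i - 1) % n] << 2) | (config[i] << 1) | config[(i + 1) % n])) & 1
        for i in range(n)
    ]

random.seed(0)
n = 64  # even lattice size, so the two sublattices are well defined
x = [0 if i % 2 == 0 else random.randint(0, 1) for i in range(n)]  # 0s on even sites

y = eca18_step(x)
assert all(y[i] == 0 for i in range(1, n, 2))  # fixed-0 sites move to the odd sublattice

z = eca18_step(y)
assert all(z[i] == 0 for i in range(0, n, 2))  # and back to the even sublattice
```

Whatever the wildcard bits are, the zeros propagate to the opposite sublattice every step, which is exactly the checkerboard symmetry picked out by the local causal states.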
ECA 18's structures
ECA 18's two-state domain Λ 18 supports a single type of coherent structure, the α particle, which appears as a phase slip in the spatial period-2 domain and consists of local configurations 10^{2k}1, k = 0, 1, 2, . . .. The domain's stochastic nature drives the αs in an unbiased left-right random walk. When two collide they pairwise annihilate, resolving each α's spatial phase shift. To clarify, the α of ECA 18 has no relation to the α of ECA 54. Figure 9 shows these structures as they evolve from a random initial configuration under Φ 18 . The raw spacetime field is given in Fig. 9(a), with the DPID transducer domain-nondomain filter (bidirectional scan interpolation) in Fig. 9(b) and the local causal state domain-nondomain filter in Fig. 9(c). With the aid of these domain filters, visual inspection shows that ECA 18's structures are, in fact, pairwise annihilating random-walking particles. This was explored in detail by Ref. [52].
As noted above, the domain-structure local causal state analysis for stochastic domain systems is generally more subtle. In the DPID analysis, ECA 18 consists solely of the single domain and random-walking α particle structures. Thus, using the DPID transducer to filter out sites participating in domains leaves only α particles, as done in Fig. 9(b). The situation is more complicated in the local causal state analysis. As described in more detail shortly, filtering out domain states leaves behind more than the structures. Why exactly this happens is the subject of future work. The field shown in Fig. 9(c) was produced from a coherent structure filter, rather than from a domain-nondomain filter. There, local causal states that fit the coherent structure criteria are colored blue and all others are colored white.
To illustrate the more involved local causal state analysis, let's take a closer look at the α particle. This also highlights a major difference between DPID and local causal state analyses. As the DPID transducer is strictly a spatial description, it can identify structures that grow in a single time step to arbitrary size. One artifact of this is that the spatial growth can exceed the speed of local information propagation and thus make structures appear acausal. The local causal states, however, are constructed from lightcones and so naturally take this notion of causality into account. They cannot describe such acausal structures. Accounting for this, though, there is a strong agreement between the two descriptions.
From the perspective of the DPID domain transducer τ, ECA 18's α particles are simple to understand. From the domain language in Fig. 8(c), the domain-forbidden words are those in the regular expression 1(00)*1. That is, pairs of ones with an even number of 0s (including no 0s) in between. This is the description of α particles at the spatial configuration level. The DPID bidirectional scan interpolation domain transducer perfectly captures α described this way; see Fig. 10(a). To aid in visual identification we employed a different color scheme for Fig. 10: the underlying ECA field values are given by green (0) and gray (1) squares. For the DPID transducer filtered field T = τ (x) in Fig. 10(a), overlaid white 0s identify domain sites and black 1s identify particle sites. Every local configuration identified as an α is of the form 1(00)*1. As noted above, however, αs described in this way can grow arbitrarily in size in a single time step, as the number of pairs of zeros in 1(00)*1 is unbounded.
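At the spatial configuration level, this description of α is easy to operationalize: scan a configuration for domain-forbidden words matching 1(00)*1. The sketch below uses an overlapping regular-expression scan, since a single 1 can terminate one α and begin the next; the example configuration and helper name are ours.

```python
# Detect alpha particles as domain-forbidden words 1(00)*1 in a spatial
# configuration, using a zero-width lookahead so overlapping words are found.
import re

def find_alphas(config):
    """Return (start, word) for each maximal 1(00)*1 block, allowing overlaps."""
    return [(m.start(), m.group(1)) for m in re.finditer(r"(?=(1(?:00)*1))", config)]

alphas = find_alphas("001001000011000")
assert alphas == [(2, "1001"), (5, "100001"), (10, "11")]
```

Note that pure 0-wildcard (domain) configurations contain no such word, so this scan is, in effect, a one-line version of the spatial domain-nondomain distinction the transducer draws.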
Local causal state inference-whether topological [86] or probabilistic [49]-is unsupervised in the sense that it uses only raw spacetime field data and no other external information such as the CA rule used to create that spacetime data. Once states are inferred, further steps are needed for coherent structure analysis.
The first step is to identify domain states in the local causal state field S = ε(x). They tile spacetime regions, i.e., domain regions. For explicit-symmetry domain ECAs this step is sufficient for creating a domain-structure filter. Tiled domain states can be easily identified, and all other states outline ECA structures or their interactions. The situation is more subtle, however, for ECAs with stochastic domains. A detailed description of the implementation of additional "decontamination" steps is given in a sequel.
For our purposes here, though, it suffices to strictly apply the definition of coherent structures after this first "out of the box" unsupervised causal filter. The initial unsupervised filtered spacetime diagram identifies a core set of states that are spatially localized and temporally persistent. A coherent structure filter then isolates these states by coloring them black and all other states white in the local causal state field S . The output of this filter is shown in Fig. 10(b). The growth rate of the structures identified in this way-by the local causal states-is limited by the speed of information propagation, which for ECAs is unity. Applying this growth-rate constraint on the DPID structure transducer, one again finds strong agreement. A comparison is shown in Fig. 10(c). It shows the output of the DPID filter applied to the spacetime field of Fig. 10(a) and, in red, sites corresponding to the structure according to the local causal states in Fig. 10(b).
V. DISCUSSION
Having laid out our coherent structure theory and illustrated it in some detail, it is worth looking back, as there are subtleties to highlight. The first is our use of the notion of semantics, which derives from the measurement semantics introduced in Ref. [87]. Performing causal filtering S = ε(x) may at first seem counterproductive, especially for binary fields like those generated by ECAs, as the state space of the system is generally larger in S than in x. As the local state space of ECAs is binary, complexity is manifest in how the sites interact and arrange themselves. Not all sites in the field play the same role. For instance, in ECA 110's domain, Fig. 2(a), the 0s in the field group together to form a triangular shape. This triangle has a bottom-most 0 and a rightmost 0, but they are both still 0s. To capture the semantics of "bottom-most" and "rightmost" 0 of that triangle shape, a larger local state space is needed. And, indeed, this is exactly the manner in which the local causal states capture the semantics of the underlying field. We saw a similar example with the fixed-0 and wildcard semantics of Λ 18 .
The values in the fields S = ε(x) and T = τ (x) are not measures of some quantity, but rather semantic labels. For the local causal states, they are labels of equivalence classes of local dynamical behaviors. For the DPID domain transducer, they label sites as being consistent with the domain language Λ or else as the particular manner in which they deviate from that language.
This, however, is only the first level of semantics used in our coherent structure theory. While the filtered fields S = ε(x) and T = τ (x) capture semantics of the original field x, to identify coherent structures a new level of semantics on top of these filtered fields is needed. These are semantics that identify sites as domain or coherent structure using S and T. For the DPID domain transducer T, the domain semantics are by construction built into T. Our coherent structure definition adds the necessary semantics to identify collections of nondomain sites as participating in a coherent structure.
For the local causal states, one may think of the field S = ε(x) as being the semigroup level of semantics. That is, they represent pattern and structure as generalized symmetries of the underlying field x. This is the same manner in which the ε-machine captures pattern and structure of a stochastic process with semigroup algebra [87]. The next level of semantics, used to identify domains, requires finding explicit symmetries in S. Thus, domain semantics are the group-theoretic level of semantics, since domains are identified by spacetime translation symmetry groups over S. With states participating in those symmetry groups identified, our coherent structure definition again provides the necessary semantics to identify structures in x through S. These remarks hopefully also clarify the interplay between group and semigroup algebras in our development.
Lastly, we highlight the distinction between a CA's local update rule φ and its global update Φ, the CA's equations of motion. For many CAs, as with ECAs, Φ is constructed from simultaneous synchronous application of φ across the lattice. In a sense, then, there is a simple relation between φ and Φ. However, as demonstrated by many ECAs, most notably the Turing-complete ECA 110, the behaviors generated by Φ can be extraordinarily complicated, even though φ is extraordinarily simple. This is why complex behaviors and structures generated by ECAs are said to be emergent.
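The simple φ-to-Φ relation for ECAs, synchronous application of the lookup table at every site, can be stated in a few lines of code. The construction below is a generic sketch; periodic boundaries and the helper names are our choices, not the paper's notation.

```python
# Build the global update Phi from a local lookup table phi (Wolfram numbering).

def make_phi(rule):
    """Local update: 3-site neighborhood -> next site value."""
    return lambda l, c, r: (rule >> ((l << 2) | (c << 1) | r)) & 1

def make_Phi(phi):
    """Global update: simultaneous synchronous application of phi at every site."""
    def Phi(config):
        n = len(config)
        return [phi(config[(i - 1) % n], config[i], config[(i + 1) % n]) for i in range(n)]
    return Phi

Phi_110 = make_Phi(make_phi(110))
assert Phi_110([0, 0, 1, 0, 0]) == [0, 1, 1, 0, 0]
```

The point of the surrounding discussion is precisely that nothing in this ten-line construction hints at the complexity, up to universal computation, of the orbits Φ_110 generates.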
This point is worth emphasizing here due to the relationship between past lightcones and φ i for CAs. Since the local causal states are equivalence classes of past lightcones, they are equivalence classes of the elements of φ i for CAs. Thus, the system's local dynamic is directly embedded in the local causal states. As we saw, the local causal states are capable of capturing emergent behaviors and structures of CAs and so, in a concrete way, they provide a bridge between the simple local dynamic φ and the emergent complexity generated by Φ. Moreover, the correspondence between the local causal state and DPID domain-structure analysis shows the particular equivalence relation over the elements of φ i used by the local causal states captures key dynamical features of Φ, used explicitly by DPID.
The relationship, though, between φ i and Φ captured by the local causal states is not entirely transparent, as most clearly evidenced by the need for behavior-driven reconstruction of the local causal states. Given a CA lookup table φ, one may pick a finite depth i for the past lightcones and easily construct φ i . It is not at all clear, however, how to use Φ to generate the equivalence classes over the past lightcones of φ i that have the same conditional distributions over future lightcones. The only known way to do this is by brute-force simulation and reconstruction, letting Φ generate past lightcone-future lightcone pairs directly.
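Such brute-force reconstruction can be sketched in simplified form: let Φ generate a spacetime field, harvest past lightcone-future lightcone pairs, and group pasts that lead to the same futures. The sketch below is a purely topological, depth-1 caricature; the lightcone shapes, the depth, and the set-valued (rather than distribution-valued) grouping are simplifying assumptions of ours, not the paper's full probabilistic construction.

```python
# Topological depth-1 sketch of brute-force local causal state reconstruction.
from collections import defaultdict

def eca_field(rule, x0, steps):
    """Generate a spacetime field by iterating an ECA rule with periodic boundaries."""
    field = [list(x0)]
    n = len(x0)
    for _ in range(steps):
        x = field[-1]
        field.append([
            (rule >> ((x[(i - 1) % n] << 2) | (x[i] << 1) | x[(i + 1) % n])) & 1
            for i in range(n)
        ])
    return field

def reconstruct_states(field):
    """Group depth-1 past lightcones by the set of depth-1 futures observed after them."""
    n = len(field[0])
    futures = defaultdict(set)
    for t in range(1, len(field) - 1):
        for r in range(n):
            past = (field[t - 1][(r - 1) % n], field[t - 1][r], field[t - 1][(r + 1) % n],
                    field[t][r])
            future = (field[t + 1][(r - 1) % n], field[t + 1][r], field[t + 1][(r + 1) % n])
            futures[past].add(future)
    # Causal-equivalence step: pasts with identical future sets share a state label.
    labels = {}
    for fut in futures.values():
        labels.setdefault(frozenset(fut), len(labels))
    return {past: labels[frozenset(fut)] for past, fut in futures.items()}

# Sanity check: on the all-0 field every past lightcone is identical,
# so reconstruction yields a single local causal state.
states = reconstruct_states(eca_field(18, [0] * 32, 20))
assert len(set(states.values())) == 1
```

Note this is driven entirely by behavior: the rule is used only to generate the field, exactly the point made above that the equivalence classes must be reconstructed from Φ's orbits rather than read off from φ.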
VI. CONCLUSIONS
Two distinct, but closely related, approaches to spatiotemporal computational mechanics were reviewed: DPID and local causal states. From them, we developed a theory of coherent structures in fully discrete dynamical field theories. Both approaches identify special symmetry regions of a system's spatiotemporal behavior-a system's domains. We then defined coherent structures as localized deviations from domains; i.e., coherent structures are locally broken domain symmetries.
The DPID approach defines domains as sets of homogeneous spatial configurations that are temporally invariant under the system dynamic. In 1+1 dimension systems, dynamically important configuration sets can be specified as particular types of regular language. Once these domain patterns are identified, a domain transducer τ can be constructed that filters spatial configurations T t = τ (x t ), identifying sites that participate in domain regions or that are the unique deviations from domains. Finding a system's domains and then constructing domain transducers requires much computational overhead, but full automation has been demonstrated. Once acquired, the domain transducers provide a powerful tool for analyzing emergent structures in discrete, deterministic 1+1 dimension systems. The theory of domains as dynamically invariant homogeneous spatial configurations is easily generalizable beyond this setting, but practical calculation of configuration invariant sets in more generalized settings presents enormous challenges.
The local causal state approach, in contrast, generalizes well, both in theory and in practice, under a caveat of computational resource scaling. It is a more direct generalization of computational mechanics from its original temporal setting. The causal equivalence relation over pasts based on predictions of the future is the core feature of computational mechanics from which the generalization follows. Local causal states are built from a local causal equivalence relation over past lightcones based on predictions of future lightcones. Local causal states provide the same powerful tools of domain transducers, and more. Being equivalence classes of past lightcones, which in the deterministic setting are the system's underlying local dynamic, local causal states offer a bridge between emergent structures and the underlying dynamic that generates them.
In both, patterns and structures are discovered rather than simply recognized. No external bias or template is imposed, and structures at all scales may be uniformly captured and represented. These representations greatly facilitate insight into the behavior of a system, insights that are intrinsic to a system and are not artifacts of an analyst's preferred descriptional framework. ECA 54's γ + + Λ + γ − → β interaction exemplifies this.
DPID domain transducers utilize full knowledge of a system's underlying dynamic and, thus, perfectly capture domains and structures. Local causal states are built purely from spacetime fields and not the equations of motion used to produce those fields. Yet, the domains and structures they capture are remarkably close to the dynamical systems benchmark set by DPID. This is highly encouraging as the local causal states can be uniformly applied to a much wider array of systems than the DPID domain transducers, while at the same time providing a more powerful analysis of coherent structures.
Looking beyond cellular automata, recent years witnessed renewed interest in coherent structures in fluid systems [40,43,110]. There has been particular emphasis on Lagrangian methods, which focus on material deformations generated by the flow. The local causal states, in contrast, are an Eulerian approach, as they are built from lightcones taken from spacetime fields and do not require material transport in the system. A frequent objection raised against Eulerian approaches to coherent structures is that such approaches are not "objective"; that is, they are not independent of an observer's frame of reference. This objection applies to instantaneous Eulerian approaches, however, and so does not apply to local causal states. In fact, lightcones and the local causal equivalence relation over them are preserved under Euclidean isometries. This can be seen from Eqs. (1) and (2), which define lightcones in terms of distances only, so that they are independent of the coordinate reference frame. Local causal states are objective in this sense.
Methods in the Lagrangian coherent structure literature fall into two main categories: diagnostic scalar fields and analytic approaches utilizing one or another mathematical coherence principle. Previous approaches to coherent structures using local causal states relied on the local statistical complexity [57,58]. This is a diagnostic scalar field and comes with all the associated drawbacks of such approaches [44]. The coherent structure theory presented here, in contrast, is the first principled mathematical approach to coherent structures using local causal states.
With science producing large-scale, high-dimensional data sets at an ever increasing rate, data-driven analysis techniques like the local causal states become essential. Standard machine learning techniques, most notably deep learning methods, convolutional neural nets, and the like, are seeing increasing use in the sciences [111,112]. Unlike commercial applications in which deep learning has led to surprising successes, scientific data is highly complex and typically unlabeled. Moreover, interpretability and detecting new mechanisms are key to scientific discovery. With these challenges in mind, we offer local causal states as a unique and valuable tool for discovering and understanding emergent structure and pattern in spatiotemporal systems.

We thank Ryan James and Dmitry Shemetov for help with software development. This material is based upon work supported by, or in part by, the John Templeton Foundation grant 52095, Foundational Questions Institute grant FQXi-RFP-1609, and the U. S. Army Research Laboratory and the U. S. Army Research Office under contract W911NF-13-1-0390, as well as by Intel through its support of UC Davis' Intel Parallel Computing Center.
FIG. 1. Lightcone random variable templates: Past lightcone L − (r0, t0) and future lightcone L + (r0, t0) for present spacetime point X r0 t0 in a 1+1 D field with nearest-neighbor (or radius-1) interactions.
(d). Unlike ECA 110's domain, it is not clear from x Λ22 what the domain symmetries are. It is not even clear there are symmetries present.

FIG. 2. Pure domain spacetime fields for explicit symmetry and hidden symmetry domains shown in (a) and (b) for ECA 110 and ECA 22, respectively. Associated local causal state fields fully display these symmetries in (c) and (d), with each unique color corresponding to a unique local causal state. For ECA 110, lightcone horizons h − = h + = 3 were used and for rule 22 h − = 10 and h + = 4.
FIG. 3. ECA 54 domain: A sample pure domain spacetime field x Λ is shown in (a). This field is repeated with the associated local causal states S Λ = ε(x Λ ) added in (b). Lightcone horizons h − = h + = 3 were used. The DPID spacetime invariant set language is shown in (c). (Reprinted from Ref. [55] with permission.)
FIG. 4. Overview of ECA 54 structures: (a) A sample spacetime field evolved from a random initial configuration. (b) A filter that outputs white for cells participating in domains and black otherwise, using the DPID definition of domain. (c) The analogous domain-nondomain filter that uses the local causal state definition of domain. Lightcone horizons h − = h + = 3 were used.
FIG. 5. ECA 54's α particle: In both (a) and (b) white (0) and black (1) squares display the underlying ECA spacetime field x α . (a) The DPID domain transducer filter T = τ (x α ) output is overlaid atop the spacetime field values of x α . Blue letters are sites participating in domain and red numbers are particular deviations from domain. (b) The local causal state field S = ε(x α ). The eight domain states are given by blue letters, all others by red numbers. In both diagrams, the nondomain sites outline the α particle of rule 54, according to the two different semantic filters. Lightcone horizons h − = h + = 3 were used.
Figure 5(b) displays the local causal state field S = ε(x α ); the eight domain states are given as blue letters, following
FIG. 6. ECA 54's γ + + γ − → β interaction: In both diagrams the white (0) and black (1) squares display the underlying ECA spacetime field x β . (a) The DPID domain transducer filter T = τ (x β ) output is overlaid atop the spacetime field values of x β . Blue letters are sites identified by T as participating in the domain. Colored numbers are sites identified as participating in one of the three remaining structures. The γ + particle is outlined only by red numbers, γ − by yellow numbers, and β by a combination of red, yellow, and orange. (b) The local causal state field S = ε(x β ) is overlaid atop x β . The eight domain states are in blue, and the other nondomain states are colored the same as in (a). Lightcone horizons h − = h + = 3 were used.
FIG. 7. ECA 110 structures: (a) A sample field evolved from a random initial configuration. (b) A local causal state domain-nondomain filter with domain sites in white and nondomain in black. Lightcone horizons h − = h + = 3 were used.

onto a cyclic tag system that emulates a Post tag system which, in turn, emulates a universal Turing machine. A domain-nondomain filter reveals several of ECA 110's particles used in the implementation; see Fig. 7. The ECA 110 domain was displayed in Figs. 2(a) and 2(c), as the example for explicit symmetry domains. The domain has a single phase, rather than two phases like ECA 54's, and requires 14 states, as opposed to ECA 54's combined 8. ECA 110's highly complex behavior surely derives from the heightened complexity of its domain. Exactly how, though, remains an open problem.
FIG. 8. ECA 18 domain: (a) Iterates of a sample pure domain spacetime field x Λ ; white and black are values 0 and 1, respectively. (b) The same domain field with the local causal state field S Λ = ε(x Λ ) overlaid. Lightcone horizons h − = 8 and h + = 3 were used. (c) The finite-state machine M (Λ18) of the DPID invariant set language of the ECA 18 domain Λ18. (Reprinted with permission from Ref. [50].)
FIG. 9. ECA 18 structures: (a) Sample spacetime field evolved under ECA 18 from a random initial configuration. (b) Spacetime field after filtering with domain regions in white and coherent structures in blue, using the DPID domain transducer. (c) Spacetime field filtered with domain regions in white and structures in blue, using local causal states. The occasional gap in the structures is an artifact of using finite-depth lightcones during reconstruction of local causal states. Lightcone horizons h − = 8 and h + = 3 were used.
FIG. 10. Comparative analysis of ECA 18's α particle: In all three spacetime diagrams, the underlying ECA field values of 0 and 1 are represented as green and gray squares, respectively. (a) DPID domain transducer filtered field T = τ (x) with bidirectional scan interpolation. Domain sites are identified with white 0s and particle sites with black 1s. (b) A coherent structure causal filter; local causal state field S = ε(x) with nondomain local causal states satisfying the coherent structure definition colored black and all other states colored white. Lightcone horizons h − = 8 and h + = 3 were used. (c) Comparison of the structures from the two methods: the DPID transducer filter of (a) with sites whose local causal states are identified as the coherent structure in (b) given a red square label.
ACKNOWLEDGMENTS

The authors thank Bill Collins, Ryan James, Karthik Kashinath, John Mahoney, Mr. Prabhat, Paul Riechers, Anastasiya Salova, and Dmitry Shemetov for helpful discussions and feedback, and the Santa Fe Institute for hospitality during visits. JPC is an SFI External Faculty member.
M. Minsky. Computation: Finite and Infinite Machines. Prentice-Hall, Englewood Cliffs, New Jersey, 1967.
N. Chomsky. Three models for the description of language. IRE Trans. Info. Th., 2:113-124, 1956.
W. Heisenberg. Nonlinear problems in physics. Physics Today, 20:23-33, 1967.
E. N. Lorenz. Deterministic nonperiodic flow. J. Atmos. Sci., 20:130, 1963.
E. N. Lorenz. The problem of deducing the climate from the governing equations. Tellus, XVI:1, 1964.
D. Ruelle and F. Takens. On the nature of turbulence. Comm. Math. Phys., 20:167-192, 1971.
N. H. Packard, J. P. Crutchfield, J. D. Farmer, and R. S. Shaw. Geometry from a time series. Phys. Rev. Lett., 45:712, 1980.
F. Takens. Detecting strange attractors in fluid turbulence. In D. A. Rand and L. S. Young, editors, Symposium on Dynamical Systems and Turbulence, volume 898, page 366, Berlin, 1981. Springer-Verlag.
A. Brandstater, J. Swift, H. L. Swinney, A. Wolf, J. D. Farmer, E. Jen, and J. P. Crutchfield. Low-dimensional chaos in a hydrodynamic system. Phys. Rev. Lett., 51:1442, 1983.
J. P. Crutchfield and B. S. McNamara. Equations of motion from a data series. Complex Systems, 1:417-452, 1987.
J. P. Crutchfield and K. Young. Inferring statistical complexity. Phys. Rev. Lett., 63:105-108, 1989.
C. R. Shalizi and J. P. Crutchfield. Computational mechanics: Pattern and prediction, structure and simplicity. J. Stat. Phys., 104:817-879, 2001.
J. P. Crutchfield. Between order and chaos. Nature Physics, 8(January):17-24, 2012.
J. P. Crutchfield. The calculi of emergence: Computation, dynamics, and induction. Physica D, 75:11-54, 1994.
P. W. Anderson. More is different. Science, 177(4047):393-396, 1972.
J. P. Crutchfield. The dreams of theory. WIRES Comp. Stat., 6(March/April):75-79, 2014.
M. Van Dyke. An Album of Fluid Motion. Parabolic Press, Stanford, California, 1982.
P. Ball. The Self-Made Tapestry: Pattern Formation in Nature. Oxford University Press, New York, 1999.
W. G. Flake. The Computational Beauty of Nature: Computer Explorations of Fractals, Chaos, Complex Systems, and Adaptation. Bradford Books, New York, 2000.
U. Nakaya. Snow Crystals: Natural and Artificial. Harvard University Press, Boston, Massachusetts, 1954.
E. Ben-Jacob, I. Cohen, O. Shochet, I. Aranson, H. Levine, and L. Tsimring. Complex bacterial patterns. Nature, 373:556-567, 1995.
I. V. Markov. Crystal Growth for Beginners: Fundamentals of Nucleation, Crystal Growth and Epitaxy. World Scientific, Singapore, third edition, 2017.
R. Hoyle. Pattern Formation: An Introduction to Methods. Cambridge University Press, New York, 2006.
M. C. Cross and H. S. Greenside. Pattern Formation and Dynamics in Nonequilibrium Systems. Cambridge University Press, Cambridge, United Kingdom, 2009.
H. Bénard. Les Tourbillons Cellulaires dans une nappe Liquide Propageant de la Chaleur par Convection: en Régime Permanent. Gauthier-Villars, 1901.
Lord Rayleigh. On convection currents in a horizontal layer of fluid, when the higher temperature is on the under side. Phil. Mag. (Series 6), 32(192):529-546, 1916.
V. Steinberg, G. Ahlers, and D. S. Cannell. Pattern formation and wave-number selection by Rayleigh-Bénard convection in a cylindrical container. Physica Scripta, T9:97, 1985.
G. I. Taylor. Stability of a viscous liquid contained between two rotating cylinders. Phil. Trans. Roy. Soc. Lond. A, 223:289-343, 1923.
P. R. Fenstermacher, H. L. Swinney, and J. P. Gollub. Dynamical instabilities and the transition to chaotic Taylor vortex flow. J. Fluid Mech., 94(1):103-128, 1979.
A. M. Zhabotinsky. A history of chemical oscillations and waves. Chaos, 1(4):379-386, 1991.
A. T. Winfree and S. H. Strogatz. Singular filaments organize chemical waves in three dimensions: I. Geometrically simple waves. Physica D, 8(1):35-49, 1983.
On a peculiar class of acoustical figures; and on certain forms assumed by groups of particles upon vibrating elastic surfaces. M Faraday, Phil. Trans. Roy. Soc. Lond. 121M. Faraday. On a peculiar class of acoustical figures; and on certain forms assumed by groups of particles upon vibrating elastic surfaces. Phil. Trans. Roy. Soc. Lond., 121:299-340, 1831.
Superlattice patterns in surface waves. A Kudrolli, B Pier, J P Gollub, Physica D. 1231-4A. Kudrolli, B. Pier, and J.P. Gollub. Superlattice patterns in surface waves. Physica D, 123(1-4):99-111, 1998.
A Modern Course in Statistical Physics. L E , Wiley-VCHWeinheim, GermanyL.E. Reichl. A Modern Course in Statistical Physics. Wiley-VCH, Weinheim, Germany, 2016.
Optimising principle for non-equilibrium phase transitions and pattern formation with results for heat convection. P Attard, arXiv:1208.5105P. Attard. Optimising principle for non-equilibrium phase transitions and pattern formation with results for heat convection. arXiv:1208.5105.
Information and Self-Organization. H Haken, SpringerNew YorkH. Haken. Information and Self-Organization. Springer, New York, 2016.
Pattern formation outside of equilibrium. M C Cross, P C Hohenberg, Rev. Mod. Phys. 653M. C. Cross and P. C. Hohenberg. Pattern formation outside of equilibrium. Rev. Mod. Phys., 65(3):851-1112, 1993.
The Symmetry Perspective: From Equilibrium to Chaos in Phase Space and Physical Space. M Golubitsky, I Stewart, Birkhäuser200New YorkM. Golubitsky and I. Stewart. The Symmetry Perspec- tive: From Equilibrium to Chaos in Phase Space and Physical Space, volume 200. Birkhäuser, New York, 2003.
Lagrangian based methods for coherent structure detection. M R Allshouse, T Peacock, Chaos. 25997617M. R. Allshouse and T. Peacock. Lagrangian based methods for coherent structure detection. Chaos, 25(9):097617, 2015.
Lagrangian coherent structures. G Haller, Ann. Rev. Fluid Mech. 47G. Haller. Lagrangian coherent structures. Ann. Rev. Fluid Mech., 47:137-162, 2015.
The theory of hurricanes. K A Emanuel, Ann. Rev. Fluid Mech. 231K. A. Emanuel. The theory of hurricanes. Ann. Rev. Fluid Mech., 23(1):179-196, 1991.
Inertial particle dynamics in a hurricane. T Sapsis, G Haller, J. Atmos. Sci. 668T. Sapsis and G. Haller. Inertial particle dynamics in a hurricane. J. Atmos. Sci., 66(8):2481-2492, 2009.
P Holmes, J L Lumley, G Berkooz, C W Rowley, Turbulence, Coherent Structures, Dynamical Systems and Symmetry. Cambridge, United KingdomCambridge University PressP. Holmes, J.L. Lumley, G. Berkooz, and C.W. Rowley. Turbulence, Coherent Structures, Dynamical Systems and Symmetry. Cambridge University Press, Cambridge, United Kingdom, 2012.
A critical comparison of Lagrangian methods for coherent structure detection. A Hadjighasem, M Farazmand, D Blazevski, G Froyland, G Haller, Chaos. 27553104A. Hadjighasem, M. Farazmand, D. Blazevski, G. Froy- land, and G. Haller. A critical comparison of Lagrangian methods for coherent structure detection. Chaos, 27(5):053104, 2017.
Simulation of interannual variability of tropical storm frequency in an ensemble of GCM integrations. F Vitart, J L Anderson, W F Stern, J. Climate. 104F. Vitart, J.L. Anderson, and W.F. Stern. Simulation of interannual variability of tropical storm frequency in an ensemble of GCM integrations. J. Climate, 10(4):745- 760, 1997.
Tropical cyclone-like vortices in a limited area model: Comparison with observed climatology. K Walsh, I G Watterson, J. Climate. 109K. Walsh and I.G. Watterson. Tropical cyclone-like vor- tices in a limited area model: Comparison with observed climatology. J. Climate, 10(9):2240-2259, 1997.
TECA: Petascale pattern recognition for climate science. S Prabhat, V Byna, E Vishwanath, M Dart, W D Wehner, Collins, International Conference on Computer Analysis of Images and Patterns. SpringerPrabhat, S. Byna, V. Vishwanath, E. Dart, M. Wehner, W. D. Collins, et al. TECA: Petascale pattern recog- nition for climate science. In International Conference on Computer Analysis of Images and Patterns, pages 426-436. Springer, 2015.
C M Bishop, Pattern Recognition and Machine Learning. New YorkSpringerC.M. Bishop. Pattern Recognition and Machine Learning. Springer, New York, 2006.
C R Shalizi, Optimal nonlinear prediction of random fields on networks. Discrete Mathematics & Theoretical Computer Science. C.R. Shalizi. Optimal nonlinear prediction of random fields on networks. Discrete Mathematics & Theoretical Computer Science, 2003.
The attractor-basin portrait of a cellular automaton. J E Hanson, J P Crutchfield, J. Stat. Phys. 66J. E. Hanson and J. P. Crutchfield. The attractor-basin portrait of a cellular automaton. J. Stat. Phys., 66:1415 -1462, 1992.
Discovering coherent structures in nonlinear spatial systems. J P Crutchfield, Nonlinear Ocean Waves. Technology, R. Vilela-MendesSingapore; SingaporeWorld ScientificJ. P. Crutchfield. Discovering coherent structures in nonlinear spatial systems. In A. Brandt, S. Ramberg, and M. Shlesinger, editors, Nonlinear Ocean Waves, pages 190-216, Singapore, 1992. World Scientific. also appears in Complexity in Physics and Technology, R. Vilela- Mendes, editor, World Scientific, Singapore (1992).
Attractor vicinity decay for a cellular automaton. J P Crutchfield, J E Hanson, CHAOS. 32J. P. Crutchfield and J. E. Hanson. Attractor vicinity decay for a cellular automaton. CHAOS, 3(2):215-224, 1993.
Turbulent pattern bases for cellular automata. J P Crutchfield, J E Hanson, Physica D. 69J. P. Crutchfield and J. E. Hanson. Turbulent pattern bases for cellular automata. Physica D, 69:279 -301, 1993.
The evolution of emergent computation. J P Crutchfield, M Mitchell, Proc. Natl. Acad. Sci. 92J. P. Crutchfield and M. Mitchell. The evolution of emergent computation. Proc. Natl. Acad. Sci., 92:10742- 10746, 1995.
Computational mechanics of cellular automata: An example. J E Hanson, J P Crutchfield, Physica D. 103J. E. Hanson and J. P. Crutchfield. Computational mechanics of cellular automata: An example. Physica D, 103:169-189, 1997.
Automated pattern discovery-An algorithm for constructing optimally synchronizing multi-regular language filters. C S Mctague, J P Crutchfield, Theo. Comp. Sci. 3591-3C. S. McTague and J. P. Crutchfield. Automated pattern discovery-An algorithm for constructing optimally syn- chronizing multi-regular language filters. Theo. Comp. Sci., 359(1-3):306-328, 2006.
Automatic filters for the detection of coherent structure in spatiotemporal systems. C R Shalizi, R Haslinger, J.-B Rouquier, K L Klinkner, C Moore, Phys. Rev. E. 73336104C.R. Shalizi, R. Haslinger, J.-B. Rouquier, K.L. Klinkner, and C. Moore. Automatic filters for the detection of coherent structure in spatiotemporal systems. Phys. Rev. E, 73(3):036104, 2006.
Multifield visualization using local statistical complexity. H Jänicke, A Wiebel, G Scheuermann, W Kollmann, IEEE Trans. Vis. Comp. Graphics. 136H. Jänicke, A. Wiebel, G. Scheuermann, and W. Koll- mann. Multifield visualization using local statistical com- plexity. IEEE Trans. Vis. Comp. Graphics, 13(6):1384- 1391, 2007.
Towards automatic feature-based visualization. H Jänicke, G Scheuermann, Dagstuhl Follow-Ups. 1H. Jänicke and G. Scheuermann. Towards automatic feature-based visualization. In Dagstuhl Follow-Ups, volume 1. Schloss Dagstuhl-Leibniz-Zentrum fuer Infor- matik, 2010.
LICORS: Light cone reconstruction of states for non-parametric forecasting of spatio-temporal systems. G Goerg, C R Shalizi, arXiv:1206.2398G.M Goerg and C.R. Shalizi. LICORS: Light cone re- construction of states for non-parametric forecasting of spatio-temporal systems. arXiv:1206.2398.
Mixed LICORS: A nonparametric algorithm for predictive state reconstruction. G M Goerg, C R Shalizi, Artificial Intelligence and Statistics. G.M. Goerg and C.R. Shalizi. Mixed LICORS: A non- parametric algorithm for predictive state reconstruction. In Artificial Intelligence and Statistics, pages 289-297, 2013.
Local information transfer as a spatiotemporal filter for complex systems. J T Lizier, M Prokopenko, A Y Zomaya, Phys. Rev. E. 77226110J.T. Lizier, M. Prokopenko, and A.Y. Zomaya. Local in- formation transfer as a spatiotemporal filter for complex systems. Phys. Rev. E, 77(2):026110, 2008.
Information modification and particle collisions in distributed computation. J Lizier, M Prokopenko, A Zomaya, CHAOS. 20337109J. Lizier, M. Prokopenko, and A. Zomaya. Informa- tion modification and particle collisions in distributed computation. CHAOS, 20(3):037109, 2010.
Partial information decomposition as a spatiotemporal filter. B Flecker, W Alford, J M Beggs, P L Williams, R D Beer, CHAOS. 21337104B. Flecker, W. Alford, J. M. Beggs, P. L. Williams, and R. D. Beer. Partial information decomposition as a spatiotemporal filter. CHAOS, 21(3):037104, 2011.
Towards a synergy-based approach to measuring information modification. J T Lizier, B Flecker, P L Williams, Artificial Life (ALIFE), 2013 IEEE Symposium on. IEEEJ.T. Lizier, B. Flecker, and P.L. Williams. Towards a synergy-based approach to measuring information modi- fication. In Artificial Life (ALIFE), 2013 IEEE Sympo- sium on, pages 43-51. IEEE, 2013.
Information flows? A critique of transfer entropies. R G James, N Barnett, J P Crutchfield, Phys. Rev. Lett. 11623238701R. G. James, N. Barnett, and J. P. Crutchfield. Informa- tion flows? A critique of transfer entropies. Phys. Rev. Lett., 116(23):238701, 2016.
Multivariate dependence beyond shannon information. R G James, J P Crutchfield, Entropy. 19531R. G. James and J. P. Crutchfield. Multivariate depen- dence beyond shannon information. Entropy, 19:531, 2017.
W.-K Tung, Group Representations, and Special Functions in Classical and Quantum Physics. PhilidelphiaWorld Scientific Publishing Co IncGroup Theory in Physics: an Introduction to Symmetry PrinciplesW.-K. Tung. Group Theory in Physics: an Introduction to Symmetry Principles, Group Representations, and Special Functions in Classical and Quantum Physics. World Scientific Publishing Co Inc, Philidelphia, 1985.
Inverse Semigroups: the Theory of Partial Symmetries. M V Lawson, World ScientificM.V. Lawson. Inverse Semigroups: the Theory of Partial Symmetries. World Scientific, 1998.
Semi-groups and graphs. B Kitchens, S Tuncel, Israel. J. Math. 53231B. Kitchens and S. Tuncel. Semi-groups and graphs. Israel. J. Math., 53:231, 1986.
Applications of Automata Theory and Algebraic via the Mathematical Theory of Complexity to. J Rhodes, Psychology, Philosophy, Games, and Codes. University of California. C. NehanivWorld Scientific Publishing CompanyBiologyJ. Rhodes. Applications of Automata Theory and Al- gebraic via the Mathematical Theory of Complexity to Biology, Physics, Psychology, Philosophy, Games, and Codes. University of California, Berkeley, California, 1971. C. Nehaniv, editor, World Scientific Publishing Company, Singapore (2009).
Algebraic Automata Theory. W M L Holcombe, Cambridge University PressCambridgeW. M. L. Holcombe. Algebraic Automata Theory. Cam- bridge University Press, Cambridge, 1982.
Introduction to Automata Theory, Languages, and Computation. J E Hopcroft, R Motwani, J D Ullman, Prentice-HallNew Yorkthird editionJ. E. Hopcroft, R. Motwani, and J. D. Ullman. Introduc- tion to Automata Theory, Languages, and Computation. Prentice-Hall, New York, third edition, 2006.
Analyzing the statistical complexity of spatiotemporal dynamics was announced originally, though, in Ref. 108Analyzing the statistical complexity of spatiotemporal dynamics was announced originally, though, in Ref. [11, p. 108].
An Introduction to Symbolic Dynamics and Coding. D Lind, B Marcus, Cambridge University PressNew YorkD. Lind and B. Marcus. An Introduction to Symbolic Dynamics and Coding. Cambridge University Press, New York, 1995.
Information anatomy of stochastic equilibria. S Marzen, J P Crutchfield, Entropy. 169S. Marzen and J. P. Crutchfield. Information anatomy of stochastic equilibria. Entropy, 16(9):4713-4748, 2014.
Informational and causal architecture of continuous-time renewal processes. S Marzen, J P Crutchfield, J. Stat. Phys. 168S. Marzen and J. P. Crutchfield. Informational and causal architecture of continuous-time renewal processes. J. Stat. Phys., 168(a):109-127, 2017.
Time resolution dependence of information measures for spiking neurons: Scaling and universality. S Marzen, M R Deweese, J P Crutchfield, Front. Comput. Neurosci. 9109S. Marzen, M. R. DeWeese, and J. P. Crutchfield. Time resolution dependence of information measures for spik- ing neurons: Scaling and universality. Front. Comput. Neurosci., 9:109, 2015.
Signatures of infinity: Nonergodicity and resource scaling in prediction, complexity, and learning. J P Crutchfield, S Marzen, Phys. Rev. E. 9150106J. P. Crutchfield and S. Marzen. Signatures of infin- ity: Nonergodicity and resource scaling in prediction, complexity, and learning. Phys. Rev. E, 91:050106(R), 2015.
Structure and randomness of continuous-time discrete-event processes. S Marzen, J P Crutchfield, J. Stat. 1692PhysicsS. Marzen and J. P. Crutchfield. Structure and random- ness of continuous-time discrete-event processes. J. Stat. Physics, 169(2):303-315, 2016.
Spectral simplicity of apparent complexity, Part I: The nondiagonalizable metadynamics of prediction. P Riechers, J P Crutchfield, arxiv.org:1705.08042P. Riechers and J. P. Crutchfield. Spectral simplicity of apparent complexity, Part I: The nondiagonalizable metadynamics of prediction. arxiv.org:1705.08042.
Spectral simplicity of apparent complexity, Part II: Exact complexities and complexity spectra. P Riechers, J P Crutchfield, arxiv.org:1706.00883P. Riechers and J. P. Crutchfield. Spectral simplicity of apparent complexity, Part II: Exact complexities and complexity spectra. arxiv.org:1706.00883.
Theory and Algorithms for Hidden Markov Models and Generalized Hidden Markov Models. D R Upper, Ann Arbor, MichiganUniversity of California, BerkeleyPhD thesisPublished by University Microfilms IntlD. R. Upper. Theory and Algorithms for Hidden Markov Models and Generalized Hidden Markov Models. PhD thesis, University of California, Berkeley, 1997. Published by University Microfilms Intl, Ann Arbor, Michigan.
Nearly maximally predictive features and their dimensions. S E Marzen, J P Crutchfield, Phys. Rev. E. 95551301S. E. Marzen and J. P. Crutchfield. Nearly maximally predictive features and their dimensions. Phys. Rev. E, 95(5):051301(R), 2017.
Exact complexity: Spectral decomposition of intrinsic computation. J P Crutchfield, P Riechers, C J Ellison, Phys. Lett. A. 3809J. P. Crutchfield, P. Riechers, and C. J. Ellison. Exact complexity: Spectral decomposition of intrinsic compu- tation. Phys. Lett. A, 380(9-10):998-1002, 2015.
Spatiotemporal computational mechanics. A Rupe, J P Crutchfield, in preparationA. Rupe and J. P. Crutchfield. Spatiotemporal compu- tational mechanics. 2017. in preparation.
J P Crutchfield, Nonlinear Modeling and Forecasting, volume XII of Santa Fe Institute Studies in the Sciences of Complexity. M. Casdagli and S. EubankReading, MassachusettsAddison-WesleySemantics and thermodynamicsJ. P. Crutchfield. Semantics and thermodynamics. In M. Casdagli and S. Eubank, editors, Nonlinear Modeling and Forecasting, volume XII of Santa Fe Institute Studies in the Sciences of Complexity, pages 317 -359, Reading, Massachusetts, 1992. Addison-Wesley.
Statistical mechanics of cellular automata. S Wolfram, Rev. Mod. Phys. 55601S. Wolfram. Statistical mechanics of cellular automata. Rev. Mod. Phys., 55:601, 1983.
Universality in elementary cellular automata. M Matthew, Complex Systems. 151M. Matthew. Universality in elementary cellular au- tomata. Complex Systems, 15(1):1-40, 2004.
Theory of computation: Formal languages, automata, and complexity. J G Brookshear, Benjamin/Cummings, Redwood City, CaliforniaJ. G. Brookshear. Theory of computation: Formal lan- guages, automata, and complexity. Benjamin/Cummings, Redwood City, California, 1989.
Computation theory of cellular automata. S Wolfram, Comm. Math. Phys. 9615S. Wolfram. Computation theory of cellular automata. Comm. Math. Phys., 96:15, 1984.
Self-Organization in Nonequilibrium Systems. G Nicolis, I Prigogine, WileyNew YorkG. Nicolis and I. Prigogine. Self-Organization in Nonequi- librium Systems. Wiley, New York, 1977.
Synergetics, An Introduction. H Haken, SpringerBerlinthird editionH. Haken. Synergetics, An Introduction. Springer, Berlin, third edition, 1983.
J P Sethna, arXiv:9204009Order parameters, broken symmetry, and topology. J. P. Sethna. Order parameters, broken symmetry, and topology. arXiv:9204009.
Physics: Why symmetry matters. M Livio, Nature. 4907421M. Livio. Physics: Why symmetry matters. Nature, 490(7421):472-473, 2012.
Hydrodynamic and Hydromagnetic Stability. S Chandrasekhar, Clarendon PressOxfordS. Chandrasekhar. Hydrodynamic and Hydromagnetic Stability. Oxford, Clarendon Press, 1968.
Dynamics of defects in rayleigh-bénard convection. E D Siggia, A Zippelius, Physical Review A. 2421036E.D. Siggia and A. Zippelius. Dynamics of defects in rayleigh-bénard convection. Physical Review A, 24(2):1036, 1981.
Endomorphisms and automorphisms of the shift dynamical system. G A Hedlund, Theory of Computing Systems. 3G. A. Hedlund. Endomorphisms and automorphisms of the shift dynamical system. Theory of Computing Systems, 3(4):320-375, 1969.
Cellular Automata and Groups. T Ceccherini-Silberstein, M Coornaert, Springer Science & Business MediaT. Ceccherini-Silberstein and M. Coornaert. Cellular Automata and Groups. Springer Science & Business Media, 2010.
Enumerating finitary processes. B D Johnson, J P Crutchfield, C J Ellison, C S Mctague, arxiv.org:1011.0036B. D. Johnson, J. P. Crutchfield, C. J. Ellison, and C. S. McTague. Enumerating finitary processes. arxiv.org:1011.0036.
Bayesian structural inference for hidden processes. C C Strelioff, J P Crutchfield, Phys. Rev. E. 8942119C. C. Strelioff and J. P. Crutchfield. Bayesian structural inference for hidden processes. Phys. Rev. E, 89:042119, 2014.
Unreconstructible at any radius. J P Crutchfield, Phys. Lett. A. 171J. P. Crutchfield. Unreconstructible at any radius. Phys. Lett. A, 171:52 -60, 1992.
Causal Architecture, Complexity and Self-Organization in Time Series and Cellular Automata. C R Shalizi, Madison, WisconsinUniversity of WisconsinPhD thesisC. R. Shalizi. Causal Architecture, Complexity and Self- Organization in Time Series and Cellular Automata. PhD thesis, University of Wisconsin, Madison, Wiscon- sin, 2001.
Unveiling an enigma: Patterns in elementary cellular automaton 22 and how to discover them. J P Crutchfield, C S Mctague, Santa Fe InstituteTechnical ReportJ. P. Crutchfield and C. S. McTague. Unveiling an enigma: Patterns in elementary cellular automaton 22 and how to discover them. Santa Fe Institute Technical Report, 2002.
New mechanism for deterministic diffusion. P Grassberger, Phys. Rev. A. 283666P. Grassberger. New mechanism for deterministic diffu- sion. Phys. Rev. A, 28:3666, 1983.
Applications of ergodic theory and sofic systems to cellular automata. D A Lind, Physica. 1036D. A. Lind. Applications of ergodic theory and sofic systems to cellular automata. Physica, 10D:36, 1984.
The kink of cellular automaton rule 18 performs a random walk. K Eloranta, E Nummelin, J. Stat. Phys. 69K. Eloranta and E. Nummelin. The kink of cellular automaton rule 18 performs a random walk. J. Stat. Phys., 69:1131-1136, 1992.
Particlelike structures and their interactions in spatiotemporal patterns generated by one-dimensional deterministic cellularautomaton rules. N Boccara, J Nasser, M Roger, Phys. Rev. A. 44866N. Boccara, J. Nasser, and M. Roger. Particlelike struc- tures and their interactions in spatiotemporal patterns generated by one-dimensional deterministic cellular- automaton rules. Phys. Rev. A, 44:866, 1991.
The dynamics of defect ensembles in one-dimensional cellular automata. K Eloranta, J. Stat. Phys. 765/6K. Eloranta. The dynamics of defect ensembles in one-dimensional cellular automata. J. Stat. Phys., 76(5/6):1377-1398, 1994.
Analysis of fluid flows via spectral properties of the Koopman operator. I Mezić, Ann. Rev. Fluid Mech. 45I. Mezić. Analysis of fluid flows via spectral properties of the Koopman operator. Ann. Rev. Fluid Mech., 45:357- 378, 2013.
Application of deep convolutional neural networks for detecting extreme weather in climate datasets. Y Liu, E Racah, J Prabhat, A Correa, D Khosrowshahi, K Lavers, M Kunkel, W Wehner, Collins, arXiv:1605.01156Y. Liu, E. Racah, Prabhat, J. Correa, A. Khos- rowshahi, D. Lavers, K. Kunkel, M. Wehner, and W. Collins. Application of deep convolutional neural net- works for detecting extreme weather in climate datasets. arXiv:1605.01156.
Searching for exotic particles in high-energy physics with deep learning. P Baldi, P Sadowski, D Whiteson, Nature Comm. 54308P. Baldi, P. Sadowski, and D. Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature Comm., 5:4308, 2014.
arXiv:2201.12622v1 — https://arxiv.org/pdf/2201.12622v1.pdf
HAND GESTURE RECOGNITION OF DUMB PERSON USING ONE AGAINST ALL NEURAL NETWORK
April 2020
Muhammad Asim Khan
College of Information Engineering, Jiangxi University of Science and Technology, 341000 Ganzhou, China
Lan Hong
Department of Computer Science, Jiangxi University of Science and Technology, 341000 Ganzhou, China
Sajjad Ahmed
University of Camerino, 62032 Camerino, Italy
(IJCSIS) International Journal of Computer Science and Information Security, Vol. 18, No. 04, April 2020
Keywords: Gesture Recognition; Segmentation; Morphology; Modified Directional Weighted Median Filter (MDWMF); Random-Value Impulse Noise (RVIN)
Abstract
We propose a new technique for recognizing the hand gestures of dumb persons in real-world environments. The hand image containing the gesture is preprocessed, and the hand region is segmented by converting the RGB image to the L.a.b color space. Only a few statistical features are needed to classify the segmented image into different classes. An artificial neural network is trained sequentially in a one-against-all fashion; once trained, the system recognizes each class in parallel. The results of the proposed technique are much better than those of existing techniques.
Introduction
In the modern age, artificial intelligence is part of our daily life, and intelligent computing with efficient human-computer interaction (HCI) is becoming a necessity [10]. Image processing and machine learning in particular play a central role in artificial intelligence: difficult tasks such as face recognition, facial expression recognition, and gesture recognition are performed with their techniques. Hand gesture recognition is especially important for dumb persons to express their feelings. Kurtenbach and Hulteen (1990) define a gesture as an informative movement of the body. Compared to other body parts, the hand is a very convenient means to convey a message and an easy way to create understanding between a device and a human [11]. Real-time hand gesture recognition achieves high accuracy across different backgrounds [12]. The distinguishability of hand gestures gives us confidence that an automated classification system can be built.
Waving the hand to say goodbye and pressing a key are very different actions: the wave is a sign that carries a message, whereas pressing a key is simply an action, neither observed nor significant as a gesture [1]. Hand gestures fall into two categories, static and dynamic. In the context of sign language, a gesture is built from a combination of both static and dynamic factors [2].
A. Static Hand Gesture
Static hand gestures are distinguished by hand-position factors: the positions of specific fingers, the thumb, and the palm pattern determine the gesture. A static gesture is represented by a single image.
B. Dynamic Hand Gesture
Dynamic gestures are distinguished by the movement of the hand from the first stroke to the last, and are represented by a sequence of frames. In contact-based analysis, sensor instruments are attached to the user's hand: magnetic field trackers, data gloves, or body suits are used for data collection [2]. The vision-based approach is normally used as an alternative to the glove-based technique because of its low instrumental cost: it requires only a camera for image acquisition and relies on image processing and machine learning algorithms.
Hand gesture recognition has three important parts: segmentation, feature extraction, and recognition. A multi-scale technique has been used to extract the homogeneous hand region and to mine the 2-directional movement path; this segmentation technique performs well on the authors' own data but fails on real images [3]. Another technique segments the hand using skin-color-based geometric moments: the image is converted from RGB to the YCbCr space, seven moment invariants of the segmented image are computed, and the first three moments are used to verify whether the image was accurately segmented [4]. Skin-color-based hand segmentation techniques have also been proposed that compose an efficient color model to segment skin under different lighting environments [5]. One author proposed hand gesture recognition using hierarchical networks combined with image processing, introducing a new dynamic model related to the HHMM topology of DBNs that is core-relevant to offline dynamic recognition systems; the same paper also discusses a Markov-model technique that is feasible for high-level recognition in a dynamic system [6]. A motion-divergence-fields technique has also been introduced, in which the image sequence is converted into hand-gesture motion patterns; a major benefit of this framework is that it is extendable to large databases, for which it achieves high gesture recognition accuracy [7].
For hand gesture recognition, researchers use 3D cameras, for example the Kinect depth sensor [13] and stereo cameras [14, 15]. Different methods are used to segment the hand region from the background with a depth sensor [16]; for sensor-based segmentation it is assumed that the hand region is the object closest to the depth camera in the frame [17]. Ghaleb proposed a system for spotting and recognizing the hand region, using a Hidden Markov Model (HMM) to recognize hand gestures with the help of both color and depth information; the depth information obtained from a stereo camera separates the region of interest from the complex background and from illumination variance in the image. Another proposed method uses a Conditional Random Field (CRF) and an SVM for gesture classification and spotting [15]; there, segmentation of the hand region is achieved through the YCbCr color space and a 3D depth model, and the CRF model recognizes gestures using a combination of Fourier and Zernike moment features. Several feature extraction techniques have been used to improve recognition accuracy [20]; for example, discriminative 2D Zernike moments serve as features for a color dataset [18]. The architecture of a vision-based system has two parts: first, image processing (vision-based) techniques extract features from the required frame; second, machine learning techniques recognize the extracted features [24, 27]. After feature extraction, various machine learning techniques can classify the gesture, including template matching [19], support vector machines (SVMs) [21], radial basis function networks, neural networks (NN), Hidden Markov Models (HMM), Bayesian trees, and clustering algorithms.
PROPOSED WORK
In the subjected technique, a Modified Directional Weighted Median Filter (MDWMF) is used to remove random-valued impulse noise (RVIN), which helps toward accurate segmentation [8]. The image is then converted from RGB (red, green, blue) to the L.a.b color space. Auto-thresholding is applied, followed by morphological operations, after which a Canny edge detector is used. In the proposed method we use 1st order histogram features, and a linear neural network is used for recognition in parallel mode.
Figure 1: Proposed Method
A. Noise Removal In the subjected paper we use the MDWMF technique for the removal of RVIN; this technique uses a second-order derivative for noise detection.
D_n = | I(x + i, y + j) + I(x − i, y − j) − 2 I(x, y) | ...........(1)
where (n, i, j) = {(1, 2, 2), (2, 1, 2), (3, −2, 2), …, (n, −2, 0)} and 0 ≤ n ≤ 20.
We use a 5×5 mask and calculate the intensity difference of the central pixel with its neighbors by applying equation (1). Intensity differences are measured in 21 possible directions and stored in an array. The differences are compared with the threshold values [T1 T2 T3] = [33 23 16] respectively. If the difference in any direction is greater than its threshold, the central pixel is considered noisy, and the following equation is applied to remove the noise.
L(x, y) = median_W { w ⋄ I_Dn } ……. (2)

The above equation (2) is applied to remove the noisy pixel.
Here I_Dn indicates the nth directional difference, W represents the mask, and w is the weight given to the neighbor pixel in the subjected direction. The above process is repeated for three iterations to complete.
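As a rough illustration (not the authors' exact MDWMF), the detection-and-removal rule above can be sketched as follows: compute the absolute second-order difference along each direction of a 5×5 window, flag the centre pixel as noisy when any difference exceeds its threshold, and replace it with the window median (a plain median stands in here for the weighted median of equation (2)). The direction subset and the threshold cycling are illustrative assumptions.

```python
import numpy as np

# Direction offsets (i, j) inside the 5x5 window and the paper's
# thresholds [T1 T2 T3]; both the subset of directions and the way
# thresholds are cycled are assumptions for illustration only.
DIRECTIONS = [(1, 2), (2, 2), (2, 1), (2, 0), (2, -1), (2, -2), (1, -2), (0, 2)]
THRESHOLDS = [33, 23, 16]

def is_noisy(img, x, y):
    """Directional second-derivative noise test at pixel (x, y)."""
    for n, (i, j) in enumerate(DIRECTIONS):
        d = abs(int(img[x + i, y + j]) + int(img[x - i, y - j]) - 2 * int(img[x, y]))
        if d > THRESHOLDS[n % len(THRESHOLDS)]:
            return True
    return False

def denoise(img):
    """One detection/removal pass; the paper iterates this three times."""
    out = img.copy()
    for x in range(2, img.shape[0] - 2):
        for y in range(2, img.shape[1] - 2):
            if is_noisy(img, x, y):
                out[x, y] = np.median(img[x - 2:x + 3, y - 2:y + 3])
    return out
```

On a flat region the second-order differences are zero, so clean pixels pass through untouched, while an isolated impulse is replaced by the local median.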
In equation (4), structuring element B is applied to the image for erosion. After eroding the resultant image, we apply a 6×6 structuring element for the dilation process.
A ⊕ B = ⋃_{b ∈ B} A_b ……. (5)

In equation (5), structuring element B is applied to the image. The image is then converted to binary, and a Canny edge detector is used to obtain finer edges of the segmented region.
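A minimal sketch of the erosion and dilation steps with square structuring elements. The 5×5 erosion and 6×6 dilation kernel sizes come from the text; the implementation itself is our assumption (a production system would typically use a library such as OpenCV or scipy.ndimage).

```python
import numpy as np

def erode(mask, k):
    """A pixel survives only if its whole kxk neighborhood is foreground."""
    pad = k // 2
    padded = np.pad(mask, pad, constant_values=0)
    out = np.ones_like(mask)
    for dx in range(k):
        for dy in range(k):
            out &= padded[dx:dx + mask.shape[0], dy:dy + mask.shape[1]]
    return out

def dilate(mask, k):
    """A pixel becomes foreground if any kxk neighbor is foreground."""
    pad = k // 2
    padded = np.pad(mask, pad, constant_values=0)
    out = np.zeros_like(mask)
    for dx in range(k):
        for dy in range(k):
            out |= padded[dx:dx + mask.shape[0], dy:dy + mask.shape[1]]
    return out
```

Erosion shrinks the binary hand mask (removing specks thinner than the kernel) and the subsequent dilation restores the surviving region roughly to its original extent.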
FEATURE EXTRACTION (1ST ORDER HISTOGRAM)
We use 1st order histogram features to obtain statistical values of the segmented region. In the proposed technique we extract the following features:
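The list of six features is not preserved in this copy of the text; a common choice of six first-order histogram statistics (mean, standard deviation, skewness, kurtosis, energy, entropy) can be sketched as below. Treating this particular set as the paper's features is an assumption.

```python
import numpy as np

def histogram_features(pixels, bins=256):
    """Six first-order statistics of a region's intensity values."""
    pixels = np.asarray(pixels, dtype=float).ravel()
    hist, _ = np.histogram(pixels, bins=bins, range=(0, bins))
    p = hist / hist.sum()                  # normalized first-order histogram
    mu = pixels.mean()
    sd = pixels.std()
    skew = ((pixels - mu) ** 3).mean() / sd ** 3 if sd > 0 else 0.0
    kurt = ((pixels - mu) ** 4).mean() / sd ** 4 if sd > 0 else 0.0
    energy = float((p ** 2).sum())         # concentration of the histogram
    nz = p[p > 0]
    entropy = float(-(nz * np.log2(nz)).sum())
    return np.array([mu, sd, skew, kurt, energy, entropy])
```

The returned 6-vector is what would be fed to the 6-input network described in the recognition section.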
RECOGNITION ONE AGAINST ALL (NEURAL NETWORK)
A combination of different layers makes up an ANN. The major layers are: the input layer, through which we enter the data for further processing; the output layer, which gives the resulting prediction value; and hidden layers, which help in accurate prediction. Weights balance the calculated values to improve accuracy, and an activation function lets the ANN make a decision. In the proposed technique we use 6 input neurons, because we use 6 feature values as input; one hidden layer with 3 neurons (if we increase the number of neurons, the percentage of inaccurate classifications increases); and a single output neuron, because our classification is binary. Weights are initialized to zero, and a binary activation function is used.
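The 6-3-1 network described above can be sketched as a forward pass. The weights below are random placeholders rather than trained values, and a step function stands in for the paper's unspecified "binary activation function".

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 6))   # hidden layer: 3 neurons, 6 inputs
b1 = np.zeros(3)
W2 = rng.normal(size=(1, 3))   # output layer: 1 binary neuron
b2 = np.zeros(1)

def step(z):
    """Binary (threshold) activation."""
    return (z >= 0).astype(float)

def predict(x):
    """Return 0 or 1 for a 6-dimensional feature vector x."""
    h = step(W1 @ x + b1)
    return int(step(W2 @ h + b2)[0])
```

Each of the five one-against-all classifiers would be one such network, trained on its own class-versus-rest labels.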
In the one-against-all technique, the NN algorithm is trained for every class separately: the subjected class is labeled 1 and the remaining four classes are labeled 0. In the proposed technique an NN is trained for each class separately; for testing, results are taken from each trained neural network, stored in an array, and the index of the maximum value identifies the related class. With the combination of preprocessing, feature extraction and suitable recognition, we obtain results close to human behavior. For the assessment of the proposed technique, we use a real dataset obtained with a 7-megapixel camera in different environments with complex backgrounds; mobile camera images were also used, and the proposed technique produced the same results. Our targeted gestures are: Hi, Drinking, Pointing, Me, and Take care. 20 images of size 256×256 are taken for the examination of the technique, and we apply 10-fold validation for analyzing its accuracy.
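The one-against-all decision rule above (store each network's score in an array, take the index of the maximum) can be sketched as follows; `demo_scorers` are hypothetical stand-ins for the five trained binary networks.

```python
# The five target gesture classes from the text.
CLASSES = ["Hi", "Drinking", "Pointing", "Me", "Take care"]

def ova_predict(feature, scorers):
    """One-against-all: evaluate every per-class scorer, take the argmax."""
    scores = [s(feature) for s in scorers]
    return CLASSES[scores.index(max(scores))]

# Hypothetical per-class scorers for demonstration: each fires (1.0)
# only when the first feature value equals its class index.
demo_scorers = [lambda f, k=k: float(f[0] == k) for k in range(5)]
```

In the real system each scorer would be one trained binary neural network returning its confidence for "this class" versus "rest".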
We use 10-fold validation for learning and testing. Out of 100 images, we use 90 images for learning and 10 images for testing to examine the accuracy. In the second iteration, another 10 images are selected for classification and the previously selected images are combined with the training data. We have a dataset of 100 images for each class. MDWMF [8] is used because it has the highest PSNR among existing noise removal techniques, together with an image enhancement step that helps to achieve better segmentation. For gesture recognition we use the one-against-all approach, in which a binary neural network is trained against each class separately and then tested. The percentage of correctly classified images is used to measure the classification result.
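The 10-fold split described above (90 images for learning, 10 for testing per fold, rotating the held-out block each iteration) can be sketched as:

```python
def k_fold_indices(n_items, k=10):
    """Yield (train_idx, test_idx) pairs over range(n_items)."""
    fold = n_items // k
    idx = list(range(n_items))
    for f in range(k):
        test = idx[f * fold:(f + 1) * fold]
        train = idx[:f * fold] + idx[(f + 1) * fold:]
        yield train, test
```

With 100 images per class this yields ten (90, 10) train/test splits; the reported accuracy is the percentage of correctly classified test images across the folds.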
Conclusion
In our proposed method, noise removal plays a very important role in aiding region segmentation. The L.a.b color space, combined with dynamic thresholding, accurately segments the region from a complex background, and we obtain high recognition accuracy using the one-against-all method. Since a neural network produces high accuracy in binary classification, we use it as a binary algorithm applied across multiple classes. In the future we plan to work on video-based hand region detection and gesture recognition.
B. Conversion to L.a.b Space After removal of noise, the image is converted from RGB to the L.a.b color space. Analysis shows that an image in the L.a.b color space is closer to human visibility than in the RGB color space, and the luminance of L.a.b is more suitable for obtaining image detail. In the proposed technique we process the third component 'b'; the image is converted automatically by auto-thresholding, and we then perform a mapping process on the resultant binary image. Equation (3) is used for the mapping process, where I(x, y) represents a pixel of the original image and C(x, y) the binary image pixels. D. Morphological operation Morphology comprises a family of image processing operations that deal with the shape of the image. Structuring elements are used in all morphological operations. In the proposed technique we use only erosion and dilation, with a 5×5 structuring element used for erosion.
A combination of artificial neurons (programs with the same properties as biological neurons) makes up a neural network. Without modeling the biological neural network exactly, an artificial neural network is used to solve problems while acting like a real neural network. Its prediction ability shows the accuracy and strength of the ANN.
Figure 4: (a) original image; (b) random-valued impulse noise image (40%); (c) image after MDWMF is applied; (d) Lab color space converted image; (e) A(:,:,3), the third color component of the Lab image; (f) binary image; (g)
Figure 5: Graph of recognition results of the proposed method.
Figure 7: Result comparison graph.
Table 1: Segmentation results of the proposed method.

Visual results under complex background and light variation:

Recognition results of the proposed method are shown below, obtained with 10-fold cross-validation using the neural network recognition algorithm with the combination of histogram features.

#  Gesture         Accuracy %
1  Hi              98
2  Pointing        95
3  Self-pointing   96
4  Drinking        97
5  Take Care       98

Table 2: Recognition results of the proposed method.
Comparison: Aashni et al. (2017) proposed a technique to recognize hand gestures for human-computer interaction. They use a simple thresholding method to segment the hand region from the background; the drawback of this technique is that it does not work correctly under poor lighting or with a complex background. Our proposed method covers these drawbacks and achieves improved results under poor luminance and in complex backgrounds. The visual segmentation accuracy results are shown in Table 2 above. They use contour extraction and the convex hull as features, and a Haar cascade classifier is used to recognize the gesture. For validation of the proposed technique, the same dataset is used and the accuracy results are compared in the following tables, considering the accuracy of the segmented image.

Gesture            Aashni et al. (2017)   Proposed Algorithm
2 finger gesture   94                     96
3 finger gesture   93                     95
Palm               96                     96
Fist               95                     96
Swipe (dynamic)    85                     93

Table 3: Comparison of the results.
Figure 6: Result comparison graph.

Because we use the one-against-all technique, we divided the gesture group into 5 combinations.

Gesture            Aashni et al. (2017)   Proposed Algorithm
4 finger gesture   92                     95
5 finger gesture   92                     93
Palm               95                     96
Fist               95                     96
Swipe (dynamic)    85                     94

Table 4: Comparison of the results.

(IJCSIS) International Journal of Computer Science and Information Security,
Vol. 18, No. 04, April 2020
https://sites.google.com/site/ijcsis/ ISSN 1947-5500
He Hanwu, Wu Yueming.
Ru Tong, Zheng Detao, "Hand Segmentation for Augmented Reality System," Digital Media and its Application in Museum & Heritages, Second Workshop, pp. 395-401, 10-12 Dec. 2007.
Sushmita Mitra, Tinku Acharya, IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 2007.
Rung-Huei Liang and Ming Ouhyoung (1998), "A Real-time Continuous Gesture Recognition System for Sign Language," 14-16 Apr 1998, Nara, Japan, pp. 558-567, doi: 10.1109/AFGR.1998.671007.
Xiaolong Teng, Bian Wu, Weiwei Yu, Chongqing Liu (2005), "A hand gesture recognition system based on local linear embedding," Journal of Visual Languages and Computing 16 (2005) 442-454.
Ahmad Yahya Dawod, Junaidi Abdullah and Md. Jahangir Alam (2010), "Adaptive Skin Color Model for Hand Segmentation," 2010 International Conference on Computer Applications and Industrial Electronics (ICCAIE 2010), December 5-7, 2010, Kuala Lumpur, Malaysia.
Wei-Hua Rewwang and Chunliang Tung (2008), "Dynamic Hand Gesture Recognition Using Hierarchical Dynamic Bayesian Networks Through Low-Level Image Processing," Proceedings of the Seventh International Conference on Machine Learning and Cybernetics, Kunming, 12-15 July 2008.
Xiaohui Shen, Gang Hua, Lance Williams and Ying Wu (2011), "Motion Divergence Fields for Dynamic Hand Gesture Recognition," IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011), 21-25 March 2011, Santa Barbara, CA, pp. 492-499, doi: 10.1109/FG.2011.5771447.
Ayyaz Hussain, Muhammad Asim Khan, Zia-ul-Qayyum (2011), "Modified Directional Weighted Median Filter," Information Systems Analysis and Synthesis (ISAS).
Doe Hyung Lee and Kwang Seok Hong (2011), "Game Interface using Hand Gesture Recognition," 5th International Conference on Computer Sciences and Convergence Information Technology (ICCIT 2010), Nov. 30 - Dec. 2, 2010, Seoul, pp. 1092-1097, doi: 10.1109/ICCIT.2010.5711226.
Ghosh, D.K.: 'A Framework for Vision-based Static Hand Gesture Recognition'. PhD thesis, NIT Rourkela, 2016.
Cheng, H., Yang, L., Liu, Z.: 'Survey on 3D hand gesture recognition', IEEE Transactions on Circuits and Systems for Video Technology, 2016, 26, (9), pp. 1659-1673.
Narducci, F., Ricciardi, S., Vertucci, R.: 'Enabling consistent hand-based interaction in mixed reality by occlusions handling', Multimedia Tools and Applications, 2016, 75, (16), pp. 9549-9562.
Suarez, J., Murphy, R.R.: 'Hand gesture recognition with depth images: A review'. In: RO-MAN, 2012 IEEE, 2012, pp. 411-417.
Ghaleb, F., Youness, E., Elmezain, M., Dewdar, F.S.: 'Hand gesture spotting and recognition in stereo color image sequences based on generative models', International Journal of Engineering Science and Innovative Technology, 2014, 3, (1), pp. 78-88.
Ghaleb, F.F., Youness, E.A., Elmezain, M., Dewdar, F.S.: 'Vision-based hand gesture spotting and recognition using CRF and SVM', Journal of Software Engineering and Applications, 2015, 8, (07), pp. 313.
Plouffe, G., Cretu, A.M.: 'Static and dynamic hand gesture recognition in depth data using dynamic time warping', IEEE Transactions on Instrumentation and Measurement, 2016, 65, (2), pp. 305-316.
Han, J., Shao, L., Xu, D., Shotton, J.: 'Enhanced computer vision with Microsoft Kinect sensor: A review', IEEE Transactions on Cybernetics, 2013, 43, (5), pp. 1318-1334.
Aowal, M.A., Zaman, A.S., Rahman, S.M., Hatzinakos, D.: 'Static hand gesture recognition using discriminative 2D Zernike moments'. In: TENCON 2014 - 2014 IEEE Region 10 Conference, 2014, pp. 1-5.
Wang, C., Liu, Z., Chan, S.C.: 'Superpixel-based hand gesture recognition with Kinect depth camera', IEEE Transactions on Multimedia, 2015, 17, (1), pp. 29-39.
de Villiers, H.A., van Zijl, L., Niesler, T.R.: 'Vision-based hand pose estimation through similarity search using the earth mover's distance', IET Computer Vision, 2012, 6, (4), pp. 285-295.
Ghosh, D.K., Ari, S.: 'Static hand gesture recognition using mixture of features and SVM classifier'. In: Communication Systems and Network Technologies (CSNT), 2015 Fifth International Conference on, 2015, pp. 1094-1099.
Aashni Haria, Archanasri Subramanian, Nivedhitha Asokkumar, Shristi Poddar, Jyothi S. Nayak, "Hand Gesture Recognition for Human Computer Interaction," 7th International Conference on Advances in Computing & Communications (ICACC-2017), 22-24 August 2017, Cochin, India; Procedia Computer Science 115 (2017).
Lean Karlo Tolentino, Ronnie Serfa Juan, August Thio-ac, Maria Abigail B. Pamahoy (2019), "Static Sign Language Recognition Using Deep Learning," International Journal of Machine Learning and Computing, Vol. 9, No. 6, December 2019.
E. P. Cabalfin, L. B. Martinez, R. C. L. Guevara, and P. C. Naval, "Filipino sign language recognition using manifold learning," in Proc. TENCON 2012 IEEE Region 10 Conference, 2012, pp. 1-5.
P. Mekala, Y. Gao, J. Fan, and A. Davari, "Real-time sign language recognition based on neural network architecture," in Proc. 2011 IEEE 43rd Southeastern Symposium on System Theory, 2011, pp. 195-199.
J. P. Rivera and C. Ong, "Recognizing non-manual signals in Filipino sign language," in Proc. Eleventh International Conference on Language Resources and Evaluation (LREC 2018), 2018, pp. 1-8.
J. P. Rivera and C. Ong, "Facial expression recognition in Filipino sign language: Classification using 3D animation units," in Proc. the 18th Philippine Computing Science Congress (PCSC 2018), 2018, pp. 1-
|
[] |
[
"Unsupervised Learning of Mixture Regression Models for Longitudinal Data",
"Unsupervised Learning of Mixture Regression Models for Longitudinal Data"
] |
[
"Peirong Xu \nCollege of Mathematics and Sciences\nShanghai Normal University\nShanghaiChina\n",
"Heng Peng \nDepartment of Mathematics\nHong Kong Baptist University\nHong KongChina\n",
"Tao Huang [email protected]. \nSchool of Statistics and Management\nSchool of Statistics and Man-agement\nShanghai University of Finance and Economics\nShanghaiChina\n\nShanghai University of Finance and Economics\n200433ShanghaiChina\n",
"Associate Professor,Tao Huang "
] |
[
"College of Mathematics and Sciences\nShanghai Normal University\nShanghaiChina",
"Department of Mathematics\nHong Kong Baptist University\nHong KongChina",
"School of Statistics and Management\nSchool of Statistics and Man-agement\nShanghai University of Finance and Economics\nShanghaiChina",
"Shanghai University of Finance and Economics\n200433ShanghaiChina"
] |
[] |
This paper is concerned with learning of mixture regression models for individuals that are measured repeatedly. The adjective "unsupervised" implies that the number of mixing components is unknown and has to be determined, ideally by data driven tools. For this purpose, a novel penalized method is proposed to simultaneously select the number of mixing components and to estimate the mixture proportions and unknown parameters in the models. The proposed method is capable of handling both continuous and discrete responses by only requiring the first two moment conditions of the model distribution. It is shown to be consistent in both selecting the number of components and estimating the mixture proportions and unknown regression parameters. Further, a modified EM algorithm is developed to seamlessly integrate model selection and estimation. Simulation studies are conducted to evaluate the finite sample performance of the proposed procedure. And it is further illustrated via an analysis of a primary biliary cirrhosis data set.
|
10.1016/j.csda.2018.03.012
|
[
"https://arxiv.org/pdf/1703.06277v2.pdf"
] | 46,920,190 |
1703.06277
|
bdab37134870fc40fd4c42cd98dbb984605e3d95
|
Unsupervised Learning of Mixture Regression Models for Longitudinal Data
8 Jan 2018
Peirong Xu
College of Mathematics and Sciences
Shanghai Normal University
ShanghaiChina
Heng Peng
Department of Mathematics
Hong Kong Baptist University
Hong KongChina
Tao Huang [email protected].
School of Statistics and Management
School of Statistics and Man-agement
Shanghai University of Finance and Economics
ShanghaiChina
Shanghai University of Finance and Economics
200433ShanghaiChina
Associate Professor,Tao Huang
Unsupervised Learning of Mixture Regression Models for Longitudinal Data
8 Jan 2018. arXiv:1703.06277v2 [stat.ME]. Keywords: Unsupervised learning; Model selection; Longitudinal data analysis; Quasi-likelihood; EM algorithm. * Corresponding author.
This paper is concerned with learning of mixture regression models for individuals that are measured repeatedly. The adjective "unsupervised" implies that the number of mixing components is unknown and has to be determined, ideally by data driven tools. For this purpose, a novel penalized method is proposed to simultaneously select the number of mixing components and to estimate the mixture proportions and unknown parameters in the models. The proposed method is capable of handling both continuous and discrete responses by only requiring the first two moment conditions of the model distribution. It is shown to be consistent in both selecting the number of components and estimating the mixture proportions and unknown regression parameters. Further, a modified EM algorithm is developed to seamlessly integrate model selection and estimation. Simulation studies are conducted to evaluate the finite sample performance of the proposed procedure. And it is further illustrated via an analysis of a primary biliary cirrhosis data set.
Introduction
In many medical studies, the marker of disease progression and a variety of characteristics are routinely measured during the patients' follow-up visit to decide on future treatment actions. Consider a motivating Mayo Clinic trial with primary biliary cirrhosis (PBC), wherein a number of serological, clinical and histological parameters were recorded for each of 312 patients from 1974 to 1984. This longitudinal study had a median follow-up time of 6.3 years as some patients missed their appointments due to worsening medical condition of some labs. It is known that PBC is a fatal chronic cholesteric liver disease, which is characterized histopathologically by portal inflammation and immune-mediated destruction of the intrahepatic bile ducts (Pontecorvo, Levinson, and Roth, 1992). It can be divided into four histologic stages, but with nonuniformly affected liver. The diagnosis of PBC is important for the medical treatment with Ursodiol has been shown to halt disease progression and improve survival without need for liver transplantation (Talwalkar and Lindor, 2003). Therefore, one goal of the study was the investigation of the serum bilirubin level, an important marker of PBC progression, in relation to the time and to potential clinical and histological covariates. Another issue that should be accounted for is the unobservable heterogeneity between subjects that may not be explained by the covariates. The changes in inflammation and bile ducts occur at different rates and with varying degrees of severity in different patients, so the heterogeneous patients could potentially belong to different latent groups. To address these problems, there is a demand for mixture regression modeling for subjects on the basis of longitudinal measurements.
There are various research works on mixture regression models for longitudinal outcomes. Compared with heuristic methods such as the k-means method (Genolini and Falissard, 2010), issues like the selection of the number of clusters (or components) can be addressed in a principled way. However, most of them assume a parametric mixture distribution, which may be too restrictive and invalid in practice when the true data-generating mechanism indicates otherwise.
A key concern for the performance of mixture modeling is the selection of the number of components. A mixture with too many components may overfit the data and result in poor interpretations. Many statistical methods have been proposed in the past few decades using information criteria; for example, see Leroux (1992), Roeder and Wasserman (1997), and Hennig (2004). However, the full likelihood is often difficult to specify in formulating a mixture model for longitudinal data, particularly for correlated discrete data.
Instead of specifying the form of distribution of the observations, a quasi-likelihood method (Wedderburn, 1974) gives consistent estimates of parameters in mixture regression models that only needs the relation between the mean and variance of each obser-vation. Inspired by its nice property, in this paper, we propose a new penalized method based on quasi-likelihood for mixture regression models to deal with the above mentioned problems simultaneously. This would be the first attempt to handle both balanced and unbalanced longitudinal data that only requires the first two moment conditions of the model distribution. By penalizing the logarithm of mixture proportions, our approach can simultaneously select the number of mixing components and estimate the mixture proportions and unknown parameters in the semiparametric mixture regression model. The number of components can be consistently selected. And given the number of components, the estimators of mixture proportions and regression parameters can be root-n consistent and asymptotically normal. By taking account of the within-component dispersion, we further develop a modified EM algorithm to improve the classification accuracy. Simulation results and the application to the motivating PBC data demonstrate the feasibility and effectiveness of the proposed method.
The rest of the paper is organized as follows. In Section 2, we introduce a new penalized method for learning semiparametric mixture regression models with longitudinal data. Section 3 presents the corresponding theoretical properties and Section 4 provides a modified EM algorithm for implementation. In Section 5, we assess the finite sample performance of the proposed method via simulation studies. We apply the proposed method to the PBC data in Section 6, and conclude the paper with Section 7. All technical proofs are provided in Appendix.
2 Learning semiparametric mixture of regressions
Model specification
In a longitudinal study, suppose Y_ij is the response variable measured at the jth time point for the ith subject, and X_ij is the corresponding p × 1 vector of covariates, i = 1, …, n, j = 1, …, m_i. Let Y_i = (Y_i1, …, Y_im_i)^T and X_i = (X_i1, …, X_im_i)^T. In general, the observations for different subjects are independent, but they may be correlated within the same subject. We assume that the observations of each subject belong to one of K classes (components) and u_i ∈ {1, …, K} is the corresponding latent class variable. Assume that u_i has a discrete distribution P(u_i = k) = π_k, where π_k, k = 1, …, K, are the positive mixture proportions satisfying Σ_{k=1}^K π_k = 1. Given u_i = k and X_ij, suppose the conditional mean of Y_ij is

µ_ijk ≡ E(Y_ij | X_ij, u_i = k) = g(X_ij^T β_k),    (2.1)

where g is a known link function and β_k is a p-dimensional unknown parameter vector. The corresponding conditional variance of Y_ij is given by

σ²_ijk ≡ var(Y_ij | X_ij, u_i = k) = φ_k V(µ_ijk),    (2.2)

where V is a known positive function and φ_k is an unknown dispersion parameter. In other words, conditioning on X_ij, the response variable Y_ij follows the mixture distribution

Y_ij | X_ij ∼ Σ_{k=1}^K π_k f_k(Y_ij | X_ij^T β_k, φ_k),

where the f_k(Y_ij | X_ij^T β_k, φ_k) are the components' distributions. To avoid identifiability issues, we assume that K is the smallest integer such that π_k > 0 for k = 1, …, K, and (β_a, φ_a) ≠ (β_b, φ_b) for 1 ≤ a < b ≤ K. Denote θ = (β_1^T, …, β_K^T, φ^T, π^T)^T with β_k = (β_k1, …, β_kp)^T, π = (π_1, …, π_{K−1})^T, and φ = (φ_1, …, φ_K)^T.
Under the working independence correlation, the (log) quasi-likelihood of the K-component marginal mixture regression model is

Q(θ) = Σ_{i=1}^n log [ Σ_{k=1}^K π_k exp( Σ_{j=1}^{m_i} q(g(X_ij^T β_k); Y_ij) ) ],    (2.3)
where the function q(µ; y) (McCullagh and Nelder, 1989) satisfies ∂q(µ; y)/∂µ = (y − µ)/V(µ). It is known that, for a generalized linear model with independent data, the quasi-likelihood estimator of the regression coefficient has the same asymptotic properties as the maximum likelihood estimator, while for longitudinal data it is equivalent to the GEE estimator (Liang and Zeger, 1986), which is consistent even when the working correlation structure is misspecified. Therefore, estimation consistency is expected to hold for the K-component marginal mixture regression model (2.1)-(2.2), and this will be validated in Section 3.
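As a numerical illustration of (2.3), the sketch below evaluates the working-independence quasi-likelihood for a Gaussian-type component, taking g as the identity link and V(µ) = 1, so that q(µ; y) = −(y − µ)²/2 up to a constant. The function and variable names are ours, not the authors'.

```python
import numpy as np

def mixture_quasi_loglik(Y, X, betas, pis):
    """Q(theta) of (2.3): Y is a list of length-m_i response vectors,
    X a list of m_i x p design matrices, betas a K x p array, pis the
    K mixture proportions. Identity link, V(mu) = 1."""
    total = 0.0
    for yi, xi in zip(Y, X):
        comp = []
        for beta, pik in zip(betas, pis):
            mu = xi @ beta                        # g is the identity here
            q = -0.5 * np.sum((yi - mu) ** 2)     # sum_j q(g(x'beta); y)
            comp.append(np.log(pik) + q)
        comp = np.array(comp)
        m = comp.max()                            # log-sum-exp for stability
        total += m + np.log(np.exp(comp - m).sum())
    return total
```

The inner log-sum-exp mirrors the log of the weighted sum over components in (2.3), computed stably for very small per-component terms.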
Penalized quasi-likelihood method
For a fixed number of K components, we can maximize the quasi-likelihood function (2.3) by an expectation-maximization (EM) algorithm, which in the E-step computes the posterior probability of the class memberships and in the M-step estimates the mixture proportions and unknown parameters. However, in practice, the number of components is usually unknown and needs to be inferred from the data itself.
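A minimal sketch of the E-step just described: the posterior probability (responsibility) that subject i belongs to component k is proportional to π_k exp{Σ_j q(·)}. Here `comp_loglik` is a hypothetical n × K matrix holding the inner sums Σ_j q(g(X_ij^T β_k); Y_ij).

```python
import numpy as np

def e_step(comp_loglik, pis):
    """Posterior membership probabilities, one row per subject."""
    logw = np.log(pis) + comp_loglik           # n x K unnormalized log-weights
    logw -= logw.max(axis=1, keepdims=True)    # stabilize before exponentiating
    w = np.exp(logw)
    return w / w.sum(axis=1, keepdims=True)    # rows sum to one
```

The M-step would then maximize the responsibility-weighted quasi-likelihood over the mixture proportions and regression parameters.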
For the proposed marginal mixture regression model, the selection of the number of mixing components can be viewed as a model selection problem. Various conventional methods have been proposed based on the likelihood function and information-theoretic criteria. In particular, the Bayesian information criterion (BIC; Schwarz, 1978) is recommended as a useful tool for selecting the number of components (Dasgupta and Raftery, 1998; Fraley and Raftery, 2002). Therefore, a natural idea is to propose a BIC-type criterion for selecting the number of mixing components, where the likelihood function is replaced by the quasi-likelihood function (2.3). But our simulation experience shows that it does not perform as well as the traditional BIC, since (2.3) is no longer a joint density with integral equal to one.
To avoid calculating the normalizing constant, a penalization technique is preferred. By (2.3), intuitively, the kth component would be eliminated if π_k = 0. In the implementation of (2.3), however, the quasi-likelihood function for the complete data (u_ik, Y_i, X_i) involves log π_k rather than π_k, where u_ik denotes the indicator of whether the ith subject belongs to the kth component (see (4.1) in Section 4 for details). Therefore, it is natural to penalize the logarithm of the mixture proportions, log π_k, k = 1, . . . , K. Moreover, note that the gradient of log π_k increases very fast as π_k approaches zero and would dominate the gradients at the nonzero proportions π_l > 0; consequently, the popular L_q types of penalties may not be able to set insignificant π_k to zero. In the spirit of the penalization in Huang et al. (2016), we propose the following penalized quasi-likelihood function
Q_P(θ) = Q(θ) − nλ Σ_{k=1}^K {log(ǫ + π_k) − log(ǫ)}, (2.4)
where λ is a tuning parameter and ǫ is a very small positive constant. Note that log(ǫ + π k ) − log(ǫ) is an increasing function of π k and is shrunk to zero as the mixing proportion π k goes to zero. Therefore, the proposed method (2.4) can simultaneously determine the number of mixture components and estimate mixture proportions and unknown parameters.
Remark 1. The small constant ǫ is introduced to ensure the continuity of the objective function when some of the mixture proportions are shrunk continuously to zero.
Remark 2. The penalty nλ Σ_{k=1}^K {log(ǫ + π_k) − log(ǫ)} in (2.4) would over-penalize large π_k and result in a biased estimator. A more general but slightly more complicated approach is to use n Σ_{k=1}^K {log(ǫ + p_λ(π_k)) − log(ǫ)}, where p_λ(·) is a penalty function that yields estimators with sparsity, unbiasedness, and continuity, as discussed in Fan and Li (2001).
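To see why the log-type penalty can zero out small proportions while barely biasing large ones, one can compare the per-component penalty and its gradient at small and moderate π_k. A minimal sketch (illustrative only; the ǫ and λ values are arbitrary, and the factor n is omitted):

```python
import math

def log_penalty(pi_k, lam, eps=1e-6):
    # One component's penalty from (2.4), without the factor n:
    # lam * (log(eps + pi_k) - log(eps)); zero at pi_k = 0, increasing in pi_k.
    return lam * (math.log(eps + pi_k) - math.log(eps))

def log_penalty_grad(pi_k, lam, eps=1e-6):
    # Gradient lam / (eps + pi_k): very large near zero, which pushes
    # insignificant components out, but nearly flat for moderate pi_k,
    # so surviving components are only mildly shrunk.
    return lam / (eps + pi_k)
```

For instance, with λ = 0.1 and ǫ = 1e-6, the gradient at π_k = 0 is 1e5, while at π_k = 0.5 it is about 0.2, illustrating the thresholding effect discussed above.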
Asymptotic properties
In this section, we first study the asymptotic properties of the maximum quasi-likelihood estimator θ̂ of (2.3) given the number of mixing components, and then establish the model selection consistency of the proposed method (2.4) for the general semiparametric marginal mixture regression model (2.1)-(2.2). For a fixed number of K components, denote the true value of the parameter vector by θ_0. The components of θ_0 are denoted with a subscript, such as π_0k. We assume that the number of subjects n increases to infinity, while the numbers of observations {m_i} form a bounded sequence of positive integers. Let
Ψ(θ; Y i |X i ) = K k=1 π k exp m i j=1 q(g(X T ij β k ); Y ij ) (3.1) and ψ(θ; Y i | X i ) = log(Ψ(θ; Y i | X i )).
We assume the following regularity conditions to derive the asymptotic properties.
C1 The function g(·) has two bounded and continuous derivatives.
C2 The random variables X_ij are uniformly bounded on a compact support A. For θ ∈ Ω, the density function of X_ij^T β_k is positive and satisfies a Lipschitz condition of order 1 on U_k = {u = X_ij^T β_k : X_ij ∈ A, i = 1, . . . , n, j = 1, . . . , m_i}, k = 1, . . . , K.
C3 Ω is compact and θ 0 is an interior point in Ω.
C4 For each θ ∈ Ω, ψ(θ; Y_i | X_i) admits third-order partial derivatives with respect to θ, and there exist functions M_l(X_i, Y_i), l = 0, 1, 2, 3, such that for θ in a neighborhood of θ_0, |ψ(θ; Y_i | X_i) − ψ(θ_0; Y_i | X_i)| ≤ M_0(X_i, Y_i), |∂ψ(θ; Y_i | X_i)/∂θ_j| ≤ M_1(X_i, Y_i), |∂²ψ(θ; Y_i | X_i)/∂θ_j∂θ_k| ≤ M_2(X_i, Y_i), and |∂³ψ(θ; Y_i | X_i)/∂θ_j∂θ_k∂θ_l| ≤ M_3(X_i, Y_i), with E{M_l(X_i, Y_i)} < ∞ for all i = 1, . . . , n.
C5 θ_0 is the identifiably unique maximizer of E{Q(θ)}.

C6 Let A = var{∂ψ(θ_0; Y_i | X_i)/∂θ} and assume the second-derivative matrix B = E{−∂²ψ(θ_0; Y_i | X_i)/∂θ∂θ^T} is positive definite.

Conditions C1-C2 are typical assumptions in the estimation literature, also found in Xu and Zhu (2012) and Xu et al. (2016). Conditions C3-C6 are mild conditions in the literature of mixture models, used in the proofs of weak consistency and asymptotic normality.

Theorem 1. Under conditions C1-C6, the maximum quasi-likelihood estimator θ̂ of (2.3) given the number of components is consistent and has the asymptotic normality

√n(θ̂ − θ_0) →_L N(0, B^{−1}AB^{−1}).
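The limiting covariance B^{−1}AB^{−1} is a standard sandwich form and can be estimated by plugging in per-subject scores and Hessians evaluated at the estimate. A minimal numpy sketch (our illustration, not the paper's code; the inputs are assumed to be precomputed):

```python
import numpy as np

def sandwich_cov(score_grads, hessians):
    """Plug-in sandwich covariance B^{-1} A B^{-1} from Theorem 1.

    score_grads : (n, d) per-subject gradients of psi at the estimate
    hessians    : (n, d, d) per-subject Hessians of psi at the estimate
    """
    A = np.cov(score_grads.T, bias=True)   # empirical variance of the score
    B = -hessians.mean(axis=0)             # minus the average Hessian
    Binv = np.linalg.inv(B)
    return Binv @ A @ Binv
```

Standard errors for θ̂ are then the square roots of the diagonal of this matrix divided by n.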
Next, we study the model selection consistency of the proposed method (2.4) for the marginal mixture regression model (2.1)-(2.2). We assume that there are K_0 true mixture components, K_0 ≤ K, with π_l = 0 for l = 1, . . . , K − K_0 and π_l = π_{0,l−K+K_0} for l = K − K_0 + 1, . . . , K. In the spirit of the locally conic parametrization (Dacunha-Castelle and Gassiat, 1997), define π_l = λ_l η, l = 1, . . . , K − K_0, and π_l = π_{0,l−K+K_0} + ρ_{l−K+K_0} η, l = K − K_0 + 1, . . . , K. Then, the function (3.1) can be rewritten as
Ψ(η, γ; Y_i | X_i) = Σ_{l=1}^{K−K_0} λ_l η f(β_l; Y_i | X_i) + Σ_{k=1}^{K_0} (π_{0k} + ρ_k η) f(β_{0k} + ηδ_k; Y_i | X_i),

where f(β_l; Y_i | X_i) = exp{ Σ_{j=1}^{m_i} q(g(X_ij^T β_l); Y_ij) } and γ = (λ_1, . . . , λ_{K−K_0}, ρ_1, . . . , ρ_{K_0}, β_1^T, . . . , β_{K−K_0}^T, δ_1^T, . . . , δ_{K_0}^T)^T, with the restrictions λ_l ≥ 0, β_l ∈ R^p, l = 1, . . . , K − K_0, δ_k ∈ R^p, ρ_k ∈ R, k = 1, . . . , K_0, Σ_{l=1}^{K−K_0} λ_l + Σ_{k=1}^{K_0} ρ_k = 0 and Σ_{l=1}^{K−K_0} λ_l² + Σ_{k=1}^{K_0} ρ_k² + Σ_{k=1}^{K_0} ‖δ_k‖² = 1. Up to permutation, such a parametrization is locally conic and identifiable. Then, the penalized quasi-likelihood function (2.4) can be rewritten as
Q P (η, γ) ≡ n i=1 log{Ψ(η, γ; Y i | X i )} − nλ K k=1 {log(ǫ + π k ) − log(ǫ)}. (3.2)
To establish the model selection consistency of the proposed method, we need the following additional conditions:
C7 There exists a positive constant ε such that g(X T ij β k ) and V (g(X T ij β k )) are bounded on B = {β : β − β 0 ≤ ε} uniformly in i = 1, . . . , n, j = 1, . . . , m i , k = 1, . . . , K.
C8 Let Σ_ik = cov(Y_i | X_i, u_i = k), and let V_ik be an m_i × m_i diagonal matrix with jth diagonal element σ²_ijk. The eigenvalues of both Σ_ik and V_ik are uniformly bounded away from 0 and infinity.
Condition C7 is analogous to conditions (A2) and (A6) in Wang (2011) and is generally satisfied for marginal models. For example, when the marginal model follows a Poisson regression, V(g(X_ij^T β_k)) = g(X_ij^T β_k) = exp(X_ij^T β_k), which is uniformly bounded on B. Condition C8 is similar to conditions (C3) and (C4) in Huang et al. (2007); it ensures the non-singularity of the covariance matrices and the working covariance matrices.

Theorem 2. Under conditions C1-C8, if lim_{n→∞} √n λ = a and ǫ = o(n^{−1/2}/ log n), where a is a constant, there exists a local maximizer (η̂, γ̂) of (3.2) such that η̂ = O_p(n^{−1/2}) and π̂_k = 0, k = 1, . . . , K − K_0, with probability tending to one. That is, by choosing an appropriate tuning parameter λ and a small constant ǫ, the proposed method (2.4) can select the number of mixing components consistently.

Remark 4. Although in theory we require ǫ = o(n^{−1/2}/ log n), in practice we can update π using (4.3) without choosing ǫ.
Implementation and tuning parameter selection
In this section, we propose an algorithm to implement the proposed method (2.4) and a procedure to select the tuning parameter λ.
Modified EM Algorithm
Since the membership of each subject is unknown, it is natural to use an EM algorithm to implement (2.4). Note, however, that the criterion (2.4) does not involve the component-specific dispersion parameters φ_k, k = 1, . . . , K, so a naive EM algorithm may decrease the classification accuracy by ignoring the within-component dispersion. We therefore propose a modified EM algorithm that accounts for component-specific dispersion.
Let u_ik denote the indicator of whether the ith subject is in the kth class; that is, u_ik = 1 if the ith subject belongs to the kth component, and u_ik = 0 otherwise. If the missing data {u_ik, i = 1, . . . , n, k = 1, . . . , K} were observed, the penalized quasi-likelihood function for the complete data would be
n i=1 K k=1 u ik log π k + m i j=1 q(g(X T ij β k ); Y ij ) − nλ K k=1 {log(ǫ + π k ) − log(ǫ)}. (4.1) Denote Θ = (π T , β T , φ T ) T as the vector of all parameters in the K-component marginal mixture regression model (2.1)-(2.2) with β = (β T 1 , . . . , β T K ) T .
In the E-step, given the current estimate Θ (t) = (π (t)T , β (t)T , φ (t)T ) T , we impute values for the unobserved u ik by
u_ik^{(t+1)} = π_k^{(t)} exp{ Σ_{j=1}^{m_i} q̃(g(X_ij^T β_k^{(t)}), φ_k^{(t)}; Y_ij) } / Σ_{l=1}^K π_l^{(t)} exp{ Σ_{j=1}^{m_i} q̃(g(X_ij^T β_l^{(t)}), φ_l^{(t)}; Y_ij) }, where q̃(µ, φ; y) = ∫_y^µ (y − t)/(φ V(t)) dt.
Plugging them into (4.1), we obtain the function
n i=1 K k=1 u (t+1) ik log π k + m i j=1 q(g(X T ij β k ); Y ij ) − nλ K k=1 {log(ǫ + π k ) − log(ǫ)}. (4.2)
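The E-step above reduces to a softmax over per-component log-weights. The following sketch (ours, not the paper's code) specializes q̃ to the Gaussian case V(µ) = 1, for which q̃(µ, φ; y) = −(y − µ)²/(2φ); the data layout is an illustrative assumption.

```python
import math

def e_step(pi, phi, means, Y):
    """E-step posterior probabilities u_ik, Gaussian marginal V(mu) = 1.

    pi    : current mixing proportions, length K
    phi   : current dispersion parameters, length K
    means : means[i][k] is the list of fitted means g(x_ij' beta_k) for subject i
    Y     : Y[i] is the response list of subject i
    """
    U = []
    for i, Yi in enumerate(Y):
        # log of pi_k * exp(sum_j q~), stabilized with log-sum-exp
        logs = [math.log(pk) + sum(-(y - mu) ** 2 / (2.0 * phik)
                                   for y, mu in zip(Yi, means[i][k]))
                for k, (pk, phik) in enumerate(zip(pi, phi))]
        mx = max(logs)
        w = [math.exp(l - mx) for l in logs]
        s = sum(w)
        U.append([wk / s for wk in w])
    return U
```

Each row of the returned matrix sums to one and feeds directly into (4.2).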
In the M-step, the goal is to update π (t) and β (t) by maximizing (4.2) with the constraint K k=1 π k = 1 and update φ (t) by the residual moment method. Specifically, to update π (t) , we solve the following equations
∂ ∂π n i=1 K k=1 u (t+1) ik log π k − nλ K k=1 {log(ǫ + π k ) − log(ǫ)} − ξ( K k=1 π k − 1) = 0,
where ξ is the Lagrange multiplier. Then, when ǫ is very close to zero, it gives
π_k^{(t+1)} = max{ 0, (1/(1 − λK)) ( n^{−1} Σ_{i=1}^n u_ik^{(t+1)} − λ ) }, k = 1, . . . , K. (4.3)

β_k^{(t)} can be updated by solving the following equations
n i=1 m i j=1 u (t+1) ik g ′ (X T ij β k )X ij Y ij − g(X T ij β k ) V (g(X T ij β k )) = 0,
where g ′ (·) is the first derivative of g, k = 1, . . . , K. And using the residual moment method, we update φ (t) as follows
φ_k^{(t+1)} = [ Σ_{i=1}^n u_ik^{(t+1)} Σ_{j=1}^{m_i} {Y_ij − g(X_ij^T β_k^{(t)})}² / V(g(X_ij^T β_k^{(t)})) ] / [ Σ_{i′=1}^n m_{i′} u_{i′k}^{(t+1)} ], k = 1, . . . , K.
Remark 3. In the initial step, we pre-specify a large number of components, and once a mixing proportion is shrunk to zero by (4.3), the corresponding parameters in this component are set to zero and fewer components are kept for the remaining EM iterations. Here we use the same notation K for the whole process. In practice, during the iterations, K becomes smaller and smaller until the algorithm converges.
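The closed-form proportion update (4.3) is simple to implement; the sketch below is a direct transcription (illustrative, with the small constant ǫ dropped as in Remark 4):

```python
def update_pi(u, lam):
    """Mixing-proportion update (4.3):
    pi_k = max{0, (mean_i u_ik - lam) / (1 - lam * K)}.

    u   : u[i][k] is the current posterior membership of subject i in class k
    lam : tuning parameter lambda
    Components whose proportion hits 0 are dropped from later EM iterations.
    """
    n, K = len(u), len(u[0])
    return [max(0.0, (sum(u[i][k] for i in range(n)) / n - lam) / (1.0 - lam * K))
            for k in range(K)]
```

Note that if no component is thresholded, the updated proportions still sum to one, since the λ terms cancel against the 1 − λK normalizer.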
Tuning Parameter Selection and Classification Rule
In terms of selecting the tuning parameter λ, we follow the suggestion in Wang, Li, and
Tsai (2007) and use a BIC-type criterion:
BIC(λ) = −2 Σ_{i=1}^n log[ Σ_{k=1}^{K̂} π̂_k exp{ Σ_{j=1}^{m_i} q̃(g(X_ij^T β̂_k), φ̂_k; Y_ij) } ] + K̂(p + 2) log n, (4.4)
where K and β are estimators of K 0 and β 0 by maximizing (2.4) for a given λ.
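Selecting λ by (4.4) amounts to computing the BIC value for each candidate fit and taking the minimizer. A schematic sketch (ours): the fit container, with hypothetical fields 'K' and 'qll' for the retained number of components and the fitted quasi-log-likelihood, is an assumed data structure, not part of the paper.

```python
import math

def bic(fit, n, p):
    # BIC-type criterion (4.4): -2 * quasi-log-likelihood + K*(p+2)*log(n).
    # 'qll' is assumed to already include the dispersion-adjusted kernel q~.
    return -2.0 * fit['qll'] + fit['K'] * (p + 2) * math.log(n)

def select_lambda(fits, n, p):
    # Among fits obtained on a grid of lambda values, keep the BIC minimizer.
    return min(fits, key=lambda f: bic(f, n, p))
```

The K(p + 2) log n term charges each retained component for its p regression coefficients plus one proportion and one dispersion parameter.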
Let K̂, π̂, β̂, and φ̂ be the final estimators of the number of components, the mixture proportions, the regression parameters, and the dispersion parameters, respectively. Then, in the sense of clustering, a subject can be assigned to the class whose empirical posterior probability is the largest. For example, a subject (Y*, X*) with m observations is assigned to the class
k* = arg max_{1 ≤ k ≤ K̂} π̂_k exp{ Σ_{j=1}^m q̃(g(X_j^{*T} β̂_k), φ̂_k; Y_j^*) }. (4.5)
Consequently, a natural predictor of Y_j^* is given by g(X_j^{*T} β̂_{k*}).
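The classification rule (4.5) can be sketched as follows for the Gaussian case q̃(µ, φ; y) = −(y − µ)²/(2φ); the identity link and the data layout are illustrative assumptions of ours, not the paper's.

```python
import math

def classify(pi_hat, beta_hat, phi_hat, X_new, Y_new, g=lambda u: u):
    """Assign a new subject to the class maximizing (4.5).

    pi_hat, beta_hat, phi_hat : fitted proportions, coefficients, dispersions
    X_new : list of covariate vectors, one per visit
    Y_new : list of responses, one per visit
    """
    best_k, best_val = None, -math.inf
    for k, (pk, bk, phik) in enumerate(zip(pi_hat, beta_hat, phi_hat)):
        # log pi_k + sum_j q~(g(x_j' b_k), phi_k; y_j) with the Gaussian kernel
        s = sum(-(y - g(sum(x * b for x, b in zip(xj, bk)))) ** 2 / (2.0 * phik)
                for xj, y in zip(X_new, Y_new))
        val = math.log(pk) + s
        if val > best_val:
            best_k, best_val = k, val
    return best_k
```

The fitted means g(X_j^{*T} β̂_{k*}) of the winning class then serve as the predictor of Y*.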
Remark 5. One may argue that β̂ loses some efficiency if the within-subject correlation is strong, and that it would be better to incorporate correlation information to gain estimation efficiency. However, a correlation analysis would incur additional computational cost and increase the chance of convergence problems for the proposed modified EM algorithm. In practice, we suggest estimating β once again given the component information derived from (2.4). Specifically, we first fit the mixture regression model (2.1)-(2.2) and
cluster samples into K classes by (4.5); then, in each class, the marginal generalized linear model is estimated by applying GEE with a working correlation structure. It is expected that this two-step technique may improve the estimation efficiency if the correlation of the longitudinal data is strong and the working structure is correctly specified.
Simulation studies
In this section, we conduct a set of Monte Carlo simulation studies to assess the finite sample performance of the proposed method. The maximum initial number of clusters is set to be ten, the initial value for the modified EM algorithm is estimated by K-means clustering and the tuning parameter λ is selected by the proposed BIC criterion (4.4).
To test the classification accuracy and estimation accuracy, we conduct 1000 replications and compare the method (2.4) with the two-step method mentioned in Remark 5 and the QIFC method proposed by Wang and Qu (2014). QIFC is a supervised classification technique for longitudinal data. To permit comparison, we assume that the true number of components, the true class labels, and the true within-subject correlation are known for the QIFC method. We denote the proposed method and the two-step method by PQL and PQL2, respectively, in the following. Example 1. Motivated by the real data application, we simulate PBC data from a two-component normal mixture as follows. We set n = 300, K = 2, m_i = 6, and π_1 = π_2 = 0.5.
For kth component, the mean structure of each response is set as
E(Y ij ) = β k1 X i1 + β k2 X i2 + β k3 X i3 + β k4 X ij4 ,
and the marginal variance is assumed to be σ²_k. The true values of the regression parameters β_kj's and the marginal variances σ²_k's are given in Table 2. Covariates X_i1 are generated independently from the Bernoulli distribution B(1, 0.5), with 0 for placebo and 1 for D-penicillamine. Covariates X_i2, representing the age of the ith patient at entry in years, are generated independently from the uniform distribution U(30, 80). Covariates X_i3 are randomly sampled from the Bernoulli distribution B(1, 0.5), with 0 for male and 1 for female. For each subject, m_i = 6 visit times Z_ij are generated, with the first time equal to 0 and the remaining five visit times generated from uniform distributions on the intervals (350, 390), (710, 770), (1080, 1160), (1450, 1550), and (1820, 1930) days, respectively. Then, let X_ij4 = Z_ij/30.5 be the jth visit time of the ith subject in months. Further, for each subject, we assume the within-subject correlation structure is AR(1) with correlation coefficient 0.6.

To measure the performance of the proposed tuning parameter selector (4.4), we show the histograms of the estimated component numbers and report the percentage of selecting the correct number of components. To check the convergence of the proposed modified EM algorithm, we draw the evolution of the penalized quasi-likelihood (2.4) in one run. With respect to classification, we generate 100 new subjects from each component with the same setting as in each configuration and measure the performance in terms of the misclassification error rate; we summarize the median and the 95% confidence interval of the misclassification error rate from a model with correctly identified K_0 for PQL and PQL2, and report these quantities for QIFC as well. To measure the performance of the proposed estimators, the mean values of the estimators, the means of the biases, and the mean squared errors (MSE) for the mixture proportions and regression parameters are reported when the number of components K_0 is correctly identified; the corresponding quantities for the QIFC estimators are summarized as a benchmark. Note that label switching might arise in practice; Yao (2015) and Zhu and Fan (2016) proposed feasible labeling methods and algorithms, and in our simulation studies we resolve label switching by putting an order constraint on the components' mean parameters.

Figure 1(a) draws the histogram of the estimated component numbers. It shows that the proposed PQL method with the BIC tuning parameter selector identifies the correct number of components with probability at least 0.962, in accordance with the model selection consistency in Theorem 2. Figure 1(b) depicts the evolution of the penalized quasi-likelihood function (2.4) for the simulated data set in one run, showing how the proposed modified EM algorithm converges numerically.

When the number of components is correctly identified, Table 1(a) reports the median and the 95% confidence interval of the misclassification error rate from the model-based clustering. We can see that the proposed methods perform better than QIFC, with relatively smaller misclassification error rates. Since QIFC is proved asymptotically optimal in terms of misclassification error rate (see Theorem 1 in Wang and Qu, 2014), the observations in Table 1(a) numerically suggest the optimality of the proposed methods in this respect. Further, in terms of parameter estimation, we summarize the estimation of mixture proportions and regression parameters in Table 2. The means of the PQL estimators appear to provide consistent estimates of the regression parameters.
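The within-subject dependence in Example 1 can be generated by drawing each subject's response vector from a multivariate normal with an AR(1) correlation matrix; a minimal numpy sketch (the function names are ours, not the paper's):

```python
import numpy as np

def ar1_corr(m, rho):
    # AR(1) working correlation: R[j, l] = rho^{|j - l|}
    idx = np.arange(m)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def simulate_subject(beta, sigma2, X, rho, rng):
    """One subject's correlated Gaussian responses, Example 1 style:
    mean X @ beta, marginal variance sigma2, AR(1) within-subject correlation."""
    cov = sigma2 * ar1_corr(X.shape[0], rho)
    return rng.multivariate_normal(X @ beta, cov)
```

Mixture data follow by first drawing each subject's component label with probabilities (π_1, π_2) and then calling the simulator with that component's (β_k, σ²_k).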
It is not surprising that, for the regression parameters, the PQL approach does not perform as well as the QIFC method, with larger bias (in absolute value) and MSE, since the QIFC estimators are oracle estimators computed assuming known class memberships and the true within-subject correlation structure. This implies that ignoring the working correlation affects the efficiency of parameter estimation. However, we can improve the estimation efficiency when the correct correlation information is incorporated. This is reflected in the PQL2 estimators, which have much smaller biases (in absolute value) and MSEs than the PQL estimators; indeed, the PQL2 method performs similarly to the QIFC approach.
In addition, combining Table 1(a) and Table 2, we observe that the two-step technique is able to improve the estimation efficiency for the mean regression parameters without reducing the classification accuracy, which numerically validates our conjecture in Remark 5. In general, when the within-subject correlation is strong, it is recommended to use PQL2 to obtain more predictive power by utilizing the within-subject correlation information.
Example 2. By design, the application of the proposed method is not restricted to continuous responses, and we next evaluate the performance of PQL and PQL2 on count responses. We generate correlated count outcomes from a two-component overdispersed Poisson mixture with mixture proportions π_1 = 1/3 and π_2 = 2/3. For component 1, the mean function of the repeated measurements Y_ij is log(µ_ij1) = 3X_ij1 − X_ij2 + X_ij3, i = 1, . . . , n_1, j = 1, . . . , m_i, and the marginal variance is φ_1 µ_ij1 = var(Y_ij) = 2µ_ij1. The correlation structure within a subject is AR(1) with correlation coefficient ρ. For component 2, Y_i has the mean function log(µ_ij2) = 4 − 2X_ij1 + X_ij3, i = 1, . . . , n_2, j = 1, . . . , m_i, with dispersion parameter φ_2 = 1. The number of repeated measurements m_i is randomly drawn from a Poisson distribution with mean 3 and increased by 2, and the sample size is n = 150. Covariates X_ijp are generated independently from the uniform distribution U(0, 1). Two values of ρ are considered, ρ = 0.3 and 0.6, to represent different correlation magnitudes.

Figure 1(c) and (d) depict the histograms of the estimated component numbers for the two correlation magnitudes. They show that our proposed PQL method can identify the correct model in more than 95% of the cases. Even with large within-subject correlation, Figure 3(b) in Appendix B shows that the modified EM algorithm converges numerically with the maximum number of components set to 10. Once the model is correctly selected, the classification accuracy is quite satisfactory, and PQL2 can provide more predictive power, especially for large within-subject correlation. With respect to bias and MSE in the estimation of parameters, Table 5 in Appendix B indicates that the modified EM algorithm gives consistent estimates for the parameters and mixture proportions by considering the within-class dispersions. Similar to Example 1, when the within-subject correlation is large, the PQL2 approach enhances the estimation efficiency by incorporating the correlations within each subject while retaining the class-membership prediction accuracy.

Example 3. In the third example, we consider a five-component Gaussian mixture with AR(1), exchangeable (CS), and independence (IND) correlation structures. This is a more challenging example, with more components having different correlation structures. Specifically, we generate 500 samples with mixture proportions π_1 = π_2 = 0.25, π_3 = π_4 = 0.15 and π_5 = 0.2.
Conditional on the class label u_i, the response vector Y_i is generated from five multivariate normal distributions:

Y_i | u_i = k ∼ MVN(β_{k0} + X_i β_k, σ²_k R_i^{(k)}), k = 1, . . . , 5,

where the within-subject correlation structures are set as R_i^{(1)} = R_i^{AR(1)}(0.6), R_i^{(2)} = R_i^{AR(1)}(0.6), R_i^{(3)} = R_i^{CS}(0.3), R_i^{(4)} = R_i^{CS}(0.3), R_i^{(5)} = R_i^{IND}, and the true values of the regression parameters (β_{k0}, β_k)'s and the variance parameters σ²_k's are given in Table 6. The number of repeated measurements m_i and the covariates are generated as in Example 2. Figure 1(e) draws the histogram of the estimated numbers of components, and Figure 4 depicts the evolution of the penalized quasi-likelihood function (2.4) in one run. Though PQL uses a single (IND) working correlation structure, it is able to identify the correct number of components with high probability, and the corresponding modified EM algorithm converges numerically. Further, the classification results summarized in Table 1(c) show that PQL gives more accurate predictions of class membership than QIFC, which is an oracle procedure that assumes known class memberships and the true, component-specific within-subject correlation structures. Table 6 in Appendix B indicates that, across the different finite mixture correlation models, the PQL estimators remain consistent; they may lose some efficiency, which can be recovered by PQL2.
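The exchangeable (CS) and independence (IND) correlation matrices used in Example 3 can be constructed directly; a minimal numpy sketch (our illustration):

```python
import numpy as np

def cs_corr(m, rho):
    # Exchangeable (compound symmetry): ones on the diagonal, rho off-diagonal
    return (1.0 - rho) * np.eye(m) + rho * np.ones((m, m))

def ind_corr(m):
    # Working independence
    return np.eye(m)
```

Together with an AR(1) builder of the form R[j, l] = rho^{|j−l|}, these cover all five component structures R_i^{(1)}, . . . , R_i^{(5)}.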
Application to primary biliary cirrhosis data
In this section, we apply the proposed method to study a double-blinded randomized trial in primary biliary cirrhosis (PBC) conducted by the Mayo Clinic between 1974 and 1984 (Dickson, Grambsch, Fleming, Fisher, and Langworthy, 1989). This data set consists of 312 patients who consented to participate in the randomized placebo-controlled trial with D-penicillamine for treating primary biliary cirrhosis until April 1988. Each patient was supposed to have measurements taken at 6 months, 1 year, and annually thereafter. However, 125 of the original 312 patients had died by the updating of follow-up in July 1986, and of the remainder, a sizable portion of patients missed their measurements because of worsening medical conditions, which resulted in an unbalanced data structure. A number of variables were recorded for each patient, including ID number, time variables such as age and the number of months between enrollment and each visit date, categorical variables such as drug, gender and status, and continuous measurement variables such as the serum bilirubin level. PBC is a rare but fatal chronic cholestatic liver disease, with a prevalence of about 50 cases per million population. Affected patients are typically middle-aged women; in this data set, the sex ratio is 7.2 : 1 (women to men), and the median age of the women patients is 49 years. Identification of PBC is crucial to balancing the need for medical treatment to halt disease progression and extend survival without liver transplantation, while minimizing drug-induced toxicities. Biomedical research indicates that serum bilirubin concentration is a primary indicator for evaluating and tracking liver disease: it is generally normal at diagnosis (0.1∼1 mg/dl) but rises with histological disease progression (Talwalkar and Lindor, 2003). Therefore, we concentrate on modeling the relationship between the marker serum bilirubin and other covariates of interest.

We first standardize the data so that there is no intercept term in model (6.1). Then, we apply the proposed method to simultaneously select the number of components and estimate the mixture proportions and unknown parameters. As in the simulation studies, the maximum initial number of clusters is set to ten and the initial value for the modified EM algorithm is estimated by K-means clustering. The proposed method identifies 2 groups, which is the same as the clinical classification, while LMM favors 3 groups. Figure 5 in Appendix B depicts the boxplots of residuals in these three groups. The boxplots exhibit a heavy-tailed phenomenon for the residuals, especially for the patients in Group 1. This implies that the normality assumptions for the random effects and errors are inappropriate for modeling this data set. A misspecified distribution of random quantities in the model can seriously influence the parameter estimates as well as their standard errors, subsequently leading to invalid statistical inferences. Therefore, it is better to use the proposed semiparametric mixture regression model, which only requires the first two moment conditions of the model distribution. To check the stability of the proposed method, we run our method for 100 replications. To be specific, the variable "status" is a ternary variable with 0 for censored, 1 for liver transplanted and 2 for dead; it describes the status of a patient at the endpoint of the cohort study. For each run, we randomly draw 80% of the patients within each of these three statuses without replacement. Figure 1(f) shows that our proposed method selects two groups with high probability.
The resulting estimators of the parameters and mixture proportions, along with their standard deviations, are shown in Table 3. One scientific question of this cohort study is whether the drug D-penicillamine effectively slows the rate of increase in serum bilirubin level. The estimates and standard deviations for the covariate "Trt" in Table 3 imply that D-penicillamine offers little benefit in lowering the rate of increase in serum bilirubin level, and possibly even a harmful effect, which is in accordance with findings in other literature (e.g., Hoofnagle, David, Schafer, Peters, Avigan, Pappas, Hanson, Minuk, Dusheiko, and Campbell, 1986; Pontecorvo, Levinson, and Roth, 1992).
Another goal of this study is to identify groups of patients with similar characteristics by using the values of the marker serum bilirubin and to see how the bilirubin levels evolve over time. Figure 6 in Appendix B depicts the fitted mean profiles in the two identified groups, showing the increasing trend of bilirubin levels in both groups. The estimates and standard deviations of the parameters are given in Table 3, and the classification results are summarized in Table 4. For comparison purposes, the fitting results and classification results of the QIFC method are also presented in Tables 3 and 4, respectively. It can be observed that the proposed method provides more accurate classification performance than QIFC.
Conclusion
In this paper, we have proposed a penalized method for learning mixture regression models from longitudinal data, which is able to select the number of components in an unsupervised manner. Numerical studies show that it works well, while its theoretical consistency deserves further study.
Another issue is the consideration of the within-subject correlation. The proposed penalized approach is introduced under the working independence correlation. Simulation results have implied that it may lose some estimation efficiency, especially when the within-subject correlation is large. Therefore, we suggest a two-step technique to refine the estimates. Simulations show that the efficiency improvement is significant if the correlation information is incorporated and the working structure is correctly specified.
It would be worthwhile to systematically study the unsupervised learning of mixtures by incorporating correlations.
Finally, in the presence of missing data at some time points, our implicit assumption is missing completely at random, under which the quasi-likelihood method yields consistent estimates (Liang and Zeger, 1986). Such an assumption is applicable to our motivating example, as patients missed their measurements due to administrative reasons. However, when the missing values are informative, the proposed method has to be modified to incorporate the missing mechanism. This is beyond the current scope of the work and warrants further investigation.
A. Proofs

Proof of Theorem 1. Recall from (3.1) that Ψ(θ; Y_i | X_i) = Σ_{k=1}^K π_k exp{ Σ_{j=1}^{m_i} q(g(X_ij^T β_k); Y_ij) } and ψ(θ; Y_i | X_i) = log Ψ(θ; Y_i | X_i). Under Condition C5, θ_0 is the maximizer of n^{−1} Σ_{i=1}^n E{ψ(θ; Y_i | X_i) − ψ(θ_0; Y_i | X_i)}.
Then θ_0 is identifiably unique. Therefore, in the spirit of Theorem 17 in Ferguson (1996) and Theorem 2.1 in Bollerslev and Wooldridge (1992), θ̂ is weakly consistent under Conditions C1-C5. Let θ̂* = √n(θ̂ − θ_0). Then θ̂* maximizes

Q_n(θ*) = Σ_{i=1}^n {ψ(n^{−1/2} θ* + θ_0; Y_i | X_i) − ψ(θ_0; Y_i | X_i)}.
An application of Taylor expansion yields

Q_n(θ*) = (1/√n) Σ_{i=1}^n {∂ψ(θ_0; Y_i | X_i)/∂θ} θ* + (1/2) θ*^T { (1/n) Σ_{i=1}^n ∂²ψ(θ_0; Y_i | X_i)/∂θ∂θ^T } θ* + o_p(1) ≡ D_n θ* + (1/2) θ*^T B_n θ* + o_p(1). (7.1)

By the central limit theorem, D_n converges in distribution to N(0, A) under Condition C6, and by the law of large numbers, B_n converges in probability to −B. The asymptotic normality √n(θ̂ − θ_0) →_L N(0, B^{−1}AB^{−1}) then follows by standard arguments, which completes the proof of Theorem 1.

Proof of Theorem 2. For a data pair (Y, X) with m observations, define

Ψ_0(Y | X) = Σ_{k=1}^{K_0} π_{0k} f(β_{0k}; Y | X).
Let D be the subset of functions of form
d(γ; Y | X) = K 0 k=1 π 0k p j=1 δ kj D 1 j f (β 0k ; Y | X) Ψ 0 (Y | X) + K−K 0 l=1 λ l f (β l ; Y | X) Ψ 0 (Y | X) + K 0 k=1 ρ k f (β 0k ; Y | X) Ψ 0 (Y | X) ,
where D¹_j f(β_{0k}; Y | X) is the first derivative of f(β_{0k}; Y | X) with respect to the jth component of β_{0k}. We first show that Q_P(η, γ) − Q_P(0, γ) < 0 with probability tending to one when η = C/√n for a sufficiently large constant C. Let η = C/√n, and note that
Q P (η, γ) − Q P (0, γ) = n i=1 {log Ψ(η, γ; Y i | X i ) − log Ψ 0 (Y i | X i )} −nλ K l=K−K 0 +1 {log(ǫ + π l ) − log(ǫ + π 0(l−K+K 0 ) )} − nλ K−K 0 k=1 {log(ǫ + π k ) − log(ǫ)}.
Then,
Q P (η, γ) − Q P (0, γ) ≤ n i=1 {log Ψ(η, γ; Y i | X i ) − log Ψ 0 (Y i | X i )} −nλ K l=K−K 0 +1
{log(ǫ + π l ) − log(ǫ + π 0(l−K+K 0 ) )} := S 1 + S 2 .
For S 1 , an application of Taylor expansion yields
S 1 = n i=1 Ψ(η, γ; Y i | X i ) − Ψ 0 (Y i | X i ) Ψ 0 (Y i | X i ) − 1 2 n i=1 Ψ(η, γ; Y i | X i ) − Ψ 0 (Y i | X i ) Ψ 0 (Y i | X i ) 2 + 1 3 n i=1 t i Ψ(η, γ; Y i | X i ) − Ψ 0 (Y i | X i ) Ψ 0 (Y i | X i ) 3
for η = C/√n, where |t_i| ≤ 1. By a Taylor expansion of Ψ(η, γ; Y | X) at η = 0, we have

Ψ(η, γ; Y | X) = Ψ_0(Y | X) + η Ψ′(0, γ; Y | X) + (η²/2) Ψ″(θ̄, γ; Y | X)

for some θ̄ with 0 ≤ θ̄ ≤ η. Then, by conditions C1-C5, we have
S_1 = [ Σ_{i=1}^n η Ψ′(0, γ; Y_i | X_i)/Ψ_0(Y_i | X_i) − (1/2) Σ_{i=1}^n η² {Ψ′(0, γ; Y_i | X_i)/Ψ_0(Y_i | X_i)}² ] (1 + o_p(1)).

By Lemma 7.1 applied to the class D, (1/√n) Σ_{i=1}^n Ψ′(0, γ; Y_i | X_i)/Ψ_0(Y_i | X_i) converges uniformly in distribution to a Gaussian process, and Σ_{i=1}^n {Ψ′(0, γ; Y_i | X_i)/Ψ_0(Y_i | X_i)}² = O_p(n) by the law of large numbers. Therefore,

S_1 = (C/√n) O_p(√n) − (C²/n) O_p(n).
For S 2 , we know that |π l − π 0(l−K+K 0 ) | = |ηρ l−K+K 0 | ≤ C √ n , l = K − K 0 + 1, . . . , K, by the restriction condition on ρ k , k = 1, . . . , K 0 . Thus, by Taylor expansion, we have
|S 2 | = nλ K l=K−K 0 +1 π l − π 0(l−K+K 0 ) ǫ + π 0(l−K+K 0 ) {1 + o(1)} = O( √ n) CK 0 √ n {1 + o(1)} = O(C), if √ nλ → a.
Therefore, when C is large enough, the second term in S 1 dominates S 2 and other terms in S 1 . Consequently, we have Q P (η, γ) − Q P (0, γ) < 0 with probability tending to one. Hence, there exists a maximizer (η, γ) such that η = O p (n −1/2 ) with probability tending to one.
Then, we show that K = K 0 or π k = 0, k = 1, . . . , K − K 0 when the maximizer (η, γ) satisfies η = O p (n −1/2 ). We first show that, for any maximizer Q P (η * , γ * ) with |η * | ≤ Cn −1/2 , if there is a k ≤ K − K 0 such that Cn −1/2 ≥ π * k > n −1/2 / log n, there exists another maximizer of Q P (η, γ) in the area of |η| ≤ Cn −1/2 . It is equivalent to show that Q P (η * , γ * ) < Q P (0, γ * ) holds with probability tending to one for any such kind maximizer Q P (η * , γ * ) with |η * | ≤ Cn −1/2 . For any k < K − K 0 + 1, we have
Q P (η * , γ * ) − Q P (0, γ * ) ≤ n i=1 {log Ψ(η * , γ * ; Y i | X i ) − log Ψ 0 (Y i | X i )} −nλ K l=K−K 0 +1 {log(ǫ + π * l ) − log(ǫ + π 0(l−K+K 0 ) )} − nλ{log(ǫ + π * k ) − log ǫ} := S 1 + S 2 + S 3 .
As shown before, we have S_1 + S_2 = O_p(C²). For S_3, because ǫ = o(n^{−1/2}/ log n) and π*_k > n^{−1/2}/ log n, we have

|S_3| = nλ {log(ǫ + π*_k) − log ǫ} = O(n^{1/2}) log(π*_k/ǫ), which diverges,
which implies that S 3 dominates S 1 and S 2 , and hence Q P (η * , γ * ) < Q P (0, γ * ). So, in the following step, we only need to consider the maximizer Q P ( η, γ) with | η| ≤ Cn −1/2 and π k < n −1/2 / log n for k ≤ K − K 0 .
Let Q * (θ) = Q P (θ) − ξ( K k=1 π k − 1), where ξ is a Lagrange multiplier. Then, it is sufficient to show that, for the maximizer (η, γ),
∂Q * (θ) ∂ π k < 0 for π k < 1 √ n log n , k ≤ K − K 0 (7.2)
with probability tending to one. For k = 1, . . . , K, note that π k satisfies
∂Q * (θ) ∂ π k = n i=1 f k (β k ; Y i | X i ) K l=1 π l f l (β l ; Y i | X i ) − nλ 1 ǫ + π k − ξ = 0, (7.3) where f l (β l ; Y i | X i ) = exp m i j=1 q(g(X T ij β l ); Y ij )
. By the law of large numbers, the first term of (7.3) is of order O p (n). If k > K − K 0 and η = O p (n −1/2 ), we have that
π k = π 0(k−K+K 0 ) + O p (n −1/2 ) > 1 2 min{π 01 , . . . , π 0K 0 }.
Hence, the second term of (7.3) is of order O p (nλ) = o p (n). Thus, ξ = O p (n). If k ≤ K − K 0 , since π k = O p (n −1/2 / log n), λ = a/ √ n and ǫ = o(n −1/2 / log n), we have
(1/n) · nλ/(ǫ + π̂_k) = λ/(ǫ + π̂_k) = O_p(λ · n^{1/2} log n) → ∞
with probability tending to one. Hence, the second term in (7.3) dominates the first and third terms when k ≤ K − K 0 and π k < n −1/2 / log n, which implies that (7.2) holds or equivalently, π k = 0, k = 1, . . . , K − K 0 with probability tending to one. This completes the proof of Theorem 2.
B. Tables and Graphs
Figure 1: Histograms of estimated numbers of components by the proposed PQL method: (a) Example 1, (c) Example 2 with ρ = 0.3, (d) Example 2 with ρ = 0.6, (e) Example 3. The value on top of each bar is the percentage of selecting the corresponding number of components. Panel (b) shows the evolution of the penalized quasi-likelihood function for the simulated data set in Example 1 in one typical run. Panel (f) shows the histogram of the estimated number of components based on 1000 replications in the PBC data.
Figure 1(c) and (d) depict the histograms of the estimated component numbers with different correlation magnitudes. It shows that our proposed PQL method can identify the correct model in more than 95% of cases. Even with large within-subject correlation, Figure 3(b) in Appendix B shows that the modified EM algorithm converges numerically with the maximum number of components set to 10. Once the model is correctly selected, the classification accuracy is quite satisfactory.
of the parameter estimates; (c) means of the biases for the mixture proportions and mixture parameters; (d) mean squared errors (MSE) for the mixture proportions and mixture parameters. The values of bias and MSE are times 100. For the proposed PQL and PQL2 methods, the results below are summarized based on the models with correctly specified K_0 in 1000 replications.

The modified EM algorithm gives consistent estimates for parameters and mixture proportions by considering the within-class dispersions. Similar to that in Example 1, when the within-subject correlation is large, the PQL2 approach enhances the estimation efficiency by incorporating the correlations within each subject while retaining the class membership prediction accuracy.
Example 3. In the third example, we consider a five-component Gaussian mixture of AR(1), exchangeable (CS), and independence (IND)
Figure 1(e) draws the histogram of estimated numbers of components and Figure 4 depicts the evolution of the penalized quasi-likelihood function (2.4) in one run. Though PQL uses a single correlation structure (IND), it is able to identify the correct number of components with high probability, and the corresponding modified EM algorithm converges numerically. Further, the classification results summarized in
trial in primary biliary cirrhosis (PBC) conducted by the Mayo Clinic between 1974 and 1984 (Dickson, Grambsch, Fleming, Fisher, and Langworthy, 1989). This data set consists of 312 patients who consented to participate in the randomized placebo-controlled trial of D-penicillamine for treating primary biliary cirrhosis until April 1988. Each patient was supposed to have measurements taken at 6 months, 1 year, and annually thereafter. However, 125 of the original 312 patients had died by the updating of follow-up in July 1986. Of the remainder, a sizable portion of patients missed their measurements because of worsening medical condition, which resulted in an unbalanced data structure. A number of variables were recorded for each patient, including ID number, time variables such as age and the number of months between enrollment and each visit date, categorical variables such as drug, gender, and status, and continuous measurement variables such as the serum bilirubin level. PBC is a rare but fatal chronic cholestatic liver disease, with a prevalence of about 50 cases per million population. Affected patients are typically middle-aged women. In this data set, the sex ratio is 7.2 : 1 (women to men), and the median age of the women patients is 49 years. Identification of PBC is crucial to balancing the need for medical treatment to halt disease progression and extend survival without need for liver transplantation, while minimizing drug-induced toxicities. Biomedical research indicates that serum bilirubin concentration is a primary indicator to help evaluate and track the absence of liver diseases. It is generally normal at diagnosis (0.1∼1 mg/dl) but rises with histological disease progression (Talwalkar and Lindor, 2003). Therefore, we concentrate on modeling the relationship between the marker serum bilirubin and other covariates of interest.
We set the log-transformed serum bilirubin level (lbili) as the response variable, since the original level has positive observed values (Murtaugh, Dickson, van Dam, Malinchoc, Grambsch, Langworthy, and Gips, 1994). Figure 2(a) depicts the plot of a set of observed transformed longitudinal profiles of the serum bilirubin marker. It shows that the trends of the profiles vary over time and the variability may be large across patients. The median age of the 312 patients is 50 years, but varies between 26 and 79 years. The two-sample t-test indicates that there exists a significant difference in mean age between the male and female groups (p-value = 0.001). Therefore, we consider the marginal semiparametric mixture regression model (2.1)-(2.2) with the identity link. The mean structure in the kth component takes the form E(Y_ij) = β_k1 Trt_ij + β_k2 Age_ij + β_k3 Sex_ij + β_k4 Time_ij, (6.1) and the marginal variance is assumed to be var(Y_ij) = σ²_k, i = 1, . . . , 312, j = 1, . . . , m_i, where the variable Sex is binary with 0 for male and 1 for female, and the variable Time is the number of months between enrollment and the visit date.
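To make the mean structure (6.1) concrete, here is a small Python sketch that evaluates the fitted component means under the identity link. The helper name is ours; the coefficients plugged in are the PQL point estimates reported later in Table 3 (Group 0 and Group 1), so this only illustrates how (6.1) is used, not the estimation itself:

```python
def mean_lbili(beta, trt, age, sex, time_m):
    """Component mean of log-bilirubin under model (6.1), identity link:
    E(Y_ij) = b1*Trt + b2*Age + b3*Sex + b4*Time, with Time in months."""
    b1, b2, b3, b4 = beta
    return b1 * trt + b2 * age + b3 * sex + b4 * time_m

beta_g0 = (0.084, -0.016, -0.366, 0.068)   # PQL estimates, Group 0 (Table 3)
beta_g1 = (-0.097, -0.051, 3.220, 0.313)   # PQL estimates, Group 1 (Table 3)

# For a fixed patient profile, the fitted 12-month change is driven by the
# time coefficient beta_k4, which separates the two prognosis groups.
slope_g0 = mean_lbili(beta_g0, 1, 50, 1, 12) - mean_lbili(beta_g0, 1, 50, 1, 0)
slope_g1 = mean_lbili(beta_g1, 1, 50, 1, 12) - mean_lbili(beta_g1, 1, 50, 1, 0)
```

Consistent with the discussion of Table 3 below, the Group 1 time coefficient (0.313) is larger than the Group 0 coefficient (0.068), so the fitted bilirubin trajectory rises faster in the poorer-prognosis group.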
Figure 2: (a) Observed transformed longitudinal profiles of the serum bilirubin marker. The red lines are profiles of two selected patients (id 2 and 34). (b) The Kaplan-Meier estimates of the survival curves for the two classes (class 1: red, class 2: green).
covariate "Time" is significant and the bilirubin level increases over time in both the treatment and control arms. Moreover, note that β_14 = 0.068 < β_24 = 0.313, which implies that the bilirubin level increases more slowly over time in Group 0. Therefore, from the clinical point of view, Group 0 should correspond to patients with a better prognosis compared to Group 1. To confirm this conclusion, Kaplan-Meier estimates of the survival probabilities are calculated based on data from patients classified in each group. We can see from Figure 2(b) that the survival prognosis of Group 0 is indeed much better than that of Group 1, with an estimated 5-year survival probability in Group 0 of 0.926 compared to 0.729 in Group 1, and 10-year survival probabilities of 0.771 and 0.310 in Groups 0 and 1, respectively. The p-value of the log rank test is near 0, which implies that the survival distributions corresponding to the identified groups are quite different. Further, according to the variable "status", the group levels for the 312 patients are predefined. At the endpoint of the cohort study, 140 of the patients had died (Group 1), while 172 were known to be alive (Group 0). Therefore, it is of interest to compare the classification results using the fitted semiparametric two-component mixture models shown in
vised way. The proposed method only requires the first two moment conditions of the model distribution, and thus is suitable for both the continuous and discrete responses. It penalizes the logarithm of mixing proportions, which allows one to simultaneously select the number of components and to estimate the mixture proportions and unknown parameters. Theoretically, we have shown that our proposed approach can select the number of components consistently for general marginal semiparametric mixture regression models. And given the number of components, the estimators of mixture proportions and regression parameters are root-n consistent and asymptotically normal.To improve the classification accuracy, a modified EM algorithm has been proposed by considering the within-component dispersion. Simulation results and the real data analysis have shown its convergence, but further theoretical investigation is needed. And we have introduced a BIC-type method to select the tuning parameter automatically.
is a pK + (K − 1) dimensional all-ones vector. It can be shown that B_n converges in probability to −B. Then, by (7.1) and the quadratic approximation lemma, we have θ* = B⁻¹D_n + o_p(1). Note that var(D_n) = A, and under the regularity conditions, D_n converges in distribution to N(0, A). Hence, θ* converges in distribution to N(0, B⁻¹AB⁻¹). In order to establish Theorem 2, we need the following lemma first, which can be derived using arguments similar to the proof of Proposition A.1 of Huang et al. (2016).
Under conditions C1-C6, it is straightforward to show that D satisfies conditions P0 and P1 in Dacunha-Castelle and Gassiat (1999), as in Keribin (2000). Then, there exists a Ψ_0 ν-square-integrable envelope function d̄(·) such that |d(γ; Y|X)| ≤ d̄(Y|X). On the other hand, the sequences of coefficients of d(γ; Y|X) are bounded under the restrictions imposed on γ. Hence, similar to the proof of Proposition 3.1 in Dacunha-Castelle and Gassiat (1999), we can show that D has the Donsker property with bracketing number N(ε) = ε^{−pK}.

Proof of Theorem 2. In the spirit of the proof of Theorem 3.2 in Huang et al. (2016), we divide our proof into two parts. First, we show that there exists a maximizer (η, γ) such that η = O_p(n^{−1/2}) when λ = a/√n. It is sufficient to show that, for a large constant C, Q_P(η, γ) < Q_P(0, γ)
Figure 3: Evolutions of the penalized quasi-likelihood function for the simulated data set in Example 2 in one typical run: (a) ρ = 0.3, (b) ρ = 0.6.
Figure 4: The evolution of the penalized quasi-likelihood function for the simulated data set in Example 3 in one typical run.
Figure 5: The boxplots of residuals for lbili under the fitted LMMs.
Figure 6: Trajectory plots for the PBC data. Observed evolution of the lbili marker for 312 patients. The red lines show the fitted mean profiles in the two groups.
Table 5: Estimation results in Example 2: (a) true values of mixture proportions and mixture parameters; (b) means of the parameter estimates; (c) means of the biases for the mixture proportions and mixture parameters; (d) mean squared errors (MSE) for the mixture proportions and mixture parameters. The values of bias and MSE are times 100. For the proposed PQL and PQL2 methods, the results below are summarized based on the models with correctly specified K_0 in 1000 replications.
Table 1: The median and the 95% confidence interval (CI) of the total misclassification error rate in simulation studies. The values of the median in Examples 1 and 2 are times 100. For the proposed PQL and PQL2 methods, the results below are summarized based on the models with correctly specified K_0 in 1000 replications.

(a) Example 1
Criterion   PQL              PQL2             QIFC
median      0.000            0.000            0.058
CI          (0.000, 0.000)   (0.000, 0.000)   (0.000, 0.000)

(b) Example 2
          Criterion   PQL              PQL2             QIFC
ρ = 0.3   median      0.234            0.232            0.235
          CI          (0.000, 0.010)   (0.000, 0.010)   (0.000, 0.010)
ρ = 0.6   median      0.247            0.246            0.670
          CI          (0.000, 0.010)   (0.000, 0.010)   (0.000, 0.020)

(c) Example 3
Criterion   PQL              PQL2             QIFC
median      0.209            0.209            0.214
CI          (0.202, 0.218)   (0.202, 0.218)   (0.204, 0.226)
correlation matrix as in component 1, except that
Table 1(b) implies that PQL and PQL2
Table 3: Parameter estimates for primary biliary cirrhosis data.

                              PQL                     QIFC
Parameters            Group 0    Group 1      Group 0    Group 1
mixture proportions   0.512      0.487        -          -
                      (0.129)    (0.129)      -          -
Trt                   0.084      -0.097       0.055      -0.076
                      (0.183)    (0.534)      (0.037)    (-0.113)
Age                   -0.016     -0.051       -0.272     0.104
                      (0.075)    (0.418)      (-0.261)   (0.262)
Sex                   -0.366     3.220        -0.125     -0.204
                      (0.097)    (1.219)      (-0.113)   (-0.064)
Time                  0.068      0.313        0.093      -0.106
                      (0.029)    (0.113)      (0.119)    (-0.108)
σ²                    0.523      0.781        0.832      2.641
                      (0.289)    (0.146)      (0.833)    (2.689)
Table 4: Agreements and differences between the clinical and model classifications using the PQL and QIFC methods.

                        PQL            QIFC
Classify to           0      1       0      1     Total
True   Group 0      118     54      69    103      172
       Group 1       42     98      21    119      140
Total               160    152      90    222      312
Table 6: Estimation results in Example 3: (a) true values of mixture proportions and mixture parameters; (b) means of the parameter estimates; (c) means of the biases for the mixture proportions and mixture parameters; (d) mean squared errors (MSE) for the mixture proportions and mixture parameters. The values of bias and MSE are times 100. For the proposed PQL and PQL2 methods, the results above are summarized based on the models with correctly specified K_0 in 1000 replications.

        True              PQL                        PQL2                       QIFC
        values   Mean     Bias     MSE      Mean     Bias     MSE      Mean     Bias     MSE
β_10    2        1.992   -0.752    0.493    1.993   -0.698    0.368    1.996   -0.360    0.341
β_11    1        0.998   -0.194    0.603    1.001   -0.014    0.327    1.000    0.107    0.357
β_12    -1      -0.991    0.944    0.804   -0.989    1.113    0.439   -0.996    0.415    0.481
β_13    1.5      1.496   -0.392    0.626    1.496   -0.436    0.296    1.499   -0.213    0.330
β_14    1        0.998   -0.359    0.348    0.998   -0.234    0.179    0.999   -0.082    0.191
β_20    -4      -3.988    1.219    0.277   -3.991    0.693    0.211   -3.999    0.071    0.215
β_21    2        1.994   -0.621    0.388    1.995   -0.453    0.215    1.998   -0.350    0.227
β_22    1        1.001    0.294    0.535    1.002    0.162    0.291    1.001    0.246    0.312
β_23
Appendix A. Proofs of Theorems

Proof of Theorem 1
References

Bollerslev, T. and Wooldridge, J.M. (1992). Quasi-maximum likelihood estimation and inference in dynamic models with time-varying covariances. Econometric Reviews 11, 143-172.
Booth, J.G., Casella, G., and Hobert, J.P. (2008). Clustering using objective functions and stochastic search. Journal of the Royal Statistical Society B 70, 119-139.
Celeux, G., Martin, O., and Lavergne, C. (2005). Mixture of linear mixed models for clustering gene expression profiles from repeated microarray experiments. Statistical Modelling 5, 243-267.
Chen, J. and Khalili, A. (2008). Order selection in finite mixture models with a non-smooth penalty. Journal of the American Statistical Association 104, 187-196.
Dacunha-Castelle, D. and Gassiat, K. (1997). Testing in locally conic models and application to mixture models. ESAIM: Probability and Statistics 1, 285-317.
Dacunha-Castelle, D. and Gassiat, K. (1999). Testing the order of a model using locally conic parametrization: population mixtures and stationary ARMA processes. The Annals of Statistics 27, 1178-1209.
Dasgupta, A. and Raftery, A.E. (1998). Detecting features in spatial point processes with clutter via model-based clustering. Journal of the American Statistical Association 93, 294-302.
De la Cruz-Mesía, R., Quintana, F.A., and Marshall, G. (2008). Model-based clustering for longitudinal data. Computational Statistics & Data Analysis 52, 1441-1457.
Dickson, E.R., Grambsch, P.M., Fleming, T.R., Fisher, L.D., and Langworthy, A. (1989). Prognosis in primary biliary cirrhosis: Model for decision making. Hepatology 10, 1-7.
Erosheva, E.A., Matsueda, R.L., and Telesca, D. (2014). Breaking bad: two decades of life-course data analysis in criminology, developmental psychology, and beyond. Annual Review of Statistics and Its Application 1, 301-332.
Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96, 1348-1360.
Ferguson, T.S. (1996). A Course in Large Sample Theory. Chapman & Hall.
Fraley, C. and Raftery, A.E. (2002). Model-based clustering, discriminant analysis, and density estimation. Journal of the American Statistical Association 97, 611-631.
Genolini, C. and Falissard, B. (2010). KmL: k-means for longitudinal data. Computational Statistics 25, 317-328.
Heinzl, F. and Tutz, G. (2013). Clustering in linear mixed models with approximate Dirichlet process mixtures using EM algorithm. Statistical Modelling 13, 41-67.
Heinzl, F. and Tutz, G. (2014). Clustering in linear mixed models with a group fused lasso penalty. Biometrical Journal 56, 44-68.
Hennig, C. (2004). Breakdown points for maximum likelihood estimators of location-scale mixtures. The Annals of Statistics 32, 1313-1340.
Hoofnagle, J.H., David, G.L., Schafer, D.F., Peters, M., Avigan, M.I., Pappas, S.C., Hanson, R.G., Minuk, G.Y., Dusheiko, G.M., and Campbell, G. (1986). Randomized trial of chlorambucil for primary biliary cirrhosis. Gastroenterology 91, 1327-1334.
Huang, J.Z., Zhang, L., and Zhou, L. (2007). Efficient estimation in marginal partially linear models for longitudinal/clustered data using splines. Scandinavian Journal of Statistics 34, 451-477.
Huang, T., Peng, H., and Zhang, K. (2016). Model selection for Gaussian mixture models. Statistica Sinica, in press.
Keribin, C. (2000). Consistent estimation of the order of mixture models. Sankhyā 62, 49-66.
Komárek, A. and Komárková, L. (2013). Clustering for multivariate continuous and discrete longitudinal data. The Annals of Applied Statistics 7, 177-200.
Komárek, A. and Lesaffre, E. (2008). Generalized linear mixed model with a penalized Gaussian mixture as a random effects distribution. Computational Statistics & Data Analysis 52, 3441-3458.
Leroux, B. (1992). Consistent estimation of a mixing distribution. The Annals of Statistics 20, 1350-1360.
Liang, K.Y. and Zeger, S.L. (1986). Longitudinal data analysis using generalised linear models. Biometrika 73, 12-22.
Maruotti, A. (2011). Mixed hidden Markov models for longitudinal data: an overview. International Statistical Review 79, 427-454.
McNicholas, P.D. and Murphy, T.B. (2010). Model-based clustering of longitudinal data. The Canadian Journal of Statistics 38, 153-168.
Murtaugh, P.A., Dickson, E.R., van Dam, G.M., Malinchoc, M., Grambsch, P.M., Langworthy, A.L., and Gips, C.H. (1994). Primary biliary cirrhosis: prediction of short-term survival based on repeated patient visits. Hepatology 20, 126-134.
Pickles, A. and Croudace, T. (2010). Latent mixture models for multivariate and longitudinal outcomes. Statistical Methods in Medical Research 19, 271-289.
Pontecorvo, M.J., Levinson, J.D., and Roth, J.A. (1992). A patient with primary biliary cirrhosis and multiple sclerosis. The American Journal of Medicine 92, 433-436.
Roeder, K. and Wasserman, L. (1997). Practical density estimation using mixtures of normals. Journal of the American Statistical Association 92, 894-902.
Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics 6, 461-464.
Talwalkar, J.A. and Lindor, K.D. (2003). Primary biliary cirrhosis. Lancet 362, 53-61.
Verbeke, G. and Lesaffre, E. (1996). A linear mixed-effects model with heterogeneity in the random-effects population. Journal of the American Statistical Association 91, 217-221.
Wang, H., Li, R., and Tsai, C.L. (2007). Tuning parameter selectors for the smoothly clipped absolute deviation method. Biometrika 94, 553-568.
Wang, L. (2011). GEE analysis of clustered binary data with diverging number of covariates. The Annals of Statistics 39, 389-417.
Wang, X. and Qu, A. (2014). Efficient classification for longitudinal data. Computational Statistics & Data Analysis 78, 119-134.
Xu, P., Zhang, J., Huang, X., and Wang, T. (2016). Efficient estimation of marginal generalized partially linear single-index models with longitudinal data. TEST 25, 413-431.
Xu, P. and Zhu, L. (2012). Estimation for a marginal generalized single-index longitudinal model. Journal of Multivariate Analysis 105, 285-299.
Yao, W. (2015). Label switching and its solutions for frequentist mixture model. Journal of Statistical Computation and Simulation 85, 1000-1012.
Zhu, W. and Fan, Y. (2016). Relabelling algorithms for mixture models with applications for large data sets. Journal of Statistical Computation and Simulation 86, 394-413.
Tracking Multiple Audio Sources with the von Mises Distribution and Variational EM

Yutong Ban, Xavier Alameda-Pineda, Member, IEEE, Christine Evers, Senior Member, IEEE, and Radu Horaud

arXiv:1812.08246 (https://arxiv.org/pdf/1812.08246v1.pdf), doi: 10.1109/lsp.2019.2908376

Abstract: In this paper we address the problem of simultaneously tracking several audio sources, namely the problem of estimating source trajectories from a sequence of observed features. We propose to use the von Mises distribution to model audio-source directions of arrival (DOAs) with circular random variables. This leads to a multi-target Kalman filter formulation which is intractable because of the combinatorial explosion of associating observations to state variables over time. We propose a variational approximation of the filter's posterior distribution and we infer a variational expectation maximization (VEM) algorithm which is computationally efficient. We also propose an audio-source birth method that favors smooth source trajectories and which is used both to initialize the number of active sources and to detect new sources. We perform experiments with a recently released dataset comprising several moving sources as well as a moving microphone array.

Index Terms: Multiple target tracking, audio-source tracking, speaker tracking, Bayesian filtering, von Mises distribution, variational approximation, expectation maximization
I. INTRODUCTION
In this paper we address the problem of simultaneously tracking several audio sources, namely the problem of estimating source trajectories from a sequence of observed features. Audio-source tracking is useful for a number of tasks such as audio-source separation, spatial filtering, speech enhancement and speech recognition, which in turn are essential for robust voice-based home assistants. Audio source tracking is difficult because audio signals are adversely affected by noise, reverberation and interferences between acoustic signals.
Single-source tracking methods are often based on observing the time differences of arrival (TDOAs) between two microphones. Since the mapping between TDOAs and the space of source locations is non-linear, sequential Monte Carlo particle filters are used, e.g. [1]- [3]. Alternatively, microphone arrays can be used to estimate DOAs of audio sources. The problem can then be cast into a linear dynamic model, e.g. the adaptive Kalman filter [4]. In this case source directions should however be modeled as circular random variables, e.g. the wrapped Gaussian distribution [5], or the von Mises distribution [6], [7].
Multiple-source tracking is more challenging since it raises additional difficulties: (i) the number of active audio sources is unknown and it varies over time, (ii) multiple DOAs must be simultaneously estimated, and (iii) DOA-to-source assignments must be computed over time. The problem of tracking an unknown number of sources is specifically addressed in the framework of random finite sets [8]. Since the probability density function (pdf) is computationally intractable, its first order approximation can be propagated in time using the probability hypothesis density (PHD) filter [8], [9]. In [10] the PHD filter was applied to audio recordings to track multiple sources from TDOA estimates. In [11] the wrapped Gaussian distribution is incorporated within a PHD filter. A mixture of von Mises distributions is combined with a PHD filter in [12]. The main drawback of PHD-based filters is that they cannot provide explicit observation-to-source associations without recourse to ad-hoc post-processing. Providing such associations is crucial when several audio sources must be identified and tracked.
Multiple target tracking is also formulated as a variational approximation of Bayesian filtering [13]. Observation-to-target associations are modeled as discrete latent variables and their realizations are estimated within a compact and efficient VEM solver. Moreover, the problem of tracking a varying number of targets is addressed via track-birth and track-death processes. The variational approximation of [13] was recently extended to track multiple audio sources using a mixture of Gaussian distributions [14].
This paper builds on [6], [13] and [14] and proposes to use the von Mises distribution to model DOAs with circular random variables. This leads to a multi-target Kalman filter which is intractable because of the combinatorial explosion of associating observations to state variables over time. We propose a variational approximation of the filter's posterior distribution and we infer a VEM algorithm which is computationally efficient. We also propose an audio-source birth method that favors smooth source trajectories and which is used both to initialize the number of active sources and to detect new sources. We perform experiments with a recently released dataset comprising several moving sources as well as a moving microphone array.
The paper is organized as follows. Section II describes the probabilistic model and Section III describes a variational approximation of the filtering distribution and the VEM algorithm. Section IV briefly describes the source birth method. Experiments and comparisons with other methods are described in Section V. Supplemental materials (mathematical derivations, software and videos) can be found at the supplemental site.¹
II. THE FILTERING DISTRIBUTION
Let N denote the unknown number of audio sources and the state s tn ∈ (−π, π] be the DOA of source n ∈ {1, . . . , N } at t, and s t = (s t1 , . . . , s tN ). Furthermore, let y tm ∈ (−π, π] denote the m-th DOA observation at time t, where m ∈ {1, . . . , M t } and M t is the number of observations at t. Each observation is accompanied by its corresponding confidence ω tm ∈ [0, 1]. Let z tm denote the observation-tosource assignment variable, where z tm = n means that y tm is an observation of source n, and z tm = 0 means that y tm is an observation of a dummy source. Within a Bayesian model, multiple target tracking can be formulated as the estimation of the filtering distribution p(s t , z t |y 1:t ). We assume that the state variables s tn follow a first-order Markov model, and that the observations depend only on the current state and on the assignment variables. Under these two hypotheses the posterior pdf is given by:
p(s t , z t |y 1:t ) ∝ p(y t |z t , s t )p(z t )p(s t |y 1:t−1 ),(1)
where p(y t |z t , s t ) is the observation likelihood, p(z t ) is the prior pdf of the assignment variables and p(s t |y 1:t−1 ) is the predictive pdf of the state variables.
1) Observation likelihood:
Assuming that DOA estimates are independently and identically distributed (i.i.d.), the observation likelihood can be written as:
p(y_t | z_t, s_t) = ∏_{m=1}^{M_t} p(y_tm | z_t, s_t).  (2)
The likelihood that a DOA corresponds to a source is modelled by a von Mises distribution [6], whereas the likelihood that a DOA corresponds to a dummy source is modelled by a uniform distribution:
p(y_tm | z_tm = n, s_tn) = { M(y_tm; s_tn, κ_y ω_tm)   if n ≠ 0
                           { U(y_tm)                    if n = 0,   (3)
where M(y ; s, κ) = (2πI 0 (κ)) −1 exp{κ cos(y − s)} denotes the von Mises distribution with mean s and concentration κ, I p (·) denotes the modified Bessel function of the first kind of order p, κ y denotes the concentration of audio observations, and U(y tm ) = (2π) −1 denotes the uniform distribution along the support of the unit circle.
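Since no reference implementation accompanies the text at this point, the following stdlib-only Python sketch (function names are ours) evaluates M(y; s, κ), computing I_0 through its integral representation I_0(κ) = (1/π) ∫_0^π exp(κ cos θ) dθ:

```python
import math

def bessel_i(p, kappa, steps=2000):
    # Modified Bessel function of the first kind, order p, via its
    # integral representation I_p(k) = (1/pi) * int_0^pi e^{k cos t} cos(p t) dt.
    h = math.pi / steps
    total = 0.0
    for k in range(steps + 1):
        t = k * h
        w = 0.5 if k in (0, steps) else 1.0   # trapezoidal weights
        total += w * math.exp(kappa * math.cos(t)) * math.cos(p * t)
    return total * h / math.pi

def von_mises_pdf(y, mean, kappa):
    # M(y; s, kappa) = exp(kappa * cos(y - s)) / (2 * pi * I_0(kappa))
    return math.exp(kappa * math.cos(y - mean)) / (2 * math.pi * bessel_i(0, kappa))
```

For κ = 0 the density degenerates to the uniform distribution 1/(2π), which is the dummy-source likelihood used in (3).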
2) Prior pdf of the assignment variables: Assuming that the assignment variables are i.i.d., the prior pdf is given by:
p(z t ) = Mt m=1 p(Z tm = n),(4)
and we denote with π_n = p(Z_tm = n), ∑_{n=0}^{N} π_n = 1, the prior probability that source n is associated with y_tm.

3) Predictive pdf of the state variables: The predictive pdf extrapolates information inferred in the past to the current time step using a dynamical model of the source motion, i.e.,

p(s_t | y_1:t−1) = ∫ p(s_t | s_t−1) p(s_t−1 | y_1:t−1) ds_t−1,  (5)

1 https://team.inria.fr/perception/research/audiotrack-vonm/
where p(s_t | s_t−1) denotes the prior pdf modelling the source motion and p(s_t−1 | y_1:t−1) corresponds to the filtering distribution at time t − 1. The motion model is assumed to be independent across sources, each source following a von Mises distribution:
p(s t |s t−1 ) = N n=1 M(s tn ; s t−1,n , κ d ),(6)
where κ d is the concentration of the state dynamics. Θ = {κ y , κ d , π 0 , . . . , π N } denotes the set of model parameters.
As already mentioned in Section I, the filtering distribution corresponds to a mixture model whose number of components grows exponentially along time, therefore solving (1) directly is computationally intractable. Below we infer a variational approximation of (1) which drastically reduces the explosion of the number of mixture components; consequently, it leads to a computationally tractable algorithm.
III. VARIATIONAL APPROXIMATION AND ALGORITHM
Since solving (1) is computationally intractable, we propose to approximate it by assuming conditional independence between the states and the assignment variables, more precisely
p(s t , z t |y 1:t ) ≈ q(s t )q(z t ).(7)
The proposed factorization leads to a VEM algorithm [15], where the posterior distribution of the two variables are found by two variational E-steps:
q(z_t) ∝ exp( E_{q(s_t)}[ log p(s_t, z_t | y_1:t) ] ),  (8)
q(s_t) ∝ exp( E_{q(z_t)}[ log p(s_t, z_t | y_1:t) ] ).  (9)
The model parameters Θ are estimated by maximizing the expected complete-data log-likelihood:
Q(Θ, Θ̃) = E_{q(s_t) q(z_t)}[ log p(y_t, s_t, z_t | y_1:t−1, Θ) ],  (10)

where Θ̃ are the old parameters. To illustrate the impact of the proposed approximation on the filtering distribution, we observe that (i) the posterior pdf of the assignments is observation-independent, and that (ii) the posterior pdf of the state variables is source-independent, i.e.,
q(z t ) = Mt m=1 q(z tm ), q(s t ) = N n=1 q(s tn ).(11)
Therefore, the predictive pdf is also separable:
p(s_tn | y_1:t−1) = ∫ p(s_tn | s_t−1,n) p(s_t−1,n | y_1:t−1) ds_t−1,n.
Moreover, assuming that the filtering pdf at time t − 1 follows a von Mises distribution, i.e. q(s t−1,n ) = M(s t−1,n ; µ t−1,n , κ t−1,n ), then the predictive pdf is approximately a von Mises distribution (see [6], [16, (3.5.43)]):
p(s tn |y 1:t−1 ) ≈ M(s tn ; µ t−1,n ,κ t−1,n ),(12)
where the predicted concentration parameter,κ t−1,n , is:
κ t−1,n = A −1 (A(κ t−1,n )A(κ d )),(13)
and where A(a) = I_1(a)/I_0(a), and A^{-1}(a) ≈ (2a − a^3)/(1 − a^2). Using (8), (9) and (10), the filtering distribution is therefore obtained by iterating through three steps, i.e. the E-S, E-Z and M steps, provided below (detailed mathematical derivations can be found in the appendices).
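As a concrete check of the prediction step (13), here is a hedged, stdlib-only Python sketch (helper names are ours; I_p is again computed from its integral representation):

```python
import math

def _bessel_i(p, kappa, steps=2000):
    # I_p(kappa) via its integral representation (stdlib only).
    h = math.pi / steps
    s = 0.0
    for k in range(steps + 1):
        t = k * h
        w = 0.5 if k in (0, steps) else 1.0
        s += w * math.exp(kappa * math.cos(t)) * math.cos(p * t)
    return s * h / math.pi

def A(kappa):
    # A(a) = I_1(a) / I_0(a), the mean resultant length of a von Mises.
    return _bessel_i(1, kappa) / _bessel_i(0, kappa)

def A_inv(a):
    # Approximate inverse from the text: A^{-1}(a) ~ (2a - a^3)/(1 - a^2).
    return (2 * a - a ** 3) / (1 - a ** 2)

def predict_concentration(kappa_prev, kappa_d):
    # Eq. (13): concentration of the predictive von Mises distribution.
    return A_inv(A(kappa_prev) * A(kappa_d))
```

Since A maps into (0, 1) and is increasing, the predicted concentration is always smaller than both κ_{t−1,n} and κ_d: the motion noise can only make the prediction less peaked.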
1) E-S step: Inserting (1) and (12) in (9), q(s tn ) reduces to a von Mises distribution, M(s tn ; µ tn , κ tn ). The mean µ tn and concentration κ tn are given by:
μ_tn = tan^{-1}( [ κ_y ∑_{m=1}^{M_t} α_tmn ω_tm sin(y_tm) + κ̃_{t−1,n} sin(μ_{t−1,n}) ] / [ κ_y ∑_{m=1}^{M_t} α_tmn ω_tm cos(y_tm) + κ̃_{t−1,n} cos(μ_{t−1,n}) ] ),  (14)

κ_tn = ( κ_y^2 ∑_{m=1}^{M_t} (α_tmn ω_tm)^2 + κ̃_{t−1,n}^2 + 2 κ_y^2 ∑_{m=1}^{M_t} ∑_{l=m+1}^{M_t} α_tmn ω_tm α_tln ω_tl cos(y_tm − y_tl) + 2 κ_y κ̃_{t−1,n} ∑_{m=1}^{M_t} α_tmn ω_tm cos(y_tm − μ_{t−1,n}) )^{1/2},  (15)
where α_tmn = q(Z_tm = n) denotes the variational posterior probability of the assignment variable. Therefore, the expressibility of the posterior distribution as a von Mises distribution propagates over time, and only needs to be assumed at t = 1. Please consult Appendix A for more details.
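Equations (14)-(15) are precisely the angle and modulus of a sum of planar "concentration vectors", one per weighted observation plus one for the prediction; this is the harmonic addition theorem in disguise. A minimal sketch of the E-S update for a single source n (all names are ours):

```python
import math

def es_step(y_obs, w_obs, alpha_n, kappa_y, mu_prev, kappa_prev_pred):
    # Fuse the weighted observations with the prediction by summing the
    # corresponding concentration vectors; the angle of the resultant is
    # mu_tn of Eq. (14) and its modulus is kappa_tn of Eq. (15).
    cx = kappa_prev_pred * math.cos(mu_prev)
    cy = kappa_prev_pred * math.sin(mu_prev)
    for y, w, a in zip(y_obs, w_obs, alpha_n):
        cx += kappa_y * a * w * math.cos(y)
        cy += kappa_y * a * w * math.sin(y)
    return math.atan2(cy, cx), math.hypot(cx, cy)
```

Expanding the squared modulus of the vector sum reproduces the pairwise cosine terms of (15) exactly.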
2) E-Z step: By computing the expectation over s t in (8), the following expression is obtained:
α tmn = q(z tm = n) = π n β tmn N l=0 π l β tml(16)
where β_tmn is given by (please consult Appendix B for a detailed derivation):

β_tmn = { ω_tm κ_y A(ω_tm κ_y) cos(y_tm − μ_tn)   if n ≠ 0
        { 1/(2π)                                   if n = 0.
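A sketch of how the responsibilities α_tmn can be normalized in practice (all names are ours). Following the generic E-step (8), we exponentiate the expected log-likelihood, keeping its −log(2π I_0(ω_tm κ_y)) normalizer so that real sources and the dummy source are compared on the same scale; the cosine term is the one appearing in β_tmn above:

```python
import math

def _i0(kappa, steps=2000):
    # I_0 via its integral representation (stdlib only).
    h = math.pi / steps
    return sum((0.5 if k in (0, steps) else 1.0) * math.exp(kappa * math.cos(k * h))
               for k in range(steps + 1)) * h / math.pi

def _A(kappa, steps=2000):
    # A(a) = I_1(a) / I_0(a).
    h = math.pi / steps
    i1 = sum((0.5 if k in (0, steps) else 1.0)
             * math.exp(kappa * math.cos(k * h)) * math.cos(k * h)
             for k in range(steps + 1)) * h / math.pi
    return i1 / _i0(kappa, steps)

def ez_step(y, w, mus, kappa_y, priors):
    """Responsibilities [alpha_tm0, ..., alpha_tmN] for one observation y
    with confidence w; mus[n-1] is the current posterior mean of source n."""
    wk = w * kappa_y
    log_norm = math.log(2 * math.pi * _i0(wk))
    scores = [math.log(priors[0]) - math.log(2 * math.pi)]      # dummy source, n = 0
    for n, mu in enumerate(mus, start=1):
        e_loglik = wk * _A(wk) * math.cos(y - mu) - log_norm    # expected log-likelihood
        scores.append(math.log(priors[n]) + e_loglik)
    m = max(scores)                                             # log-sum-exp normalization
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```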
3) M step: The parameter set Θ is estimated by maximizing (10). The priors in (4) are obtained using the conventional update rule [15]: π_n ∝ ∑_{m=1}^{M_t} α_tmn. The concentration parameters κ_y and κ_d are estimated using gradient descent; the gradient expressions are derived in Appendix C, where B(κ_d) = ∂κ̃_{t−1,n}/∂κ_d.
Based on the E-S, E-Z and M step formulas above, the proposed VEM algorithm iterates until convergence at each time step, in order to estimate the posterior distributions and to update the values of the model parameters.
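The resulting control flow for one time frame can be sketched as follows (illustrative structure only; the three injected callables stand for the E-S update (14)-(15), the E-Z update (16) and the M-step updates, and the toy posterior is a scalar):

```python
def vem_time_step(posterior, params, obs, es_step, ez_step, m_step,
                  max_iters=50, tol=1e-4):
    # Alternate E-Z, E-S and M updates until the state posterior stops
    # changing, then return the converged posterior and parameters.
    for _ in range(max_iters):
        alpha = ez_step(posterior, params, obs)          # assignments
        new_posterior = es_step(alpha, params, obs)      # states
        params = m_step(alpha, new_posterior, params, obs)
        done = abs(new_posterior - posterior) < tol
        posterior = new_posterior
        if done:
            break
    return posterior, params
```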
IV. AUDIO-SOURCE BIRTH PROCESS
We now describe in detail the proposed birth process, which is essential to initialize the number of audio sources as well as to detect new sources at any time. The birth process gathers all the DOAs that were not assigned to a source, i.e. assigned to n = 0, at the current frame t as well as over the L previous frames (L = 2 in all our experiments). From this set of DOAs we build DOA/observation sequences (one observation per frame) and let ŷ^j_{t−L:t} be such a sequence of DOAs, where j is the sequence index. We consider the marginal likelihood:
τ_j = p(ŷ^j_{t−L:t}) = ∫ p(ŷ^j_{t−L:t}, s_{t−L:t}) ds_{t−L:t}.  (17)
Using (12) and the harmonic addition theorem, the integral (17) becomes (please consult Appendix D):
τ_j = ∏_{l=0}^{L} I_0(κ^j_{t−l}) / ( 2π I_0(κ_y ω̄^j_{t−l}) I_0(κ̃^j_{t−l}) ),  (18)
where ω̄^j_t is the confidence associated with ŷ^j_t. The concentration parameters κ^j_{t−l} and κ̃^j_{t−l+1} depend on the observations and are recursively computed for each sequence j:

(κ^j_{t−l})^2 = (κ̃^j_{t−l})^2 + (κ_y ω̄^j_{t−l})^2 + 2 κ̃^j_{t−l} κ_y ω̄^j_{t−l} cos(ŷ^j_{t−l} − μ̃^j_{t−l}),
μ̃^j_{t−l+1} = tan^{-1}( [ κ̃^j_{t−l} sin(μ̃^j_{t−l}) + κ_y ω̄^j_{t−l} sin(ŷ^j_{t−l}) ] / [ κ̃^j_{t−l} cos(μ̃^j_{t−l}) + κ_y ω̄^j_{t−l} cos(ŷ^j_{t−l}) ] ),
κ̃^j_{t−l+1} = A^{-1}( A(κ^j_{t−l}) A(κ_d) ).

The sequence j* with the maximal marginal likelihood (18), namely j* = argmax_j(τ_j), is assumed to be generated by a not-yet-known audio source only if τ_{j*} is larger than a threshold: a new source ñ is created in this case and q(s_tñ) = M(s_tñ; μ̃_{tj*}, κ̃_{tj*}).
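The birth recursion can be sketched as follows (stdlib-only Python; all names are ours, the initial predictive parameters μ̃ and κ̃ must be supplied, and I_0, A, A^{-1} are as defined in Section III):

```python
import math

def _i0(kappa, steps=2000):
    # I_0 via its integral representation.
    h = math.pi / steps
    return sum((0.5 if k in (0, steps) else 1.0) * math.exp(kappa * math.cos(k * h))
               for k in range(steps + 1)) * h / math.pi

def _A(kappa, steps=2000):
    # A(a) = I_1(a) / I_0(a).
    h = math.pi / steps
    i1 = sum((0.5 if k in (0, steps) else 1.0)
             * math.exp(kappa * math.cos(k * h)) * math.cos(k * h)
             for k in range(steps + 1)) * h / math.pi
    return i1 / _i0(kappa, steps)

def _A_inv(a):
    # Approximate inverse from the text.
    return (2 * a - a ** 3) / (1 - a ** 2)

def birth_likelihood(doas, weights, kappa_y, kappa_d, mu_pred, kappa_pred):
    """Marginal likelihood tau of a short DOA sequence, cf. Eq. (18)."""
    tau = 1.0
    for y, w in zip(doas, weights):
        # Posterior concentration after fusing one observation.
        k_post = math.sqrt(kappa_pred ** 2 + (kappa_y * w) ** 2
                           + 2 * kappa_pred * kappa_y * w * math.cos(y - mu_pred))
        # Accumulate one Bessel-ratio factor of the product (18).
        tau *= _i0(k_post) / (2 * math.pi * _i0(kappa_y * w) * _i0(kappa_pred))
        # Propagate the predictive mean and concentration.
        mu_pred = math.atan2(kappa_pred * math.sin(mu_pred) + kappa_y * w * math.sin(y),
                             kappa_pred * math.cos(mu_pred) + kappa_y * w * math.cos(y))
        kappa_pred = _A_inv(_A(k_post) * _A(kappa_d))
    return tau
```

As intended, a smooth candidate sequence scores a larger marginal likelihood τ than a scattered one.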
V. EXPERIMENTAL EVALUATION
The proposed method was evaluated using the audio recordings from Task 6 of the IEEE-AASP LOCATA challenge development dataset [17], which involve multiple moving sound sources and moving microphone arrays. The LOCATA dataset consists of real-life recordings with ground-truth source locations provided by an optical tracking system. The size of the recording room is 7.1 × 9.8 × 3 m, with T60 ≈ 0.55 s. Task 6 contains three sequences of a total duration of 188.4 s and two moving speakers. In our experiments we used microphones 5, 8, 11, and 12, embedded in a robot head and located on a horizontal plane. The online sound-source localization method [14] was used to provide DOA estimates at each STFT frame. The STFT uses a Hamming window with a length of 16 ms and 8 ms shifts.

To evaluate the method quantitatively, the estimated source trajectories are compared with the ground-truth trajectories over audio-active frames. Ground-truth audio-active frames are obtained using the voice activity detection (VAD) of [18]. The permutation problem between the detected trajectories and the ground-truth trajectories is solved by means of a greedy gating algorithm: the error between all possible pairs of estimated and ground-truth trajectories is evaluated, and minimum-error pairs are selected for further comparison. A DOA estimate that is more than 15° away from the ground truth is treated as a false detection. Sources that are not associated with a trajectory correspond to missed detections (MDs). For performance evaluation, the percentages of MDs and false alarms (FAs) are evaluated over voice-active frames. The mean absolute error (MAE) denotes the average absolute error of all the correct associations between all the sources and all the voice-active frames.
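A simplified, single-trajectory version of this evaluation protocol (helper names are ours; the actual protocol additionally solves the estimate-to-ground-truth permutation with greedy gating):

```python
import math

def angular_error_deg(est_deg, gt_deg):
    # Circular absolute difference on the unit circle, in degrees.
    d = abs(est_deg - gt_deg) % 360.0
    return min(d, 360.0 - d)

def evaluate(est, gt, gate_deg=15.0):
    # est/gt: DOAs (degrees) over voice-active frames; None = no estimate.
    errs, md, fa = [], 0, 0
    for e, g in zip(est, gt):
        if e is None:
            md += 1                                   # missed detection
        elif angular_error_deg(e, g) > gate_deg:
            fa += 1                                   # false detection
        else:
            errs.append(angular_error_deg(e, g))      # correct association
    mae = sum(errs) / len(errs) if errs else float("nan")
    return mae, md, fa
```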
The observation-to-source assignment posteriors and the DOA confidence weights are used to estimate voice-active frames (the detection rule uses D = 2 and a VAD threshold δ = 0.025). Once an active source is detected, we output its trajectory.
The MAE, MD and FA values, averaged over all recordings, are summarized in Table I. We compared the proposed von Mises VEM algorithm (vM-VEM) with three multi-speaker trackers: the von Mises PHD filter (vM-PHD) [12] and two versions of the multiple speaker tracker of [14] based on Gaussian models (GM). [14] uses a first-order dynamic model whose effect is to smooth the estimated trajectories. We compared with both first-order (GM-FO) and zero-order (GM-ZO) dynamics. The proposed vM-VEM tracker yields the lowest false-alarm (FA) rate of 5.9% and MAE of 3.2, and the second lowest MD rate of 23.9%. The GM-FO variant of [14] yields an MD rate of 22.3% since it uses velocity information to smooth the trajectories. This illustrates the advantage of the von Mises distribution for modeling directional data (DOAs). The proposed von Mises model uses zero-order dynamics; nevertheless, it achieves performance comparable with the Gaussian model that uses first-order dynamics.
The results for recordings #1 and #2 in Task 6 are shown in Fig. 1. Note that the PHD-based filter method [12] has two caveats. First, observation-to-source assignments cannot be estimated (unless a post-processing step is performed), and second, the estimated source trajectories are not smooth. This stands in contrast with the proposed method, which explicitly represents assignments with discrete latent variables and estimates them iteratively with VEM. Moreover, the proposed method yields smooth trajectories similar to those estimated by [14] and quite close to the ground truth.
VI. CONCLUSION
We proposed a multiple audio-source tracking method using the von Mises distribution and we inferred a tractable solver based on a variational approximation of the posterior filtering distribution. Unlike the Gaussian distribution, the von Mises distribution explicitly models the circular variables associated with audio-source localization and tracking based on source DOAs. Using the recently released LOCATA dataset, we empirically showed that the proposed method compares favorably with two recent methods.
APPENDIX A DERIVATION OF THE E-S STEP
In order to obtain the formulae for the E-S step, we start from its definition in (9):

q(s_t) ∝ exp( E_{q(z_t)}[ log p(s_t, z_t | y_1:t) ] ).  (20)
We now use the decomposition in (1) to write:
q(s_t) ∝ exp( E_{q(z_t)}[ log p(y_t | s_t, z_t) ] ) p(s_t | y_1:t−1).  (21)
Let us now develop the expectation:

E_{q(z_t)}[ log p(y_t | s_t, z_t) ] = ∑_{m=1}^{M_t} E_{q(z_tm)}[ log p(y_tm | s_t, z_tm) ] = ∑_{m=1}^{M_t} ∑_{n=0}^{N} q(z_tm = n) log p(y_tm | s_tn, z_tm = n) ≐_{s_t} ∑_{m=1}^{M_t} ∑_{n=1}^{N} α_tmn ω_tm κ_y cos(y_tm − s_tn),

where ≐_{s_t} denotes equality up to an additive constant that does not depend on s_t. Such a constant would become a multiplicative constant after the exponentiation in (21), and therefore can be ignored. By replacing the developed expectation together with (12) we obtain:

q(s_t) ∝ exp( ∑_{n=1}^{N} [ ∑_{m=1}^{M_t} α_tmn ω_tm κ_y cos(y_tm − s_tn) + κ̃_{t−1,n} cos(s_tn − μ_{t−1,n}) ] ).

This is important since it demonstrates that the a posteriori distribution on s_t is separable over n and therefore independent for each speaker. In addition, it allows us to rewrite the a posteriori distribution for each speaker, i.e. on s_tn, as a von Mises distribution by using the harmonic addition theorem, thus obtaining

q(s_t) = ∏_{n=1}^{N} q(s_tn) = ∏_{n=1}^{N} M(s_tn; μ_tn, κ_tn),  (22)

with μ_tn and κ_tn defined as in (14) and (15).
APPENDIX B DERIVATION OF THE E-Z STEP
Similarly to the previous section, and in order to obtain the closed-form solution of the E-Z step, we start from its definition in (8):
q(z t ) ∝ exp E q(st) log p(s t , z t |y 1:t ) ,(23)
and we use the decomposition in (1) to write:
q(z t ) ∝ exp E q(st) log p(y t |s t , z t ) p(z t ).(24)
Since both the observation likelihood and the prior distribution are separable on z tm , we can write:
q(z t ) ∝ Mt m=1 exp E q(st) log p(y tm |s t , z tm ) p(z tm ),(25)
proving that the a posteriori distribution is also separable on m.
We can thus analyze the posterior of each z tm separately, by computing q(z tm = n): q(z tm = n) ∝ exp E q(st) log p(y tm |s t , z tm = n) p(z tm = n)
Let us first compute the expectation for n ≠ 0:

E_{q(s_t)}[ log p(y_tm | s_t, z_tm = n) ] = E_{q(s_tn)}[ log p(y_tm | s_tn, z_tm = n) ] = E_{q(s_tn)}[ log M(y_tm; s_tn, ω_tm κ_y) ] ≐_{z_tm} ∫_0^{2π} q(s_tn) ω_tm κ_y cos(y_tm − s_tn) ds_tn = ω_tm κ_y A(ω_tm κ_y) cos(y_tm − μ_tn),

where for the last line we used the variable change s̄ = s_tn − μ_tn and the definitions of I_1 and A.
The case n = 0 is even easier since the observation distribution is uniform:

E_{q(s_tn)}[ log p(y_tm | s_tn, z_tm = n) ] = E_{q(s_tn)}[ −log 2π ] = −log(2π).
By using the fact that the prior distribution on z_tm is denoted by p(z_tm = n) = π_n, we can now write the a posteriori distribution as q(z_tm = n) ∝ π_n β_tmn with:

β_tmn = { ω_tm κ_y A(ω_tm κ_y) cos(y_tm − μ_tn)   if n ≠ 0
        { 1/(2π)                                   if n = 0,

finally obtaining the results in (16).
APPENDIX C DERIVATION OF THE M STEP
In order to derive the M step, we first need to compute the Q function in (10):

Q(Θ, Θ̃) = E_{q(s_t) q(z_t)}[ log p(y_t, s_t, z_t | y_1:t−1, Θ) ]
         = E_{q(s_t) q(z_t)}[ log p(y_t | s_t, z_t, Θ) ]   (κ_y)
         + E_{q(z_t)}[ log p(z_t | Θ) ]                    (π_n)
         + E_{q(s_t)}[ log p(s_t | y_1:t−1, Θ) ]           (κ_d),

where each parameter is shown next to the corresponding term of the Q function. Let us develop each term separately.
A. Optimizing κ_y

Q_{κ_y} = E_{q(s_t) q(z_t)}[ log p(y_t | s_t, z_t, Θ) ] = ∑_{m=1}^{M_t} ∑_{n=1}^{N} α_tmn [ ω_tm κ_y cos(y_tm − μ_tn) A(κ_tn) − log(I_0(ω_tm κ_y)) ],

and by taking the derivative with respect to κ_y we obtain:

∂Q/∂κ_y = ∑_{m=1}^{M_t} ∑_{n=1}^{N} α_tmn ω_tm [ A(κ_tn) cos(y_tm − μ_tn) − A(ω_tm κ_y) ],

which corresponds to what was announced in the manuscript.

B. Optimizing the π_n's

Q_{π_n} = E_{q(s_t) q(z_t)}[ log p(z_t | Θ) ] = ∑_{m=1}^{M_t} ∑_{n=0}^{N} α_tmn log π_n.

This is the very same formula as for any mixture model, and therefore the solution is standard and corresponds to the one reported in the manuscript.
C. Optimizing κ_d

Q_{κ_d} = E_{q(s_t) q(z_t)}[ log ∏_{n=1}^{N} p(s_tn | y_1:t−1) ] = ∑_{n=1}^{N} E_{q(s_tn)}[ log M(s_tn; μ_{t−1,n}, κ̃_{t−1,n}) ] = ∑_{n=1}^{N} E_{q(s_tn)}[ −log I_0(κ̃_{t−1,n}) + κ̃_{t−1,n} cos(s_tn − μ_{t−1,n}) ] = ∑_{n=1}^{N} [ −log I_0(κ̃_{t−1,n}) + κ̃_{t−1,n} cos(μ_tn − μ_{t−1,n}) A(κ_tn) ],
where the dependency on κ_d is implicit in κ̃_{t−1,n} = A^{-1}(A(κ_{t−1,n}) A(κ_d)).
By taking the derivative with respect to κ_d we obtain:

∂Q/∂κ_d = ∑_{n=1}^{N} [ A(κ_tn) cos(μ_tn − μ_{t−1,n}) − A(κ̃_{t−1,n}) ] ∂κ̃_{t−1,n}/∂κ_d,

with

∂κ̃_{t−1,n}/∂κ_d = Ã(A(κ_{t−1,n}) A(κ_d)) A(κ_{t−1,n}) [ I_2(κ_d) I_0(κ_d) − I_1^2(κ_d) ] / I_0^2(κ_d),

where Ã(a) = dA^{-1}(a)/da = (2 − a^2 + a^4)/(1 − a^2)^2.
By denoting the previous derivative as B(κ_d) = ∂κ̃_{t−1,n}/∂κ_d, we obtain the expression in the manuscript.
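The closed form of Ã(a) can be checked numerically against a central finite difference of A^{-1}(a) (sketch; function names are ours):

```python
def a_inv(a):
    # Approximate inverse A^{-1}(a) ~ (2a - a^3)/(1 - a^2) used in the text.
    return (2 * a - a ** 3) / (1 - a ** 2)

def a_inv_prime(a):
    # Closed-form derivative stated in the text: (2 - a^2 + a^4)/(1 - a^2)^2.
    return (2 - a ** 2 + a ** 4) / (1 - a ** 2) ** 2

def central_diff(f, a, h=1e-6):
    # Second-order accurate numerical derivative.
    return (f(a + h) - f(a - h)) / (2 * h)
```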
APPENDIX D DERIVATION OF THE BIRTH PROBABILITY
In this section we derive the expression for τ_j by computing the integral (17), using the probabilistic model defined above (the index j is omitted). Since we have already seen that p(s_{t−L+1}) is also a von Mises distribution, we can use the same reasoning to marginalize with respect to s_{t−L+1}. This strategy yields the recursion presented in the main text.
∂Q/∂κ_y ∝ ∑_{n,m} α_tmn ω_tm [ A(κ_tn) cos(μ_tn − y_tm) − A(κ_y ω_tm) ],
∂Q/∂κ_d = ∑_{n=1}^{N} [ A(κ_tn) cos(μ_tn − μ_{t−1,n}) − A(κ̃_{t−1,n}) ] B(κ_d).
Fig. 1: Results obtained with recordings #1 (left) and #2 (right) from Task 6 of the LOCATA dataset. Top to bottom: vM-PHD [12], GM-FO [14], vM-VEM (proposed) and ground-truth trajectories. Different colors represent different audio sources. Note that vM-PHD is unable to associate sources with trajectories.
p(ŷ_{t−L:t}, s_{t−L:t}) = ∏_{τ=0}^{L} p(ŷ_{t−L+τ} | s_{t−L+τ}) ∏_{τ=1}^{L} p(s_{t−L+τ} | s_{t−L+τ−1}) p(s_{t−L}).

We first marginalize s_{t−L}. To do that, we notice that if p(s_{t−L}) follows a von Mises distribution with mean μ̃_{t−L} and concentration κ̃_{t−L}, then we can write:

p(ŷ_{t−L} | s_{t−L}) p(s_{t−L}) = M(ŷ_{t−L}; s_{t−L}, ω̄_{t−L} κ_y) M(s_{t−L}; μ̃_{t−L}, κ̃_{t−L}) = M(s_{t−L}; μ_{t−L}, κ_{t−L}) I_0(κ_{t−L}) / ( 2π I_0(ω̄_{t−L} κ_y) I_0(κ̃_{t−L}) ),

with

μ_{t−L} = tan^{-1}( [ ω̄_{t−L} κ_y sin ŷ_{t−L} + κ̃_{t−L} sin μ̃_{t−L} ] / [ ω̄_{t−L} κ_y cos ŷ_{t−L} + κ̃_{t−L} cos μ̃_{t−L} ] ),
κ_{t−L}^2 = (ω̄_{t−L} κ_y)^2 + κ̃_{t−L}^2 + 2 ω̄_{t−L} κ_y κ̃_{t−L} cos(ŷ_{t−L} − μ̃_{t−L}),

where we used the harmonic addition theorem. Now we can effectively compute the marginalization. The two terms involving s_{t−L} are:

∫ M(s_{t−L+1}; s_{t−L}, κ_d) M(s_{t−L}; μ_{t−L}, κ_{t−L}) ds_{t−L} ≈ M(s_{t−L+1}; μ̃_{t−L+1}, κ̃_{t−L+1}),

with μ̃_{t−L+1} = μ_{t−L} and κ̃_{t−L+1} = A^{-1}(A(κ_{t−L}) A(κ_d)). Therefore, the marginalization with respect to s_{t−L} yields the following result:

∫ p(ŷ_{t−L:t}, s_{t−L:t}) ds_{t−L:t} = I_0(κ_{t−L}) / ( 2π I_0(ω̄_{t−L} κ_y) I_0(κ̃_{t−L}) ) ∫ ∏_{τ=1}^{L} p(ŷ_{t−L+τ} | s_{t−L+τ}) ∏_{τ=2}^{L} p(s_{t−L+τ} | s_{t−L+τ−1}) p(s_{t−L+1}) ds_{t−L+1:t}.
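The harmonic addition step used repeatedly above, κ_1 cos(s − μ_1) + κ_2 cos(s − μ_2) = κ̄ cos(s − μ̄), is just a sum of two planar vectors; a quick numerical check (function name is ours):

```python
import math

def vm_product_params(mu1, k1, mu2, k2):
    # Mean and concentration of the (unnormalized) product of two von Mises
    # factors, obtained via the harmonic addition theorem (vector sum).
    cx = k1 * math.cos(mu1) + k2 * math.cos(mu2)
    cy = k1 * math.sin(mu1) + k2 * math.sin(mu2)
    return math.atan2(cy, cx), math.hypot(cx, cy)
```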
TABLE I: Method evaluation with the LOCATA dataset.

Peak detection is applied with a threshold of 0.3. Both DOA estimates and associated confidence measures, w_tm, are provided at each frame.
Y. Ban, X. Alameda-Pineda and R. Horaud are with Inria Grenoble Rhône-Alpes, Montbonnot Saint-Martin, France. E-mail: [email protected] 2 C. Evers is with Dept. Electrical and Electronic Engineering, Imperial College London, Exhibition Road, SW7 2AZ, UK. Email: [email protected] This work was supported by the ERC Advanced Grant VHIA #340113 and the UK EPSRC Fellowship grant no. EP/P001017/1.
REFERENCES

[1] J. Vermaak and A. Blake, "Nonlinear filtering for speaker tracking in noisy and reverberant environments," in IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 5, 2001, pp. 3021-3024.
[2] D. B. Ward, E. A. Lehmann, and R. C. Williamson, "Particle filtering algorithms for tracking an acoustic source in a reverberant environment," IEEE Transactions on Speech and Audio Processing, vol. 11, no. 6, pp. 826-836, 2003.
[3] X. Zhong and J. R. Hopgood, "Particle filtering for TDOA based acoustic source tracking: Nonconcurrent multiple talkers," Signal Processing, vol. 96, pp. 382-394, 2014.
[4] D. Bechler, M. Grimm, and K. Kroschel, "Speaker tracking with a microphone array using Kalman filtering," Advances in Radio Science, vol. 1, pp. 113-117, 2003.
[5] J. Traa and P. Smaragdis, "A wrapped Kalman filter for azimuthal speaker tracking," IEEE Signal Processing Letters, vol. 20, no. 12, pp. 1257-1260, 2013.
[6] C. Evers, E. A. Habets, S. Gannot, and P. A. Naylor, "DoA reliability for distributed acoustic tracking," IEEE Signal Processing Letters, 2018.
[7] I. Marković and I. Petrović, "Bearing-only tracking with a mixture of von Mises distributions," in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012, pp. 707-712.
[8] R. P. S. Mahler, "Multitarget Bayes filtering via first-order multitarget moments," IEEE Trans. Aerosp. Electron. Syst., vol. 39, no. 4, pp. 1152-1178, Oct. 2003.
[9] B.-N. Vo and W.-K. Ma, "The Gaussian mixture probability hypothesis density filter," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4091-4104, 2006.
[10] Y. Ma and A. Nishihara, "Efficient voice activity detection algorithm using long-term spectral flatness measure," EURASIP Journal on Audio, Speech, and Music Processing, vol. 2013, no. 1, pp. 1-18, 2013.
[11] C. Evers and P. A. Naylor, "Acoustic SLAM," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 26, no. 9, pp. 1484-1498, 2018.
[12] I. Marković, J. Ćesić, and I. Petrović, "Von Mises mixture PHD filter," IEEE Signal Processing Letters, vol. 22, no. 12, pp. 2229-2233, 2015.
[13] S. Ba, X. Alameda-Pineda, A. Xompero, and R. Horaud, "An on-line variational Bayesian model for multi-person tracking from cluttered scenes," Computer Vision and Image Understanding, vol. 153, pp. 64-76, 2016.
[14] X. Li, Y. Ban, L. Girin, X. Alameda-Pineda, and R. Horaud, "Online localization and tracking of multiple moving speakers in reverberant environment," arXiv preprint, 2018.
[15] C. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.
[16] K. V. Mardia and P. E. Jupp, Directional Statistics. John Wiley & Sons, 2009, vol. 494.
[17] H. W. Löllmann, C. Evers, A. Schmidt, H. Mellmann, H. Barfuss, P. A. Naylor, and W. Kellermann, "The LOCATA challenge data corpus for acoustic source localization and tracking," in IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM), Sheffield, UK, July 2018.
[18] X. Li, R. Horaud, L. Girin, and S. Gannot, "Voice activity detection based on statistical likelihood ratio with adaptive thresholding," in IEEE International Workshop on Acoustic Signal Enhancement, 2016, pp. 1-5.
Elasticizing Linux via Joint Disaggregation of Memory and Computation

Ehab Ababneh, Zaid Al-Ali, Sangtae Ha, Richard Han, Eric Keller
Department of Computer Science, University of Colorado Boulder

arXiv:1806.00885

Abstract: In this paper, we propose a set of operating system primitives which provides a scaling abstraction to cloud applications in which they can transparently be enabled to support scaled execution across multiple physical nodes as resource needs go beyond that available on a single machine. These primitives include stretch, to extend the address space of an application to a new node; push and pull, to move pages between nodes as needed for execution and optimization; and jump, to transfer execution in a very lightweight manner between nodes. This joint disaggregation of memory and computing allows for transparent elasticity, improving an application's performance by capitalizing on the underlying dynamic infrastructure without needing an application re-write. We have implemented these primitives in a Linux 2.6 kernel, collectively calling the extended operating system ElasticOS. Our evaluation across a variety of algorithms shows up to 10x improvement in performance over standard network swap.
Introduction
We are in the midst of a significant transition in computing, where we are consuming infrastructure rather than building it. This means that applications have the power of a dynamic infrastructure underlying them, but many applications struggle to leverage that flexibility. In this paper, we propose supporting this at the operating system level with new primitives to support scaling.
To gain some context on the challenge with scaling, we first discuss how it is predominantly handled today. The most straight forward option, which required no changes to applications, is to simply get a bigger (virtual) machine as load for an application increases. Cloud providers, such as Amazon [2], offer a wide range of machine sizes which cost anywhere from less than a penny per hour to a few dollars per hour. For cost efficiency, companies wish to use the "right" size machine, which might change over time. But, transitioning from one VM size to another can pose challenges. In some cases, we can take snapshots (e.g., with CRIU [8]) and migrate the application to a bigger/smaller VM, but this can be disruptive, and the management of the application needs scripts and other infrastructure to trigger scaling.
An alternative is to re-write the applications with scaling in mind. To leverage the scaling, commonly applications are built around frameworks such as Hadoop [12], Apache Spark [26], MPI [7] or PGAS [1]. These frameworks are designed with the flexibility of being able to execute tasks on a varying amount of distributed resources available. The problem here is two fold. First, to leverage this, the application needs to be built for this -a significant challenge (requiring a re-write) for any existing application, and forcing application developers to evaluate and become fluent in the latest frameworks and potentially adapt the application as the frameworks change. Second, and perhaps more challenging, is that not every application fits into one of these frameworks.
Another approach to scaling is to replicate VMs/containers when an application becomes popular and requires more resources. This too introduces burdens on the programmer in order to synchronize shared data and state across multiple replicas, as well as to script their applications to spawn/delete replicas depending on load.
In short, in each case, the burden of scaling is placed on programmers. We argue that developers shouldn't need to be experts in cloud management and other frameworks, in addition to also needing to be fluent in programming and their application domain. Instead, the operating system should provide more support. Broadly speaking, the job of an operating system is to make the life of an application developer easier (through abstraction). A modern OS provides virtual memory abstractions, so developers do not have to coordinate memory use among applications, network socket abstractions, so developers can send messages without needing to be intimately familiar with the underlying network protocols, and many other abstractions (file system, device, multitasking), all to support developers. We propose that scaling should be an OS abstraction.
Related Work: We are not the first to propose that operating systems should support scaling. Scaling of memory approaches are popular and include efforts such as RAMCloud [22], which requires refactoring in user space to utilize its memory scaling capabilities. An early approach to sharing memory called DSM [21,19,17,16,14,6] suffered from scaling issues, but more recently disaggregation-based approaches towards memory have emerged that are centered around transparent scaling of memory behind the swap interface, such as NSwap, Infiniswap, X-Swap and Memx [20,10,29,4]. Scaling of computation approaches include process migration to machines with more resources [8,13,3,25,18,27], in addition to the scaling frameworks and replication methods mentioned previously. Approaches to accelerate process migration [24,15] have been proposed to hide the latency of migration by copying most of the process state in the background and only copying a small delta to the new machine after halting the process. Single system image (SSI) OSs such as Kerrighed, MOSIX, Sprite and Amoeba [15,3,5,28] have been created to support operation across a distributed cluster of machines. These approaches typically employ a process migration model to move computation around cluster nodes and require applications to be recompiled for these specialized OSs.
These prior efforts in OS scaling suffer from a variety of limitations. Network swap-based approaches, while being a step in the right direction of disaggregation in the data center, miss the opportunity to exploit joint disaggregation of computation and memory for improved performance. Execution is typically assumed to be pinned on one machine, while memory pages are swapped back and forth remotely across the network. This can result in excessive swapping of pages over the network. In these cases, moving computation to a remote machine, toward a pocket of locality stored in that machine's memory, would result in substantially faster execution and lower network overhead, as we show later.
Combining current network swap approaches with existing process migration techniques to alleviate excessive network swapping overhead would suffer from two major limitations. First, each decision to move computation would incur the overhead of copying the entire address space, a significant amount of overhead to impose on the network. Second, even with accelerated process migration, there is a substantial delay between the time the decision is made to migrate and when the migration completes, by which time the conditions that triggered the original migration decision may be obsolete due to the length of time needed to copy all the state.
Introducing ElasticOS: In response to these shortcomings, we introduce four primitives to realize the scaling OS abstraction: stretch, jump, push, and pull. These scaling abstractions are designed to be transparent, efficient, and practically useful. Our approach is inspired by an early work that hypothesized elasticizing operating systems as a hot research topic, but did not build a working implementation of the proposed concept [11]. Stretch is used when an application becomes overloaded (e.g., a lot of thrashing to disk is occurring): the operating system stretches the application's address space to another machine, extending the amount of memory available to the application. Push and pull allow memory pages to be transferred between machines to which the application has been stretched, whether proactively to optimize placement, or reactively to make data available where it is needed. Jump allows program execution to transfer to a machine to which the application has been stretched. Unlike heavyweight process migration, our jump primitive is a lightweight transfer of execution that copies only the small amount of state needed to begin execution immediately on the remote machine, such as register state and the top of the stack. Any additional state that is needed is faulted in using pulls from the rest of the distributed address space. Having both jumping and push/pull allows the OS to choose between moving the data to where the execution needs it, and moving the execution to where the data is. This supports the natural, but not necessarily perfect, locality that exists in applications.
To demonstrate the feasibility of this scaling approach, we extended the Linux kernel with these four primitives, and call the extended Linux ElasticOS. Figure 1 provides a high-level view of ElasticOS. An instance of ElasticOS is capable of spanning a number of nodes in the data center, and the number of spanned nodes can elastically scale up or down depending upon application demand. The application is executed within ElasticOS, and the scaling primitives are used to support this execution across a distributed collection of resources.

Figure 2: Illustration of ElasticOS abstractions. Each box labeled with a number above is a compute node, with the shaded boxes within representing individual pages. Starting with execution on a single machine in (0), when memory nears being filled, we stretch to two nodes in (1) and balance the pages in (2). We then push and pull pages in (3), with the red shaded pages going from node 1 to 2 (push) and from node 2 to 1 (pull). Finally, in (4) and (6) we see too many page faults (resulting in pulls), so we decide to jump from node 1 to 2 in (5) and from node 2 to 1 in (7), respectively.
To demonstrate the desirability of these four primitives, we evaluated a set of applications with large memory footprints and compared against network swap, which supports the pull and push primitives and has itself shown the performance benefits of transparently scaling memory resources across multiple machines. We illustrate the additional benefit of also transparently scaling computing resources across multiple machines, forming a system with joint disaggregation of memory and computation. Our evaluation shows up to 10x speedup over network swap, as well as a reduction of network transfer between 2x and 5x.
In summary, we make the following contributions.
• Introduce scaling as a new OS abstraction, specifically with four primitives: stretch, push, pull, and jump.
• Provide an architecture and implementation of these abstractions in Linux.
• Demonstrate through an evaluation on Emulab servers that ElasticOS achieves up to 10x speedup over network swap across a range of applications, and up to 5x reduction in network overhead.
ElasticOS Primitives in Action
In this section, we describe the four primitives through an illustration of a running program. Figure 2 graphically presents each of the primitives. In this figure, we can see nodes 1 and 2, with pages inside each node: these represent physical memory, with a given page either used (shaded) or unused (unshaded). As a starting point, an application is running on a single machine. Over time, this application grows in memory use to nearly the amount of memory in the entire node (label 0 in the figure). This is when ElasticOS decides to stretch the process, that is, to scale out by using memory on a second node (label 1). At this point, the memory available to the application has grown (doubled in the figure, since it is now on two nodes with equal memory, though equal memory is not required in ElasticOS). ElasticOS can choose to balance the pages at this point, transferring pages to the (new) remote node (label 2). Pages to transfer can be chosen by a policy such as least recently used. Once the process is stretched, it is effectively running on multiple machines, but each node hosts only some of the pages. At this point, execution continues on the original machine. As not all of the pages are on this machine (which would have naturally happened over time, even if we didn't balance pages), when the process tries to access a page, it might trigger a page fault. In ElasticOS, the page fault handler is modified to handle this situation. At this point, we perform a pull, where the page from a remote machine (that caused the fault) is transferred to the local machine and the process is resumed. The process will be able to make progress, as the page that is being accessed (and caused a fault) is now local.
If space is needed to perform a pull, we can perform a push to free up memory for the incoming page by transferring a page to a remote node (that we have stretched the application to). Push (and pull) is more versatile, as they can be performed proactively as well -moving pages around, in the background, to optimize the placement for locality (label 3).
The idea of locality is important, especially in regards to our final primitive, jump. Assuming that programs have locality, there is a certain point at which, when we transition into a new pocket of locality, that the amount of data that forms that locality is high. It is therefore advantageous to jump execution to the data, rather than pull it all into the local node (as is done in network swap). In the figure, in steps (4 and 6), the area highlighted in red represents an island of locality that would be more advantageous to jump to rather than pulling the entire group of pages to the local machine. When to jump is an important decision -jumping too much can hurt performance (constantly transferring execution, without making progress), but not jumping enough can also hurt performance (transferring lots of data back and forth between machines). As such, we created an initial algorithm, and implemented it as a flexible module within which new decision making algorithms can be integrated seamlessly.
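As a concrete illustration, the counter-based jumping decision described above can be sketched in a few lines. This is a simplified user-space model in Python, not the in-kernel module; the class and method names are our own, and we assume (as in our implementation) that the counter is reset after each jump decision.

```python
# Hypothetical sketch of ElasticOS's counter-based jumping policy.
# A remote-page-fault counter builds up; once it reaches a threshold,
# execution jumps to the remote node and the counter resets.

class JumpPolicy:
    """Jump when the count of remote page faults hits a threshold."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.remote_faults = 0

    def on_page_fault(self, is_remote):
        """Return True when execution should jump to the remote node."""
        if not is_remote:
            return False            # local faults do not affect the decision
        self.remote_faults += 1
        if self.remote_faults >= self.threshold:
            self.remote_faults = 0  # reset after deciding to jump
            return True
        return False
```

Picking the threshold trades off jumping too much (thrashing execution between nodes) against jumping too little (excessive remote pulls), which is exactly the tension explored in the evaluation.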
ElasticOS Architecture
In this section, we describe the main components of the ElasticOS architecture. ElasticOS can be built as a service integrated into existing and commercially-available operating systems. Figure 3 illustrates the main functional elements that enable a process (e.g., a.out) to be stretched for distributed execution over two ElasticOS nodes. For clarity, we depict pushing and pulling from the perspective of node 1, but in reality all nodes have symmetric capabilities that enable pushing, pulling, and jumping in all directions.
In the remainder of this section, we will provide a more detailed architectural overview focusing on mechanisms that are roughly OS-independent in order to achieve stretching (3.1), pushing (3.2), pulling (3.3), and jumping (3.4). The discussion of OS-dependent elements specific to the Linux implementation is reserved for Section 4.
Stretching
Stretching is responsible for enabling a process to span multiple nodes. This consists of an initial stretch operation, as well as ongoing synchronization.
Initial stretch operation: In order for a process to span multiple nodes, it needs a process shell on each machine. In this way, stretching resembles previous Checkpoint/Restore (C/R) works [8,3], except that less information needs to be written into the checkpoint. Here we need to create a process shell that will remain in a suspended state, rather than a wholly-independent runnable replica. This makes stretching faster than standard C/R. The checkpoint requires only kernel-space process metadata: virtual memory mappings (mmaps), the file descriptor table, the scheduling class, and any other metadata that is not updated frequently. Other information that is typically modified at a high rate, such as pending signals, register state, and stack frames, need not be in the checkpoint and will be carried over from the running process whenever it jumps (3.4).
As shown in Figure 4, stretching is triggered by the EOS manager, which continuously monitors process' memory usage and issues a newly-created signal (SIGSTRETCH) whenever it detects a process that is too big to fit into the node where it is running. Our special kernel-space handler (eos sig handler) intercepts the signal and instructs the process-export module (p export) to send the checkpoint using a pre-created TCP socket to a process-import module (p import) waiting in the other node. The latter will, then, create a shell process by allocating the necessary kernel-space structures and filling them in with checkpoint data.
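The stretch handshake above can be modeled as a serialize/deserialize pair: p export ships only the slow-changing metadata, and p import builds a suspended shell from it. The sketch below is an illustrative user-space model in Python; the field names stand in for the kernel structures and are not the actual checkpoint format.

```python
# Toy model of the stretch flow: p_export serializes infrequently-updated
# process metadata, and p_import builds a suspended process shell from it.
# Field names (pid, mmaps, fd_table, sched_class) are illustrative only.

import json

def p_export(process):
    """Serialize only slow-changing metadata (no registers or stack)."""
    checkpoint = {k: process[k]
                  for k in ("pid", "mmaps", "fd_table", "sched_class")}
    return json.dumps(checkpoint)

def p_import(wire_bytes):
    """Create a suspended process shell from the received checkpoint."""
    shell = json.loads(wire_bytes)
    shell["state"] = "suspended"  # the shell waits until a future jump
    return shell
```

Note that rapidly-changing state (register contents, top stack frames) is deliberately absent: it travels with each jump instead, which is what keeps stretching cheaper than a full C/R checkpoint.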
State Synchronization: After the process has been stretched, and its replica has been created on another machine, additional changes in process state on the first machine will need to be propagated to the replica. This is handled in two ways. Rapid changes in state are handled using the jumping mechanism, as explained later. Changes in state at a more intermediate time scale such as mapping new memory regions and opening or closing files are handled using multicast sockets to listeners on each participating node.
One of the pitfalls to avoid here is that the operating system scheduler may delay flushing all such synchronization messages until after a jump is performed. If this happens, the system may arrive at an incorrect state or even crash. It is therefore crucial to flush all synchronization messages before a jump is performed.
Pushing
Now that the process has a presence on more than one machine, its memory pages are pushed between nodes in order to balance the load among participating nodes. Our page pusher piggybacks on the existing OS's swap management (see Figure 5). Typically, the swap daemon scans least-recently-used (LRU) lists to select the least recently used page frames for swapping. Our page balancer modifies this page scanner to identify pages mapped by elasticized processes (shaded pages in Figure 5) using the reverse mapping information associated with each page. These are then sent to a virtual block device client (VBD), similar to the one described in [10], after updating the respective page table entries (PTEs) in the elastic page table. The VBD then forwards the page, along with relevant information such as the process ID and the page's starting virtual address, to the page injection module (pg inject) on the remote node, which allocates a new page, fills it with the proper content, and updates the replica's elastic page table.
Maintaining accurate information in the elastic page tables when pushing pages is crucial to correct execution. As we will see later, jumping depends on this information for locating pages in the system.
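To make the bookkeeping concrete, the following toy model shows a push updating both the page's contents location and the elastic page table entry that jumping later relies on. It is a deliberately simplified Python sketch; plain dictionaries stand in for PTEs, LRU lists, and the VBD transfer.

```python
# Toy model of a push: evict a page of an elasticized process to a remote
# node and record its new location in the elastic page table. The dicts
# below are simplified stand-ins for kernel structures.

def push_page(elastic_page_table, local_pages, remote_pages, vaddr):
    """Move one page to the remote node and update its table entry."""
    # Transfer the page contents (in the real system, via the VBD socket).
    remote_pages[vaddr] = local_pages.pop(vaddr)
    # Update the elastic page table; jumping relies on this entry being
    # accurate to locate pages in the system.
    elastic_page_table[vaddr] = "remote"
```

In the real kernel the "remote" marker is a swap entry pointing at our virtual block device, so a later access faults through the normal swap path.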
Pulling
Partitioning the process's memory footprint will inevitably result in references to remote pages. These are handled by our modified page fault handler (Figure 6). On a page fault, the handler consults the elastic page table to identify the page's location. If the page is on a remote node, the page's starting virtual address and the process ID are forwarded to the VBD, which then contacts the page extraction module (pg extract) on the respective node to pull the page. Once it receives the page's content, the VBD client restores the process's access to the page.
Whenever a remote page fault is handled as described above, page fault counters are updated. This is required by ElasticOS's jumping policy (Section 3.4), which always tries to co-locate execution with its most-referenced memory.
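The pull path can be sketched in the same style as the push above: the fault handler consults the elastic page table, pulls the page if it is remote, and bumps the counter that the jumping policy reads. This is an illustrative Python model, not the kernel fault handler; the structure names are ours.

```python
# Sketch of the modified page-fault path: consult the elastic page table,
# pull the page from the remote node if needed, and update the remote
# fault counter used by the jumping policy. Illustrative only.

def handle_fault(elastic_page_table, local_pages, remote_pages,
                 counters, vaddr):
    """Return the page contents, pulling from the remote node if needed."""
    if elastic_page_table[vaddr] == "remote":
        # Pull the page (in the real system, via the VBD and pg_extract).
        local_pages[vaddr] = remote_pages.pop(vaddr)
        elastic_page_table[vaddr] = "local"
        counters["remote_faults"] += 1  # input to the jumping policy
    return local_pages[vaddr]
```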
Jumping
Jumping is the act of transferring execution from one node to another. For this, there is both a jumping mechanism that performs a lightweight process migration, and the jumping policy to determine when to jump.
Jumping mechanism: Jumping is a lightweight mechanism similar to checkpoint/restore systems. In contrast to stretching, with jumping the process actually transfers execution, and carries in the checkpoint only the information that changes at a high rate. This includes CPU state, the top stack frames, pending signals, auditing information, and I/O context. The overall size of the jumping checkpoint data is dominated by the stack frames, so it is very important to include only the top-most stack memory pages that are necessary for correct execution.
As shown in Figure 7, whenever a jump is deemed necessary by the jumping policy in the EOS Manager, it sends a special signal (SIGJUMP) to the process, which is then routed to the eos sig handler which will then instruct the p export module to checkpoint the process and send the information to the other node's p import module. The latter will fill in the appropriate kernel-space structures and set the process's state to runnable. Notice here that when jumping, no new structures need to be allocated since the process has been already stretched to the target node. Also, notice that the process at the source node will remain in a suspended state. In essence, jumping resembles rescheduling a process from one CPU to another across the boundaries of a single machine.
Jumping Policy Algorithm: Maximizing locality is crucially important to the application's performance. A naive approach to moving execution and memory pages around the system will inevitably increase the rate of remote page faults, leading to poor performance. Thus, a good policy for moving processes close to their most frequently used memory is of critical importance. ElasticOS can achieve this goal by overcoming two challenges, namely having a good sense of how to group interdependent memory pages together on the same node, and detecting which of those groups is the most frequently accessed one.
The first challenge can be overcome by taking advantage of the natural groupings that memory pages belonging to an application tend to form due to recency of reference. This property is already evident in the wide adoption of the LRU algorithm for page replacement in most modern OSs. Thus, we can extend LRU algorithms to work in a multi-node system, where pages evicted from one node's RAM are immediately shipped to another node via our pushing mechanism.
The second challenge can be addressed by implementing a jumping policy that: 1) monitors the process's page accesses to find the "preferred" node, and 2) reschedules the process to the preferred node if it is running on any of the other ones.
Bear in mind that accurately tracking memory references for a particular process can be a challenging task, since CPUs do not report every memory access to the OS for performance reasons. This leaves us with options that provide the next best thing, such as counting the number of times the CPU sets the PG_ACCESSED flag for a particular page frame on the x86_64 architecture, or tracking handled page faults.
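One plausible way to combine per-node fault tracking with the "preferred node" decision is sketched below. The helper names are hypothetical, and the sketch assumes per-node fault counts are already being collected by one of the mechanisms just described.

```python
# Hypothetical helpers for the jumping policy: track handled page faults
# per node and reschedule the process toward its most-referenced memory.

def preferred_node(fault_counts):
    """fault_counts maps node id -> page faults resolved on that node."""
    return max(fault_counts, key=fault_counts.get)

def should_jump(current_node, fault_counts):
    """Reschedule only if the process is not already on the preferred node."""
    return preferred_node(fault_counts) != current_node
```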
ElasticOS Implementation
We implemented ElasticOS as a fork of the Linux kernel v2.6.38.8. We chose the 2.6 kernel because it contains the key features that we needed to demonstrate elasticity, e.g. support for 64-bit x86 architectures and a reasonably featured virtual memory manager, while avoiding unnecessary complexity and instability in later kernels.
System Startup: Whenever a machine starts, it sends a message on a pre-configured port announcing its readiness to share its resources. The message includes two groups of information: first, connectivity parameters such as IP addresses and port numbers; second, the machine's available resources, including total and free RAM. Next, each participating node records the information received about the newly-available node and initiates network connections for the various clients. Finally, the EOS manager is started, which periodically scans processes and examines their memory usage, searching for opportunities for elasticity.
Identifying such opportunities can be achieved by examining the per-process counters Linux maintains to keep track of memory usage. They include: 1) task_size inside each process's memory descriptor (i.e., struct mm_struct) which keeps track of the size of mapped virtual memory, 2) total_vm inside the same structure to track the process's mapped RAM pages, 3) rss_stat of type struct mm_rss_stat which contains an array of counters that further breaks down task_size into different categories (i.e., anonymous and file-mapped RAM pages used, and swap entries), and 4) maj_flt variable inside the struct task_struct which counts the number of swap-ins triggered by the process.
Linux also maintains memory utilization indicators called watermarks. There are three levels of watermarks: min, low, and high. These levels drive the kernel swap daemon's (kswapd) activity. When memory usage reaches the high watermark, page reclaim starts, and when it goes down to low watermark, page reclaim stops.
ElasticOS leverages these watermarks and the level of kswapd's activity to detect periods of memory pressure. Further, it identifies specific memory-intensive processes using the counters mentioned above and marks them for elasticity.
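The watermark-driven detection can be approximated with a simple hysteresis function, mirroring how kswapd starts reclaim at the high watermark and stops at the low one. The sketch below is a user-space Python approximation; the function signature and thresholds are illustrative, not the kernel's interface.

```python
# Simplified model of memory-pressure detection via zone watermarks:
# reclaim (and stretch candidacy) begins once usage crosses the high
# watermark and stops once it falls back to the low watermark.

def update_pressure(used, low, high, reclaiming):
    """Hysteresis between the low and high watermarks, as in kswapd."""
    if used >= high:
        return True        # start page reclaim / consider stretching
    if used <= low:
        return False       # pressure relieved, stop reclaim
    return reclaiming      # between watermarks: keep the previous state
```

Hysteresis matters here: without the dead band between the two watermarks, the manager would oscillate between reclaiming and not reclaiming on every small fluctuation in memory usage.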
Stretching Implementation: The Linux kernel forces each process to handle pending signals upon entering the CPU. This is when our in-kernel signal handler, the p export module, checks for pending ElasticOS-specific signals. This design choice of checkpoint creation logic placement gives us access to register state, while preventing the process from updating its own memory while a checkpoint is in progress.
The handler, then, accesses the process information while writing them to a socket initialized during system startup. At the other end of the socket, the p import module collects the information and uses it to create the new shell process.
The key items that are included in this checkpoint consist of:
contents of the process descriptor (struct task_struct), the memory descriptor (struct mm_struct), virtual memory mappings (struct vm_area_struct), open files information (struct files_struct), scheduling class information (struct sched_class), signal handling information (struct sighand_struct), and a few others. The overall size of this checkpoint in our experiments averages around nine kilobytes, dominated by the process's data segment, which is also included in the checkpoint.
Note, that we do not need to copy memory pages containing the code, since our implementation assumes that the same file system is available on all participating nodes. Instead, we carry over with the checkpoint data the mapped file names. Our p import module will locate and map the same files at the appropriate starting addresses.
P import handles process creation the same way as if the process were forked locally, substituting missing values with ones from the local machine. For example, it assigns the newly created process a "baby sitter" to replace the real parent from the home node.
Finally, the p import module leaves the newly created process in a suspended state and informs the p export module that it can allow the original process in the source node to resume execution.
Pushing and Pulling Implementation: We extend Linux's second-chance LRU page replacement algorithm by adding multi-node page distribution awareness to it. In this version, pages selected for swapping out belong to elasticized processes and are pushed to another node and injected into the address space of the process duplicate there. Second-chance LRU groups pages in reference-based chronological order within the page lists, so it is most likely that pages at the rear of the queue, which are typically considered for eviction, are related in terms of locality of reference.
One challenge that needed to be solved to implement page balancing is identifying pages belonging to an elasticized process and what virtual address they are mapped to. Luckily, Linux maintains a functionality called reverse mapping, which links anonymous pages to their respective virtual area map. By walking this chain of pointers and then finding which process owns that map, we can tell them apart from other pages owned by other processes in the system. Then, with simple calculations we can find the starting virtual address of that page. As for moving pages from one machine to another, we created a virtual block device (VBD) that sends page contents using a socket connected to a page server on the other machine (VBD Server) rather than storing it to a storage medium. This was shown in Figure 6. This virtual block device is added to the system as a swap device. All pages belonging to an elasticized process sent to the other machine are allocated swap entries from this device. This swap entry is inserted into the page table of the elasticized process where the page is mapped. As a result, if that page needs to be faulted in later on, the swap entry will route the page fault to our VBD. This design choice allows us to reuse Linux's page eviction and faulting code.
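The extended eviction path can be modeled as a standard second-chance (clock) scan whose victims, when owned by an elasticized process, are pushed to a peer node instead of being written to a local swap device. The Python sketch below is a toy model; the real logic lives in the kernel's page reclaim code, and the field names are ours.

```python
# Toy second-chance (clock) eviction with multi-node awareness: victims
# owned by an elasticized process are pushed to a peer node rather than
# swapped to a local storage medium. Illustrative only.

def select_victim(pages):
    """pages: list of dicts with a 'referenced' bit; classic second chance."""
    while True:
        page = pages.pop(0)
        if page["referenced"]:
            page["referenced"] = False  # clear the bit: second chance
            pages.append(page)          # move to the rear of the queue
        else:
            return page

def evict(pages, remote_node):
    """Evict one page; push it if it belongs to an elasticized process."""
    victim = select_victim(pages)
    if victim["elastic"]:
        remote_node.append(victim)      # push instead of swapping to disk
        return "pushed"
    return "swapped"
```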
Jumping: Whenever a remote page fault occurs, a remote page fault counter is incremented; we use this count later to decide when to jump. As the counter builds up, it shows the tendency of where page faults are "going". If the count hits a predetermined threshold, the system determines that the process would better exploit locality of reference by jumping to the remote node. Jumping starts by sending a special signal to the target process, which is handled by an in-kernel checkpoint module. This module then copies only the information necessary for the process to resume on the other node. This information includes: 1) the thread context, which contains the register state and other flags, 2) pending signals (i.e., struct sigpending contents inside struct task_struct), 3) auditing counters, and 4) the stack page frames (i.e., RAM pages mapped by the vm_area_struct with the VM_GROWSDOWN flag set). In our tests, the checkpoint size was roughly 9KB and was dominated by the two stack page frames (4KB each). Other information about the process is synchronized using a special module described next. These pieces of information are sent to the restart module on the remote node via a pre-established TCP connection. In turn, the restart module updates the process information with the checkpoint data and sends a SIGCONT to the process, informing the scheduler that it is ready to run again. The process on the source machine remains in an interruptible wait state (i.e., suspended), guaranteeing that only one clone of the process is running at any given instant.
State Synchronization: The state synchronization component is built as a collection of user-space programs and a kernel module. The user space portion simply sets up the network connections and then passes their socket descriptors to the kernel module, which exposes hook functions to the main kernel.
When an elasticized process issues a system call that modifies its in-kernel data structures (e.g., mmap), the appropriate kernel module hook function is called (e.g., sync new mmap), which then multicasts a message to all participating nodes. The message contains all the information necessary (e.g., the region's starting address, its length, mapping flags, and file name) to apply the same operation on all process replicas. Multicast listeners then relay the message to the appropriate hook functions, which apply the change (i.e., call mmap on the process replica).
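A minimal model of such a synchronization message for a new mmap might look like the following. The field names and JSON encoding are assumptions for illustration, not the actual wire format used by the kernel module.

```python
# Sketch of a state-synchronization message for a new mmap: the hook on
# the home node multicasts the parameters needed to replay the mapping
# on every replica. Encoding and field names are illustrative.

import json

def sync_new_mmap(start, length, flags, filename):
    """Build the message the sync-new-mmap hook would multicast."""
    return json.dumps({"op": "mmap", "start": start, "len": length,
                       "flags": flags, "file": filename})

def apply_sync(replica_maps, message):
    """Listener side: replay the operation on the process replica."""
    msg = json.loads(message)
    if msg["op"] == "mmap":
        replica_maps[msg["start"]] = (msg["len"], msg["flags"], msg["file"])
```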
Performance Evaluation
In this section, we evaluate the performance of ElasticOS. Specifically, we quantify the benefit of joint disaggregation (memory and computation) by comparing against network swap, which is one-dimensional (scaling memory only) and has previously been shown to outperform not scaling memory at all [20,10]. We note that we do not explicitly compare against just process migration, as the use cases are different: process/VM migration is commonly used to move execution permanently, triggered by contention for resources or by other operational reasons (e.g., planned maintenance), making it heavier weight and not well suited for comparison.
Experimental Setup
We evaluated ElasticOS on the Emulab testbed [9]. We used Emulab D710 nodes with a 64-bit quad-core Xeon processor, 12 gigabytes of RAM, and a gigabit NIC. We chose Emulab D710 nodes because they support Linux kernel 2.6. Our experimental setup for each experiment consists of two nodes connected via gigabit Ethernet ports through a network switch.
To evaluate, we ran tests on a variety of algorithms representing the type of processing that would be a target use case for ElasticOS: large graphs or lists to be processed. Table 1 summarizes these applications and the footprint of each; note that the footprint goes beyond the limits of a single node in Emulab. Specifically, these algorithms typically use 11GB of memory on the first machine, and stretch to a remote machine for the additional memory. In our experimental setup, we employed a basic jumping algorithm to trigger transfer of execution: a simple remote page fault counter is updated for each remote pull, and whenever the counter threshold is reached, the process jumps its execution to the remote machine and the counter is reset. We tested the algorithms with different counter threshold values (32 up to 4M).
For each algorithm, we measure its execution time as well as the network traffic generated, and compare the results of ElasticOS and network swap. To provide a comparison with network swap, hereafter termed Nswap, in a manner that isolates the gains to the benefit of jumping rather than any implementation differences, we use the ElasticOS code but disable jumping. In this way, Nswap tests pin a process on one machine, but use the memory of a remote machine as swap space. In our experiments, both ElasticOS and Nswap spanned two machines. Emulab provides isolation of networking and execution in the testbed from external disturbances.
Micro-benchmarks
An important metric when evaluating ElasticOS is the performance of each individual primitive. These are summarized in Table 2, based on our measurements on Emulab D710 nodes. We'll note that jumping is very fast, taking only 45-55 microseconds. This is substantially lower than reported numbers for process or VM migration, which are measured in seconds (e.g., one benchmark states CRIU's downtime is roughly 3 seconds [23]). Stretching is also only performed once -when a decision is made that this process would benefit from scaling in the future.
We measured pushing and pulling to be between 30-35 microseconds, roughly the time to submit the request and transfer a page (4KB) of data across a network. For jumping to be effective in speeding up execution in ElasticOS, there must be locality. That is, the time for a single jump must be less than the time for the number of remote page pulls that would be saved by jumping. Given our performance microbenchmarks, for jumping to be efficient, the process should save at least two remote page pulls. As we show next, the locality is much greater than this, resulting in substantial speedups.
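The break-even claim follows directly from the measured primitive costs. Using the midpoints of the measured ranges:

```python
# Back-of-the-envelope break-even check: a jump (~45-55us) pays off once
# it avoids enough remote pulls (~30-35us each) to cover its own cost.
import math

JUMP_US = 50   # jump cost, midpoint of the measured 45-55 microseconds
PULL_US = 33   # remote pull cost, midpoint of the measured 30-35 microseconds

# Minimum number of remote pulls a jump must save to be worthwhile.
break_even_pulls = math.ceil(JUMP_US / PULL_US)
```

With these numbers, a jump is profitable whenever it avoids two or more remote pulls, matching the estimate in the text.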
Execution Time and Network Traffic
There are two key metrics to consider when comparing ElasticOS (with jumping, pulling and pushing), to network swap (just pulling and pushing). The first is overall execution time. Here, the key premise behind jumping is that to exploit locality, we should transfer execution to where the data is, rather than pull in the data to where the execution is. The second is the amount of network traffic -jumping needs to transfer context (e.g., the current stack), and pulling/pushing transfers pages.
In Figure 8, we show our measured average execution time for both Nswap and ElasticOS for each of the algorithms we have evaluated. These execution times are averaged over four runs using the threshold that achieves the most improvement. We observe that in the best case, ElasticOS shows substantial performance benefits for most algorithms. For example, Linear Search experienced about an order of magnitude speedup in execution performance, Depth First Search (DFS) achieved about 1.5X delay improvement, while Dijkstra's algorithm achieved no speedup. Table 3 describes the specific threshold values where best performance was achieved in ElasticOS for each algorithm. It also lists the total number of jumps at that threshold as well as the frequency of jumping for each algorithm at that threshold. The jumping rate ranges from less than once per second to hundreds of times per second.
While Figure 8 represents the best case, we were also interested in understanding whether we could find universal threshold values that achieve performance improvements, perhaps not the best ones, regardless of the algorithm. Our analysis found that, with any threshold value above 128, ElasticOS performs better than Nswap for every algorithm, in delay, network overhead, or both.
The use of jumping to exploit locality improves execution time by enabling more pages to be accessed locally, rather than across a network (which is orders of magnitude slower). This also reduces the amount of network traffic, even taking into account the state transferred by the jumps themselves. We can see that ElasticOS reduces the amount of traffic on the network for all algorithms tested by a significant amount: from a 5x reduction for Linear Search to about a 2x reduction for DFS. By avoiding the process of swapping in and out to remote machines through lightweight jumping, we save a large amount of data and control traffic associated with avoidable remote page faults. Moreover, even when ElasticOS achieves no delay improvement, it can still reduce network traffic. For example, Dijkstra's algorithm did not achieve any delay improvement, even though Table 3 shows that Dijkstra had 520 jumps; these jumps reduced its network overhead by 70%. In examining the behavior of Dijkstra's algorithm, its initial set of jumps, before settling down to execution on one machine, resulted in substantial overhead savings.
Understanding Application-Specific Behavior
We previously showed that the algorithms see varying degrees of improvement. While the simple explanation is locality, here we examine three of the algorithms in detail to understand this behavior.
Linear Search
For Linear Search, the memory access pattern is simple and predictable: the address space is accessed in linear order. As a result, consecutive memory pages tend to age together in the LRU lists and end up being swapped to the remote machine together. When a process jumps toward a remote page, it is therefore likely to find a chunk of consecutive pages to access, exploiting their locality and avoiding a significant amount of swap overhead. Figure 10 shows delay improvements for Linear Search as a function of the jumping threshold. Linear Search performs better with a smaller counter threshold, so jumping early pays off when the address space is accessed linearly. Consistent with this, Table 3 shows that Linear Search has both the highest jumping frequency and the lowest best-performing threshold. We also observe that as the threshold increases, jumping occurs less often and eventually not at all, which explains why the ElasticOS delay curve converges to Nswap's.
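The run-length intuition above can be made concrete. In the sketch below, the eviction sets are hypothetical: an LRU-like policy evicting consecutive pages during a linear scan versus a scattered set evicted by an irregular workload:

```python
def contiguous_runs(pages):
    """Lengths of maximal runs of consecutive page numbers."""
    pages = sorted(pages)
    runs, run = [], 1
    for prev, cur in zip(pages, pages[1:]):
        if cur == prev + 1:
            run += 1
        else:
            runs.append(run)
            run = 1
    runs.append(run)
    return runs

# Hypothetical eviction sets: a linear scan ages out consecutive
# pages together, while an irregular workload evicts scattered pages.
linear_evictions = list(range(1000, 1500))        # one 500-page run
scattered_evictions = list(range(1000, 2000, 7))  # no run longer than 1

print(max(contiguous_runs(linear_evictions)))     # 500
print(max(contiguous_runs(scattered_evictions)))  # 1
```

A single jump toward the linear eviction set lands on a 500-page stretch of now-local pages, which is exactly the locality Linear Search exploits.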
Depth First Search
Depth First Search, in contrast, has a nonlinear memory access pattern. The search starts at the root node and traverses the graph branch by branch, from the root to the end (depth) of each branch. While the graph nodes are laid out in a certain order in memory, the DFS access pattern does not match this layout. This increased randomness of page accesses means there is less locality to exploit on each jump than for Linear Search, and hence less gain over Nswap. Figure 11 shows DFS execution times for various counter thresholds. ElasticOS achieves at best about a 1.5X delay improvement over Nswap across a wide range of thresholds, namely those above 64; for thresholds of 64 or below, DFS performs worse. Figure 12 shows that very small threshold values cause DFS to jump a large number of times. Our tests also showed that DFS performs best with a threshold that is large compared to the other algorithms, as shown in Table 3. The shape of the graph can also affect the access pattern: increasing the graph depth makes branches longer, so a single branch occupies more memory pages and is more likely to have pages on both the local and remote machines, raising the chance of jumping excessively and performing poorly. Figure 13 shows DFS performance on ElasticOS for different graph depths with a fixed jumping counter of 512; performance eventually degrades as depth grows, and Figure 14 shows that this degradation coincides with excessive jumping. Making ElasticOS perform well at such depths requires raising the jumping counter above 512 to avoid jumping too much.
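To illustrate the contrast with Linear Search, a small simulation (hypothetical graph wiring and page layout, seeded for reproducibility) shows how rarely successive DFS accesses stay on the same page:

```python
import random

def dfs_page_trace(n_nodes, nodes_per_page, seed=1):
    """Pages touched by an iterative DFS over a randomly wired graph
    laid out in index order (hypothetical layout: node v lives on
    page v // nodes_per_page)."""
    rng = random.Random(seed)
    # Two random out-edges per node: traversal order is unrelated
    # to the memory layout of the nodes.
    kids = [[rng.randrange(n_nodes), rng.randrange(n_nodes)]
            for _ in range(n_nodes)]
    seen, stack, trace = set(), [0], []
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        trace.append(v // nodes_per_page)   # page holding node v
        stack.extend(kids[v])
    return trace

trace = dfs_page_trace(10_000, 100)
same_page = (sum(a == b for a, b in zip(trace, trace[1:]))
             / max(1, len(trace) - 1))
# Typically only a few percent of consecutive DFS accesses stay on
# the same page, so each jump exposes little contiguous locality.
print(round(same_page, 3))
```

Compared with the long contiguous runs of a linear scan, almost every DFS step lands on a different page, which is why each jump buys DFS so much less than it buys Linear Search.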
Dijkstra's Algorithm
ElasticOS achieved very little gain over Nswap when executing Dijkstra's algorithm. Dijkstra's algorithm scans an adjacency matrix and records what it learns about the shortest paths in a separate array. It does not necessarily visit every entry of the adjacency matrix, because some nodes are not connected or some paths are excluded for being too long. Because the shortest-path state lives in a separate array, each adjacency-matrix entry is accessed only once, with the useful information retained in the small shortest-path array. Dijkstra's algorithm therefore accesses memory infrequently and touches only part of its allocated memory, so most of its execution involves few remote page faults. Since jumping saves only the time wasted on remote page faults, Dijkstra gains little delay improvement: it rarely jumps because it rarely faults remotely. Figure 15 confirms that Dijkstra's algorithm spends most of its execution time on one machine without jumping; in our experiments, a relatively small set of jumps occurred at the beginning, and execution then stayed on one machine.
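The touch-once behavior described above can be sketched with an instrumented textbook O(n²) Dijkstra (a toy model, not the paper's workload):

```python
def dijkstra_touch_counts(adj, src=0):
    """Textbook O(n^2) Dijkstra over an adjacency matrix, counting
    reads of two structures: each matrix entry is read exactly once
    (one row scan per settled node), while the small dist[] array is
    re-read on every node-selection pass."""
    n = len(adj)
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    settled = [False] * n
    matrix_reads = dist_reads = 0
    for _ in range(n):
        u, best = -1, INF
        for v in range(n):               # pick the closest unsettled node
            dist_reads += 1
            if not settled[v] and dist[v] < best:
                u, best = v, dist[v]
        if u < 0:
            break                        # remaining nodes unreachable
        settled[u] = True
        for v in range(n):               # single pass over row u
            matrix_reads += 1
            w = adj[u][v]
            if w and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return matrix_reads, dist_reads

# Complete graph on 5 nodes with unit weights:
adj = [[0 if i == j else 1 for j in range(5)] for i in range(5)]
print(dijkstra_touch_counts(adj))  # (25, 25)
```

Both counters are n², but the matrix has n² entries (one read each) while dist has only n entries (n reads each): the large, swap-prone structure is touched once per entry, so its remote faults are never re-paid and jumping has little to amortize.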
Discussion and Future Work
We intend to port ElasticOS to a newer version of Linux. We plan to investigate improved jumping algorithms that better exploit locality by learning an elasticized process's memory access patterns at run time and employing adaptive jumping thresholds; probabilistic models will also be investigated. In addition, we will explore whether incorporating the burstiness of remote page faults into the jumping decision brings any benefit. We are also considering a more proactive approach to controlling swap-out for elasticized processes by modifying kswapd: selectively swapping pages out to remote machines could create islands of locality there, making jumping more efficient, and pinning memory pages to prevent them from being swapped would let us control how the address space is distributed across participating machines. Finally, we plan to test a wider variety of algorithms, including SQL-like database operations, and to expand testing to more than two nodes.
Conclusion
In this paper, we have implemented within Linux four new primitives, namely stretch, push, pull, and jump, to support scaling as an OS abstraction. We call this extended Linux system ElasticOS. These primitives transparently achieve joint disaggregation of computation and memory, enabling data to move toward execution as well as execution to move toward data within a stretched address space spanning multiple nodes in a data center. Our evaluation, based on Emulab deployments of ElasticOS running a variety of application algorithms, indicates that such joint disaggregation achieves up to a 10X speedup in execution time over network swap, as well as 2-5X reductions in network overhead.
Figure 1: ElasticOS Vision for Cloud Data Centers.
Figure 3: EOS Architecture.
Figure 4: Stretching.
Figure 5: Pushing.
Figure 6: Pulling.
Figure 7: Jumping.
Figure 8: Execution Time Comparison.
Figure 9: Network Traffic Comparison.
Figure 10: Linear Search Execution Time.
Figure 11: Depth First Search Execution Time.
Figure 12: Depth First Search Number of Jumps.
Figure 13: Depth First Search Execution Time on Different Depths.
Figure 14: Depth First Search Jumps on Different Depths.
Figure 15: Maximum Time Spent on a Machine without Jumping.
Table 1: Tested algorithms and their memory footprints.

Algorithm           Memory Footprint
Depth First Search  330 million nodes (15 GB)
Linear Search       2 billion long int (15 GB)
Dijkstra            3.5 billion int weights (14 GB)
Block Sort          1.8 billion long int (13 GB)
Heap Sort           1.8 billion long int (14 GB)
Count Sort          1.8 billion long int (14 GB)
Table 2: Micro-benchmarks of ElasticOS primitives.

Primitive  Latency   Network Transfer
Stretch    2.2 ms    9 KB
Push       30-35 us  4 KB
Pull       30-35 us  4 KB
Jump       45-55 us  9 KB
Table 3: Jumping thresholds.

Algorithm      Threshold  Number of jumps  Jumping frequency (jumps/sec)
DFS            8K         180              0.6
Block Sort     512        1032             12.3
Heap Sort      512        3454             12.4
Linear Search  32         3054             157.4
Count Sort     4096       198              0.6
Dijkstra       512        520              1.4
Acknowledgments

This research was supported by NSF CCF grant #1337399, funded under the NSF program Exploiting Parallelism and Scalability, "XPS: SDA: Elasticizing the Linux Operating System for the Cloud". We also wish to thank Sepideh Goodarzy and Ethan Hanner.
arXiv: cond-mat/0307062; DOI: 10.1103/PhysRevB.69.092503
Magnetic field influence on the proximity effect in semiconductor-superconductor hybrid structures and their thermal conductance

Grygoriy Tkachov (Department of Physics, Lancaster University, Lancaster LA1 4YB, United Kingdom; Institute for Radiophysics and Electronics NAS, 61085 Kharkiv, Ukraine)
Vladimir I. Fal'ko (Department of Physics, Lancaster University, Lancaster LA1 4YB, United Kingdom)

26 Apr 2004
We show that a magnetic field can influence the proximity effect in NS junctions via the diamagnetic screening current flowing in the superconductor. Using ballistic quasi-one-dimensional (Q1D) electron channels as an example, we show that the supercurrent flow shifts the proximity-induced minigap in the excitation spectrum of a Q1D system from the Fermi level to higher quasiparticle energies. The thermal conductance of a Q1D channel (normalized by that of a normal Q1D ballistic system) is predicted to manifest this spectral feature as a nonmonotonic behavior at temperatures corresponding to the energy of excitation into the gapful part of the spectrum.

PACS numbers: 74.45.+c, 74.50.+r, 73.23.Ad

The superconducting proximity effect is a mesoscopic-scale phenomenon, which consists in the penetration and coherent propagation of Cooper pairs from a superconductor (S) into a normal metal (N). The Cooper-pair transfer into the normal metal can be equivalently described as an Andreev reflection process [1], in which an electron (with momentum p) is converted into a Fermi-sea hole (with momentum −p) at the NS interface. The interference between an electron and the Andreev-reflected hole imposes a minigap onto the spectrum of quasiparticle excitations near the Fermi level in the normal part of such a hybrid structure [2], giving rise to pronounced features in its I(V) characteristics [3,4,5] and thermoelectric properties [6]. Studies of the proximity effect have recently been made in various combinations of materials, including junctions between superconductors and semiconductor structures [3] supporting a two-dimensional electron gas.
In the case of electrons in a semiconductor structure weakly coupled to a superconductor, the minigap value discussed in the literature [7,8] is much smaller than the 'mother' gap in the superconductor, both due to the mismatch v_F ≪ v_S between the Fermi velocities in the two-dimensional gas [v_F = (2E_F/m)^(1/2)] and in the superconducting metal (v_S), and due to a possible Schottky barrier between them, with transparency θ ∼ e^(−2a/λ) (dependent on the length λ of electron penetration into the barrier of thickness a): E_g ≈ (v_F/v_S) θ E_F ≪ Δ. It has been noticed that the electron-hole interference and the SN proximity effect survive at higher magnetic fields than weak localisation, another quantum interference effect [3,4,5]. This has been understood as a consequence of the fact that the interfering electron and Andreev-reflected hole retrace the same geometrical path in the normal metal, thus hardly encircling any magnetic flux [9]. Therefore, another mechanism of magnetic-field influence on the superconducting proximity effect needs to be taken into account: the screening diamagnetic supercurrent on the S side of the hybrid structure. Since Andreev reflection takes place at the NS interface, where Cooper pairs flow, the incoming electron and the hole reflected by a moving condensate of Cooper pairs are slightly shifted in momentum space; hence the ideal condition for them to retrace the same geometrical path is violated. As the orbital effect of the magnetic field on the normal-metal or semiconductor side of the system is weak, the influence via diamagnetic screening may be the major mechanism by which a magnetic field affects the superconducting proximity effect.
Below, we analyze the influence of the diamagnetic supercurrent in a system where it is the only way a magnetic field can affect the proximity effect: a ballistic one-dimensional conductor connected in parallel to a superconducting bulk [Fig. 1a]. To be specific, we model such a conductor as a quasi-one-dimensional (Q1D) channel formed near the edge of a 2D electron gas in a heterostructure (x-y plane) with a side contact to a superconducting film, created by depleting the 2D gas with a split top gate, and subjected to a weak magnetic field B = (0; 0; B). We show that the spectrum of low-energy quasiparticle excitations in such a hybrid system has the minigap displaced with respect to the Fermi level to higher energies,
ε^±_{αp} = v_F Π sgn(p) − α ε_Z ± √( v_F² (|p| − p_F)² + E_g² ),    (1)
reflecting the fact that Cooper pairs in the channel are forced into flow while tunneling from the bulk of the superconductor (where they are formed of two electrons with exactly opposite momenta) across the region of penetration of the magnetic field. [The Zeeman splitting is also taken care of by the term αε_Z in Eq. (1), where α is the spin projection.] As a result, each of the two electrons acquires the momentum shift
Π = (eBδ/c) tanh(L/2δ),    (2)
caused by the Lorentz force, set by the difference between the vector potential A = (0, A, 0) deep inside the superconductor (A = 0) and at its surface (A = Bδ tanh(L/2δ)), where δ and L stand for the London penetration depth and the superconductor film thickness, respectively. The spectrum in Eq. (1) can also be understood as that of Bogolubov quasiparticles, in the laboratory frame where the equilibrium conditions are set by the heat reservoirs, for a condensate moving along the Q1D channel with drift velocity Π/m (m is the effective electron mass in the semiconductor). According to Eq. (1), the minigap is removed from the Fermi level when the field reaches the value
B* ≈ B_{c1} (δ/ξ) coth(L/2δ) (θE_F/Δ) ≪ B_{c1},    (3)
where B_{c1} and ξ are the first critical field and the coherence length of the superconductor. The removal of the minigap from the Fermi level by a magnetic field would manifest itself in the transport properties of the hybrid system, such as electron-mediated heat transfer. The ballistic quasiparticle spectrum in Eq. (1) gives rise to the thermal conductance
$$\kappa(T,B) = \kappa_N(T)\times\frac{3}{4\pi^2}\sum_{\pm}\int_{(E_g \pm v_F\Pi)/k_BT}^{\infty}\frac{x^2\,dx}{\cosh^2(x/2)}, \qquad (4)$$
where $\kappa_N(T) = \pi k_B^2 T/3h$ is the conductance of a normal quantum ballistic wire^10. At zero magnetic field the temperature dependence of κ is activational, $\kappa(T < E_g/k_B) \propto e^{-E_g/k_BT}$, whereas at high fields, when there is no gap at the Fermi energy, κ(T, B) = κ_N(T). The crossover from low to high fields takes place at B* [Eq. (3)] and reflects the presence of a minigap E_g in the quasiparticle spectrum at finite excitation energies. This results in a nonmonotonic temperature and magnetic-field dependence of the ratio κ(T, B)/κ_N(T).
The analysis of the quasiparticle spectrum formed due to multiple Andreev reflections in this paper is based on the standard weak-coupling approach to the proximity effect description in superconductor junctions with normal metals and electron layers in semiconductors 7 . To be specific, we describe the Q1D confinement (provided by a gate) by the 2D electron wave function ϕ(x) localized in the x direction, whose magnitude at the interface can be estimated from the boundary condition ϕ(0) = λ∂ x ϕ(0), with λ standing for the electron penetration length into the barrier. The Fermi momentum of the Q1D system p F and 3D electron density on the semiconductor side are assumed to be much smaller than those in the superconductor, and we also take the tunneling coefficient θ ∼ exp(−2a/λ) as a small parameter. These assumptions enable us to neglect the influence of the normal system on the superconductor and to investigate the proximity effect in the Q1D system without feedback.
In the presence of a magnetic field B = (0, 0, B) it is convenient to choose the vector potential to be parallel to the interface, A(x) = [0, A(x), 0]; it is antisymmetric with respect to the middle of the superconductor, A(x = −a − L/2) = 0 [Fig. 1(b)]. Since A(x) must be continuous at the surface of the superconductor x = −a, in the semiconductor, x ≥ −a, it varies as A(x) = B(x + a) + Bδ tanh(L/2δ). The width of the electronic wave function in the Q1D channel, δx ∼ k_F^{−1}, and the barrier thickness a are both much less than L or δ; therefore, the vector potential acting on the Q1D electrons is virtually a constant, A(x) ≈ A(−a) = Bδ tanh(L/2δ), which will be used below to determine the quasiparticle spectrum in the channel.
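As a quick numerical cross-check (a sketch with illustrative, assumed parameter values, not taken from the paper), one can verify that the London-equation solution A(x) = Bδ sinh[(x + a + L/2)/δ]/cosh(L/2δ) quoted in the Fig. 1 caption indeed vanishes at the film centre, reproduces the surface value Bδ tanh(L/2δ) used above, and has slope B at the surface:

```python
import numpy as np

# Illustrative (assumed) parameter values; units are arbitrary.
B, delta, L, a = 1.0, 2.0, 5.0, 0.5

def A_inside(x):
    """London solution for the vector potential inside the film, -a-L <= x <= -a."""
    return B * delta * np.sinh((x + a + L / 2) / delta) / np.cosh(L / (2 * delta))

surface = A_inside(-a)                       # value at the superconductor surface
center = A_inside(-a - L / 2)                # value at the middle of the film
# numerical derivative dA/dx at the surface (should equal the applied field B)
slope = (A_inside(-a + 1e-7) - A_inside(-a - 1e-7)) / 2e-7

print(surface, B * delta * np.tanh(L / (2 * delta)), center, slope)
```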
We describe superconducting correlations in the Q1D channel using a pair of coupled equations for the annihilation and creation spinors $\hat\psi_p(t) = \big(\psi_{\alpha p}(t),\ \psi_{-\alpha p}(t)\big)^{T}$ and $\hat\psi^{\dagger}_p(t) = \big(\psi^{\dagger}_{\alpha p}(t),\ \psi^{\dagger}_{-\alpha p}(t)\big)^{T}$:

$$\Big[i\hbar\partial_t - \frac{(p+\Pi)^2}{2m} + \sigma_3\varepsilon_Z + E_F\Big]\hat\psi_p(t) = \vartheta^{1/2}\int dt'\,\Big[G(t,t')\hat\psi_p(t') + F^*(t,t')\,i\sigma_2\,\hat\psi^{\dagger}_{-p}(t')\Big], \qquad (5)$$
$$\Big[-i\hbar\partial_t - \frac{(-p+\Pi)^2}{2m} + \sigma_3\varepsilon_Z + E_F\Big]\hat\psi^{\dagger}_{-p}(t) = \vartheta^{1/2}\int dt'\,\Big[F(t,t')\,i\sigma_2^{t}\,\hat\psi_p(t') + G^*(t,t')\hat\psi^{\dagger}_{-p}(t')\Big],$$

where

$$\vartheta = \Big[\frac{\hbar^2\varphi(0)}{m\lambda}\,e^{a/\lambda}\Big]^2 \sim \Big[\frac{\hbar^2 k_F^{3/2}}{m}\,e^{a/\lambda}\Big]^2$$
characterizes the tunneling coupling to the superconductor; the electron momentum shift in the magnetic field, Π, is related to the vector potential by Eq. (2). In Eq. (5),
$$G(t,t') \equiv G(x=-a,\,x'=-a,\,t-t') \quad\text{and}\quad F(t,t') \equiv F(x=-a,\,x'=-a,\,t-t')$$
are the normal and anomalous Green functions of the superconductor at its boundary; σ₂ and σ₃ are Pauli matrices (σᵗ is the transpose of σ). Since the size of the Fermi sea in the semiconductor wire is much smaller than in the superconductor, one can ignore the dependence of G and F on the momentum parallel to the interface: only electrons in the superconductor moving nearly perpendicularly to the interface can tunnel into the Q1D wire. Since we are interested in the low-temperature regime k_BT ∼ E_g ≪ ∆, we neglect the terms containing the normal Green function G in Eqs. (5). For the chosen gauge, the anomalous Green function of the superconductor, F in Eqs. (5), has no phase factors, despite the presence of a magnetic field. For a weak field B ≪ B_{c1}, its time Fourier transform can be estimated as $F(\epsilon) \approx L^{-1}\sum_{p_x} \Delta/(\Delta^2 - \epsilon^2 + \eta_{p_x}^2)$, with η_{p_x} being the normal electron dispersion near the Fermi level in the superconductor. The integration over the perpendicular momentum p_x gives $F(\epsilon) \approx \Delta/\hbar v_S(\Delta^2-\epsilon^2)^{1/2}$, thus giving us the minigap E_g = ϑF(ε = 0) mentioned in the introduction and obtained in earlier publications^7.
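As a sanity check on the dispersion in Eq. (1), a minimal numerical sketch (illustrative parameter values in units v_F = p_F = 1, Zeeman term dropped since it is small): the minigap edges for left- and right-movers sit at E_g − v_FΠ and E_g + v_FΠ, so the lower edge touches the Fermi level exactly when v_FΠ = E_g.

```python
import numpy as np

def eps_plus(p, Eg, Pi, vF=1.0, pF=1.0):
    """'+' branch of Eq. (1) with the (small) Zeeman term neglected."""
    return vF * Pi * np.sign(p) + np.sqrt(vF**2 * (np.abs(p) - pF)**2 + Eg**2)

p = np.linspace(-2.0, 2.0, 40001)
Eg, Pi = 0.1, 0.05                      # v_F*Pi < E_g: spectrum still gapped
e = eps_plus(p, Eg, Pi)
print(e[p < 0].min(), e[p > 0].min())   # edges at Eg - vF*Pi and Eg + vF*Pi

e_star = eps_plus(p, Eg, Pi=Eg)         # v_F*Pi = E_g: gap edge reaches E = 0
print(e_star[p < 0].min())
```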
The solution of Eqs. (5) for ε ≪ ∆ is given by the Bogolubov transformation of the form

$$\psi_{\alpha p}(t) = u_p\, b_{\alpha p}\, e^{-it\epsilon^+_{\alpha p}/\hbar} + i\sigma_2^{\alpha,-\alpha}\, v_p\, b^{\dagger}_{-\alpha-p}\, e^{-it\epsilon^-_{\alpha p}/\hbar}, \qquad (6)$$
$$u_p^2 = \frac{1}{2}\Big[1 + \frac{v_F(|p|-p_F)}{\sqrt{v_F^2(|p|-p_F)^2 + E_g^2}}\Big], \qquad v_p^2 = 1 - u_p^2,$$
where b_{αp} and b†_{−α−p} are Bogolubov quasiparticle operators, and the excitation spectrum ε^±_{αp} is given by Eq. (1) (see Fig. 2). The Zeeman term in Eq. (1) turns out to be much smaller than the orbital one, ε_Z/v_FΠ ∼ g/[k_F min(δ, L)] ≪ 1, unless the electron g-factor is anomalously large. Due to the motion of the Q1D condensate, the excitation energy curve is tilted by the energy v_FΠ sgn p. The field B* [Eq. (3)] at which the minigap is removed from the Fermi level is determined by the condition v_FΠ = E_g. Note that at higher fields, B* < B ≪ B_{c1}, the quasiparticle spectrum remains gapful, with the center of the gap moved to energies ∼ E_g. Now we turn to the calculation of the thermal conductance κ(T, B) of a long Q1D channel whose ends are kept at temperatures T and T + ∆T (∆T ≪ T). Since no heat can get into the strongly gapped superconductor, the middle of the wire represents a bottleneck for the heat transport, so that we can analyze κ(T, B) in the infinite-wire geometry. The expression for the energy current operator j_ε(yt) in the wire can be found from the continuity equation ∂_y j_ε(yt) = −∂_t ρ_ε(yt), where the density of energy ρ_ε(yt) corresponding to the equations of motion (5) is
$$\rho_\epsilon(yt) = \frac{1}{2}\sum_\alpha \Big\{\psi^{\dagger}_\alpha(yt)\Big[\frac{(\hat p+\Pi)^2}{2m} - \alpha\varepsilon_Z - E_F\Big]\psi_\alpha(yt) - E_g\Big[i\sigma_2^{\alpha,-\alpha}\psi^{\dagger}_{-\alpha}(yt)\psi^{\dagger}_{\alpha}(yt) + \mathrm{H.c.}\Big]\Big\}, \qquad (7)$$
where $\psi_\alpha(yt) = L_y^{-1/2}\sum_p \psi_{\alpha p}(t)\,e^{ipy/\hbar}$, with L_y the length of the Q1D system, and $\hat p = -i\hbar\partial_y$. Using the Bogolubov transformation (6) for ψ_{αp}(t), for the density of energy one finds
$$\rho_\epsilon(yt) = \sum_{\alpha p p'} \frac{e^{iy(p'-p)/\hbar}}{2L_y}\Big\{ v_p v_{p'}\,(\epsilon^-_{\alpha p'}+\epsilon^-_{\alpha p})\,b_{-\alpha-p}b^{\dagger}_{-\alpha-p'}\,e^{-it(\epsilon^-_{\alpha p'}-\epsilon^-_{\alpha p})/\hbar} \qquad (8)$$
$$+\,u_p u_{p'}\,(\epsilon^+_{\alpha p'}+\epsilon^+_{\alpha p})\,b^{\dagger}_{\alpha p}b_{\alpha p'}\,e^{-it(\epsilon^+_{\alpha p'}-\epsilon^+_{\alpha p})/\hbar} +\,u_p v_{p'}\,i\sigma_2^{\alpha,-\alpha}(\epsilon^+_{\alpha p}+\epsilon^-_{\alpha p'})\,b^{\dagger}_{\alpha p}b^{\dagger}_{-\alpha-p'}\,e^{it(\epsilon^+_{\alpha p}-\epsilon^-_{\alpha p'})/\hbar}$$
$$+\,v_p u_{p'}\,i\sigma_2^{-\alpha,\alpha}(\epsilon^-_{\alpha p}+\epsilon^+_{\alpha p'})\,b_{-\alpha-p}b_{\alpha p'}\,e^{it(\epsilon^-_{\alpha p}-\epsilon^+_{\alpha p'})/\hbar}\Big\}.$$
In order to satisfy the continuity equation with ρ ǫ (yt) given by Eq. (8) the energy current j ǫ (yt) must have the following form:
$$j_\epsilon(yt) = \sum_{\alpha p p'} \frac{e^{iy(p'-p)/\hbar}}{2L_y}\Big\{ v_p v_{p'}\,\frac{(\epsilon^-_{\alpha p'})^2-(\epsilon^-_{\alpha p})^2}{p'-p}\,b_{-\alpha-p}b^{\dagger}_{-\alpha-p'}\,e^{-it(\epsilon^-_{\alpha p'}-\epsilon^-_{\alpha p})/\hbar} \qquad (9)$$
$$+\,u_p u_{p'}\,\frac{(\epsilon^+_{\alpha p'})^2-(\epsilon^+_{\alpha p})^2}{p'-p}\,b^{\dagger}_{\alpha p}b_{\alpha p'}\,e^{-it(\epsilon^+_{\alpha p'}-\epsilon^+_{\alpha p})/\hbar}\Big\}.$$
In Eq. (9) we have already omitted the terms containing $b^{\dagger}_{\alpha p}b^{\dagger}_{-\alpha-p'}$ and $b_{-\alpha-p}b_{\alpha p'}$, which vanish after averaging. The averaged value of the energy current $\langle j_\epsilon\rangle$ can be written as the sum of two contributions:
$$\langle j_\epsilon\rangle = -\hbar^{-1}\sum_\alpha\int dp\ \epsilon^+_{\alpha p}\,\partial_p\epsilon^+_{\alpha p}\,v_p^2 + j_q. \qquad (10)$$
The first of them can be attributed to the supercurrent flow and cannot transfer heat, whereas j q represents the heat current:
$$j_q = \hbar^{-1}\sum_\alpha\int dp\ \epsilon^+_{\alpha p}\,\partial_p\epsilon^+_{\alpha p}\,n(\epsilon^+_{\alpha p}). \qquad (11)$$
The latter is determined by the energy distributions n(ε^+_{αp}) and the group velocity ∂_pε^+_{αp} of the quasiparticles. We express the energy currents (10) and (11) in terms of the "+" branch of the spectrum (1), using the relationship ε^−_{αp} = −ε^+_{−α−p} and the symmetry of the limits in the sum. The distribution functions of right-movers (∂_pε^+_{αp} > 0) and left-movers (∂_pε^+_{αp} < 0) are assumed to be different and set by the reservoirs, as n(ε^+_{αp}, T + ∆T) and n(ε^+_{αp}, T), respectively. Using this, we determine the thermal conductance κ(T, B) given by Eq. (4) as the proportionality coefficient between the heat current and the temperature drop, j_q = κ(T, B)∆T. Figure 3(a) shows the thermal conductance (4), normalized by that of a normal wire, as a function of k_BT/E_g for different values of the magnetic field. Plot A corresponds to B = 0 and shows how the conductance decreases exponentially at temperatures smaller than the minigap E_g. Curves B and C show what happens when the field crosses the value B*, at which the edge of the minigap is about to reach the Fermi level. For B < B* (curve B), κ(T)/κ_N(T) is exponentially small only if k_BT < E_g − v_FΠ ≪ E_g. When the temperature is in the interval E_g − v_FΠ < k_BT < E_g + v_FΠ, quasiparticles with negative momenta p ≈ −p_F transfer heat, whereas the states with positive p are still unpopulated. This interval corresponds to the plateau in curve B, where the conductance κ(T) is half of that in the normal state. At higher temperatures, k_BT > E_g + v_FΠ, the asymmetry of the excitation spectrum no longer matters, and κ(T) ≈ κ_N(T).
When the field exceeds B* (curves C and D), the dependence κ(T)/κ_N(T) becomes nonmonotonic. As in a normal wire, at low temperatures k_BT ≪ v_FΠ − E_g there are two left-moving and two right-moving modes capable of transferring heat, which gives κ(T) = κ_N(T). At intermediate temperatures, v_FΠ − E_g ≪ k_BT ≪ E_g + v_FΠ, only the states with negative momenta contribute to the thermal conductance: κ(T) = κ_N(T)/2. At higher temperatures the conductance recovers a normal metallic behaviour. Finally, when B ≫ B* the minimum in κ(T)/κ_N(T) is less pronounced and the heat conductance becomes indistinguishable from that of a normal wire. The magnetic field dependence of κ/κ_N is given in Fig. 3(b).
FIG. 1: (a) Schematic view of a superconductor/Q1D system junction. (b) Vector potential profile.

The gauge A(x) = [0, A(x), 0], with the vector potential parallel to the interface, is chosen in order to deal with a real order parameter in the superconductor. The vector potential A(x) acting on the normal electrons must be found self-consistently, taking into account the screening of the external magnetic field B by the diamagnetic supercurrent^{2,11}. Inside the superconductor, A(x) can be found from the London equation with the boundary conditions ∂_xA(−a) = B and ∂_xA(−a − L) = B as

$$A(x) = B\delta\,\frac{\sinh[(x + a + L/2)/\delta]}{\cosh(L/2\delta)}.$$
FIG. 2: Schematic view of the quasiparticle spectrum described by Eq. (1).
FIG. 3: (a) Temperature dependence of the thermal conductance κ, normalized by that of a normal wire, κ_N, for different values of the magnetic field: (A) B/B* = 0.01, (B) B/B* = 0.95, (C) B/B* = 1.05, and (D) B/B* = 2. (b) Magnetic field dependence for different temperatures: (A) k_BT/E_g = 0.1, (B) k_BT/E_g = 0.3, and (C) k_BT/E_g = 1.
The authors thank U. Zulicke, I. Aleiner, and A. Geim for useful discussions. This work was funded in part by EPSRC (UK) and by an EC STREP within the Framework 6 EU programme.
1. A. F. Andreev, Zh. Eksp. Teor. Fiz. 46, 1823 (1964) [Sov. Phys. JETP 19, 1228 (1964)].
2. B. J. van Wees, P. de Vries, P. Magne, and T. M. Klapwijk, Phys. Rev. Lett. 69, 510 (1992); S. G. den Hartog, C. M. A. Kapteyn, B. J. van Wees, T. M. Klapwijk, and G. Borghs, ibid. 77, 4954 (1996); A. F. Morpurgo, S. Holl, B. J. van Wees, T. M. Klapwijk, and G. Borghs, ibid. 78, 2636 (1997); S. G. den Hartog, B. J. van Wees, Yu. V. Nazarov, T. M. Klapwijk, and G. Borghs, ibid. 79, 3250 (1997); A. F. Morpurgo, B. J. van Wees, T. M. Klapwijk, and G. Borghs, ibid. 79, 4010 (1997); S. G. den Hartog, B. J. van Wees, T. M. Klapwijk, Yu. V. Nazarov, and G. Borghs, Phys. Rev. B 56, 13738 (1997).
3. V. T. Petrashov, V. N. Antonov, P. Delsing, and R. Claeson, Phys. Rev. Lett. 70, 347 (1993); ibid. 74, 5268 (1995); W. Belzig, R. Shaikhaidarov, V. V. Petrashov, and Yu. Nazarov, Phys. Rev. B 66, 220505 (2002).
4. H. Courtois, Ph. Gandit, D. Mailly, and B. Pannetier, Phys. Rev. Lett. 76, 130 (1996); P. Dubos, H. Courtois, O. Buisson, and B. Pannetier, ibid. 87, 206801 (2001).
5. D. A. Dikin, S. Jung, and V. Chandrasekhar, Phys. Rev. B 65, 012511 (2002); A. Parsons, I. A. Sosnin, and V. T. Petrashov, ibid. 67, 140502 (2003).
6. A. F. Volkov, P. H. C. Magnee, B. J. van Wees, and T. M. Klapwijk, Physica C 242, 261 (1995); A. F. Volkov, Phys. Lett. A 174, 144 (1993); Pis'ma Zh. Eksp. Teor. Fiz. 55, 713 (1992) [JETP Lett. 55, 746 (1992)].
7. A. Chrestin, T. Matsuyama, and U. Merkt, Phys. Rev. B 55, 8457 (1997).
8. C. W. J. Beenakker, Rev. Mod. Phys. 69, 731 (1997); C. J. Lambert and R. Raimondi, J. Phys.: Condens. Matter 10, 901 (1998).
9. J. B. Pendry, J. Phys. A 16, 2161 (1983).
10. U. Zulicke, H. Hoppe, and G. Schoen, Physica B 298, 453 (2001).
Detecting the Most Unusual Part of a Digital Image

Kostadin Koroutchev ([email protected])
EPS, Universidad Autónoma de Madrid, 28049 Cantoblanco, Madrid, Spain

Elka Korutcheva⋆
Depto. de Física Fundamental, Universidad Nacional de Educación a Distancia, c/ Senda del Rey 9, 28080 Madrid, Spain

19 Oct 2008. arXiv:0810.3418. DOI: 10.1007/978-3-540-78275-9_25. Full text: https://arxiv.org/pdf/0810.3418v1.pdf

Abstract. The purpose of this paper is to introduce an algorithm that can detect the most unusual part of a digital image. The most unusual part of a given shape is defined as a part of the image that has the maximal distance to all non-intersecting shapes with the same form. The method can be used to scan image databases with no clear model of the interesting part, or large image databases, as for example medical databases.

⋆ E.K. also at G. Nadjakov Inst. Solid State Physics, Bulgarian Academy of Sciences, 72, Tzarigradsko Schaussee Blvd.,
Introduction
In this paper we are trying to find the most unusual/rare part of predefined size of a given image. If we consider a one-dimensional quasi-periodic signal, as for example an electrocardiogram (ECG), the most unusual parts of length about one second will be those that correspond to rhythm abnormalities [6]; they are therefore of some interest. Considering two-dimensional images, we can suppose that the most unusual part of an image may correspond to something interesting in it.
Of course, if we have a clear mathematical model of what the interesting part of the image can be, it would probably be better to build a mathematical model that detects those unusual characteristics of the image part that are interesting. However, as in the case of the ECG, the part we are looking for may not be definable by a clear mathematical model, or the model may simply not be available. In such cases the most unusual part can be an interesting instrument for screening images.
To state the problem, we first of all need a definition of the term "most unusual part". Let us choose some shape S within the image A that could contain that part, and let us denote the cut of the figure A with shape S and origin r by A_S(ρ; r), e.g.
A S (ρ; r) ≡ S(ρ)A(ρ + r),
where ρ is the in-shape coordinate vector, r is the origin of the cut A_S, and we used the characteristic function S(·) of the shape S. Further in this paper we will omit the arguments of A_S. We can suppose that the rarest part is the one that has the largest distance to the rest of the cuts with the same shape.
Speaking mathematically, we can suppose that the most unusual part is located at the point r, defined by:
r = arg max_r min_{r': |r'−r| > diam(S)} ||A_S(r) − A_S(r')||.   (1)
Here we assume that the shifts do not cross the border of the image. The norm ||·|| is assumed to be the L2 norm.¹ Because parts of an image that intersect significantly are similar, we do not allow the shapes located at r' and r to intersect, imposing the restriction |r' − r| > diam(S) on r'.
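As a reference point, Eq. (1) can be implemented directly by brute force. The sketch below uses a hypothetical helper restricted to square s-by-s blocks, with the Chebyshev distance max(|di|, |dj|) ≥ s standing in for the non-intersection condition |r' − r| > diam(S); it is only practical for tiny images:

```python
import numpy as np

def most_unusual_block(A, s):
    """Brute-force Eq. (1) for s-by-s blocks: argmax_r min_{r'} ||A_S(r) - A_S(r')||,
    where non-intersection is enforced via max(|di|, |dj|) >= s."""
    H, W = A.shape
    origins = [(i, j) for i in range(H - s + 1) for j in range(W - s + 1)]
    best_d, best_r = -1.0, None
    for (i, j) in origins:
        block = A[i:i + s, j:j + s]
        d = min(np.linalg.norm(block - A[k:k + s, l:l + s])
                for (k, l) in origins if max(abs(k - i), abs(l - j)) >= s)
        if d > best_d:
            best_d, best_r = d, (i, j)
    return best_r

# A flat image with one bright 3x3 patch: the patch is the most unusual block.
A = np.zeros((12, 12))
A[5:8, 6:9] = 1.0
print(most_unusual_block(A, 3))   # -> (5, 6)
```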
If we want the part of the image to be rare in the context of an image database, we can assume that further restrictions on r' can be added, for example restricting the search to avoid intersections with several images.
The definition above can be interesting as a mathematical construction, but for practical applications it is too strict and does not correspond exactly to the intuitive notion of the interesting part, as there can be several interesting parts. Therefore the correct approach is to find the outliers of the distribution of the distances between the blocks.
If the figure has N² points and ||S|| ≪ ||A||, then finding the most interesting part deterministically requires of order N⁴ operations. This is unacceptable even for moderately large images, not to mention image databases. Therefore we are looking for an algorithm that provides an approximate solution of the problem and solves it within some probability limit.
As defined above in Eq. (1), the problem is very similar to that of locating the nearest neighbor between blocks. This problem has been studied in the literature in connection with codebook and fractal compression [1]. However, the problem of finding r in the above equation without specifying r', as we show in the present paper, can be solved by probabilistic methods, avoiding slow calculations.
Summarizing the above statements, we are looking for an algorithm that can decide, with some probability, whether two blocks are similar or different.
Projections
The problem in estimating the minima of Eq. (1) is complicated because the block is multidimensional. Therefore we can try to simplify the problem by projecting the block B ≡ A S (r) in one dimension using some projection operator X. For this aim, we consider the following quantity:
b = |X.B_1 − X.B| = |X.(B_1 − B)|,   |X| = 1.   (2)
The dot product in the above equation is the sum over all ρ-s:
X.B ≡ Σ_ρ X(ρ)B(ρ; r).
If X is random and uniformly distributed on the sphere of the corresponding dimension, then the mean value of b is proportional to |B_1 − B|, ⟨b⟩ = c|B_1 − B|, where the coefficient c depends only on the dimensionality of the block. However, as the dimension of the block increases, the two random vectors (B_1 − B and X) become close to orthogonal and the typical projection is small. But if some block is far away from all the other blocks, then with some probability the projection will be large. The method resembles that of Ref. [4] for finding the nearest neighbor.
As mentioned above, we ought to look for outliers in the distribution. This would be difficult in the case of many dimensions, but is easier in the case of a one-dimensional projection.
We will regard only projections orthogonal to the vector with components proportional to X_0(ρ) = 1, ∀ρ. The projection on the direction of X_0 is proportional to the mean brightness of the area and thus can be considered a less important characteristic of the image. An alternative interpretation of the above statement is that blocks differing only in brightness are considered equivalent.
Mathematically the projections orthogonal to X 0 have the property:
Σ_ρ X(ρ) = 0.   (3)
The distribution of the values of the projections satisfying property (3) is well known and universal [10] for natural images. The same distribution seems to be valid for the vast majority of images. The distribution of the projections derived for the X-ray image of Fig. 1 is shown in Fig. 2.
Roughly speaking, the distribution follows a power law in log-log scale if the blocks are small enough, with an exponential drop at the extremes. When the blocks are large enough, the exponential part is predominant.
If A_S(r) and A_S(r') have similar projections, then they will fall into the same bin or into neighboring bins.
Therefore we can look for blocks that have a minimal number of similar and large projections. But these, due to the universality of the distribution, are exactly the blocks with large projection values.
As a first approximation, we can just consider the projections and score the points according to the bin they belong to. The distribution can be described by only one parameter which, for convenience, can be chosen to be the standard deviation σ_X of the distribution of X.B.
The notion of "large value of the projection" will be different for different projections but will always be proportional to the standard deviation.² Therefore we can define a parameter a and score the blocks with |X.B| > aσ_X.
This procedure consists of the following steps: 0. Construct a figure B with the same shape as A and with all pixels equal to zero.
1. Generate a random projection operator X, with carrier with shape S, zero mean and norm one.
2. Project all blocks (convolute the figure). We denote the resulting figure as C.
3. Calculate the standard deviation σ_X of the result of the convolution.
4. For all points of C with absolute values greater than aσ_X, increment the corresponding pixel in B.
5. Repeat steps 1-4 M times and select the maximal values of B as the most unusual part of the image.
Because the distribution of the projections (Fig. 2) is universal, it is not surprising that the algorithm works for different images. We have tested it with some 100 medical X-ray images and the results of the visual inspections were good.⁴ It can be noted that the number of projection operators is not critical and can be kept relatively low and independent of the size of the block. Note that with significantly large blocks the results cannot be regarded as an edge detector. This empirical observation is not a trivial result at all, indicating that the degrees of freedom are relatively few even with large enough blocks, something that depends on the statistics of the images and cannot be stated in general. With more than 20 projections we achieve satisfactory results, even for areas with more than 3000 pixels. Increasing the number of projections improves the quality, but beyond 30 projections practically no improvement can be observed.
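The scoring procedure described above (random zero-mean unit-norm projections, thresholding at aσ_X, accumulating counts in B over M rounds) can be sketched as follows. This is a minimal sketch: the use of scipy's FFT convolution and Gaussian random operators are our assumptions; the text only requires zero-mean, norm-one projections.

```python
import numpy as np
from scipy.signal import fftconvolve

def unusual_score(A, s, M=30, a=3.0, seed=0):
    """Score map B over block origins: for M random zero-mean unit-norm s-by-s
    operators X, count how often |X.A_S(r)| exceeds a * sigma_X."""
    rng = np.random.default_rng(seed)
    B = np.zeros((A.shape[0] - s + 1, A.shape[1] - s + 1))
    for _ in range(M):
        X = rng.standard_normal((s, s))
        X -= X.mean()                    # orthogonal to the constant operator X0
        X /= np.linalg.norm(X)           # |X| = 1
        C = fftconvolve(A, X[::-1, ::-1], mode="valid")  # C[r] = X.A_S(r)
        B += np.abs(C) > a * C.std()
    return B

# Flat background with one textured 8x8 patch: the score peaks on the patch.
rng = np.random.default_rng(1)
A = np.zeros((64, 64))
A[30:38, 30:38] = rng.standard_normal((8, 8))
score = unusual_score(A, 8)
i, j = np.unravel_index(np.argmax(score), score.shape)
print(i, j)   # a block origin overlapping the anomalous patch
```

Flipping X before `fftconvolve` turns the convolution into the correlation C[r] = Σ_ρ X(ρ)A(ρ + r) used in step 2.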
It is possible to look at this algorithm in a different way. Namely, if we are trying to reconstruct the figure by using some projection operators X_C (for example DCT, as in JPEG), then the length of the code used for a component with a distribution like that of Fig. 2 will be proportional to the logarithm of the probability of the corresponding value of the projection X_C.A. Therefore, what we are scoring are the blocks that have some component of the code longer than a given number of bits (here we ignore the psychometric aspects of the coding). Effectively, we score the blocks with longer coding, e.g. the ones that have a lower probability of occurrence.
Using a smoothed version of the above algorithm in step 4 (adding not just one or zero but, for example, penalizing the point with the square of the projection difference with respect to the current block, divided by σ), and having in mind the universal distribution of the projections, one can compute the penalty as a function of the value of the projection x, which is just 1/2 + x²/2σ². Summing over all projections, one finds that the probability of detecting the best block is approximately given by 1/2[1 + erfc(M(1/2 + x²/2σ²))], as a consequence of the Central Limit Theorem. This estimate gives an idea of why one needs only a few projections to find the rarest block, in the sense of the global distribution of the blocks, almost independently of the size of the block. The only dependence on the size of the block enters through the factor σ², which is proportional to that size. Further, the probability of error drops better than exponentially with the increment of M.
The non-smoothed version performs somewhat better than the above estimate in the computer experiments.
Network
The pitfall of the considerations in the previous subsection is that the detected blocks are rare in an absolute sense, e.g. with respect to all figures that satisfy a power-law or similar distribution of the projections. Actually this is not desirable. If, for example, several spinal segments appear in an X-ray image, then although these can be rare in the context of all existing images, they are not rare in the context of thorax or chest X-ray images.
Therefore the parts of the images with many similar projections must "cancel" each other. This gives us the idea to build a network in which components with similar projections are connected by a negative feedback corresponding to the blocks with similar projections.
As we have seen in the previous section, the small projection values are much more probable and therefore less informative. Using this empirical argument, we can suggest that the connections between the blocks with large projections are more significant.
The network is symmetrical by its nature because of the reflexivity of the distances. We can try to build it in a way similar to the Hebb network [2] and define a Lyapunov, or energy, function of the network. Thus the network can be described in terms of an artificial recursive neural network. Connecting only the elements of the image that produce large projections, the network can be built extremely sparse [11], which makes it feasible in real cases.
Let us try to formalize the above considerations. For each point we define a neuron. The neurons corresponding to some point r and having projection x receive a positive input flux, which is proportional to − log p(x), where p is the probability of having projection with value x. The same element, if its projection is large, also receives a negative flux from the points r ′ with nearest projections that satisfy the condition |r − r ′ | > diam(S). The flux in general is a function of p(x) and x ′ − x.
As a first approximation we assume that the flux is constant with p(x) and the dependence on x ′ − x is trivial: the weight is 1 if |x ′ − x| < δ and zero otherwise, where δ is some parameter of the model.
In other words, we reformulate our problem in terms of a Hebb-like neural network with external field
h = −h_0 Σ_{i=1..M} log p(x_i)   (4)
and weights
w_{rr'} = − Σ_{i=1..M} 1{ |x_i| > aσ_i, |x'_i| > aσ_i, |x'_i − x_i| < δ, x'_i x_i > 0 },   (5)

where 1{·} is the indicator of the listed conditions.
The extra parameter h 0 balances between the global and the local effects. It can be chosen in a way that the mean fluxes of positive and negative currents are equal in the whole network. The parameter δ, as a proof of concept value, can be assumed to be equal to infinity. So the only parameter, as in the previous case, is a. The dynamics of the network over time t is given by the following equation [3]:
s_r(t + 1) = g(β[h_r + Σ_{r'} w_{rr'} s_{r'}(t) − T]),
where g(·) is a sigmoid function, s_r(t) is the state of the neuron at position r and time t, β is the inverse temperature, and T is the threshold of the system. The result must be insensitive to the particular choice of g(·).
Once the network is constructed, we need to choose its initial state. If the a priori probabilities for all points to be the origin of the rarest block are equal, one can choose s r (0) = 1, ∀r. Due to the non-linearity, the analysis of the results is not straightforward. The existence of the attractor is guaranteed by the symmetrical nature of the weights w, which is a necessary condition for the existence of an energy function.
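A toy version of Eqs. (4)-(5) and the update rule can be sketched as follows. This is only an illustration: the δ = ∞ simplification follows the text, while the concrete numbers, the logistic choice of g, and the toy external field h are our assumptions.

```python
import numpy as np

def build_weights(xs, sigmas, a=2.0):
    """Eq. (5) with delta = infinity: w_rr' counts (negatively) the operators i
    for which both projections are large (> a*sigma_i) and of the same sign.
    xs has shape (M, P): projection values of M operators at P block positions."""
    M, P = xs.shape
    W = np.zeros((P, P))
    for i in range(M):
        big = np.flatnonzero(np.abs(xs[i]) > a * sigmas[i])
        for r in big:
            for rp in big:
                if rp != r and xs[i, r] * xs[i, rp] > 0:
                    W[r, rp] -= 1.0
    return W

def run_network(h, W, T=0.5, beta=1.0, steps=200):
    """Synchronous iteration s(t+1) = g(beta*[h + W s - T]) with logistic g."""
    s = np.ones(len(h))          # initial state s_r(0) = 1 for all r
    for _ in range(steps):
        s = 1.0 / (1.0 + np.exp(-beta * (h + W @ s - T)))
    return s

# One projection, four positions: 0 and 1 have large similar projections and
# inhibit each other; 3 is large but of opposite sign, so it stays unsuppressed.
xs = np.array([[5.0, 4.8, 0.1, -5.0]])
W = build_weights(xs, sigmas=[1.0])
s = run_network(np.array([1.0, 1.0, 0.0, 1.0]), W)
print(np.round(s, 3))   # s[3] > s[0] = s[1] > s[2]
```

The symmetric, non-positive W guarantees the mutual inhibition of similar blocks described in the text, so the isolated rare block (position 3) ends up with the highest activity.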
We can further refine the results of the previous section by fixing the global threshold T so as to keep only some fraction of the neurons excited. Thus we obtain a bump activity of the network, previously considered in [9,7,8]. A sample result is shown in Fig. 5.
Regarding the time analysis of the procedure, one can see that the execution time is proportional to the number of weights w. Having in mind that the connectivity is actually between blocks, and that we can use a fraction of blocks smaller than 1/N², the execution time can drop below the N⁴ limit. Moreover, the number of steps needed to reach the attractor is of order log N.
Discussion and Future Directions
In this paper we present a method to find the most unusual (rare) part of two- and higher-dimensional images, when its shape is fixed but in general arbitrary.
The method is almost independent of the size of the shape in terms of execution speed. It gives good results on experimental images without a predefined model of the interesting event.
One necessary future development of the algorithm is to establish practical and computable criteria for the "rareness" of a block, and to compare the results on a large enough database in order to have a qualitative measure of the results. The criterion must be different from Eq. (1), because computing it directly tends to be very slow and brittle.
An exact calculation of the probabilistic features of the network in the thermodynamic limit, performed in the sense of the probability of finding the outliers, is also of common interest.
Among the future applications of the present method, one could mention experiments on different types of images and on large image databases, as well as experiments on accelerating the network via the special equivalence-class construction.
Fig. 1. The original test image: an X-ray image of a person with an ingested coin.
Fig. 2. The distribution of the projection value for a square shape of size 48x48 pixels.
Fig. 3. Score values for different sizes of the shape (from left to right: 24x24, 32x32, 40x40, 56x56). The value of the parameter a in all cases is 22.

The number of iterations M can be fixed empirically, or iteration can continue until the changes in B, normalized by that number, become insignificant. Following the algorithm, one can see that the time needed to perform it is proportional to M N² log N. The speed per image of size 1024x2048 on one and the same computer, with S of size 56x56 points, is about 3 seconds, compared to about an hour using the direct search implementing Eq. (1).³ Some results are presented in Figs. 3, 4, where we used square shapes of different sizes, 30 projection operators, and different values of a.
Fig. 4. Score values for different values of the parameter a (from left to right: a = 8, 10, 12, 16). The size of the shape is 24x24.
Fig. 5. Comparison between the score image (left) and the network activity image (right). The size of the area is 24x24 and the parameter a = 16.
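The procedure of steps 4-5 above can be sketched in code. This is a minimal sketch only: steps 1-3 are not shown in this excerpt, so the random zero-mean projection operator and the FFT-based convolution below are assumptions about them, chosen to match the stated M N^2 log N complexity; the function name and parameters are hypothetical.

```python
import numpy as np

def singularity_score(image, shape=(24, 24), M=30, a=16.0, seed=None):
    """Sketch of the projection-based rare-block detector described above.

    Assumption: steps 1-3 build a random zero-mean projection operator over
    the block shape S and convolve it with the image via the FFT.
    """
    rng = np.random.default_rng(seed)
    B = np.zeros(image.shape, dtype=float)
    F_img = np.fft.fft2(image)
    for _ in range(M):
        w = rng.standard_normal(shape)
        w -= w.mean()                                  # zero-mean projection
        C = np.real(np.fft.ifft2(F_img * np.fft.fft2(w, image.shape)))
        sigma = C.std()
        # Step 4: increment B wherever the projection value is "rare".
        B[np.abs(C) > a * sigma] += 1.0
    # Step 5: the maxima of B mark the most singular part of the image.
    return B / M
```

The FFT-based convolution makes each iteration O(N^2 log N) regardless of the block size, which is the source of the speed-up over the direct search.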
Similar results are achieved with the L1 norm. The algorithm was not tested with the Lmax norm due to its extreme noise sensitivity. We use L2 because of its relation to the PSNR criterion, which closely matches human subjective perception.
In general, the standard deviation will be larger for projections with larger low-frequency components. That is why we choose the criterion proportional to σ_X and not as an absolute value.
If the block is small enough, the convolution can be performed even faster in the space domain, and it is possible to improve the execution time.
Some of the images require normalization of the projection by the deviation of the block B.
Acknowledgments

The authors acknowledge the financial support from the Spanish Grants
A TAUBERIAN APPROACH TO WEYL'S LAW FOR THE KOHN LAPLACIAN ON SPHERES

Henry Bosch, Tyler Gonzales, Kamryn Spinelli, Gabe Udell, and Yunus E. Zeytuncu

arXiv:2010.04568v1 [math.CV] 9 Oct 2020
DOI: 10.4153/s0008439521000163

Abstract. We compute the leading coefficient in the asymptotic expansion of the eigenvalue counting function for the Kohn Laplacian on the spheres. We express the coefficient as an infinite sum and as an integral.
Introduction
Let S^{2n−1} denote the unit sphere in C^n, where n ≥ 2. The hypersurface S^{2n−1} is an embedded CR manifold, and the tangential Cauchy-Riemann operators $\bar{\partial}_b$ and $\bar{\partial}_b^*$ are defined on the corresponding Hilbert spaces. Furthermore, the Kohn Laplacian on L^2(S^{2n−1}),

$$\Box_b=\bar{\partial}_b^*\,\bar{\partial}_b,$$

is a linear self-adjoint densely defined closed operator. We refer to [Bog91] and [CS01] for the detailed definitions.
Our inspiration is the celebrated Weyl's law for Riemannian manifolds. In that setting, the eigenvalue counting function of the Laplace-Beltrami operator on a manifold M has leading coefficient proportional to the volume of M with a constant depending only on the dimension of M . There is also a corresponding result for the extension of this operator (also called the Hodge Laplacian, or the Laplace-de Rham operator) to differential forms.
Motivated by the spectral theory for the Laplacian on Riemannian manifolds, one can investigate the spectrum of $\Box_b$ on CR manifolds and its relation to the complex geometry of the underlying manifold. For example, in [Fu05] and [Fu08] Fu studied the spectrum of the $\bar\partial$-Neumann Laplacian on smooth pseudoconvex domains Ω and related the distribution of its eigenvalues to the D'Angelo type of bΩ. In [Fol72] Folland computed the spectrum of $\Box_b$ on S^{2n−1} at all differential form levels. Recently, in [ABRZ19] and [ABB+19] the authors used the spectrum of the Kohn Laplacian to prove the non-embeddability of the Rossi sphere.
For λ > 0, let N(λ) denote the number of positive eigenvalues (counting multiplicity) of $\Box_b$ on L^2(S^{2n−1}) that are less than or equal to λ. It was noted in [ABB+19] that N(λ) grows on the order of λ^n, and later, in [BZ20] (see the erratum), the leading coefficient in the asymptotic expansion was calculated as an infinite sum. In particular, [BZ20] obtained

$$\lim_{\lambda\to\infty}\frac{N(\lambda)}{\lambda^n}=\frac{1}{2^n\,n!}\sum_{q=1}^{\infty}\frac{1}{q^n}\left[\binom{n+q-2}{q}+\binom{q-1}{n-2}\right]$$

by using careful counting arguments.
1.1. Main Result. In this paper, we obtain the same leading coefficient by another argument, namely by Karamata's Tauberian Theorem. We highlight that this technique is different than the one in [BZ20]. Furthermore, we formulate the leading coefficient as a product of the volume of S 2n−1 and an integral that only depends on n. This representation resonates better with Weyl's law and is more amenable to generalization to other CR manifolds.
Theorem 1.1. Let N(λ) be the eigenvalue counting function for $\Box_b$ on L^2(S^{2n−1}) as above. Then

$$\lim_{\lambda\to\infty}\frac{N(\lambda)}{\lambda^n}=\frac{1}{2^n\,n!}\sum_{q=1}^{\infty}\frac{1}{q^n}\left[\binom{n+q-2}{q}+\binom{q-1}{n-2}\right]=\mathrm{vol}(S^{2n-1})\,\frac{n-1}{n(2\pi)^n\,\Gamma(n+1)}\int_{-\infty}^{\infty}\left(\frac{x}{\sinh x}\right)^n e^{-x(n-2)}\,dx.$$

The integral representation is strikingly similar to (and indeed inspired by) a similar formula for the leading coefficient of the eigenvalue counting function for $\Box_b$ acting on (0, q)-forms (where q ≥ 1) in [ST84, Sta84]. However, we note that the expression in Theorem 1.1 is not a special case of the expression in [ST84]. Stanton and Tartakoff's formula doesn't cover the case of functions, and some non-trivial adjustments are necessary to formulate the correct expression for the case q = 0. We present a connection between our formula and the one in [ST84] in the last section. We list the task of finding the leading coefficient of the eigenvalue counting function of $\Box_b$ acting on functions of a general pseudoconvex CR manifold of hypersurface type as an open problem.
1.2. Ingredients. Before we present a proof of Theorem 1.1 in the next section, we state some known facts about the eigenvalues of b on S 2n−1 and Karamata's Tauberian Theorem.
The Kohn Laplacian acts on the space of L^2 differential forms on the sphere. In [Fol72], Folland uses unitary representations to explicitly compute the eigenvalues and corresponding eigenspaces for the Kohn Laplacian on (0, j)-forms. In particular, he shows that eigenforms

$$\bar{z}_1^{\,q-1} z_n^{\,p} \sum_{i=1}^{j+1} d\bar{z}_1 \wedge \cdots \wedge \widehat{d\bar{z}_i} \wedge \cdots \wedge d\bar{z}_{j+1}$$

have corresponding eigenvalue 2(q + j)(p + n − j − 1), where the hat over a form indicates its exclusion from the wedge product. We are interested in the case where j = 0, corresponding to functions on the sphere. Folland (see also [ABRZ19, ABB+19]) explicitly shows that in this case

$$L^2(S^{2n-1})=\bigoplus_{k=0}^{\infty}\mathcal{H}_k(S^{2n-1})=\bigoplus_{p,q=0}^{\infty}\mathcal{H}_{p,q}(S^{2n-1}),$$

where H_{p,q}(S^{2n−1}) is the space of spherical harmonics of bidegree (p, q). Furthermore, each space H_{p,q}(S^{2n−1}) is an eigenspace of $\Box_b$ with dimension

$$\dim(\mathcal{H}_{p,q}(S^{2n-1}))=\binom{n+p-1}{p}\binom{n+q-1}{q}-\binom{n+p-2}{p-1}\binom{n+q-2}{q-1},$$
as computed by an inclusion-exclusion argument (see [Kli04]). These spaces have corresponding eigenvalues 2q(p + n − 1). Karamata's Tauberian Theorem has been used in [Kac66, ST84] to understand the distribution of eigenvalues. We follow the statement in [ANPS09, Theorem 1.1, page 57], where the reader can find further references.
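As a sanity check of the dimension formula, one can verify numerically that the bidegree dimensions sum to the classical dimension of degree-k spherical harmonics on S^{2n−1} ⊂ R^{2n} (that classical formula is a standard fact used here only as a test, not a statement from this paper):

```python
from math import comb

def c(a, b):
    # binomial coefficient with the convention that it vanishes for b < 0
    return comb(a, b) if a >= 0 and b >= 0 else 0

def dim_Hpq(n, p, q):
    # dim H_{p,q}(S^{2n-1}) from the inclusion-exclusion formula above
    return c(n + p - 1, p) * c(n + q - 1, q) - c(n + p - 2, p - 1) * c(n + q - 2, q - 1)

def dim_Hk(n, k):
    # classical dimension of degree-k spherical harmonics on S^{d-1}, d = 2n
    d = 2 * n
    return c(k + d - 1, d - 1) - c(k + d - 3, d - 1)
```

For example, for n = 2 and k = 2 the bidegrees (2,0), (1,1), (0,2) contribute 3 + 3 + 3 = 9 = dim H_2.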
Theorem 1.2 (Karamata). Let {λ_j}_{j∈N} be a sequence of positive real numbers such that $\sum_{j\in\mathbb{N}}e^{-\lambda_j t}$ converges for every t > 0. Then for n > 0 and a ∈ R, the following are equivalent:

(1) $\lim_{t\to 0^+} t^n \sum_{j\in\mathbb{N}} e^{-\lambda_j t} = a$;
(2) $\lim_{\lambda\to\infty} \dfrac{N(\lambda)}{\lambda^n} = \dfrac{a}{\Gamma(n+1)}$,

where N(λ) = #{λ_j : λ_j ≤ λ} is the counting function.
The next section is dedicated to the proof of Theorem 1.1. First, we prove the expression for the leading coefficient as a series (recovering the result in [BZ20]). Next, we express the leading coefficient as the volume of S 2n−1 times an integral.
Tonelli Tactic To Tie Together Two Tauberian Terms
Instead of considering N(λ) directly, we define the function $G(t)=\sum_{j\in\mathbb{N}}e^{-\lambda_j t}$ and consider

$$\lim_{t\to 0^+}t^n\sum_{j\in\mathbb{N}}e^{-\lambda_j t},$$

where the sequence {λ_j}_{j∈N} is the sequence of all positive eigenvalues (with multiplicity) of $\Box_b$ on the sphere. Once we compute this limit we can invoke Karamata's Theorem. Using the preliminaries from Section 1.2, we have

$$G(t)=\sum_{j\in\mathbb{N}}e^{-\lambda_j t}=\sum_{q=1}^{\infty}\sum_{p=0}^{\infty}\dim(\mathcal{H}_{p,q})\,e^{-2q(p+n-1)t}=\sum_{q=1}^{\infty}\sum_{p=0}^{\infty}\left[\binom{n+p-1}{p}\binom{n+q-1}{q}-\binom{n+p-2}{p-1}\binom{n+q-2}{q-1}\right]\left(e^{-2tq}\right)^{p+n-1}.$$
Applying the standard recursive formula for binomial coefficients to the first product of binomial coefficients gives us

$$\begin{aligned}\binom{n+p-1}{p}\binom{n+q-1}{q}&=\binom{n+p-1}{p}\left[\binom{n+q-2}{q}+\binom{n+q-2}{q-1}\right]\\&=\binom{n+p-1}{p}\binom{n+q-2}{q}+\binom{n+p-1}{p}\binom{n+q-2}{q-1}\\&=\binom{n+p-1}{p}\binom{n+q-2}{q}+\left[\binom{n+p-2}{p}+\binom{n+p-2}{p-1}\right]\binom{n+q-2}{q-1}\\&=\binom{n+p-1}{p}\binom{n+q-2}{q}+\binom{n+p-2}{p}\binom{n+q-2}{q-1}+\binom{n+p-2}{p-1}\binom{n+q-2}{q-1}.\end{aligned}$$
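The expansion above is a finite identity and can be checked numerically over a range of parameters (a sketch; c(a, b) denotes the binomial coefficient with the convention that it vanishes for a negative lower index):

```python
from math import comb

def c(a, b):
    return comb(a, b) if b >= 0 else 0

def identity_holds(n, p, q):
    # the expanded form of C(n+p-1, p) * C(n+q-1, q) derived above
    lhs = c(n + p - 1, p) * c(n + q - 1, q)
    rhs = (c(n + p - 1, p) * c(n + q - 2, q)
           + c(n + p - 2, p) * c(n + q - 2, q - 1)
           + c(n + p - 2, p - 1) * c(n + q - 2, q - 1))
    return lhs == rhs
```

For instance, with n = 2, p = 1, q = 1 both sides equal 4.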
This allows us to rewrite G(t) as a sum of two positive pieces by noticing that

$$G(t)=\sum_{q=1}^{\infty}\sum_{p=0}^{\infty}\left[\binom{n+p-1}{p}\binom{n+q-2}{q}+\binom{n+p-2}{p}\binom{n+q-2}{q-1}\right]e^{-2tq(p+n-1)}.$$

We label the parts as

$$G_1(t)=\sum_{q=1}^{\infty}\sum_{p=0}^{\infty}\binom{n+p-1}{p}\binom{n+q-2}{q}e^{-2tq(p+n-1)}$$

and

$$G_2(t)=\sum_{q=1}^{\infty}\sum_{p=0}^{\infty}\binom{n+p-2}{p}\binom{n+q-2}{q-1}e^{-2tq(p+n-1)},$$

so that G(t) = G_1(t) + G_2(t).
Our goal is to calculate lim t→0 + t n G(t), which we do by calculating lim t→0 + t n G 1 (t) and lim t→0 + t n G 2 (t) separately.
Theorem 1.1 claims that $\lim_{\lambda\to\infty}N(\lambda)/\lambda^n$ can be written either as an infinite series or as an improper integral. The key to obtaining these two distinct forms is Tonelli's Theorem (see [Rud87, Theorem 8.8]). Since the summands in G_1(t) and G_2(t) are positive, we can exchange the order of the infinite sums in each. When the outer sum is over q, we can show that $\lim_{t\to 0^+}t^nG_1(t)$ is an infinite series and $\lim_{t\to 0^+}t^nG_2(t)$ is an improper integral; exchanging the order of summation allows us to express $\lim_{t\to 0^+}t^nG_1(t)$ as an integral and $\lim_{t\to 0^+}t^nG_2(t)$ as an infinite series.
2.1. Serious Series for Spherical Spectra. In this part we prove the infinite series formula for $\lim_{\lambda\to\infty}N(\lambda)/\lambda^n$. The computations for G_1(t) and G_2(t) are quite similar. In each instance we will apply the Dominated Convergence Theorem by leveraging Lemma 2.2.
In each computation we eventually exchange the limit as t → 0 + with a sum and in each case we require the following limit calculation, which follows from L'Hopital's rule.
Lemma 2.1. Let α > 0. Then

$$\lim_{t\to 0}\frac{t^n}{(1-e^{-\alpha t})^n}=\alpha^{-n}.$$
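Lemma 2.1 is easy to confirm numerically for a few sample values (a sanity check only; the choice t = 10^{-6} as "small" is arbitrary):

```python
import math

def ratio(t, n, alpha):
    # t^n / (1 - e^{-alpha t})^n, which should approach alpha^{-n} as t -> 0+
    return t**n / (1.0 - math.exp(-alpha * t))**n
```

For small t the ratio agrees with α^{−n} up to a relative error of order nαt/2.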
In order to exchange limits with sums we will apply the Dominated Convergence Theorem. This next technical lemma will come in handy for showing the conditions of the Dominated Convergence Theorem are satisfied in each case.
Lemma 2.2. For n ∈ N, there exists M > 0 such that

$$f(x)=\frac{x^n e^{2x(n-1)}}{(e^{2x}-1)^n}<M$$

for all x ∈ (0, ∞).
Proof. The function f is continuous on (0, ∞). If we show that $\lim_{x\to\infty}f(x)<\infty$ and $\lim_{x\to 0^+}f(x)<\infty$, it will follow that there exists some M such that f(x) < M on (0, ∞). Indeed, $\lim_{x\to\infty}f(x)=0$. Further, L'Hopital's rule gives $\lim_{x\to 0^+}\frac{x}{e^{2x}-1}=\frac{1}{2}$, and so $\lim_{x\to 0^+}\frac{x^n e^{2x(n-1)}}{(e^{2x}-1)^n}=\frac{1}{2^n}$. Now we are ready to compute $\lim_{t\to 0^+}t^nG_1(t)$.
Proposition 2.3.

$$\lim_{t\to 0^+}t^nG_1(t)=\frac{1}{2^n}\sum_{q=1}^{\infty}\binom{n+q-2}{q}\frac{1}{q^n}.$$
Proof. We start by deriving an expression for G_1(t) containing only a single sum. For |z| < 1, there is the Taylor series expansion

$$\frac{1}{(1-z)^n}=\sum_{p=0}^{\infty}\binom{n+p-1}{p}z^p=\sum_{p=1}^{\infty}\binom{n+p-2}{p-1}z^{p-1}.$$

Let z = e^{−2tq}; then

$$G_1(t)=\sum_{q=1}^{\infty}\binom{n+q-2}{q}\sum_{p=0}^{\infty}\binom{n+p-1}{p}z^{p+n-1}=\sum_{q=1}^{\infty}\binom{n+q-2}{q}\frac{z^{n-1}}{(1-z)^n}=\sum_{q=1}^{\infty}\binom{n+q-2}{q}\frac{(e^{-2tq})^{n-1}}{(1-e^{-2tq})^n}=\sum_{q=1}^{\infty}\binom{n+q-2}{q}\frac{e^{2tq}}{(e^{2tq}-1)^n}.$$

For positive t,

$$t^n\frac{e^{2tq}}{(e^{2tq}-1)^n}\le\frac{1}{q^n}\frac{(qt)^n e^{2tq(n-1)}}{(e^{2tq}-1)^n}=\frac{f(qt)}{q^n}\le\frac{M}{q^n},$$

where f is as in Lemma 2.2. Hence $t^n\binom{n+q-2}{q}\frac{e^{2tq}}{(e^{2tq}-1)^n}\le M\binom{n+q-2}{q}\frac{1}{q^n}$ for all t, and since $\sum_{q=1}^{\infty}M\binom{n+q-2}{q}\frac{1}{q^n}$ converges, we are free to apply the Dominated Convergence Theorem to $\lim_{t\to 0^+}t^nG_1(t)$.
This technical justification allows us to exchange the order of the limit and summation in $\lim_{t\to 0^+}t^nG_1(t)$ and conclude that

$$\lim_{t\to 0^+}\sum_{q=1}^{\infty}\binom{n+q-2}{q}\frac{t^n(e^{-2tq})^{n-1}}{(1-e^{-2tq})^n}=\sum_{q=1}^{\infty}\lim_{t\to 0^+}\binom{n+q-2}{q}\frac{t^n(e^{-2tq})^{n-1}}{(1-e^{-2tq})^n}=\sum_{q=1}^{\infty}\binom{n+q-2}{q}\frac{1}{(2q)^n},$$

where the final equality follows from Lemma 2.1.
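For n = 2 the single-sum form of G_1 makes the limit easy to confirm numerically: since C(n+q−2, q) = C(q, q) = 1, the claimed limit is (1/4)Σ 1/q² = π²/24. A sketch (the choice t = 10^{-3} and the cutoff are arbitrary):

```python
import math

def t2_G1(t):
    # t^2 * G_1(t) for n = 2, using G_1(t) = sum_q e^{2tq} / (e^{2tq} - 1)^2
    s = 0.0
    q = 1
    while True:
        x = 2.0 * t * q
        if x > 50.0:            # remaining terms are negligibly small
            break
        s += math.exp(x) / math.expm1(x) ** 2
        q += 1
    return t * t * s
```

At t = 10^{-3} the value agrees with π²/24 ≈ 0.41123 to within about one percent.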
Next we move to the second piece and compute $\lim_{t\to 0^+}t^nG_2(t)$.
Proposition 2.4.

$$\lim_{t\to 0^+}t^nG_2(t)=\frac{1}{2^n}\sum_{q=1}^{\infty}\binom{q-1}{n-2}\frac{1}{q^n}.$$
Proof. We start out by manipulating the form of G_2(t) as we did with G_1(t), but this time we apply Tonelli's Theorem to switch the order of summation. In our calculation we will make use of the substitutions z = e^{−2t(p+n−1)} and w = p + n − 1, and we will apply the power series expansion of 1/(1−z)^{n−1}:

$$G_2(t)=\sum_{q=1}^{\infty}\sum_{p=0}^{\infty}\binom{n+p-2}{p}\binom{n+q-2}{q-1}e^{-2tq(p+n-1)}=\sum_{p=0}^{\infty}\binom{n+p-2}{p}\sum_{q=1}^{\infty}\binom{n+q-2}{q-1}z^{q}=\sum_{p=0}^{\infty}\binom{n+p-2}{p}\frac{z}{(1-z)^n}=\sum_{w=n-1}^{\infty}\binom{w-1}{n-2}\frac{e^{-2tw}}{(1-e^{-2tw})^n}=\sum_{w=1}^{\infty}\binom{w-1}{n-2}\frac{e^{2tw(n-1)}}{(e^{2tw}-1)^n}.$$

By Lemma 2.2,

$$t^n\binom{w-1}{n-2}\frac{e^{2tw(n-1)}}{(e^{2tw}-1)^n}=\binom{w-1}{n-2}\frac{f(tw)}{w^n}\le M\binom{w-1}{n-2}\frac{1}{w^n}.$$

As $\sum_{w=1}^{\infty}M\binom{w-1}{n-2}\frac{1}{w^n}$ converges, we apply the Dominated Convergence Theorem to $\lim_{t\to 0^+}t^nG_2(t)$ in order to exchange the limit and the sum. Thus,

$$\lim_{t\to 0^+}t^nG_2(t)=\sum_{w=1}^{\infty}\lim_{t\to 0^+}\binom{w-1}{n-2}\frac{t^n e^{2tw(n-1)}}{(e^{2tw}-1)^n}=\frac{1}{2^n}\sum_{w=1}^{\infty}\binom{w-1}{n-2}\frac{1}{w^n}.$$
Applying the Tauberian Theorem 1.2 in combination with Propositions 2.3 and 2.4 proves that

$$\lim_{\lambda\to\infty}\frac{N(\lambda)}{\lambda^n}=\frac{1}{2^n\,n!}\sum_{q=1}^{\infty}\frac{1}{q^n}\left[\binom{n+q-2}{q}+\binom{q-1}{n-2}\right].$$
This concludes the proof of the first identity in Theorem 1.1.
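For n = 2 the series formula can be checked directly against the eigenvalue count: by the formulas in Section 1.2, dim H_{p,q} = (p+1)(q+1) − pq = p + q + 1 and the eigenvalues are 2q(p+1), while the series evaluates to (1/8)Σ 2/q² = π²/24. A sketch (the cutoff λ = 200000 is an arbitrary choice):

```python
import math

def count_ratio(lam):
    # N(lam)/lam^2 for n = 2: eigenvalue 2q(p+1) with multiplicity p + q + 1
    total = 0
    for q in range(1, lam // 2 + 1):
        P = lam // (2 * q)                        # p + 1 runs over 1..P
        total += P * (P - 1) // 2 + P * (q + 1)   # sum of (p+q+1) over p = 0..P-1
    return total / lam**2
```

The computed ratio approaches π²/24 ≈ 0.41123 as λ grows.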
2.2. Isn't it Interesting: Intense Integral is Identical In Immensity. In this subsection we will prove the second part of Theorem 1.1. We start off by re-analyzing G_1(t):
$$G_1(t)=\sum_{q=1}^{\infty}\sum_{p=0}^{\infty}\binom{n+p-1}{p}\binom{n+q-2}{q}e^{-2tq(p+n-1)}=\sum_{p=0}^{\infty}\binom{n+p-1}{p}\sum_{q=1}^{\infty}\binom{n+q-2}{q}\left(e^{-2t(p+n-1)}\right)^q=\sum_{p=0}^{\infty}\binom{n+p-1}{p}\left[\frac{1}{(1-e^{-2t(p+n-1)})^{n-1}}-1\right]=\sum_{w=n-1}^{\infty}\binom{w}{n-1}\left[\frac{1}{(1-e^{-2tw})^{n-1}}-1\right],$$

where the switching of the order of summation is justified by Tonelli's Theorem and we have taken w = n + p − 1. To calculate the limit as t → 0^+, we need the following lemma.
Lemma 2.5. Fix an integer r ≥ 1 and consider the function

$$f_r(x)=x^r\left[\frac{1}{(1-e^{-2x})^r}-1\right]$$

defined for x > 0. Then
(1) f_r(x) > 0.
(2) f_r′(x) < 0 for sufficiently large x.
(3) f_r is bounded and $\int_0^\infty f_r(x)\,dx<\infty$.

Proof. We can tell f_r(x) > 0 since e^{−2x} < 1, so 0 < 1 − e^{−2x} < 1 and hence 1/(1−e^{−2x})^r > 1, proving (1).
To establish (2), a derivative calculation shows that

$$f_r'(x)=rx^{r-1}\left[\frac{1}{(1-e^{-2x})^r}-1\right]-x^r\frac{2re^{-2x}}{(1-e^{-2x})^{r+1}}=\frac{rx^{r-1}}{(1-e^{-2x})^{r+1}}\left[1-e^{-2x}-(1-e^{-2x})^{r+1}-2xe^{-2x}\right]=\frac{rx^{r-1}}{(1-e^{-2x})^{r+1}e^{2x}}\left[e^{2x}-1-(1-e^{-2x})^{r+1}e^{2x}-2x\right].$$

Define $l(x)=e^{2x}-1-(1-e^{-2x})^{r+1}e^{2x}-2x$. To prove (2), it suffices to show that l(x) < 0 for sufficiently large x. For this, let y = e^{−2x} and note that

$$\lim_{x\to\infty}\left[e^{2x}-1-(1-e^{-2x})^{r+1}e^{2x}\right]=\lim_{y\to 0^+}\frac{1-y-(1-y)^{r+1}}{y}=\lim_{y\to 0^+}\left[-1+(r+1)(1-y)^r\right]=r.$$

Looking at the definition of l, the first three terms converge to r and the last goes to −∞, so l(x) < 0 for sufficiently large x.
To prove (3) we will show that f_r(x) is bounded by a constant near 0 and bounded by an exponentially decaying function for large x. The function f_r(x) isn't defined at x = 0, but Lemma 2.1 implies that $\lim_{x\to 0}f_r(x)<\infty$, so the discontinuity at 0 is removable. This implies that f_r(x) is bounded on [0, log(2)/2]. To bound f_r(x) on [log(2)/2, ∞) we note

$$\frac{1}{(1-e^{-2x})^r}-1=\frac{1}{((e^{2x}-1)/e^{2x})^r}-1=\frac{(e^{2x})^r-(e^{2x}-1)^r}{(e^{2x}-1)^r}.$$

Using the formula for a difference of rth powers, we obtain the following bound on the numerator of the above fraction:

$$(e^{2x})^r-(e^{2x}-1)^r=\left(e^{2x}-(e^{2x}-1)\right)\sum_{i=0}^{r-1}(e^{2x})^i(e^{2x}-1)^{r-1-i}\le\sum_{i=0}^{r-1}(e^{2x})^i(e^{2x})^{r-1-i}=re^{2x(r-1)}.$$

Hence,

$$\frac{1}{(1-e^{-2x})^r}-1\le\frac{re^{2x(r-1)}}{(e^{2x}-1)^r}.$$

Note $e^{2x}-1\ge e^{2x}/2$ for x ≥ log(2)/2, so

$$\frac{1}{(1-e^{-2x})^r}-1\le\frac{re^{2x(r-1)}}{(e^{2x}/2)^r}=\frac{r2^r}{e^{2x}}.$$

Applying this to f_r(x) gives that when x ≥ log(2)/2, $f_r(x)\le r2^r x^r e^{-2x}$. Thus f_r(x) is dominated by an exponentially decaying function for sufficiently large x, so

$$\int_0^{\infty}f_r(x)\,dx<\infty.$$
It will be helpful to make the following definition.

Definition 2.6. For real α ≠ 0, define the scaled ceiling function ⌈·⌉_α : R → R by ⌈x⌉_α = α⌈x/α⌉.

For example, ⌈7⌉_3 = 3⌈7/3⌉ = 3 · 3 = 9 and ⌈6⌉_2 = 2⌈6/2⌉ = 2 · 3 = 6. Then we have the following:

Proposition 2.7. ⌈x⌉_α is x rounded up to the nearest integral multiple of α, i.e. ⌈x⌉_α = min{nα : n ∈ Z, nα ≥ x}.
Proof.
⌈x⌉ α = α min{n ∈ Z : n ≥ x/α} = α min{n ∈ Z : αn ≥ x} and the result follows.
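Definition 2.6 translates directly into code (a small illustration):

```python
import math

def scaled_ceil(x, alpha):
    # x rounded up to the nearest integral multiple of alpha (Definition 2.6)
    return alpha * math.ceil(x / alpha)
```

For instance, `scaled_ceil(7, 3)` returns 9 and `scaled_ceil(6, 2)` returns 6, matching the worked examples above; the difference `scaled_ceil(x, alpha) - x` always lies in [0, alpha) for alpha > 0, which is property (1) below.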
The scaled ceiling function has the following properties:
(1) Fix x ∈ R, α > 0. Then 0 ≤ ⌈x⌉ α − x < α.
(2) Fix x ∈ R. Then lim α→0 + ⌈x⌉ α = x.
(3) Let f : [a, b] → R be monotonically decreasing. Fix 0 < α ≤ b − a. Then for all x ∈ [a, b − α], f(⌈x⌉_α) ≤ f(x).

The next few propositions will allow us to compute $\lim_{t\to 0^+}t^nG_1(t)$.
Proposition 2.8.

$$\lim_{t\to 0^+}t^n\sum_{w=n-1}^{\infty}w^{n-1}\left[\frac{1}{(1-e^{-2tw})^{n-1}}-1\right]=\int_0^{\infty}x^{n-1}\left[\frac{1}{(1-e^{-2x})^{n-1}}-1\right]dx.$$
Proof. Manipulating the sum, we have

$$t^n\sum_{w=n-1}^{\infty}w^{n-1}\left[\frac{1}{(1-e^{-2tw})^{n-1}}-1\right]=\sum_{w=n-1}^{\infty}\int_{w-1}^{w}t^n\lceil w'\rceil^{n-1}\left[\frac{1}{(1-e^{-2t\lceil w'\rceil})^{n-1}}-1\right]dw'=\int_{n-2}^{\infty}t^n\lceil w'\rceil^{n-1}\left[\frac{1}{(1-e^{-2t\lceil w'\rceil})^{n-1}}-1\right]dw'=\int_{t(n-2)}^{\infty}\lceil x\rceil_t^{\,n-1}\left[\frac{1}{(1-e^{-2\lceil x\rceil_t})^{n-1}}-1\right]dx=\int_{t(n-2)}^{\infty}f_{n-1}(\lceil x\rceil_t)\,dx,$$

where we substituted x = tw′ and used t⌈x/t⌉ = ⌈x⌉_t. Fix C such that f_{n−1}′(x) < 0 for x ≥ C, and fix M such that f_{n−1}(x) < M. Then

$$f_{n-1}(\lceil x\rceil_t)\le M\,\mathbb{1}_{\{x<C\}}+f_{n-1}(x)$$

for all x > 0. Since the integral of the right-hand side is finite, we may apply dominated convergence to see that

$$\lim_{t\to 0^+}t^n\sum_{w=n-1}^{\infty}w^{n-1}\left[\frac{1}{(1-e^{-2tw})^{n-1}}-1\right]=\int_0^{\infty}x^{n-1}\left[\frac{1}{(1-e^{-2x})^{n-1}}-1\right]dx,$$

which completes the proof.
The following proposition will help us compute the rest of the limits we need before we can tackle $\lim_{t\to 0^+}t^nG(t)$ in its entirety.

Proposition 2.9. Suppose that a_t(w) is a positive function of real t and integer w ≥ 1 such that $\lim_{t\to 0^+}a_t(w)=0$ for each w and $\lim_{t\to 0^+}\sum_{w=1}^{\infty}a_t(w)=M<\infty$. Then

$$\lim_{t\to 0^+}\sum_{w=1}^{\infty}\frac{a_t(w)}{w}=0.$$
Proof. Let ε > 0 be arbitrary and let k > 2M/ε − 1 be an integer. Since $\lim_{t\to 0^+}a_t(w)=0$ for each w, there exists some T such that for all 0 < t < T and all w ≤ k, a_t(w) ≤ ε/(2k). Thus for t < T, we have

$$\sum_{w=1}^{\infty}\frac{a_t(w)}{w}=\sum_{w=1}^{k}\frac{a_t(w)}{w}+\sum_{w=k+1}^{\infty}\frac{a_t(w)}{w}\le\sum_{w=1}^{k}a_t(w)+\frac{1}{k+1}\sum_{w=1}^{\infty}a_t(w)\le\sum_{w=1}^{k}\frac{\varepsilon}{2k}+\frac{M}{k+1}\le\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon.$$

Therefore $\lim_{t\to 0^+}\sum_{w=1}^{\infty}a_t(w)/w=0$.

Proposition 2.10. Fix 0 ≤ k < n − 1. Then

$$\lim_{t\to 0^+}t^n\sum_{w=n-1}^{\infty}w^{k}\left[\frac{1}{(1-e^{-2tw})^{n-1}}-1\right]=0.$$

Proof. Set $a_t(w)=t^n w^{k+1}\left[\frac{1}{(1-e^{-2tw})^{n-1}}-1\right]$ for w ≥ n − 1 (and a_t(w) = 0 otherwise), so that the sum in question is $\sum_{w}a_t(w)/w$. Since k + 1 ≤ n − 1, we have $a_t(w)\le t^n w^{n-1}\left[\frac{1}{(1-e^{-2tw})^{n-1}}-1\right]$, and each a_t(w) → 0 as t → 0^+. The boundedness of $t^n\sum_{w=n-1}^{\infty}w^{n-1}\left[\frac{1}{(1-e^{-2tw})^{n-1}}-1\right]$ as t → 0^+ allows us to apply Propositions 2.9 and 2.8 and arrive at the desired conclusion.
With Propositions 2.8 and 2.10 in hand, we can move forward. We break $\binom{w}{n-1}$ into a polynomial in w, as follows:

$$\binom{w}{n-1}=\frac{w!}{(n-1)!\,(w-n+1)!}=\frac{w(w-1)\cdots(w-n+2)}{(n-1)!}=\frac{w^{n-1}}{(n-1)!}+\sum_{k=0}^{n-2}a_k w^k.$$

We write

$$\lim_{t\to 0^+}t^nG_1(t)=\lim_{t\to 0^+}t^n\sum_{w=n-1}^{\infty}\binom{w}{n-1}\left[\frac{1}{(1-e^{-2tw})^{n-1}}-1\right]=\lim_{t\to 0^+}\left\{\frac{t^n}{(n-1)!}\sum_{w=n-1}^{\infty}w^{n-1}\left[\frac{1}{(1-e^{-2tw})^{n-1}}-1\right]+t^n\sum_{k=0}^{n-2}a_k\sum_{w=n-1}^{\infty}w^k\left[\frac{1}{(1-e^{-2tw})^{n-1}}-1\right]\right\}=\frac{1}{(n-1)!}\int_0^{\infty}x^{n-1}\left[\frac{1}{(1-e^{-2x})^{n-1}}-1\right]dx.\quad(2.1)$$
The final equality was obtained by applying Propositions 2.8 and 2.10. Now we move on to G_2(t). We have

$$G_2(t)=\sum_{q=1}^{\infty}\sum_{p=0}^{\infty}\binom{n+p-2}{p}\binom{n+q-2}{q-1}e^{-2tq(p+n-1)}=\sum_{q=1}^{\infty}\binom{n+q-2}{q-1}e^{-2tq(n-1)}\sum_{p=0}^{\infty}\binom{n+p-2}{p}\left(e^{-2tq}\right)^p=\sum_{q=1}^{\infty}\binom{n+q-2}{q-1}\frac{e^{-2tq(n-1)}}{(1-e^{-2tq})^{n-1}}=\sum_{q=1}^{\infty}\binom{n+q-2}{n-1}\frac{e^{-2tq(n-1)}}{(1-e^{-2tq})^{n-1}}.$$
This next lemma plays a role in our analysis of G 2 (t) which is analogous to the role of Lemma 2.5 in our analysis of G 1 (t).
Lemma 2.11. Fix integers r ≥ 0 and s ≥ 1, and consider the function

$$g_{r,s}(x)=\frac{x^r e^{-2xs}}{(1-e^{-2x})^s}$$

defined for x > 0. Then
(1) g_{r,s}(x) > 0.
(2) g_{r,r}′(x) < 0.
(3) If r ≥ s, g_{r,s}(x) is bounded and $\int_0^\infty g_{r,s}(x)\,dx<\infty$.
(4) If r ≥ s + 1, $\lim_{x\to 0}g_{r,s}(x)=0$.
Proof. Suppose r ≥ 0 and s ≥ 1 are integers and that x > 0. As x^r > 0, e^{−2xs} > 0, and 1 − e^{−2x} > 0, we have g_{r,s}(x) > 0, which shows (1). To prove (2) notice that we may write

$$g_{r,r}(x)=\frac{x^r e^{-2xr}}{(1-e^{-2x})^r}=\frac{x^r}{(e^{2x}-1)^r}.$$

Since the function x/(e^{2x} − 1) is decreasing, it follows that g_{r,r}′(x) < 0. For part (3), fix M ∈ R such that for all x ≥ M, $e^{2x}-1\ge e^{2x}/2$ and $(e^{2x})^{1/2}\ge x^r$. Then for x ≥ M,

$$g_{r,s}(x)=\frac{x^r}{(e^{2x}-1)^s}\le\frac{x^r}{(e^{2x}/2)^s}=\frac{2^s x^r}{(e^{2x})^s}\le\frac{2^s(e^{2x})^{1/2}}{(e^{2x})^s}=\frac{2^s}{(e^{2x})^{s-1/2}}.$$

As s ≥ 1, the right-hand side has finite integral over [M, ∞) and hence $\int_M^\infty g_{r,s}(x)\,dx<\infty$. The integral of g_{r,s}(x) over [0, M] is also finite because we can extend g_{r,s}(x) to a continuous, bounded function on this compact interval. Adding these two parts together shows that

$$\int_0^{\infty}g_{r,s}(x)\,dx<\infty.$$

It remains to show $\lim_{x\to 0}g_{r,s}(x)=0$ whenever r ≥ s + 1. First notice that Lemma 2.1 can be used to show that

$$\lim_{x\to 0}\frac{x^r}{(1-e^{-2x})^s}=\lim_{x\to 0}x^{r-s}\cdot\lim_{x\to 0}\frac{x^s}{(1-e^{-2x})^s}=0\cdot 2^{-s}=0.$$

Therefore, when r ≥ s + 1, we have

$$\lim_{x\to 0}\frac{x^r e^{-2xs}}{(1-e^{-2x})^s}=\lim_{x\to 0}e^{-2xs}\cdot\lim_{x\to 0}\frac{x^r}{(1-e^{-2x})^s}=1\cdot 0=0,$$

which proves (4). Now we move on to propositions which, similarly to our strategy for analyzing G_1(t), allow us to break up the binomial coefficient in G_2(t) into a polynomial and analyze the resulting sums separately.
Proposition 2.12. Fix 0 ≤ k < n − 1. Then

$$\lim_{t\to 0^+}t^n\sum_{q=1}^{\infty}q^k\frac{e^{-2tq(n-1)}}{(1-e^{-2tq})^{n-1}}=0.$$
Proof. We recognize that this expression can be rewritten in terms of g_{n,n−1}(tq):

$$t^n\sum_{q=1}^{\infty}q^k\frac{e^{-2tq(n-1)}}{(1-e^{-2tq})^{n-1}}=\sum_{q=1}^{\infty}\frac{1}{q^{n-k}}\frac{(tq)^n e^{-2tq(n-1)}}{(1-e^{-2tq})^{n-1}}=\sum_{q=1}^{\infty}\frac{1}{q^{n-k}}\,g_{n,n-1}(tq).$$

Since g_{n,n−1} is bounded and n − k ≥ 2, the summand is dominated by some constant times 1/q². Therefore we can apply dominated convergence to conclude that the limit is 0, since the summand converges pointwise to 0 by the previous lemma.
Proposition 2.13.

$$\lim_{t\to 0^+}t^n\sum_{q=1}^{\infty}q^{n-1}\frac{e^{-2tq(n-1)}}{(1-e^{-2tq})^{n-1}}=\int_0^{\infty}x^{n-1}\frac{e^{-2x(n-1)}}{(1-e^{-2x})^{n-1}}\,dx.$$

Proof. Write

$$t^n\sum_{q=1}^{\infty}q^{n-1}\frac{e^{-2tq(n-1)}}{(1-e^{-2tq})^{n-1}}=\sum_{q=1}^{\infty}\int_{q-1}^{q}t^n\lceil q'\rceil^{n-1}\frac{e^{-2t\lceil q'\rceil(n-1)}}{(1-e^{-2t\lceil q'\rceil})^{n-1}}\,dq'=\int_0^{\infty}t^n\lceil q'\rceil^{n-1}\frac{e^{-2t\lceil q'\rceil(n-1)}}{(1-e^{-2t\lceil q'\rceil})^{n-1}}\,dq'=\int_0^{\infty}\lceil x\rceil_t^{\,n-1}\frac{e^{-2\lceil x\rceil_t(n-1)}}{(1-e^{-2\lceil x\rceil_t})^{n-1}}\,dx.$$

The integrand is exactly g_{n−1,n−1}(⌈x⌉_t), and is thus dominated by g_{n−1,n−1}(x) since g_{n−1,n−1} is a decreasing function. Therefore we can apply dominated convergence to find that

$$\lim_{t\to 0^+}t^n\sum_{q=1}^{\infty}q^{n-1}\frac{e^{-2tq(n-1)}}{(1-e^{-2tq})^{n-1}}=\int_0^{\infty}x^{n-1}\frac{e^{-2x(n-1)}}{(1-e^{-2x})^{n-1}}\,dx.$$
We can expand $\binom{n+q-2}{n-1}$ as

$$\binom{n+q-2}{n-1}=\frac{q^{n-1}}{(n-1)!}+\sum_{k=0}^{n-2}b_k q^k.$$

Therefore,

$$\lim_{t\to 0^+}t^nG_2(t)=\lim_{t\to 0^+}t^n\sum_{q=1}^{\infty}\binom{n+q-2}{n-1}\frac{e^{-2tq(n-1)}}{(1-e^{-2tq})^{n-1}}=\lim_{t\to 0^+}\left\{\frac{t^n}{(n-1)!}\sum_{q=1}^{\infty}q^{n-1}\frac{e^{-2tq(n-1)}}{(1-e^{-2tq})^{n-1}}+t^n\sum_{k=0}^{n-2}b_k\sum_{q=1}^{\infty}q^k\frac{e^{-2tq(n-1)}}{(1-e^{-2tq})^{n-1}}\right\}=\frac{1}{(n-1)!}\int_0^{\infty}x^{n-1}\frac{e^{-2x(n-1)}}{(1-e^{-2x})^{n-1}}\,dx=\frac{1}{(n-1)!}\int_0^{\infty}\frac{x^{n-1}}{(e^{2x}-1)^{n-1}}\,dx.$$
Next, we put the two parts of G back together in the limit. This gives us

$$\lim_{t\to 0^+}t^nG(t)=\lim_{t\to 0^+}t^nG_1(t)+\lim_{t\to 0^+}t^nG_2(t)=\frac{1}{(n-1)!}\int_0^{\infty}x^{n-1}\left[\frac{1}{(1-e^{-2x})^{n-1}}-1+\frac{1}{(e^{2x}-1)^{n-1}}\right]dx.$$
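For n = 2 the combined integrand simplifies, since 1/(1−e^{−2x}) − 1 = 1/(e^{2x} − 1), to 2x/(e^{2x} − 1), whose integral is π²/12 — consistent with the two series limits π²/24 + π²/24 from Propositions 2.3 and 2.4. A crude midpoint-rule check (the grid parameters are arbitrary choices):

```python
import math

def integrand(x):
    # x^{n-1} [1/(1-e^{-2x})^{n-1} - 1 + 1/(e^{2x}-1)^{n-1}] for n = 2
    return x * (1.0 / (1.0 - math.exp(-2 * x)) - 1.0 + 1.0 / math.expm1(2 * x))

h = 1e-4
total = h * sum(integrand(h * (k + 0.5)) for k in range(400000))  # midpoint rule on (0, 40]
```

The computed value agrees with π²/12 ≈ 0.82247 to high accuracy.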
We now have an expression for $\lim_{t\to 0^+}t^nG(t)$ in terms of an integral, so we could have chosen to stop here. Instead, we will press on and apply a few more tricks in order to arrive at the expression in Theorem 1.1. One reason we prefer the expression in Theorem 1.1 is its similarity to a closely related result in [ST84].
To continue on our path of manipulating the combined integral, we apply integration by parts with

$$u=\frac{1}{(1-e^{-2x})^{n-1}}-1+\frac{1}{(e^{2x}-1)^{n-1}},\qquad dv=x^{n-1}\,dx,$$

$$du=-(n-1)\left[\frac{2e^{-2x}}{(1-e^{-2x})^{n}}+\frac{2e^{2x}}{(e^{2x}-1)^{n}}\right]dx=-2(n-1)\frac{e^{(n-2)x}+e^{-(n-2)x}}{(e^{x}-e^{-x})^{n}}\,dx=-\frac{n-1}{2^{n-2}}\frac{\cosh((n-2)x)}{\sinh(x)^{n}}\,dx,$$

and v = n^{−1} x^n. This gives us

$$\lim_{t\to 0^+}t^nG(t)=\frac{1}{n!}\left[x^n\left(\frac{1}{(1-e^{-2x})^{n-1}}-1+\frac{1}{(e^{2x}-1)^{n-1}}\right)\right]_0^{\infty}+\frac{n-1}{2^{n-2}\,n!}\int_0^{\infty}\frac{x^n\cosh((n-2)x)}{\sinh(x)^{n}}\,dx=\frac{n-1}{2^{n-2}\,n!}\int_0^{\infty}\frac{x^n\cosh((n-2)x)}{\sinh(x)^{n}}\,dx.$$
The boundary term clearly vanishes at ∞ because $x^n\left[\frac{1}{(1-e^{-2x})^{n-1}}-1\right]$ and $\frac{x^n}{(e^{2x}-1)^{n-1}}$ each vanish at ∞. To see that it vanishes at 0, apply L'Hopital's rule to the quotients $\frac{x}{1-e^{-2x}}$ and $\frac{x}{e^{2x}-1}$. Since the integrand is even, we further have
$$\lim_{t\to 0^+}t^nG(t)=\frac{n-1}{2^{n-2}\,n!}\int_0^{\infty}\frac{x^n\cosh((n-2)x)}{\sinh(x)^n}\,dx=\frac{n-1}{2^{n-1}\,n!}\int_{-\infty}^{\infty}\frac{x^n\cosh((n-2)x)}{\sinh(x)^n}\,dx=\frac{n-1}{2^{n}\,n!}\left[\int_{-\infty}^{\infty}\left(\frac{x}{\sinh x}\right)^n e^{x(n-2)}\,dx+\int_{-\infty}^{\infty}\left(\frac{x}{\sinh x}\right)^n e^{-x(n-2)}\,dx\right].$$

As x/sinh(x) is an even function, the value of the first integral is unchanged if we change the e^{x(n−2)} in the integrand to e^{−x(n−2)}. Thus,

$$\lim_{t\to 0^+}t^nG(t)=\frac{n-1}{2^{n-1}\,n!}\int_{-\infty}^{\infty}\left(\frac{x}{\sinh x}\right)^n e^{-x(n-2)}\,dx=\frac{2\pi^n}{(n-1)!}\cdot\frac{n-1}{(2\pi)^n\,n}\int_{-\infty}^{\infty}\left(\frac{x}{\sinh x}\right)^n e^{-x(n-2)}\,dx=\mathrm{vol}(S^{2n-1})\,\frac{n-1}{(2\pi)^n\,n}\int_{-\infty}^{\infty}\left(\frac{x}{\sinh x}\right)^n e^{-x(n-2)}\,dx.$$
Therefore, by the Tauberian Theorem due to Karamata, we obtain the limit

$$\lim_{\lambda\to\infty}\frac{N(\lambda)}{\lambda^n}=\mathrm{vol}(S^{2n-1})\,\frac{n-1}{n(2\pi)^n\,\Gamma(n+1)}\int_{-\infty}^{\infty}\left(\frac{x}{\sinh x}\right)^n e^{-x(n-2)}\,dx,$$
and complete the proof of Theorem 1.1.
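The two expressions in Theorem 1.1 can be evaluated numerically and compared (a sanity check; the truncation bound and the midpoint grid are arbitrary choices, and vol(S^{2n−1}) = 2π^n/Γ(n)):

```python
import math

def series_form(n, qmax=400000):
    def c(a, b):
        return math.comb(a, b) if 0 <= b <= a else 0
    s = sum((c(n + q - 2, q) + c(q - 1, n - 2)) / q**n for q in range(1, qmax + 1))
    return s / (2**n * math.factorial(n))

def integral_form(n, h=1e-3, L=30.0):
    vol = 2 * math.pi**n / math.gamma(n)        # vol(S^{2n-1})
    coeff = vol * (n - 1) / (n * (2 * math.pi)**n * math.gamma(n + 1))
    total = 0.0
    for k in range(int(2 * L / h)):
        x = -L + (k + 0.5) * h                  # midpoint rule avoids x = 0
        total += (x / math.sinh(x))**n * math.exp(-x * (n - 2))
    return coeff * h * total
```

For n = 2 both expressions evaluate to π²/24, and the agreement persists for n = 3.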
The Formula for Functions versus the Formula for Forms
In [ST84], Stanton and Tartakoff prove the following formula, reminiscent of Weyl's law, for the Kohn Laplacian on CR manifolds of hypersurface type. For spheres embedded in C n , the induced metric is a Levi metric (see Definition 1.5 and the following remark in [ST84]). However, as stated, this only applies to (p, q) forms with q ≥ 1. We analyze how this expression relates to Theorem 1.1. Towards this end, we define the function
$$f(q)=\binom{n-1}{q}\frac{\mathrm{vol}(S^{2n-1})}{(2\pi)^n\,\Gamma(n+1)}\int_{-\infty}^{\infty}\left(\frac{\tau}{\sinh\tau}\right)^{n-1}e^{-(n-1-2q)\tau}\,d\tau,$$

which for q = 1, . . . , n − 2 is the leading coefficient of λ^n in the asymptotic growth of N(λ), the eigenvalue counting function of $\Box_b$ on M acting on (0, q)-forms.
The following statement shows that this function is closely related to our formula.
Theorem 3.2. The definition of f given above is convergent for complex q satisfying 0 < ℜ(q) < n − 1. Further, f is holomorphic on this strip, and has an analytic continuation to a meromorphic function on the strip −1 < ℜ(q) < n − 1 whose only pole is at q = 0. Finally, in the Laurent expansion of f about 0, the leading term is a_n/q^n with a_n ≠ 0, and the constant term is

$$\mathrm{vol}(S^{2n-1})\,\frac{n-1}{n(2\pi)^n\,\Gamma(n+1)}\int_{-\infty}^{\infty}\left(\frac{x}{\sinh x}\right)^n e^{-x(n-2)}\,dx.$$
In other words, the constant term in the Laurent expansion of f about 0 is the expression from Theorem 1.1.
Proof. For notational convenience let m = n − 1. Then we may write f as

$$f(q)=\binom{m}{q}\frac{\mathrm{vol}(S^{2n-1})}{(2\pi)^n\,\Gamma(n+1)}\int_{-\infty}^{\infty}\left(\frac{\tau}{\sinh\tau}\right)^{m}e^{-(m-2q)\tau}\,d\tau.$$
We first prove that the integrand is integrable (L¹) whenever 0 < ℜ(q) < m. For this, choose C > 0 and α > 0 so that whenever |x| ≥ C, |sinh(x)| ≥ αe^{|x|}. Therefore for |x| ≥ C we have

$$\left|\frac{x}{\sinh(x)}\right|=\frac{|x|}{|\sinh(x)|}\le\frac{|x|}{\alpha e^{|x|}}.$$

Since x/sinh(x) has a removable singularity at 0, it is continuous and thus bounded on [−C, C]. Hence x^m/sinh(x)^m is bounded on [−C, C] as well, so we can choose D so that |x^m/sinh(x)^m| ≤ D whenever |x| ≤ C. Thus, we have the bound

$$\left|\frac{x^m}{\sinh(x)^m}\right|\le\frac{|x|^m}{\alpha^m e^{m|x|}}\,\mathbb{1}_{\{|x|\ge C\}}+D\,\mathbb{1}_{\{|x|\le C\}}\qquad\text{for all }x\in\mathbb{R}.$$
Therefore, we may write

$$\int_{-\infty}^{\infty}\left|\left(\frac{\tau}{\sinh\tau}\right)^m e^{-(m-2q)\tau}\right|d\tau=\int_{-\infty}^{\infty}\left|\frac{\tau}{\sinh\tau}\right|^m e^{-(m-2\Re(q))\tau}\,d\tau\le\alpha^{-m}\int_{-\infty}^{-C}|\tau|^m e^{-m|\tau|-(m-2\Re(q))\tau}\,d\tau+\alpha^{-m}\int_{C}^{\infty}\tau^m e^{-m\tau-(m-2\Re(q))\tau}\,d\tau+\int_{-C}^{C}D\,d\tau=\alpha^{-m}\int_{C}^{\infty}\tau^m e^{-2\tau\Re(q)}\,d\tau+\alpha^{-m}\int_{C}^{\infty}\tau^m e^{-2\tau(m-\Re(q))}\,d\tau+2CD.$$

A repeated integration by parts shows that these integrals are finite if 0 < ℜ(q) < m. To define the binomial coefficient, use the gamma function, i.e.

$$\binom{m}{q}=\frac{m!}{\Gamma(q+1)\Gamma(m-q+1)},$$

which is defined since Γ has no zeros. Thus, f is well defined on its domain of definition.
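The Gamma-function extension of the binomial coefficient can be sketched for real q (a limitation of this sketch: Python's `math.gamma` handles only real arguments, so the complex case of the theorem is not covered; the function name is hypothetical):

```python
import math

def gen_binom(m, q):
    # m! / (Gamma(q+1) * Gamma(m-q+1)); agrees with comb(m, q) for integer q
    return math.factorial(m) / (math.gamma(q + 1) * math.gamma(m - q + 1))
```

For integer arguments it reproduces the usual values, e.g. gen_binom(4, 2) = 6, while also making sense for non-integer q such as q = 1/2.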
f (q) = m q vol(S 2n−1 ) (2π) n Γ(n + 1) ∞ −∞ τ sinh τ m e −(m−2q)τ dτ = 1 2 m q vol(S 2n−1 ) (2π) n Γ(n + 1) ∞ −∞ τ sinh τ m e −(m−2q)τ dτ + ∞ −∞ τ sinh τ m e −(m−2q)τ dτ .
As τ/sinh(τ) is an even function, the value of the second integral is unchanged if we change the e^{−(m−2q)τ} in the integrand to e^{(m−2q)τ}. Therefore,

$$f(q)=\frac{1}{2}\binom{m}{q}\frac{\mathrm{vol}(S^{2n-1})}{(2\pi)^n\Gamma(n+1)}\left[\int_{-\infty}^{\infty}\left(\frac{\tau}{\sinh\tau}\right)^m e^{-(m-2q)\tau}\,d\tau+\int_{-\infty}^{\infty}\left(\frac{\tau}{\sinh\tau}\right)^m e^{(m-2q)\tau}\,d\tau\right]=\binom{m}{q}\frac{\mathrm{vol}(S^{2n-1})}{(2\pi)^n\Gamma(n+1)}\int_{-\infty}^{\infty}\left(\frac{\tau}{\sinh\tau}\right)^m\cosh((m-2q)\tau)\,d\tau=2\binom{m}{q}\frac{\mathrm{vol}(S^{2n-1})}{(2\pi)^n\Gamma(n+1)}\int_{0}^{\infty}\left(\frac{\tau}{\sinh\tau}\right)^m\cosh((m-2q)\tau)\,d\tau,$$

where the last step follows since the integrand is even. Now, define the function g as follows:

$$g(q)=2\binom{m}{q}\frac{\mathrm{vol}(S^{2n-1})}{(2\pi)^n\Gamma(n+1)}\int_0^{\infty}\tau^m\left[\frac{\cosh((m-2q)\tau)}{(\sinh\tau)^m}-2^{m-1}e^{-2q\tau}\right]d\tau.$$
We claim that g is holomorphic on the strip −1 < ℜ(q) < m. Assuming this for now, consider

$$f(q)-g(q)=2^m\binom{m}{q}\frac{\mathrm{vol}(S^{2n-1})}{(2\pi)^n\Gamma(n+1)}\int_0^{\infty}\tau^m e^{-2q\tau}\,d\tau,$$

defined for 0 < ℜ(q) < m. This is easy to evaluate explicitly with the substitution u = 2qτ:

$$\int_0^{\infty}\tau^m e^{-2q\tau}\,d\tau=\frac{1}{(2q)^{m+1}}\int_0^{\infty}u^m e^{-u}\,du=\frac{\Gamma(m+1)}{(2q)^{m+1}},$$

so

$$f(q)-g(q)=\binom{n-1}{q}\frac{\mathrm{vol}(S^{2n-1})}{2n(2\pi)^n}\,\frac{1}{q^n}.$$

This is meromorphic in q on the whole complex plane. Thus, the function g + (f − g) is meromorphic in q for −1 < ℜ(q) < n − 1, and equal to f if 0 < ℜ(q) < n − 1. Since f converges on its domain of definition, f is holomorphic in q, and g + (f − g) is the desired continuation. To complete the proof, we need to show that g(0) is our formula. We have
$$g(0) = 2\,\frac{\operatorname{vol}(S^{2n-1})}{(2\pi)^n\,\Gamma(n+1)}\int_0^\infty \tau^m\left[\frac{\cosh(m\tau)}{(\sinh\tau)^m} - 2^{m-1}\right]d\tau = 2\,\frac{2\pi^n/\Gamma(n)}{(2\pi)^n\,\Gamma(n+1)}\int_0^\infty \tau^m\,2^{m-1}\left[\frac{e^{m\tau}+e^{-m\tau}}{(e^{\tau}-e^{-\tau})^m} - 1\right]d\tau$$
$$= \frac{1}{\Gamma(n)\,\Gamma(n+1)}\int_0^\infty \tau^m\left[\frac{e^{m\tau}+e^{-m\tau}}{(e^{\tau}-e^{-\tau})^m} - 1\right]d\tau = \frac{1}{\Gamma(n)\,\Gamma(n+1)}\int_0^\infty \tau^m\left[\frac{e^{m\tau}}{(e^{\tau}-e^{-\tau})^m} + \frac{e^{-m\tau}}{(e^{\tau}-e^{-\tau})^m} - 1\right]d\tau$$
$$= \frac{1}{\Gamma(n)\,\Gamma(n+1)}\int_0^\infty \tau^m\left[\frac{1}{(1-e^{-2\tau})^m} + \frac{1}{(e^{2\tau}-1)^m} - 1\right]d\tau = \frac{1}{\Gamma(n+1)}\,\lim_{t\to 0}\, t^n G(t) = \lim_{\lambda\to\infty}\frac{N(\lambda)}{\lambda^n},$$
where we have used the expression appearing in the discussion after Proposition 2.13, and Karamata's Theorem. Thus we have our theorem, modulo showing that $g$ is holomorphic for $-1 < \Re(q) < m$. We move on to this now. It clearly suffices to show that the expression
$$h(q) = \int_0^\infty \tau^m\left[\frac{\cosh((m-2q)\tau)}{(\sinh\tau)^m} - 2^{m-1}e^{-2q\tau}\right]d\tau$$
is holomorphic for −1 < ℜ(q) < m. As a stepping stone towards proving that h(q) is holomorphic, we will first prove that for β > 0,
$$\varphi(\beta) = \int_0^\infty e^{-2\beta\tau}\left(\frac{\tau}{1-e^{-2\tau}}\right)^m d\tau$$
is convergent and continuous. By differentiating with respect to $\beta$, we see that the integrand is monotonically increasing, for each $\tau$, as $\beta \to 0$. Fix some $\beta_0 > 0$. Now, for $\tau$ near $0$, the integrand is bounded since it has a limit at $0$. For large $\tau$, it is bounded by a constant times $e^{-2\beta\tau}\tau^m$, which is integrable on $[0,\infty)$. To see continuity at $\beta_0$, fix some $\beta_1$ with $0 < \beta_1 < \beta_0$, and note that the integrand at $\beta_1$ dominates the integrand at $\beta$ for all $\beta > \beta_1$. Continuity of $\varphi$ at $\beta_0$ follows then by dominated convergence. Thus, we have shown that $\varphi$ is convergent and continuous for $\beta > 0$.
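A quick numerical illustration of these two facts (convergence, and monotone growth of the integrand as $\beta$ decreases); this is not part of the proof, and the truncation point $T$ and grid size $n$ are arbitrary choices. For $m = 1$ and $\beta = 1$ the integrand expands into a geometric series whose termwise integrals sum to $\pi^2/24$, which gives an independent cross-check.

```python
import math

def phi(beta, m=3, T=60.0, n=200000):
    # Composite midpoint rule for phi(beta) = ∫_0^∞ e^{-2βτ} (τ/(1 - e^{-2τ}))^m dτ.
    # The integrand extends continuously to τ = 0 (its limit there is (1/2)^m) and
    # decays like τ^m e^{-2βτ} for large τ, so truncating at T loses a negligible tail.
    h = T / n
    total = 0.0
    for i in range(n):
        tau = (i + 0.5) * h
        total += math.exp(-2.0 * beta * tau) * (tau / (1.0 - math.exp(-2.0 * tau))) ** m
    return total * h

# phi is finite and decreases as beta increases (it increases as beta -> 0).
values = [phi(b) for b in (0.5, 1.0, 2.0)]
```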
$$\left|\int_0^\infty \tau^m\left[\frac{\cosh((m-2q)\tau)}{(\sinh\tau)^m}-2^{m-1}e^{-2q\tau}\right]d\tau\right| = 2^{m-1}\left|\int_0^\infty \tau^m\left[\frac{e^{(m-2q)\tau}+e^{-(m-2q)\tau}}{(e^{\tau}-e^{-\tau})^m}-e^{-2q\tau}\right]d\tau\right|$$
$$\le 2^{m-1}\int_0^\infty \tau^m\left|\frac{e^{-(m-2q)\tau}}{(e^{\tau}-e^{-\tau})^m}\right|d\tau + 2^{m-1}\int_0^\infty \tau^m\left|\frac{e^{(m-2q)\tau}}{(e^{\tau}-e^{-\tau})^m}-e^{-2q\tau}\right|d\tau.$$
To analyze the first term, we have
$$\int_0^\infty \tau^m\left|\frac{e^{-(m-2q)\tau}}{(e^{\tau}-e^{-\tau})^m}\right|d\tau = \int_0^\infty \tau^m\,\frac{e^{-(m-2\Re(q))\tau}}{(e^{\tau}-e^{-\tau})^m}\,d\tau = \int_0^\infty \tau^m\,\frac{e^{-(2m-2\Re(q))\tau}}{(1-e^{-2\tau})^m}\,d\tau = \varphi(m-\Re(q)).$$
For the second term, we have
$$\int_0^\infty \tau^m\left|\frac{e^{(m-2q)\tau}}{(e^{\tau}-e^{-\tau})^m}-e^{-2q\tau}\right|d\tau = \int_0^\infty \tau^m\,\big|e^{-2q\tau}\big|\left[\frac{e^{m\tau}}{(e^{\tau}-e^{-\tau})^m}-1\right]d\tau = \int_0^\infty \tau^m e^{-2\Re(q)\tau}\left[\frac{1}{(1-e^{-2\tau})^m}-1\right]d\tau.$$
We bound the latter expression in the integrand by
$$\frac{1}{(1-e^{-2\tau})^m}-1 = \frac{1-(1-e^{-2\tau})^m}{(1-e^{-2\tau})^m} \le \frac{2^m e^{-2\tau}}{(1-e^{-2\tau})^m}.$$
Thus for the second term we have
$$\int_0^\infty \tau^m\left|\frac{e^{(m-2q)\tau}}{(e^{\tau}-e^{-\tau})^m}-e^{-2q\tau}\right|d\tau \le \int_0^\infty \tau^m e^{-2\Re(q)\tau}\,\frac{2^m e^{-2\tau}}{(1-e^{-2\tau})^m}\,d\tau = 2^m\varphi(\Re(q)+1).$$
In total we have shown that
$$\left|\int_0^\infty \tau^m\left[\frac{\cosh((m-2q)\tau)}{(\sinh\tau)^m}-2^{m-1}e^{-2q\tau}\right]d\tau\right| \le 2^{m-1}\varphi(m-\Re(q)) + 2^{2m-1}\varphi(\Re(q)+1).$$
Using this bound, we now show that $h$ is holomorphic via Morera's Theorem. Fix some triangle $\Delta \subset \{q \in \mathbb{C} : -1 < \Re(q) < m\}$. Parameterize $\partial\Delta$ by arc length with the piecewise differentiable curve $\gamma(t)$, $a \le t \le b$ (so $|\gamma'(t)| = 1$ for all $t$). Then
$$\oint_{\partial\Delta} h(q)\,dq = \int_a^b\int_0^\infty \tau^m\left[\frac{\cosh((m-2\gamma(t))\tau)}{(\sinh\tau)^m}-2^{m-1}e^{-2\gamma(t)\tau}\right]\gamma'(t)\,d\tau\,dt.$$
The estimate above,
$$\left|\int_0^\infty \tau^m\left[\frac{\cosh((m-2\gamma(t))\tau)}{(\sinh\tau)^m}-2^{m-1}e^{-2\gamma(t)\tau}\right]\gamma'(t)\,d\tau\right| \le 2^{m-1}\varphi(m-\Re(\gamma(t))) + 2^{2m-1}\varphi(\Re(\gamma(t))+1),$$
is uniformly bounded in $t$ by the compactness of $\Delta$ and the continuity of the upper bound in $t$, so
$$\int_a^b\int_0^\infty \left|\tau^m\left[\frac{\cosh((m-2\gamma(t))\tau)}{(\sinh\tau)^m}-2^{m-1}e^{-2\gamma(t)\tau}\right]\gamma'(t)\right|\,d\tau\,dt < \infty.$$
Therefore we may apply Fubini's Theorem to exchange the order of integration:
$$\int_a^b\int_0^\infty \tau^m\left[\frac{\cosh((m-2\gamma(t))\tau)}{(\sinh\tau)^m}-2^{m-1}e^{-2\gamma(t)\tau}\right]\gamma'(t)\,d\tau\,dt = \int_0^\infty\int_a^b \tau^m\left[\frac{\cosh((m-2\gamma(t))\tau)}{(\sinh\tau)^m}-2^{m-1}e^{-2\gamma(t)\tau}\right]\gamma'(t)\,dt\,d\tau = 0,$$
since for each fixed $\tau$ the integrand is an entire function of $q$, so its integral over the closed contour $\partial\Delta$ vanishes. By Morera's Theorem, $h$ is a holomorphic function of $q$ for $-1 < \Re(q) < m$, and thus $g(q)$ is holomorphic for $-1 < \Re(q) < m$, completing the proof.

(Recall the setting of the theorem: $M$ is a CR submanifold of $\mathbb{C}^n$, $n \ge 3$, and $N(\lambda)$ is the eigenvalue counting function of $\Box_b$ on $M$ acting on $(p, q)$ forms; the Kohn Laplacian and the volume of $M$ are defined with respect to a Levi metric.)
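As a numerical sanity check on three ingredients used above (the $\Gamma$-function form of the binomial coefficient, the evaluation of $\int_0^\infty \tau^m e^{-2q\tau}\,d\tau$ via the substitution $u = 2q\tau$, and the rewriting of $\cosh(m\tau)/(\sinh\tau)^m$ in terms of $(1-e^{-2\tau})^{-m}$ and $(e^{2\tau}-1)^{-m}$), here is a short script. It is only an illustration, and the grid parameters are arbitrary.

```python
import math

def binom_gamma(m, q):
    # Generalized binomial coefficient m! / (Γ(q+1) Γ(m-q+1)), as in the text;
    # for integer 0 <= q <= m it reduces to the usual binomial coefficient.
    return math.factorial(m) / (math.gamma(q + 1) * math.gamma(m - q + 1))

def tail_integral(m, q, T=80.0, n=200000):
    # Midpoint rule for ∫_0^∞ τ^m e^{-2qτ} dτ with real q > 0; the exact value
    # is Γ(m+1) / (2q)^(m+1).
    h = T / n
    return sum(((i + 0.5) * h) ** m * math.exp(-2.0 * q * (i + 0.5) * h)
               for i in range(n)) * h

def lhs(tau, m):
    # τ^m [ cosh(mτ)/(sinh τ)^m - 2^(m-1) ]
    return tau ** m * (math.cosh(m * tau) / math.sinh(tau) ** m - 2 ** (m - 1))

def rhs(tau, m):
    # 2^(m-1) τ^m [ (1 - e^{-2τ})^{-m} + (e^{2τ} - 1)^{-m} - 1 ]
    return 2 ** (m - 1) * tau ** m * ((1.0 - math.exp(-2.0 * tau)) ** (-m)
                                      + (math.exp(2.0 * tau) - 1.0) ** (-m) - 1.0)
```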
(Henry Bosch) Harvard University, Department of Mathematics, Cambridge, MA 02138, USA. Email address: [email protected]
(Tyler Gonzales) University of Wisconsin-Eau Claire, Department of Mathematics, Eau Claire, WI 54701, USA. Email address: [email protected]
(Kamryn Spinelli) Worcester Polytechnic Institute, Department of Mathematical Sciences, Worcester, MA 01609, USA. Email address: [email protected]
(Gabe Udell) Pomona College, Department of Mathematics, Claremont, CA 91711. Email address: [email protected]
(Yunus E. Zeytuncu) University of Michigan-Dearborn, Department of Mathematics and Statistics, Dearborn, MI 48128, USA. Email address: [email protected]
Note that we set $\binom{a}{b} = 0$ if $a \le 0$.
Acknowledgements. We would like to thank Mohit Bansil and Michael Dabkowski for careful comments on an earlier version of this paper. This research was conducted at the NSF REU Site (DMS-1950102, DMS-1659203) in Mathematical Analysis and Applications at the University of Michigan-Dearborn. We would like to thank the National Science Foundation, National Security Agency, and University of Michigan-Dearborn for their support.
Complete next-to-leading order calculation for pion production in nucleon-nucleon collisions at threshold

C. Hanhart, Institut für Kernphysik (Theorie), Forschungszentrum Jülich, D-52425 Jülich, Germany
N. Kaiser, Institut für Theoretische Physik, Physik-Department T39, Technische Universität München, D-85747 Garching, Germany

arXiv: nucl-th/0208050, 26 Aug 2002. DOI: 10.1103/physrevc.66.054005

Abstract. Based on a counting scheme that explicitly takes into account the large momentum $\sqrt{Mm_\pi}$ characteristic for pion production in nucleon-nucleon collisions, we calculate all diagrams for the reaction $NN \to NN\pi$ at threshold up to next-to-leading order. At this order there are no free parameters, and the size of the next-to-leading order contributions is in line with the expectation from power counting. The sum of loop corrections at that order vanishes for the process $pp \to pp\pi^0$ at threshold. The total contribution at next-to-leading order from loop diagrams that include the delta degree of freedom vanishes at threshold in both reaction channels $pp \to pp\pi^0, pn\pi^+$.
The high precision data for the processes pp → ppπ 0 , pp → pnπ + and pp → dπ + in the threshold region [1] have spurred a flurry of theoretical investigations. The first data on neutral pion production were a big surprise because the experimental cross sections turned out to be a factor of five larger than the theoretical predictions based on direct pion production and neutral pion rescattering fixed from on-shell πN data [2,3]. Subsequently, it was argued that heavy-meson exchanges might be able to remove this discrepancy [4]. On the other hand, it was found [5,6] that the (model-dependent) off-shell behavior of the full πN T-matrix can also enhance the cross sections near threshold considerably.
Due to their nature as pseudo-Goldstone bosons the dynamics of pions is largely constrained by chiral symmetry. Thus one might hope that effective field theory studies which incorporate these constraints strictly will help to resolve the so far confusing situation. In the literature there are several calculations carried out in the framework of tree-level chiral perturbation theory including the dimension two (single-nucleon) operators for neutral pion production [7,8,9,10] as well as for charged pion production [11,12]. A common feature of these calculations is that the contributions from the isoscalar pion rescattering interfere destructively with the direct production amplitude, thus leading to an even more severe discrepancy between experiment and theory. It should be noted that such an interference pattern is in contradiction to the one found in phenomenological approaches [5,6]. Furthermore, within the Weinberg scheme, where all momenta are considered of the order of m π , one loop calculations have been performed for neutral pion production pp → ppπ 0 [13,14,15]. According to some of these works the loop corrections are larger by at least a factor of two compared to the tree level diagrams, that according to the counting scheme applied appear one order down. This feature (if correct) would seriously question the convergence of the chiral expansion for pion production in NN-collisions. On the other hand, according to ref. [16] the chiral expansion seems to show convergence in the case of p-wave pion production.
The purpose of the present work is to present a complete next-to-leading order calculation of the reaction NN → NNπ at threshold. In particular, we evaluate all one-loop diagrams at next-to-leading order employing a counting scheme that takes into account the large momentum √ Mm π characteristic for pion production in NN-collisions, as suggested in refs. [8,16]. We consider also the contributions from explicit delta-isobars at tree level and at one-loop order.
To the order we are working there are no free parameters and we demonstrate that the size of the individual next-to-leading order contributions is in line with the expectations from power counting.
Let us begin with writing down the general form of the threshold T-matrix for the pion production reaction N 1 ( p ) + N 2 (− p ) → N + N + π in the center-of-mass frame, which reads [17]:
$$T^{cm}_{th}(NN \to NN\pi) = \frac{A}{2}\,(i\vec{\sigma}_1 - i\vec{\sigma}_2 + \vec{\sigma}_1\times\vec{\sigma}_2)\cdot\vec{p}\;(\vec{\tau}_1 + \vec{\tau}_2)\cdot\vec{\phi}^{\,*} + \frac{B}{2}\,(\vec{\sigma}_1 + \vec{\sigma}_2)\cdot\vec{p}\;(i\vec{\tau}_1 - i\vec{\tau}_2 + \vec{\tau}_1\times\vec{\tau}_2)\cdot\vec{\phi}^{\,*}, \qquad (1)$$
with σ 1,2 and τ 1,2 the spin and isospin operators of the two nucleons. φ denotes the threecomponent isospin wave function of the final state pion produced in an s-wave state, e.g. φ = (0, 0, 1) for π 0 -production and φ = (1, i, 0)/ √ 2 for π + -production. The complex amplitudes A and B belong to the transitions 3 P 0 → 1 S 0 and 3 P 1 → 3 S 1 in the two-nucleon system, respectively. In fact the selection rules which follow from the conservation of parity, angular momentum and isospin allow only for these two transitions for the reaction NN → NNπ at threshold. In the case of neutral pion production pp → ppπ 0 the threshold amplitude A is the only relevant one whereas in charged pion production pp → pnπ + both threshold amplitudes A and B can contribute. Note that the threshold T-matrix written in eq.(1) incorporates the Pauli exclusion principle since combined left multiplication with the spin-exchange operator (1 + σ 1 · σ 2 )/2 and the isospin exchange operator (1 + τ 1 · τ 2 )/2 reproduces T cm th (NN → NNπ) up to an important minus sign. The magnitude of the nucleon center-of-mass momentum p necessary to produce a pion at rest is given by:
$$|\vec{p}\,| = \sqrt{m_\pi\,(M + m_\pi/4)}\,, \qquad (2)$$
with M = 939 MeV and m π = 139.6 MeV denoting the nucleon and pion mass, respectively. Eq.(2) exhibits the important feature of the reaction NN → NNπ, namely the large momentum mismatch between the initial and the final nucleon-nucleon state. This leads to a large invariant (squared) momentum transfer t = −Mm π between in-and outgoing nucleons. The appearance of the large momentum scale √ Mm π in pion production demands for a change in the chiral power counting rules, as pointed out already in ref. [8]. In addition, it seems compulsory to include the delta-isobar as an explicit degree of freedom, since the delta-nucleon mass difference ∆ = 293 MeV is comparable to the external momentum p ≃ √ Mm π = 362 MeV. The hierarchy of scales
$$M \gg p \simeq \Delta \gg m_\pi\,, \qquad (3)$$
suggested by this feature is in line with findings within meson exchange models where the delta-isobar gives significant contributions even close to the threshold [18,19].
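With the masses quoted above ($M = 939$ MeV, $m_\pi = 139.6$ MeV), the threshold momentum of eq.(2) and its leading approximation can be evaluated directly; this is just arithmetic on numbers stated in the text:

```python
import math

M, m_pi = 939.0, 139.6  # nucleon and pion masses in MeV, as quoted in the text

# Exact threshold momentum from eq.(2) and the approximation p ~ sqrt(M * m_pi)
p_exact = math.sqrt(m_pi * (M + m_pi / 4.0))
p_approx = math.sqrt(M * m_pi)
delta = 293.0  # delta-nucleon mass splitting in MeV, for comparison
```

The approximation $\sqrt{Mm_\pi} \approx 362$ MeV is indeed comparable to $\Delta = 293$ MeV, which motivates the counting $\Delta \sim p$.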
Let us now state our counting rules. The external momentum p ≃ √ Mm π sets the overall scale relevant for the process NN → NNπ. This momentum scale p enters the internal lines of tree and loop diagrams. Therefore we count all four-momenta #1 l µ inside loops generically as order p and the loop integration measure d 4 l as order p 4 . A pion propagator is counted as order 1/p 2 . The delta-propagator of the form 1/(energy − ∆) counts as order 1/p, since we made the choice ∆ ∼ p. For the nucleon propagator of the form 1/energy one has to distinguish whether it occurs outside or inside a loop. The associated residual energy counts as order m π outside a loop and as order p ∼ √ Mm π inside a loop. Furthermore, external pion energies are counted as order m π .
According to these counting rules one-loop diagrams contribute at order p 2 in the expansion of the T-matrix and thus generate threshold amplitudes of the form A, B ∼ p ≃ √ Mm π . The new counting rules demand also for a reordering of the terms in the interaction Lagrangian, since "relativistic corrections" proportional to nucleon kinetic energies p 2 /M are now of the same order as "leading order contributions" proportional to residual nucleon energies. Several examples of this effect will be encountered here.
In Fig. 1, we display tree-level diagrams which according to the abovementioned counting rules contribute at leading order, next-to-leading order and next-to-next-to-leading order. Diagrams for which the role of both nucleons is interchanged and diagrams with crossed outgoing nucleon lines are not shown. Subsets of four diagrams obtained by these operations map properly onto the crossing antisymmetric threshold T-matrix eq.(1). Diagram a) involving the (isovector) Weinberg-Tomozawa ππNN-contact vertex gives a leading-order contribution of the form:
$$A^{(WT)} = 0\,, \qquad B^{(WT)} = -\frac{g_A}{2Mf_\pi^3}\,, \qquad (4)$$
with g A ≃ 1.3 the nucleon axial vector coupling and f π = 92.4 MeV the pion decay constant.
It is important to note that the Weinberg-Tomozawa vertex generates here a proportionality factor m π at "leading order" in the chiral πN-Lagrangian via the pion and nucleon (residual) energies as well as through a "relativistic correction" of the form p 2 /M. This factor of m π gets finally canceled by the pion propagator [m π (M + m π )] −1 . Obviously, the isovector Weinberg-Tomozawa vertex cannot contribute to the neutral pion production threshold amplitude A. From the one-pion exchange diagram b) one finds:
$$A^{(1\pi)} = \frac{g_A^3}{8Mf_\pi^3}\,, \qquad B^{(1\pi)} = \frac{3g_A^3}{8Mf_\pi^3}\,. \qquad (5)$$
This result stems from the recoil correction to the πNN-vertex proportional to (m π /M) σ 1 · p with the m π -factor getting now canceled by the intermediate nucleon propagator. Furthermore, the product of the two vertices on the left nucleon line ( σ 1 · p ) 2 = Mm π is canceled by the pion propagator. The ratio B (1π) /A (1π) = 3 has its origin in the isospin factor of diagram b). From the analogous diagram d) with one virtual delta-isobar excitation one finds:
$$A^{(\Delta)} = \frac{g_A^3\, m_\pi}{4Mf_\pi^3\,\Delta}\,, \qquad B^{(\Delta)} = 0\,, \qquad (6)$$
where we have used the empirically well satisfied relation $h_A = 3g_A/\sqrt{2}$ for the $\pi N\Delta$-coupling constant. The spin and isospin transition operators entering the $\pi N\Delta$-vertex $(h_A/2f_\pi)\,\vec{S}\cdot\vec{p}\;T_a$ satisfy the usual relations $S_iS_j^\dagger = (2\delta_{ij} - i\epsilon_{ijk}\sigma_k)/3$ and $T_aT_b^\dagger = (2\delta_{ab} - i\epsilon_{abc}\tau_c)/3$. ($^{\#1}$Baryon energies are residual energies with the nucleon mass $M$ subtracted.) The latter isospin relation is the reason behind the vanishing of $B^{(\Delta)}$. According to our counting of the mass-splitting $\Delta$ the term $A^{(\Delta)}$ in eq.(6) is a next-to-leading order contribution, since $\Delta \sim p$ (cf. relation (3)). Diagram f) involves the second order chiral $\pi\pi NN$-contact vertex proportional to the low-energy constants $c_{1,2,3,4}$ [20]. We find the following contributions to the threshold amplitudes at next-to-next-to-leading order:
$$A^{(c_i)} = \frac{g_A m_\pi}{2Mf_\pi^3}\,(c_3 + 2c_2 - 4c_1)\,, \qquad B^{(c_i)} = \frac{g_A m_\pi}{2Mf_\pi^3}\,(c_4 + c_3 + 2c_2 - 4c_1)\,. \qquad (7)$$
In a previous calculation in ref. [7] (see eq.(32) therein) the c 2 -term has been found with a relative factor 1/2 smaller. The reason for this discrepancy is again that "relativistic corrections" from the c 2 -vertex are of the same order as its "static" contribution, since p 2 /M = m π . We also note that our results eqs.(4-7) agree up to the respective order with those of the fully relativistic calculation in ref. [17] where no approximations to the threshold kinematics have been made. We do not specify here the contributions from diagrams c), e) and g) in Fig. 1 which are proportional to the (a priori unknown) strengths of four-nucleon contact-vertices etc. It is important to note that already at leading order long-range effects from pion-exchange and short-range contributions appear simultaneously.
Let us now turn to the non-vanishing one-loop diagrams at threshold. Not every loop diagram appearing formally at next-to-leading order truly contributes at that order. In case of the diagrams b) and c) in Fig. 2 the (spin-independent) one-loop πN-scattering subdiagrams are proportional to m 3 π , and this pushes their contributions to the threshold T-matrix eq.(1) beyond next-to-leading order. A closer inspection of diagrams a) in Fig. 2 reveals that they contribute in the form m π ln m π to the threshold amplitudes A and B, i.e. beyond next-toleading order. The specific vertex structures of diagrams d) and e) in Fig. 2 make also their next-to-leading order contributions vanishing. Therefore we have to focus only on the diagrams shown in Figs. 3 and 4.
We evaluate only the genuine next-to-leading order pieces of the loop integrals emerging from the diagrams in Figs. 3 and 4. For instance, in the integrands we can systematically drop terms of order m π compared to l 0 (and ∆). Straightforward but tedious evaluation leads to the following next-to-leading order contributions of the one-loop diagrams in Fig. 3 with nucleons only:
$$A^{(N\text{-loop})} = \frac{g_A^3\sqrt{Mm_\pi}}{256 f_\pi^5}\,(-2 - 1 + 3)\,, \qquad B^{(N\text{-loop})} = \frac{g_A^3\sqrt{Mm_\pi}}{256 f_\pi^5}\,(-2 + 0 + 3)\,. \qquad (8)$$
Here, the numerical entries correspond to the diagrams a), b) and c), in that order. Interestingly, the total next-to-leading order loop contribution vanishes identically for neutral pion production A (N −loop) = 0. Diagrams a) and c) in Fig. 3 have been calculated fully relativistically (i.e. without any approximation to the threshold kinematics) for pp → ppπ 0 in ref. [17]. It is an important check for our calculation that the non-analytical piece proportional to √ Mm π agrees with the one derived by expanding eq.(16) in ref. [17]. In addition, after correcting a sign error in ref. [14] and extracting at threshold the truly next-to-leading order pieces from that work our results agree with theirs [21].
Numerically, the loop correction in eq.(8) gives B (N −loop) = g 3 A √ Mm π /256f 5 π ≃ 0.70 fm 4 . This is about 50% of the leading order one-pion exchange contributions |B (W T ) | ≃ 1.33 fm 4 or B (1π) ≃ 1.69 fm 4 . Indeed from chiral power counting one expects a similar suppression factor p/M = m π /M ≃ 0.4.
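The fm$^4$ values quoted here follow from eqs.(4), (5) and (8) once the MeV$^{-4}$ amplitudes are converted with the factor $(\hbar c)^4$; the value $\hbar c = 197.327$ MeV$\cdot$fm used below is an assumption of this check, not stated in the text:

```python
import math

hbarc = 197.327            # MeV*fm (conversion constant, assumed here)
g_A, f_pi = 1.3, 92.4      # axial coupling and pion decay constant (MeV), from the text
M, m_pi = 939.0, 139.6     # nucleon and pion masses (MeV)

def to_fm4(x_in_mev):
    # Convert an amplitude from MeV^-4 (natural units) to fm^4.
    return x_in_mev * hbarc ** 4

B_WT = to_fm4(g_A / (2.0 * M * f_pi ** 3))                             # |B^(WT)|, eq.(4)
B_1pi = to_fm4(3.0 * g_A ** 3 / (8.0 * M * f_pi ** 3))                 # B^(1pi), eq.(5)
B_loop = to_fm4(g_A ** 3 * math.sqrt(M * m_pi) / (256.0 * f_pi ** 5))  # eq.(8) loop term
```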
According to our counting of the mass difference ∆ ∼ √ Mm π loop diagrams with explicit delta-isobars are of the same order as those with nucleons only, namely of order p 2 . The relevant one-loop diagrams which generate truly next-to-leading order contributions are shown in Fig. 4. Straightforward but tedious evaluation leads to the following result:
$$A^{(\Delta\text{-loop})} = \frac{g_A^3\,K(\Delta)}{32 f_\pi^5}\,(8 - 12 + 1 + 3)\,, \qquad B^{(\Delta\text{-loop})} = \frac{g_A^3\,K(\Delta)}{32 f_\pi^5}\,(8 - 12 + 3 + 1)\,, \qquad (9)$$
with the numerical entries corresponding to the subclasses a), b), c) and d), in that order. The relevant combination of loop functions reads:
$$K(\Delta) = 2J_0(-\Delta) - 2\Delta\, I_0(-Mm_\pi) + (2\Delta^2 - Mm_\pi)\,\gamma_0(-\Delta, -Mm_\pi)\,, \qquad (10)$$
with the following loop integrals [20] truncated at lowest order according to our counting scheme:
$$J_0(-\Delta) = 4\Delta\, L(\lambda) + \frac{\Delta}{4\pi^2}\left[\ln\frac{2\Delta}{\lambda} - \frac{1}{2}\right] \sim \mathcal{O}(p)\,, \qquad (11)$$
$$I_0(-Mm_\pi) = -2L(\lambda) - \frac{1}{16\pi^2}\left[1 + \ln\frac{Mm_\pi}{\lambda^2}\right] \sim \mathcal{O}(p^0)\,, \qquad (12)$$
$$\gamma_0(-\Delta, -Mm_\pi) = \frac{1}{4\pi^2\sqrt{Mm_\pi}}\int_0^\infty \frac{dx}{1+x^2}\,\arctan\frac{x\sqrt{Mm_\pi}}{2\Delta} \sim \mathcal{O}(p^{-1})\,. \qquad (13)$$
The (scale dependent) quantity:
$$L(\lambda) = \frac{\lambda^{d-4}}{16\pi^2}\left[\frac{1}{d-4} + \frac{1}{2}\big(\gamma_E - 1 - \ln 4\pi\big)\right], \qquad (14)$$
denotes the standard divergent piece in dimensional regularization. The reason for grouping together the three specific diagrams into subclass b) is that this way the (in the chiral limit) singular term $\Delta^2 J_0(-\Delta)/(Mm_\pi)$ does not appear explicitly. Evidently, the formal limit $\Delta \to 0$ corresponds to loop diagrams with nucleons only, and therefore $K(0) = -\sqrt{Mm_\pi}/16$ enters eq.(8). Note, however, that for planar box diagrams this limit becomes inconsistent with the counting scheme employed.
One concludes that the contributions stemming from loop diagrams with delta-excitation vanish identically for both threshold amplitudes A and B, respectively for both reaction channels pp → ppπ 0 , pnπ + . The complete cancellations in eq.(9) are actually important consistency checks for our power counting scheme ∆ ∼ p. The combination of loop functions K(∆) ∼ p in eq.(10) is divergent, but at next-to-leading order there is no local counter term to absorb divergences.
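Two of the statements above can be checked directly: the integer weights in eq.(9) sum to zero in both channels, and the value $K(0) = -\sqrt{Mm_\pi}/16$ can be recovered numerically from the $\gamma_0$ term of eq.(10) alone, since by eqs.(11) and (12) the $2J_0(-\Delta)$ and $2\Delta I_0$ terms vanish in the formal limit $\Delta \to 0$ at fixed scale $\lambda$. The quadrature grid below is an arbitrary choice:

```python
import math

M, m_pi = 939.0, 139.6  # MeV

# Integer weights of the diagram subclasses a)-d) in eq.(9); both sums cancel.
weights_A = (8, -12, 1, 3)
weights_B = (8, -12, 3, 1)

def gamma0(delta, s=M * m_pi, n=200000):
    # Midpoint rule for gamma0 = (4 pi^2 sqrt(s))^-1 * ∫_0^∞ dx/(1+x²) arctan(x sqrt(s)/(2Δ)),
    # using the substitution x = tan(θ) to map (0, ∞) onto (0, π/2).
    h = (math.pi / 2.0) / n
    total = 0.0
    for i in range(n):
        x = math.tan((i + 0.5) * h)
        total += math.atan(x * math.sqrt(s) / (2.0 * delta))
    return total * h / (4.0 * math.pi ** 2 * math.sqrt(s))

# In the limit Δ -> 0 only the (2Δ² - M m_π) γ0 term of K(Δ) survives:
K0_num = -M * m_pi * gamma0(1e-6)
K0_exact = -math.sqrt(M * m_pi) / 16.0
```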
In summary, we have performed here a complete next-to-leading order calculation of the reaction NN → NNπ at threshold. We have employed the counting scheme developed in refs. [8,16], that explicitly accounts for the large momentum p ≃ √ Mm π characteristic for this process. We find that the total next-to-leading order loop corrections either vanish or are in accordance with the expectation from power counting. At this stage we conclude, that the chiral expansion seems to converge also in the s-wave. Note, however, that at next-to-next-to-leading order a large number of loops enters, that have not yet been evaluated completely.
In order to compare our results directly to pion production data the emerging chiral operators have to be folded with (realistic) NN-wave functions. This convolution has been carried out in ref. [15] in a way that the symmetries are preserved. However, in that work the traditional Weinberg counting has been used. Consequently the results presented in ref. [15] do not allow any firm conclusion about the convergence of the chiral series, since contributions of different orders are mixed and the next-to-next-to-leading order is incomplete. Based on our counting scheme a complete next-to-next-to-leading order calculation is within reach and should be performed. Another direction should be the calculation of loop corrections to the higher partial waves amplitudes.
Figure 1: Tree level contributions to threshold pion production at leading order (a,b,c), next-to-leading order (d,e) and next-to-next-to-leading order (f,g). A single solid, double solid and dashed line denotes a nucleon, delta-isobar and pion, respectively. Leading (subleading) order vertices are symbolized by solid dots (open circles).

Figure 2: One-loop diagrams that start to contribute at next-to-next-to-leading order. For further notations see Fig. 1.

Figure 3: Next-to-leading order one-loop diagrams for pion production at threshold with nucleons only. For further notations see Fig. 1.

Figure 4: Next-to-leading order one-loop diagrams for pion production at threshold with intermediate delta-isobars. For further notations see Fig. 1.
References

[1] For a recent review see: H. Machner and J. Haidenbauer, J. Phys. G25 (1999) R231.
[2] G.A. Miller and P. Sauer, Phys. Rev. C44 (1991) 1725.
[3] J.A. Niskanen, Phys. Lett. B289 (1992) 227.
[4] T.-S.H. Lee and D.O. Riska, Phys. Rev. Lett. 70 (1993) 2237.
[5] E. Hernandez and E. Oset, Phys. Lett. B350 (1995) 158.
[6] C. Hanhart, J. Haidenbauer, A. Reuber, C. Schütz and J. Speth, Phys. Lett. B358 (1995) 21.
[7] B.Y. Park, F. Myhrer, J.R. Morones, T. Meissner and K. Kubodera, Phys. Rev. C53 (1996) 1519.
[8] T.D. Cohen, J.L. Friar, G.A. Miller and U. van Kolck, Phys. Rev. C53 (1996) 2661.
[9] U. van Kolck, G.A. Miller and D.O. Riska, Phys. Lett. B388 (1996) 679.
[10] T. Sato, T.-S.H. Lee, F. Myhrer and K. Kubodera, Phys. Rev. C56 (1997) 1246.
[11] C. Hanhart et al., Phys. Lett. B424 (1998) 8.
[12] C. da Rocha, G.A. Miller and U. van Kolck, Phys. Rev. C61 (2000) 034613.
[13] E. Gedalin, A. Moalem and L. Razdolskaya, Phys. Rev. C60 (1999) 31.
[14] V. Dmitrasinovic, K. Kubodera, F. Myhrer and T. Sato, Phys. Lett. B465 (1999) 43.
[15] S. Ando, T.S. Park and D.P. Min, Phys. Lett. B509 (2001) 253.
[16] C. Hanhart, U. van Kolck and G.A. Miller, Phys. Rev. Lett. 85 (2000) 2905.
[17] V. Bernard, N. Kaiser and Ulf-G. Meißner, Eur. Phys. J. A4 (1999) 259.
[18] J.A. Niskanen, Phys. Rev. C53 (1996) 526.
[19] C. Hanhart, J. Haidenbauer, O. Krehl and J. Speth, Phys. Lett. B444 (1998) 25.
[20] V. Bernard, N. Kaiser and Ulf-G. Meißner, Int. J. Mod. Phys. E4 (1995) 193.
[21] F. Myhrer, private communications.
POROUS MEDIUM FLOW WITH BOTH A FRACTIONAL POTENTIAL PRESSURE AND FRACTIONAL TIME DERIVATIVE

Mark Allen, Luis Caffarelli, and Alexis Vasseur

arXiv: 1509.06325, 21 Sep 2015. DOI: 10.1007/s11401-016-1063-4

Abstract. We study a porous medium equation with right hand side. The operator has nonlocal diffusion effects given by an inverse fractional Laplacian operator. The derivative in time is also fractional, of Caputo-type, and takes into account "memory". The precise model is $D^\alpha_t u - \operatorname{div}\big(u\,\nabla(-\Delta)^{-\sigma}u\big) = f$. We pose the problem over $\{t \in \mathbb{R}^+, x \in \mathbb{R}^n\}$ with nonnegative initial data $u(0,x) \ge 0$ as well as right hand side $f \ge 0$. We first prove existence for weak solutions when $f$ and $u(0,x)$ have exponential decay at infinity. Our main result is Hölder continuity for such weak solutions.
Introduction
In this paper we study both existence and regularity for solutions to a porous medium equation. The pressure is related to the density via a nonlocal operator. This diffusion takes into account long-range effects. The time derivative is nonlocal and fractional and therefore takes into account the past. In the typical derivation of the porous medium equation (see [14]) the equation one considers is ∂ t u + div(vu) = 0, with u(t, x) ≥ 0. By Darcy's law in a porous medium v = −∇p arises as a potential where p is the pressure. According to a state law p = f (u). In our case we consider a potential which takes into account long range interactions, namely p = (−∆) −σ u. A porous medium equation with a pressure of this type
(1.1) ∂ t u = div(u(−∆) −σ u)
has been recently studied. For 0 < σ < 1 with σ = 1/2, existence of solutions was shown in [5] while regularity and further existence properties were studied in [4]. Uniqueness for the range 1/2 ≤ σ < 1 was shown in [17]. Another model of the porous medium equation D α t u − div(κ(u)Du) = f was introduced by Caputo in [6]. In the above equation D α t is the Caputo derivative and the diffusion is local. Solvability for a more general equation was recently studied in [15]. The fractional derivative takes into account models in which there is "memory". The Caputo derivative has also been recently shown (see [7], [8] ) to be effective in modeling problems in plasma transport. See also [12] and [16] for further models that utilize fractional equations in both space and time to account for long-range interactions as well as the past.
The specific equation we study is (1.2) D α t u(t, x) − div u∇(−∆) −σ u = f (t, x). The operator D α t is of Caputo-type and is defined by
D α t u := α Γ(1 − α) t −∞
[u(t, x) − u(s, x)]K(t, s, x) ds.
When K(t, s, x) = (t − s) −1−α this is exactly the Caputo derivative -see Section 2 -which we denote by D α t . We assume the following bounds on the kernel K
(1.3) 1 Λ(t − s) 1+α ≤ α Γ(1 − α) K(t, s, x) ≤ Λ (t − s) 1+α
Our kernel in time can then be thought of as having "bounded, measurable coefficients". We also require the following relation on the kernel (1.4) K(t, t − s) = K(t + s, t).
The relation (1.4) allows us to give a weak -in space and in time -formulation of (1.2). This weak formulation is given in Section 2.
In this paper we also restrict ourselves to the range 0 < σ < 1/2. In [4] use of a transport term was made to work in the range 1/2 < σ < 1. We have not yet found the correct manner in which to prove our results for 1/2 < σ < 1 when dealing with the nonlocal fractional time derivative D α t . 1.1. Accounting for the Past. Nonlocal equations are effective in taking into account long-range interactions and taking into account the past. However, the nonlocal aspect of the equation provides both advantages and disadvantages in studying local aspects of the equation. One advantage is that there is a relation between two points built into the equation. Indeed, we utilize two nonlocal terms that are not present in the classical porous medium equation to prove Lemma 5.3. One disadvantage of nonlocal equations is that when rescaling of the form v(t, x) = Au(Bt, Cx), the far away portions of u cannot be discarded, and hence v begins to build up a "tail". Consequently, the usual test function (u − k) + or F ((u − k) + ) for some function F and a constant k is often insufficient. One must instead consider F ((u−φ) + ) where φ is constant close by but has some "tail" growth at infinity. This difficulty of course presents itself with the Caputo derivative. One issue becomes immediately apparent. If we choose F ((u − φ) + ) as a test function, then
T a F ((u − φ) + )D α t u dt = T a F ((u − φ) + )D α t ((u − φ) + − (u − φ) − ) dt + T a F ((u − φ) + )D α t φ dt.
The second term will no longer be identically zero if φ is not constant. When using energy methods, this second term can be treated as part of the right hand side, and hence it becomes natural to consider an equation of the form (1.2) with a right hand side. The main challenge with accommodating a nonzero right hand side is that the natural test function ln u used in [5] and [4] is no longer available since the function u can vanish. Indeed, if the initial data for a solution is compactly supported, then the solution is compactly supported for every time t > 0 (see Remark 4.6). We choose as our basic test function u γ for γ > 0. For σ small we will have to choose γ small. We can then accommodate a right hand side as well as avoid the delicate integrability issues involved when using ln u as a test function. Using careful analysis, it is still most likely possible to utilize ln u as a test function for our equation (1.2) with zero right hand side, but we find it more convenient to use u γ and prove the stronger result that includes a right hand side. Our method using u γ should also work for the equation (1.1) to prove existence and regularity with a right hand side. One benefit of accommodating a right hand side in L ∞ is that we obtain immediately regularity up to the initial time for smooth initial data, see Theorem 1.4.
1.2. Overview of the Main Results.
Theorem 1.1. Let 0 ≤ u 0 (x), f (t, x) ≤ Ae −|x| for some A ≥ 0. Then there exists a solution u to (2.4) in (0, ∞) that has initial data u(0, x) = u 0 (x).

Remark 1.2. If T 2 = mT 1 for m ∈ N, it is immediate that if u i is the solution constructed on (0, T i ), then u 2 = u 1 on (0, T 1 ).
Remark 1.3. For technical reasons seen in the proof of Lemma 4.4, when n = 1 we make the further restriction 0 < σ < 1/4.
The main result of the paper is an interior Hölder regularity result. As expected, the Hölder norm will depend on the distance from the interior domain to the initial time t 0 . However, if we assume the initial data u 0 is regular enough -say for instance C 2 -then we obtain regularity up to the initial time. This is a benefit of allowing a right hand side. By extending the values of our solution u(t, x) = u(0, x) for t < 0, we satisfy (2.4) on (−∞, ∞) × R n with a right hand side in L ∞ . The right hand side f for t ≤ 0 will not necessarily satisfy f ≥ 0; however, this nonnegativity assumption on f was only necessary to guarantee the existence of a solution u ≥ 0. It is not a necessary assumption to prove regularity. From Remark 1.2 the solution constructed on (−∞, T ) will agree with the original solution over the interval (0, T ).

Theorem 1.4. Let u be a solution to (2.4) obtained via approximation from Theorem 1.1 on [0, T ] × R n with 0 ≤ u 0 (x), f (t, x) ≤ Ae −|x| . Assume also u 0 ∈ C 2 . Then u is C β continuous on [0, T ] × R n -for some exponent β depending on α, Λ, n, σ -with a constant that depends on the L ∞ norms of u and f and the C 2 norm of u 0 .
1.3. Future Directions. We prove existence and regularity for solutions obtained via limiting approximations. In this paper we do not address the issue of uniqueness. As mentioned earlier, uniqueness for (1.1) for the range 1/2 ≤ σ < 1 was shown in [17]. The issue of uniqueness for (1.2) is not trivial because of the nonlinear aspect of the equation as well as the lack of a comparison principle. The equation (2.4) which we consider should also present new difficulties because of the weak/very weak formulation in time as well as the minimal "bounded, measurable" assumption (1.3) on the kernel K(t, s, x). An interesting problem would be to then address the issue of uniqueness for solutions of (2.4). Theorems 1.1 and 1.4 can most likely be further refined by making fewer assumptions on u(0, x), assuming a right hand side f ∈ L p as was done for a similar problem in [15], and proving the estimates uniform as σ → 0, recovering Hölder continuity for the local diffusion problem. Also, as mentioned earlier, the theorems can be improved to include the range 1/2 ≤ σ < 1.
Finally, just like the local porous medium equation [14] as well as (1.1), the equation (2.4) has the property of finite propagation, see Remark 4.6. Therefore, it is of interest to study the free boundary ∂{u(t, x) > 0}. 1.4. Outline. The outline of this paper will be as follows. In Section 2 we state basic results for the Caputo derivative. We also give the weak formulation of the equation we study. In Section 3 we state some results for the discretized version of D α t that we will use to prove the existence of solutions. In Section 4 we follow the approximation method and use the estimates from [5] combined with the method of discretization and the estimates presented in [1] to prove existence. In Section 5 we state the main Lemmas that we will need to be able to prove Hölder regularity. In Section 6 we prove Lemma 5.1, the most technically difficult Lemma of the paper. This Lemma 5.1 most directly handles the degenerate nature of the problem. In Section 7 we prove an analogue of Lemma 5.1. In Section 8 we prove the final Lemmas we need, which give a one-sided decrease in oscillation from above. The one-sided decrease in oscillation combined with Lemma 5.1 is enough to prove the Hölder regularity, and this is explained in Section 9.
1.5. Notation. We list here the notation that will be used consistently throughout the paper. The following letters are fixed throughout the paper and always refer to:
• α - the order of the Caputo derivative.
• σ - the order of the inverse fractional Laplacian (−∆) −σ . We use σ for the order because s will always be a variable for time.
• a - the initial time for which our equation is defined.
• D α t - the Caputo derivative as defined in Section 2.
• D α t - the Caputo-type fractional derivative with "bounded, measurable" coefficients satisfying the bounds (1.3) and the relation (1.4).
• Λ - the constant appearing in (1.3).
• D α ǫ - the discretized version of D α t as defined in (3.1).
• ǫ - will always refer to the time length of the discrete approximations as defined in Section 3.
• n - will always refer to the space dimension.
• Γ m - the parabolic cylinder (−m, 0) × B m .
• W β,p - the fractional Sobolev space as defined in [9].
• u ± - the positive and negative parts respectively, so that u = u + − u − .
• ũ - the piecewise constant extension ũ(t) = u(ǫj) for ǫ(j − 1) < t ≤ ǫj.
Caputo Derivative
In this section we state various properties of the Caputo derivative that will be useful. The Caputo derivative for 0 < α < 1 is defined by
a D α t u(t) := 1 Γ(1 − α) t a u ′ (s) (t − s) α ds
By using integration by parts we have
(2.1) Γ(1 − α) a D α t u(t) = u(t) − u(a) (t − a) α + α t a u(t) − u(s) (t − s) α+1 ds.
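Identity (2.1) can be checked numerically: for u(t) = t 2 the Caputo derivative has the closed form 2t 2−α /Γ(3 − α), and a midpoint-rule evaluation of the right hand side of (2.1) reproduces it. A minimal sketch (the choice of u, the order α, the evaluation point t, and the quadrature size are all illustrative):

```python
import math

def caputo_via_21(u, t, alpha, n=100000):
    """Evaluate Gamma(1-alpha)^(-1) * [ (u(t)-u(0))/t^alpha
    + alpha * int_0^t (u(t)-u(s))/(t-s)^(1+alpha) ds ], i.e. form (2.1) with a = 0,
    using the midpoint rule (the integrand is integrable since alpha < 1)."""
    h = t / n
    ut = u(t)
    integral = sum((ut - u((i + 0.5) * h)) / (t - (i + 0.5) * h) ** (1 + alpha)
                   for i in range(n)) * h
    return ((ut - u(0.0)) / t ** alpha + alpha * integral) / math.gamma(1 - alpha)

alpha, t = 0.5, 1.5
approx = caputo_via_21(lambda s: s * s, t, alpha)
exact = 2.0 * t ** (2 - alpha) / math.gamma(3 - alpha)   # Caputo derivative of t^2
assert abs(approx - exact) / exact < 1e-2
```

The same routine is used below whenever a pointwise Caputo derivative needs to be evaluated numerically.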
For the remainder of the paper we will drop the subscript a when the initial point is understood. We now recall some properties of the Caputo derivative that were proven in [1]. For a function g(t) defined on [a, t], it is advantageous to define g(t) for t < a. Then we have the formulation
a D α t g(t) = −∞ D α t g(t) = α Γ(1 − α) t −∞ g(t) − g(s) (t − s) 1+α ds,
utilized in [1]. (See also [2] for properties of this one-sided nonlocal derivative.) This looks very similar to (−∆) α except the integration only occurs for s < t. In this manner the Caputo derivative retains directional derivative behavior while at the same time sharing certain properties with (−∆) α . This is perhaps best illustrated by the following integration by parts formula for the Caputo derivative
Proposition 2.1. Let g, h ∈ C 1 (a, T ). Then (2.2) T a gD α t h + hD α t g = T a g(t)h(t) 1 (T − t) α + 1 (t − a) α dt + α T a t a [g(t) − g(s)][h(t) − h(s)] (t − s) 1+α ds dt − T a g(t)h(a) + h(t)g(a) (t − a) α dt.
Formula (2.2) is based on the following formal computation
T a t a g(t) − g(s) (t − s) 1+α ds dt = T a t a g(t) (t − s) 1+α ds dt − T a t a g(s) (t − s) 1+α ds dt = T a t a g(t) (t − s) 1+α ds dt − T a T s g(s) (t − s) 1+α dt ds = T a t a g(t) (t − s) 1+α ds dt − T a T t g(t) (s − t) 1+α ds dt = T a g(t) t−a 0 ds s 1+α − T −t 0 ds s 1+α dt = α −1 T a g(t) 1 (T − t) α − 1 (t − a) α dt.
In the above computation, to utilize the cancellation we only need a kernel K(t, s) satisfying
K(t, t − s) = K(t + s, t).
To make the above computation rigorous we will use the discretization in Section 3. An alternative, equivalent integration by parts formula is obtained by extending g(t) = g(a) for t < a. Then for any h ∈ C 1 with h(t) = 0 for t < b, for some b, we have

(2.3) T −∞ h(t)D α t g(t) dt = c α T −∞ 2t−T −∞ [g(t) − g(s)][h(t) − h(s)] (t − s) 1+α ds dt + c α T −∞ t −∞ g(t)h(t) (t − s) 1+α ds dt − T −∞ g(t)D α t h(t) dt, with c α = αΓ(1 − α) −1 .

For smooth initial data u 0 ∈ C 2 , we assign u(t, x) = u(a, x) for t < a. Then, as stated earlier in the Introduction, for t ≤ a a solution u will have right hand side div(u 0 ∇(−∆) −σ u 0 ) ∈ L ∞ .
We say that u is a weak solution if for any φ ∈ C ∞ 0 (−∞, T ) × R n we have
(2.4) R n T −∞ t −∞ [u(t, x) − u(s, x)][φ(t, x) − φ(s, x)]K(t, s, x) ds dt dx + R n T −∞ 2t−T −∞ u(t, x)φ(t, x)K(t, s, x) ds dt dx − R n T −∞ u(t, x)D α t φ(t, x) dt dx + T −∞ R n ∇φ(t, x)u(t, x)∇(−∆) −σ u dx dt = T −∞ R n f (t, x)φ(t, x) dx dt.
We will also utilize a fractional Sobolev norm that arises from the fractional derivative.
Lemma 2.2. Let u be defined on [a, T ]. We have, for two constants c 1 , c 2 depending on α, |T − a|,

u 2 L 2/(1−α) (a,T ) ≤ c 1 u 2 H α/2 (a,T ) ≤ c 2 α T a t a |u(t) − u(s)| 2 |t − s| 1+α ds dt + T a u 2 (t) (T − a) α dt .
The following estimate will be needed for our choice of cut-off functions.
Lemma 2.3. Let h(t) := max{|t| ν − 1, 0} with ν < α. Then D α t h ≥ −c ν,α,Λ for t ∈ R.
Here, c ν,α,Λ is a constant depending only on α, ν, Λ.
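Lemma 2.3 can be illustrated numerically with the pure Caputo kernel (Λ = 1). Truncating the history integral at a finite window only discards nonpositive contributions at the points where h vanishes, so the observed values are indicative of the lower bound; the choices of α, ν, the window M, and the step size are illustrative:

```python
import math

alpha, nu = 0.5, 0.25                 # Lemma 2.3 requires nu < alpha
c = alpha / math.gamma(1 - alpha)

def h(t):
    return max(abs(t) ** nu - 1.0, 0.0)

def D_alpha(t, M=50.0, step=2e-3):
    # midpoint rule for c * int_0^M (h(t) - h(t - r)) r^(-1-alpha) dr (truncated history)
    n = int(M / step)
    ht = h(t)
    return c * step * sum((ht - h(t - (i + 0.5) * step)) / ((i + 0.5) * step) ** (1 + alpha)
                          for i in range(n))

ts = [-3.0 + 0.5 * j for j in range(13)]          # sample grid of times in [-3, 3]
vals = [D_alpha(t) for t in ts]
assert min(vals) > -5.0                            # bounded below, as Lemma 2.3 asserts
assert all(v <= 1e-9 for t, v in zip(ts, vals) if abs(t) < 1.0)   # h = 0 there, so D^alpha h <= 0
```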
Finally, we point out that if g = g + − g − , with g ± the positive and negative parts respectively, then

(2.5) T a g ± (t)D α t g ∓ (t) dt ≤ 0.
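The sign in (2.5) can be seen pointwise: where g + (t) > 0 we have g − (t) = 0, so every term [g − (t) − g − (s)]K(t, s) = −g − (s)K(t, s) is nonpositive. A discrete sketch of this with the pure Caputo kernel (the sample path g, the step ǫ, and the horizon N are illustrative choices):

```python
import math

alpha, eps, N = 0.5, 0.05, 200
c = alpha / math.gamma(1 - alpha)

def g(t):                                  # sign-changing sample path with g(0) = 0
    return math.sin(3.0 * t)

def gminus(t):
    return max(-g(t), 0.0)

def D_eps_gminus(j):                       # discretized derivative of g^- at t = eps*j
    gm = gminus(eps * j)                   # history: g extended by g(0) = 0 for t <= 0
    return eps * sum((gm - gminus(eps * i)) * c * (eps * (j - i)) ** (-1 - alpha)
                     for i in range(0, j))

terms = [max(g(eps * j), 0.0) * D_eps_gminus(j) for j in range(1, N + 1)]
assert all(term <= 1e-12 for term in terms)    # g^+ * D^alpha g^-  <=  0 at each node
total = eps * sum(terms)
assert total <= 1e-12
```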
Discretization in time
To prove existence of solutions to (2.4) we will discretize in time. The discretization also allows us to make the computations involving the fractional derivative rigorous. This section contains properties of a discrete fractional derivative which we will utilize.
For future reference we denote the discrete Fractional derivative with kernel K as
(3.1) D α ǫ u(a + ǫj) := ǫ −∞<i<j [u(a + ǫj) − u(a + ǫi)]K(ǫj, ǫi)
The following is the discretized argument for the cancellation that appears in the formal computation of the version of (2.2) that we will need.

Lemma 3.1. We have

ǫ 2 j≤k i<j [g(a + ǫj) − g(a + ǫi)]K(a + ǫj, a + ǫi) = ǫ 2 j≤k i<2j−k−1 g(ǫj)K(ǫj, ǫi).

Let g̃(t) := g(ǫj) for ǫ(j − 1) < t ≤ ǫj. If g ≥ 0, then there exists c depending only on α, Λ such that if ǫ < 1, then
ǫ 2 j≤k i<j [g(a + ǫj) − g(a + ǫi)]K(a + ǫj, a + ǫi) ≥ c T −∞g (t) (T − t) α dt.
Proof. For notational simplicity we assume a = 0.
ǫ 2 j≤k i<j [g(ǫj) − g(ǫi)]K(ǫj, ǫi) = ǫ 2 j≤k i<j g(ǫj)K(ǫj, ǫi) − ǫ 2 j≤k i<j g(ǫi)K(ǫj, ǫi) = ǫ 2 j≤k i<j g(ǫj)K(ǫj, ǫi) − ǫ 2 i<k i<j≤k g(ǫi)K(ǫj, ǫi) = ǫ 2 j≤k i<j g(ǫj)K(ǫj, ǫi) − ǫ 2 j<k j<i≤k g(ǫj)K(ǫi, ǫj) = ǫ 2 j≤k i<2j−k−1 g(ǫj)K(ǫj, ǫi)
The second inequality follows from the estimates in [1]. Lemma 3.1 combined with the estimates in [1] can be used to show

Lemma 3.2.

ǫ 2 j≤k i<j u(ǫj)[u(ǫj) − u(ǫi)]K(ǫj, ǫi) ≥ c ũ 2 H α/2 ≥ c ũ 2 L 2/(1−α) (−∞,T ) ,

where c depends on α and Λ. This next lemma is analogous to Lemma 2.3 and was shown in [1].
Lemma 3.3.
Let h be as in Lemma 2.3. Then for 0 < ǫ < 1 there exists c ν,α depending on α and ν but independent of a such that
D α ǫ h(t) ≥ −c ν,α for t ∈ ǫZ and a < t < 0.
This last estimate we will use often.

Lemma 3.4. Let F be a convex function with F ′′ ≥ γ, F ′ ≥ 0, F (0) = 0. Assume g ≥ 0, g(a) = 0. Then there exists c depending on α, Λ such that

ǫ j≤k F ′ (g(ǫj))D α ǫ g(ǫj) ≥ cǫ j≤k F (g(ǫj)) (ǫ(k − j + 1)) α + c γ 2 ǫ 2 j≤k i<j [g(ǫj) − g(ǫi)] 2 (ǫ(j − i)) 1+α .
Proof. Since F is convex,
F ′ (g(ǫj))[g(ǫj) − g(ǫi)] ≥ F (g(ǫj)) − F (g(ǫi)) + γ 2 [g(ǫj) − g(ǫi)] 2 .
The result then follows from applying Lemma 3.1.
Finally we point out that if g is a limit of discretized solutions g̃ ǫ satisfying the assumptions in Lemma 3.4, it follows that

(3.2) T a F ′ (g(t))D α t g(t) dt ≥ c T a F (g(t)) (T − t) α dt + c γ 2 T a t a [g(t) − g(s)] 2 (t − s) 1+α ds dt.
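The cancellation identity in Lemma 3.1 is exact for any kernel satisfying the relation (1.4); below we check it for the Caputo kernel, with g vanishing for nonpositive times (so g(a) = 0) and the inner cutoff written as i < 2j − k in our indexing convention. The values of α, ǫ, k, the history truncation M, and the data g are illustrative:

```python
import math

alpha, eps, k, M = 0.5, 0.1, 20, 40
c = alpha / math.gamma(1 - alpha)

def K(j, i):                       # Caputo kernel: depends on t - s only, so (1.4) holds
    return c * (eps * (j - i)) ** (-1.0 - alpha)

def g(j):                          # nonnegative data, zero for j <= 0
    return math.sin(j) ** 2 + 0.1 * j if j >= 1 else 0.0

lhs = eps ** 2 * sum((g(j) - g(i)) * K(j, i)
                     for j in range(1, k + 1) for i in range(-M, j))
rhs = eps ** 2 * sum(g(j) * K(j, i)
                     for j in range(1, k + 1) for i in range(-M, 2 * j - k))
assert abs(lhs - rhs) <= 1e-9 * abs(lhs)
assert rhs >= 0.0                  # hence the double sum is nonnegative for g >= 0
```

The second assertion is the mechanism behind the lower bound in Lemma 3.1: after the cancellation, every remaining summand carries the sign of g.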
Existence
In this section we prove the existence of weak solutions following the construction given in [5]. We will also discretize in time. We first consider a smooth approximation of the kernel (−∆) −σ as K ζ . We start with the smooth classical solution to the elliptic problem
(4.1) gu − δdiv((u + d)∇u) − div((u + d)∇K ζ u) = f on B R with u ≡ 0 on ∂B R . g, f ≥ 0 and smooth.
The sign of f, g guarantees the solution is nonnegative. δ, d > 0 are constants. The nonlocal part is computed in the expected way by extending u = 0 on B c R . To find such a solution we first consider the linear problem
gu − δdiv((v + d)∇u) − div((v + d)∇K ζ u) = f on B R for v ∈ C 0,β 0
with v ≥ 0. With fixed d, δ, R, ζ > 0 one can apply Schauder estimate theory to conclude
u C 1,β 0 ≤ C v C 0,β 0 with u ≥ 0. The map T : v → u is then a compact map. The set {v} with v ≥ 0
and v ∈ C 0,β 0 is a closed convex set, and hence we can apply the fixed point theorem (Corollary 11.2 in [11]), to conclude there is a solution to (4.1). By bootstrapping we conclude u is smooth. Now we use the existence of solutions to (4.1) to obtain -via recursion -solutions to the discretized problem
(4.2) D α ǫ u − δdiv((u + d)∇u) − div((u + d)∇K ζ u) = f on [0, T ] × B R .
with u(0, x) = u 0 (x) an initially defined smooth function with compact support. ǫ = T /k for some k ∈ N. We will eventually let k → ∞, so that ǫ → 0. For the next two Lemmas we will utilize the solution to
(4.3) D α t Y (t) = cY (t) + h(t)
which as in [10] is given by
Y (t) = Y (0)E α (ct α ) + α t 0 (t − s) α−1 E ′ α (c(t − s) α )h(s) ds,
where E α is the Mittag-Leffler function of order α. We will utilize in the next Lemmas two specific instances of (4.3). We define Y 1 (t) to be the solution to
(4.3) with Y (0) = sup u(0, x), c = 0, h = 2Λf . We define Y 2 (t) to be the solution to (4.3) with c = CΛ −1 , Y 2 (0) = 2.
and h = 0. The constant C will be chosen later.
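For α = 1/2 the Mittag-Leffler function is explicit, E 1/2 (z) = e z² erfc(−z), so the eigenfunction property D α t E α (ct α ) = cE α (ct α ) underlying the representation of Y (t) can be checked directly. A sketch (the values of c, t, and the quadrature size are illustrative):

```python
import math

def E_half(z):                        # Mittag-Leffler E_{1/2}(z) = exp(z^2) erfc(-z)
    return math.exp(z * z) * math.erfc(-z)

def caputo_half(Y, t, n=100000):      # D^{1/2} Y(t) via form (2.1), midpoint rule
    a = 0.5
    h = t / n
    Yt = Y(t)
    integral = sum((Yt - Y((i + 0.5) * h)) / (t - (i + 0.5) * h) ** (1 + a)
                   for i in range(n)) * h
    return ((Yt - Y(0.0)) / t ** a + a * integral) / math.gamma(1 - a)

cconst = 1.0
Y = lambda t: E_half(cconst * math.sqrt(t))   # solves D^{1/2} Y = c Y with Y(0) = 1
t = 1.0
lhs = caputo_half(Y, t)
rhs = cconst * Y(t)
assert abs(lhs - rhs) / rhs < 1e-2
```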
Lemma 4.1. Let u be a solution to (4.2). Let Y 1 (t) be defined as above. Then there exists ǫ 0 depending only on T, α, f L ∞ such that if ǫ ≤ ǫ 0 , then u(ǫj) ≤ Y 1 (ǫj).

Proof. Since Y 1 (t) is an increasing function, D α ǫ Y 1 (ǫj) ≥ Λ −1 D α ǫ Y 1 (ǫj), where the left-hand side is taken with the kernel K and the right-hand side with the pure Caputo kernel.
Depending on T and f L ∞ , there exists ǫ 0 such that if ǫ ≤ ǫ 0 , then
D α ǫ Y (ǫj) ≥ Λ −1 D α ǫ Y (ǫj) ≥ 2 3Λ D α t Y (ǫj) = 4 3 f We use (u(t, x) − Y (t)) + as a test function. Since (u − Y ) + (0) = 0 it follows from (2.5) and Lemma 3.4 that ǫ j<k (u − Y ) + D α ǫ [(u − Y ) + − (u − Y ) − ] ≥ 0.
We define
B ζ (u, v) = R n R n H ζ (x, y)[u(x) − u(y)][v(x) − v(y)] dx dy, where H ζ (x, y) = ∆K ζ . We have the identity [4]

B ζ (u, v) = R n ∇u(x)∇K ζ v(x) dx.
Then for ǫ small enough and j > 0
ǫ j≤k BR f (ǫj, x)(u(ǫj, x) − Y (ǫj)) + = BR ǫ j≤k (u − Y ) + D α ǫ [(u − Y ) + − (u − Y ) − + Y ] + ǫ j≤k BR (u + d)∇(u − Y ) + ∇u + ǫ j≤k BR (u + d)∇(u − Y ) + ∇K ζ u ≥ BR ǫ j≤k 4 3 f (u − Y ) + + R n (u + d)χ {u>Y } ∇u∇K ζ u = BR ǫ j≤k 4 3 f (u − Y ) + + ǫ 2 j≤k B ζ (χ {u>Y } (u + d) 2 , u) ≥ 4 3 ǫ j≤k BR f (ǫj, x)(u(ǫj, x) − Y (ǫj)) + .

Thus (u − Y ) + ≡ 0.

Lemma 4.2. Let u be a solution to (4.2) in [0, T ] × B R . Assume that

(4.4) 0 ≤ u(0, x) ≤ Ae −|x| , f ≤ Ae −|x| .
If A is large, there exist constants µ 0 , δ 0 , ζ 0 depending only on R, n, σ; ǫ 0 depending only on T, α, Λ; and C depending only on n, σ, u(0, x) L ∞ , f L ∞ , such that if µ < µ 0 , δ < δ 0 , ζ < ζ 0 , ǫ < ǫ 0 , and Y 2 (t)
is the solution defined earlier with constant C given above, then
0 ≤ u(ǫj, x) ≤ AY 2 (ǫj)e −|x| ,
for any t = ǫj.
Proof. As before there exists ǫ 0 depending only on T, α such that for ǫ ≤ ǫ 0 we have
D α ǫ Y 2 (ǫj) ≥ 2 3Λ D α t Y 2 (ǫj).
Since u is smooth and hence continuous, u ≤ LY 2 (ǫj)e −|x| for some L > A. We lower L ≥ A until it touches u for the first time. Since u = 0 on ∂B R this cannot happen on the boundary. Since u is smooth this cannot happen at a point (ǫj, 0). Also, LY 2 ≥ 2A ≥ 2u(0, x), so this cannot occur at the initial time. We label a point of touching as (t c , r c ). We compute the operator in nondivergence form and write K ζ (u) = p and use the estimates in [5] to conclude for ǫ small enough that
2 3 CLY 2 (t c )e −rc = Λ 3 LD α t Y 2 (t c )e −rc ≤ D α ǫ LY 2 (t c )e −rc ≤ D α ǫ u(t c ) = δdiv((u + d)∇u) + div((u + d)∇K ζ u) + f (t c , r c ) = δ2[LY 2 (t c )e −rc ] 2 + δdLY 2 (t c )e −rc − LY 2 (t c )e −rc ∂ r p + (LY 2 (t c )e −rc + d)∆p + f (t c , r c ),
where in the equation the bar above means evaluation at r c . Then, using again the estimates from [5], for small enough ζ we have a universal constant M depending only on n, σ such that
|∂ r p|, |∆p| ≤ Y 1 (T )M. Now recalling also that LY 2 (t c )e −rc ≤ Y 1 (T ),

2 3 C ≤ δ(2 + d)Y 1 (T ) + M Y 1 (T ) + 1 + d LY 2 (t c ) e rc M Y 1 (T ) + f (r c )e rc LY 2 (t c ) ≤ δ(2 + d)Y 1 (T ) + 2M Y 1 (T ) 1 + d L e rc + A L .
Choosing δ, d small enough the above inequality implies
C ≤ 4M Y 1 (T ) + 4.
If we now choose C > 4M Y 1 (T ) + 4 we obtain a contradiction. We note that C will only depend on n, σ, u(0, x) L ∞ , f L ∞ .
We now give some Sobolev estimates. Because we have a right hand side we choose to not use ln(u) as the test function. For 0 < γ < 1, we use (u + d) γ − d γ as a test function. The function
F (t) = 1 γ + 1 (t + d) γ+1 − d γ t
will satisfy the conditions in Lemma 3.4. We now assume u is a solution to (4.2) with assumptions as in Lemma 4.2, so that |u| ≤ M e −|x| for some large M . As discussed in the introduction we can extend u(ǫj, x) = u(0, x) for j < 0, and u will be a solution to (4.2) on (−∞, T ) × R n with right hand side
δdiv((u(0, x) + d)∇u(0, x)) + div((u(0, x) + d)∇(−∆) −σ u(0, x)).
for j ≤ 0. This right hand side is not necessarily nonnegative; however, we only required the nonnegativity of the right hand side to guarantee that our solution is nonnegative. In this case we already know our solution is nonnegative. We fix a smooth cut-off φ(t) with φ(t) ≥ M for t ≤ −2 and φ(t) = 0 for t ≥ −1. We now take our test function as ǫF ′ ([u(t, x) − φ(t)] + ). We define
u = (u − φ) + − (u − φ) − + φ =: u + φ − u − φ + φ.

We define ũ(t) = u(ǫj) for ǫ(j − 1) < t ≤ ǫj. From Lemma 3.4 and the estimates in Section 3, there exist two constants c, C depending on α, T, Λ such that for ǫ < 1,
ǫ j≤k F ′ (u + φ (ǫj))D α ǫ u(ǫj) ≥ c T −∞ t −∞ [ũ + φ (t) − ũ + φ (s)] 2 (t − s) 1+α ds dt + c T −∞ F (ũ + φ (t)) (T − t) α dt − C T −∞ F ′ (ũ + φ (t))D α t φ(t) dt
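The energy estimate above relies on the stated properties of F (t) = (t + d) γ+1 /(γ + 1) − d γ t: it is increasing and convex on t ≥ 0, with F ′′ (t) = γ(t + d) γ−1 ≥ γ on the range t + d ≤ 1. A quick grid check, with illustrative values of γ and d:

```python
gamma_, d = 0.5, 0.1

def Fp(t):                       # F'(t) = (t+d)^gamma - d^gamma
    return (t + d) ** gamma_ - d ** gamma_

def Fpp(t):                      # F''(t) = gamma * (t+d)^(gamma-1)
    return gamma_ * (t + d) ** (gamma_ - 1.0)

grid = [0.01 * i for i in range(401)]               # t in [0, 4]
assert all(Fp(t) >= 0.0 for t in grid)              # F is nondecreasing for t >= 0
assert all(Fpp(t) > 0.0 for t in grid)              # F is strictly convex
small = [t for t in grid if t + d <= 1.0]
assert all(Fpp(t) >= gamma_ - 1e-12 for t in small) # F'' >= gamma where t + d <= 1
```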
We now consider the nonlocal spatial term. We will also use the following property:
For an increasing function V and a constant l
B ζ (V ((u − l) + ), u) ≥ B ζ (V ((u − l) + ), (u − l) + ) ≥ 0.
We have for the nonlocal spatial terms
ǫ j≤k BR ∇F ′ (u + φ (ǫj, x))(u + d)∇K ζ u = ǫ j≤k BR ∇F ′ (u + φ (ǫj, x))[(u + φ ) + d + φ]∇K ζ u = γ γ + 1 T −2 B ζ ((ũ + φ + d) γ+1 , u) + T −2 φ(t)B ζ ((ũ + φ + d) γ , u) ≥ γ γ + 1 T −2 B ζ ((ũ + φ + d) γ+1 , u) ≥ γ γ + 1 T −2 B ζ ((ũ + φ + d) γ+1 , u + φ ).

From Proposition 10.1, if u + φ (x) − u + φ (y) ≥ 0, then (u + φ + d) γ+1 (x) − (u + φ + d) γ+1 (y) ≥ (u + φ (x) − u + φ (y)) γ+1 . Then

ǫ j≤k BR ∇F ′ (u + φ (ǫj, x))(u + d)∇K ζ u ≥ cγ γ + 1 T 0 R n R n H ζ (x, y)|ũ(x) − ũ(y)| 2+γ dx dy dt.
For the local spatial term we have
ǫ j≤k BR ∇F ′ (u + φ (ǫj, x))(u + d)∇u ≥ γ T 0 BR (ũ + d) γ |∇ũ| 2 .
Now combining the previous estimates with the right hand side term f we have for a certain constant C depending on n, σ, α, Λ, γ, M, T which can change line by line.
δγ T 0 BR (ũ + d) γ |∇ũ| 2 + cγ γ + 1 T 0 R n R n H ζ (x, y)|ũ(x) −ũ(y)| 2+γ dx dy dt + c BR T 0 t 0 [ũ(t) −ũ(s)] 2 (t − s) 1+α + c BR T 0 F (ũ(t)) (T − t) α dt ≤ C BR T −2 F ′ (ũ + φ (t))D α t φ(t) dt + T −2 BR f (t, x)F ′ (ũ + φ ) ≤ C T −2 BR (ũ + φ + d) γ − d γ ≤ C T −2 BR (M e −|x| + d) γ − d γ ≤ C T −2 BR M γ e −γ|x| ≤ C.
The second to last inequality comes from Proposition 10.2. The value C is independent of ζ, d, R, ǫ, δ if ζ, ǫ, δ, d < 1 and R > 1. Then as ζ, d → 0 we have uniform control and obtain the estimate
(4.5) δγ T 0 BR (ũ + d) γ |∇ũ| 2 + T 0 ũ 2+γ W (2−2σ)/(2+γ),2+γ (BR) + BR ũ 2 W α/2,2 (0,T ) ≤ C
Notice that the constant C only depends on the exponential decay of f, u 0 and on σ, α, n, T , but not on R, δ. Letting d, ζ → 0 we obtain
(4.6) D α ǫ u − δdiv(u∇u) − div(u∇(−∆) −σ u) = f on [0, T ] × B R .
We now give a compactness result.
Lemma 4.3. Assume for any v ∈ F , (4.7) T 0 v(t, x) 2+γ W (2−2σ)/(2+γ),2+γ (BR) + Br v(t, x) 2 W α/2,2 (0,T ) ≤ C. Then F is totally bounded in L p ([0, T ] × B R ) for 1 ≤ p ≤ 2.
Proof. We utilize the proof provided in [9] for compactness in fractional Sobolev spaces. We will show the result for p = 2, and it will follow for p < 2 since B R is a bounded set. We divide [0, T ] into k increments. (This k is unrelated to the number k for the ǫ approximations.) Let l = T /k. We define

v l (x, t) := 1 l (j+1)l jl v(x, s) ds for jl < t ≤ (j + 1)l.
From [9] (4.8)
BR T 0 [v l (x, t) − v(x, t)] 2 ≤ c α l α BR v(x, ·) 2 W α/2,2 (0,T ) ≤ Cl α .
The above estimate is uniform for any v j . We now utilize that [0, T ] is a finite measure space as well as Minkowski's inequality: the norm of the sum is less than or equal to the sum of the norm.
C ≥ k−1 j=0 l(j+1) lj v(·, t) 2+γ W (2−2σ)/(2+γ),2+γ (BR) ≥ k−1 j=0 1 l 1+γ l(j+1) lj v(·, t) W (2−2σ)/(2+γ),2+γ (BR) 2+γ ≥ k−1 j=0 1 l 1+γ l(j+1) lj v(x, t) W (2−2σ)/(2+γ),2+γ (BR) 2+γ = l k−1 j=0 1 l l(j+1) lj v(x, t) W (2−2σ)/(2+γ),2+γ (BR) 2+γ
It then follows from the result in [9] that for every j and λ > 0 there exist finitely many {β 1 , . . . , β Mj } such that for any fixed j and v ∈ F there exists β i ∈ {β 1 , . . . , β Mj } such that

BR β i − 1 l l(j+1) lj v(x, t) 2 ≤ λ.
Then combining the above estimate with (4.8) we obtain that
BR T 0 |v − β i,j | 2 ≤ BR T 0 |v − v l | 2 + BR T 0 |v l − β i,j | 2 = BR T 0 |v − v l | 2 + l k−1 j=0 BR |v l − β i,j | 2 ≤ Cl α + T λ.
Since l, λ can be chosen arbitrarily small, F is totally bounded.
The following result will guarantee that ∇(−∆) −σ u ∈ L p as δ → 0, R → ∞.

Lemma 4.4. Let u satisfy the bounds (4.4) and (4.5). Then

T 0 (−∆) −σ u 2+γ W (2−2σ)/(2+γ)+2σ,2+γ ≤ C.

Proof. u is extended to be zero outside of B R . The proof is a consequence of the following results found in [13].
W β,p (R n ) = B β p,p (R n ) = L p (R n ) ∩Ḃ β p,p .
We also have the lifting property of the Riesz potential for the homogeneous Besov spaces
(−∆) −σ u Ḃ β+2σ p,p ≤ C u Ḃ β p,p
To bound u in the nonhomogeneous Besov space we recall
(−∆) −σ u L nq/(n−2σq) (R n ) ≤ C u L q (R n ) .
for any 1 ≤ q < n/(2σ). From the exponential bounds (4.4) and growth we have that u is uniformly in L q for all 1 ≤ q ≤ ∞. Letting q = (2 + γ)n n + 2σ(2 + γ) > 1 for σ < 1/2 and n ≥ 2 (or σ < 1/4 for n = 1), we obtain by the finite length of T

T 0 (−∆) −σ u 2+γ L 2+γ (R n ) ≤ C.
Using again the characterization of homogeneous Besov spaces we obtain the result.
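The choice of q above is exactly the Hardy-Littlewood-Sobolev exponent that lifts L q to L 2+γ : one can check that nq/(n − 2σq) = 2 + γ identically. A sketch in exact rational arithmetic (the sample values of n, σ, γ are illustrative):

```python
from fractions import Fraction

def lifted_exponent(n, sigma, gamma_):
    # q = (2+gamma) n / (n + 2 sigma (2+gamma)), the exponent chosen in the proof
    q = (2 + gamma_) * n / (n + 2 * sigma * (2 + gamma_))
    assert q > 1 and q < n / (2 * sigma)        # admissible for the Riesz potential bound
    return n * q / (n - 2 * sigma * q)          # target exponent nq/(n - 2 sigma q)

cases = [(Fraction(3), Fraction(3, 10), Fraction(1, 10)),
         (Fraction(2), Fraction(2, 5), Fraction(1, 5)),
         (Fraction(1), Fraction(1, 5), Fraction(1, 10))]   # n = 1 needs sigma < 1/4
results = [lifted_exponent(n, s, g) for n, s, g in cases]
assert all(r == 2 + g for r, (n, s, g) in zip(results, cases))
```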
Corollary 4.5. Let u k be a sequence of solutions to (4.6) with R → ∞ and δ → 0.
For fixed ρ > 0, there exists a subsequence and limit with
u k → u 0 ∈ L p (B ρ ) for 1 ≤ p ≤ 2 and u k ⇀ u 0 ∈ W (2−2σ)/(2+γ),2+γ .
Furthermore, for any compactly supported φ
(4.9) ǫ j≤k R n φ(x, ǫj)D α ǫ u 0 (ǫj, x) + u 0 ∇φ∇(−∆) −σ u 0 = ǫ j≤k R n f φ
Proof. The strong and weak convergence is an immediate result of the bound (4.5) and Lemma 4.3. For γ small enough depending on σ, we have 2 − 2σ 2 + γ + 2σ > 1.
Then from Lemma 4.4 we have that
∇(−∆) −σ u k ⇀ ∇(−∆) −σ u 0 ∈ W (2−2σ)/(2+γ)+2σ−1,2+γ ,
And in particular
(4.10) ∇(−∆) −σ u k ⇀ ∇(−∆) −σ u 0 ∈ L 2+γ (R n ).
Then it is immediate from the weak and strong convergence that u 0 is a solution.
We now give the proof of Theorem 1.1.

Proof of Theorem 1.1. We first assume f, u 0 smooth and satisfying the exponential bounds (4.4). Consider solutions u ǫ to (4.9) over a finite interval (0, T ). As before, as ǫ → 0 there exists a subsequence and a limit u ǫ → u 0 with the weak convergence as in (4.10) and strong convergence over compact sets for 1 ≤ p ≤ 2, just as in Lemma 4.3. Then for fixed φ ∈ C ∞ 0 , that u 0 is a solution follows from this convergence. The spatial piece and the right hand side are straightforward to handle, and the nonlocal time piece is taken care of as in [1]. We now consider a sequence of solutions {u j } with {f j }, {(u 0 ) j } ∈ C ∞ with f j → f and (u 0 ) j → u 0 in weak-* L ∞ . Then again there exists a limit solution u with right hand side f . From Remark 1.2 and Lemma 4.1 we can let T → ∞.
Remark 4.6. In this Section we have shown how the estimates in [5] work for equations of the form (2.4). In the same way one can show that the method of "true (exaggerated) supersolutions" as shown in [5] for σ < 1/2 will also work to prove the property of finite propagation for solutions to (2.4). As the main result of this paper is Hölder regularity of solutions we will not make this presentation here.
Continuity: Method and Lemmas
In this Section we outline the method used to prove Hölder regularity of solutions to (2.4). We follow the method used in [4], which is an adaptation of the ideas originally used by De Giorgi. We prove a decrease in oscillation on smaller cylinders and then utilize the scaling property that if u is a solution to (2.4), then v(t, x) = Au(Bt, Cx) is also a solution to (2.4) if AC 2−2σ = B α .
Because of the degenerate nature of the problem the decrease in oscillation will only occur from above. Since we do not have a decrease in oscillation from below we will need a Lemma that says in essence that if the solution u is above 1/2 on most of the space time, then u is a distance from zero on a smaller cylinder. To prove the Lemmas in this section we will use energy methods, and thus we will want to use as a test function F (u) for some F . If u is a solution to (2.4), then u ∈ W (2−2σ)/(2+γ),2+γ
and it is not clear that ∇F (u) will be a valid test function. We therefore prove the Lemmas for the approximate problems
(5.1) D α t u − δdiv(D(u)∇u) − div(u∇(−∆) −σ u) = f on B R ,
for some large R > 0 and small δ > 0 with u ≡ 0 on ∂B R . It is actually only necessary to prove the energy inequalities that we will utilize with constants uniform as δ → 0 and R → ∞. We could also prove the Lemmas for the approximate problems (4.6); however, for notational convenience and to make the proofs more transparent we have chosen to let ǫ → 0. Because our solution is a limit of discretized solutions we then are allowed to make the formal computations involved with D α t u even though u may not be regular enough for D α t u to be defined. One simply proves the energy inequalities (and hence the Lemmas) for the discretized solutions as was done in [1].
Because of the one-sided nature of our problem we prove the Lemmas for solutions to the equation with the modified term div(D(u)∇(−∆) −σ u), where D(u) = d 1 u + d 2 . We assume 0 ≤ d 1 , d 2 ≤ 2 and either d 1 = 1 or d 2 ≥ 1/2. As will be seen later, when d 2 ≥ 1/2, the proofs are simpler because the problem is no longer degenerate. We now define the exact class of solutions for which we prove the Lemmas of this section. u is a solution if u ≡ 0 on ∂B r and for every φ ∈ C ∞ 0 ((−∞, T ) × B R ), we have
(5.2) BR T −∞ t −∞ [u(t, x) − u(s, x)][φ(t, x) − φ(s, x)]K(t, s) ds dt dx + BR T −∞ 2t−T −∞ u(t, x)φ(t, x)K(t, s) ds dt dx − BR T −∞ u(t, x)D α t φ(t, x) dt dx + T −∞ BR ∇φ(t, x)D(u)∇u(t, x) dx dt + T −∞ BR ∇φ(t, x)D(u)∇(−∆) −σ u dx dt = T −∞ BR f (t, x)φ(t, x),
By Lemmas 4.3 and 4.4, the Lemmas stated in this section will be true when R → ∞ and δ → 0. Before stating the Lemmas we define the following function for small 0 < τ < 1/4.
Ψ(x, t) := 1 + (|x| τ − 2) + + (|t| τ − 2) + .
We now state the Lemmas we will need.
Lemma 5.1. Let u be a solution to (5.2) with R > 4 and assume
1 − Ψ ≤ u ≤ Ψ for τ < τ 0
Given µ 0 ∈ (0, 1/2) and τ 0 < 1/4, there exists κ > 0 depending on µ 0 , τ 0 , σ, α, n such that if
|{u ≥ 1/2} ∩ Γ 4 | ≥ (1 − κ)|Γ 4 |
then u ≥ µ 0 on the smaller cylinder Γ 1 .
We have a similar lemma from above.

Lemma 5.2. Under the same assumptions as Lemma 5.1, given µ 1 ∈ (0, 1/2) and τ 0 < 1/4, there exists κ > 0 depending on µ 1 , τ 0 , σ, α, n such that if
|{u > 1/2} ∩ Γ 2 | ≤ κ|Γ 2 |
then u ≤ 1 − µ 1 on the smaller cylinder Γ 1 .
Lemma 5.2 is not sufficient. We need the stronger

Lemma 5.3. Under the same assumptions as Lemma 5.1, assume further for fixed κ 0
(5.3) |{u < 1/2} ∩ Γ 4 | ≥ (1 − κ 0 )|Γ 4 |,
then u ≤ 1 − µ 2 on Γ 1 for some µ 2 depending on κ 0 .
We will choose κ 0 to equal the κ in Lemma 5.1.
Pull-up
In this section we provide the proof of Lemma 5.1. This Lemma is the most technical to prove. We first prove the Lemma in the most difficult case when D(u) = u + d with 0 ≤ d ≤ 2. Afterwards, we show how the proof is much simpler when D(u) = d 1 u + d 2 with d 2 ≥ 1/2 and 0 ≤ d 1 ≤ 2.
We will need the following technical Lemma. The proof is found in the appendix.
Lemma 6.1. Let u, φ be two functions such that 0 ≤ u ≤ φ ≤ 1. Let 0 < γ < 1 be a constant. If |u(x) − u(y)| ≥ 4|φ(x) − φ(y)|, then

(6.1) (2/5)(4/5) γ |u − φ (x) − u − φ (y)| 1+γ ≤ u γ+1 (y) φ γ (y) − u γ+1 (x) φ γ (x) ≤ (14/3)|u − φ (x) − u − φ (y)|.

Also, if 0 ≤ u γ+1 (y) φ γ (y) − u γ+1 (x) φ γ (x) , then 0 ≤ u − φ (x) − u − φ (y). If instead we assume |u(x) − u(y)| ≤ 4|φ(x) − φ(y)|, then

(6.2) u γ+1 (y) φ γ (y) − u γ+1 (x) φ γ (x) ≤ 14|φ(x) − φ(y)|.
Remark 6.2. When 0 ≤ u ≤ φ ≤ 3, Lemma 6.1 will hold with new constants by applying the Lemma to u/3, φ/3.
We will use a sequence of cut-off functions {φ_k}, chosen to be smooth cut-off functions in space and smooth increasing cut-off functions in time. We recall that for small 0 < τ < 1/2, Ψ(x, t) := 1 + (|x|^τ − 2)_+ + (|t|^τ − 2)_+.
We now recall the construction of a sequence of smooth radial cut-offs θ k from [4] that satisfy
• θ_k(x) ≤ θ_{k−1}(x) ≤ . . . ≤ θ_0(x),
• |∇θ_k|/θ_k ≤ C^k θ_k^{−1/m}, with m ≥ 2,
• θ_{k−1} − θ_k ≥ (1 − µ_0)2^{−k} in the support of θ_k,
• θ_k → µ_0 χ_{B_2} as k → ∞,
• the support of θ k is contained in the set where θ k−1 achieves its maximum. We also have θ 0 ≡ 1 on B 3 and the support of θ 0 is contained in B 4 . As a cut-off in time we consider a sequence {ξ k } satisfying
• ξ_k(t) ≤ ξ_{k−1}(t),
• ξ′_k(t) ≤ C^k,
• ξ_k → χ_{{t>−2}} as k → ∞,
• ξ_k = max ξ_k = 1 on the interval [−2 − 2^{−k}, 0],
• the support of ξ k is contained in the set where ξ k−1 achieves its maximum. We now define
φ_k(x, t) := 1 − Ψ(x, t) + (1/2) ξ_k(t) θ_k(x).
We use the convention u = u_+ − u_− for the positive and negative parts. We also write u^−_{φ_k} := (u − φ_k)_−. We now consider the convex function
(6.3) F(x) := (1/(γ + 1))(1 − x)^{γ+1} + x − 1/(γ + 1).
Because of the degenerate nature of our equation we will want to utilize the test function
(6.4) −F′(u^−_φ/(φ + d)) = (1 − u^−_φ/(φ + d))^γ − 1 = −[((u + d)/(φ + d))^γ − 1]_−.
Proof of Lemma 5.1. First
Step: Obtaining an energy in time. We note that for 0 ≤ x ≤ 1, F(x) is convex, F′(x) ≥ 0, and F′′(x) ≥ γ. From the convexity and the second-derivative estimate we also conclude, for 0 ≤ x, y ≤ 1,
(6.5) F′(x)(x − y) ≥ F(x) − F(y) + (γ/2)(x − y)²,
(6.6) F(x) ≈ x².
We now consider −F′(u^−_{φ_k}/(φ_k + d)) D_t^α u, and rewrite u = u^+_{φ_k} − u^−_{φ_k} + φ_k.
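For the concrete F of (6.3) these facts can be verified directly; we record the short computation (our own, included for the reader's convenience).

```latex
% F(x) = \tfrac{1}{\gamma+1}(1-x)^{\gamma+1} + x - \tfrac{1}{\gamma+1} on [0,1]:
F'(x) = 1 - (1-x)^{\gamma} \ge 0, \qquad F(0) = F'(0) = 0,
\qquad
F''(x) = \gamma\,(1-x)^{\gamma-1} \ge \gamma \quad (\text{since } 0 \le 1-x \le 1 \text{ and } \gamma-1<0).
% Taylor expansion with the lower bound on F'' gives (6.5):
F(y) \ge F(x) + F'(x)(y-x) + \tfrac{\gamma}{2}(y-x)^2
\;\Longleftrightarrow\;
F'(x)(x-y) \ge F(x) - F(y) + \tfrac{\gamma}{2}(x-y)^2 .
% For (6.6): since (1-t)^{\gamma} \ge 1-t on [0,1], we have F'(t) \le t, hence
\tfrac{\gamma}{2}\,x^2 \;\le\; F(x) = \int_0^x F'(t)\,dt \;\le\; \tfrac{1}{2}\,x^2 ,
\qquad 0 \le x \le 1 .
```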
To obtain an energy in time we first consider
0 −∞ F ′ u − φ (t) φ(t) + d D α t u − φ (t) = 0 −∞ 0 −∞ F ′ u − φ (t) φ(t) + d u − φ (t) − u − φ (s) K(t, s) = 0 −∞ 0 −∞ (φ(t) + d)F ′ u − φ (t) φ(t) + d u − φ (t) − u − φ (s) φ(t) + d K(t, s) ≥ 0 −∞ 0 −∞ (φ(t) + d)F ′ u − φ (t) φ(t) + d u − φ (t) φ(t) + d − u − φ (s) φ(s) + d K(t, s) ≥ 0 −∞ 0 −∞ (φ(t) + d) F u − φ (t) φ(t) + d − F u − φ (s) φ(s) + d K(t, s) + 0 −∞ 0 −∞ (φ(t) + d) γ 2 u − φ (t) φ(t) + d − u − φ (s) φ(s) + d 2 K(t, s) ≥ 0 −∞ 0 −∞ (φ(t) + d) F u − φ (t) φ(t) + d − F u − φ (s) φ(s) + d K(t, s) + 0 −2−2 −k 0 −2−2 −k γ 2 u − φ (t) − u − φ (s) 2 K(t, s) = (1) + (2).
In the first inequality we used that φ is increasing in t and positive for t ≥ −4 as well as u − φ (s) = 0 for s ≤ −4, and in the second inequality we used (6.5). Term (2) is half of what we will need for the Sobolev embedding (see Lemma 2.2). To gain the other half we consider term (1). For c, C k depending on Λ, α and the Lipschitz
constant of φ k we have 0 −∞ 0 −∞ (φ k (t) + d) F u − φ k (t) φ k (t) + d − F u − φ k (s) φ k (s) + d K(t, s) = 0 −∞ 0 −∞ (φ k (t) + d)F u − φ k (t) φ k (t) + d − (φ k (s) + d)F u − φ k (s) φ k (s) + d K(t, s) + 0 −∞ 0 −∞ [φ k (s) − φ k (t)]F u − φ k (s) φ k (t) + d K(t, s) ≥ c 0 −∞ (φ(t) + d)F u − φ k (t) φ k (t) + d 1 (0 − t) α − C k 0 −∞ χ {u(t)<φ k (t)} dt ≥ c 0 −2−2 −k (u − φ k ) 2 (t) (φ k (t) + d)(0 − t) α − C k T −∞ χ {u(t)<φ k (t)} dt
The last inequality comes from (6.6). Now
−∫_{−∞}^0 F′(u^−_φ(t)/(φ(t) + d)) D_t^α u^+_φ(t) ≥ 0,
and in this proof we ignore this term, which will sit on the left hand side. For the term involving φ_k we have
−∫_{−∞}^0 F′(u^−_{φ_k}(t)/(φ_k(t) + d)) D_t^α φ_k(t) ≥ −C^k ∫_{−∞}^0 χ_{{u<φ_k}}.
Then utilizing the embedding theorem for fractional Sobolev spaces [9] combined with the above inequalities we obtain (6.7)
0 −∞ F ′ u − φ k φ k + d D α t u(t) ≥ c 0 −2−2 −k (u − φ k ) 2 (t) (0 − t) α + c 0 −2−2 −k t −2−2 −k u − φ k (t) − u − φ k (s) 2 (t − s) 1+α − C k 0 −∞ χ {u<φ k } dt ≥ c 0 −2−2 −k u − φ k 2 1−α 1−α − C k 0 −∞ χ {u<φ k } dt.
After integrating in the spatial variable we have
c R n T −2−2 −k (u − φ k ) 2 1−α − 1−α − 0 −∞ R n (u + d)∇F ′ u − φ k φ k + d ∇(−∆) −σ u − 0 −∞ R n (u + d)∇F ′ u − φ k φ k + d ∇u ≤ − 0 −∞ R n f F ′ u − φ k φ k + d + C k R n 0 −∞ χ {u<φ k } dt dx ≤ C k 0 −∞ R n χ {u<φ k } dt dx Second
Step: Obtaining an energy in space. We now turn our attention to the elliptic portion of the problem. We recall from [4] the identity
B(v, w) = ∫_{R^n} ∫_{R^n} ([v(x) − v(y)][w(x) − w(y)]/|x − y|^{n+2−2σ}) dx dy = c_{n,σ} ∫_{R^n} ∇v ∇(−∆)^{−σ} w.
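Both sides of this identity can be compared on the Fourier side; the following is a standard verification, sketched by us with generic positive constants.

```latex
% Gagliardo-form side (with s = 1-\sigma \in (0,1)):
\iint_{\mathbb{R}^n\times\mathbb{R}^n}
  \frac{[v(x)-v(y)][w(x)-w(y)]}{|x-y|^{n+2-2\sigma}}\,dx\,dy
  \;=\; \tilde c_{n,\sigma} \int_{\mathbb{R}^n} |\xi|^{2-2\sigma}\,
        \hat v(\xi)\,\overline{\hat w(\xi)}\,d\xi .
% Operator side: \widehat{\nabla v}(\xi) = i\xi\,\hat v(\xi) and
% \widehat{\nabla(-\Delta)^{-\sigma} w}(\xi) = i\xi\,|\xi|^{-2\sigma}\,\hat w(\xi),
% so by Plancherel
\int_{\mathbb{R}^n} \nabla v\cdot\nabla(-\Delta)^{-\sigma} w
  \;=\; \int_{\mathbb{R}^n} |\xi|^{2-2\sigma}\,\hat v(\xi)\,\overline{\hat w(\xi)}\,d\xi ,
% and the two sides agree up to the dimensional constant c_{n,\sigma}.
```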
We multiply by our test function (6.4) and integrate by parts. On the left hand side of the equation we have
χ {u<φ} ∇ u + d φ + d γ − 1 (u + d)∇(−∆) −σ u = χ {u<φ} γ u + d φ + d γ−1 ∇((u + d)/(φ + d))(u + d)∇(−∆) −σ u = χ {u<φ} γ γ + 1 ∇ u + d φ + d γ+1 − 1 (φ + d)∇(−∆) −σ u = χ {u<φ} γ γ + 1 ∇ (u + d) γ+1 (φ + d) γ − (φ + d) ∇(−∆) −σ u − γ γ + 1 χ {u<φ} u + d φ + d γ+1 − 1 ∇φ∇(−∆) −σ u := (1) + (2).
We now focus on (1), which will give us the energy term we need. For the term (−∆)^{−σ} u, we rewrite u = (u − φ_k)_+ − (u − φ_k)_− + φ_k := u^+_{φ_k} − u^−_{φ_k} + φ_k.
Then we rewrite (1) = (1a) + (1b) + (1c). We focus on the term (1b). We rewrite (1b) = (1bi) + (1bii)
:= −χ_{{u<φ}} (γ/(γ + 1)) ∇[(u + d)^{γ+1}/(φ + d)^γ] ∇(−∆)^{−σ} u^−_φ + χ_{{u<φ}} (γ/(γ + 1)) ∇φ ∇(−∆)^{−σ} u^−_φ.
The term (1bi) will give us the energy term in space that we will need.
(1bi) = R n R n γ γ + 1 ∇ (u + d) γ+1 (φ + d) γ (x) 1 |x − y| n−2σ ∇u − φ (y)dxdy = c n,σ γ γ + 1 B(χ {u<φ} (u + d) γ+1 /(φ + d) γ , −u − φ ).
We define the set
A k := {|u(x) − u(y)| ≥ 4|φ k (x) − φ k (y)|}.
It is clear that A k contains the set V k × V k where we define V k as the set on which θ k achieves its maximum. From Lemma 6.1 and Remark 6.2 we have
∫_{A_k} ((u + d)^{γ+1}(y)/(φ + d)^γ(y) − (u + d)^{γ+1}(x)/(φ + d)^γ(x)) ([u^−_{φ_k}(x) − u^−_{φ_k}(y)]/|x − y|^{n+2−2σ}) ≥ c ∫_{A_k} |u^−_{φ_k}(x) − u^−_{φ_k}(y)|^{2+γ}/|x − y|^{n+2−2σ}.
We now label U_k as the set where φ_k achieves its maximum. Notice that
U_k = [−2 − 2^{−k}, 0] × V_k.
To utilize the fractional Sobolev embedding on V k × V k , we also will need an L p norm of u − φ k on V k . We utilize half of the integral of u − φ k that we gained from the fractional time term:
∫_{R^n} ∫_{−2−2^{−k}}^0 (u − φ_k)_−²(t)/(0 − t)^α ≥ (1/2) ∫_{U_k} (u − φ_k)_−²(t)/(0 − t)^α + (1/2) ∫_{U_k} (u − φ_k)_−^{2+γ}(t)/(0 − t)^α.
The inequality comes from the fact that 0 ≤ (u − φ_k)_− ≤ 1. Now from the fractional Sobolev embedding [9],
(6.8) ∫_{−2−2^{−k}}^T ∫_{V_k×V_k} |u^−_{φ_k}(x) − u^−_{φ_k}(y)|^{2+γ}/|x − y|^{n+2−2σ} + (1/2) ∫_{U_k} (u − φ_k)_−^{2+γ}(t) ≥ c_{n,σ,γ} ∫_{−2−2^{−k}}^T ( ∫_{V_k} (u − φ_k)_−^{n(2+γ)/(n−2+2σ)} )^{(n−2+2σ)/n}.
This is the helpful spatial term on the left hand side that we will return to later. Third
Step: Bounding the remaining terms. We will now show that everything left in our equation can be bounded by
(6.9) C^k ∫_{−∞}^T ∫_{R^n} χ_{{u<φ_k}} dx dt.
We will denote
X k (x, y) := χ {u(x)<φ k (x)} + χ {u(y)<φ k (y)}
For the remainder of term (1bi) we have
∫_{A_k^c} ((u + d)^{γ+1}(y)/(φ + d)^γ(y) − (u + d)^{γ+1}(x)/(φ + d)^γ(x)) ([u^−_{φ_k}(x) − u^−_{φ_k}(y)]/|x − y|^{n+2−2σ}) ≤ C ∫_{A_k^c} X_k(x, y) |φ_k(x) − φ_k(y)|²/|x − y|^{n+2−2σ} ≤ C^k ∫_{R^n} χ_{{u<φ_k}}.
The last inequality is due to the Lipschitz constant of φ k when x, y are close, and the tail growth of φ k when x, y are far apart.
We now control the term (1bii). Again, we split the region of integration over A k and A c k . Using Hölder's inequality (provided 2σ > γ/(1 + γ) and therefore we must choose γ small when σ is small) as well as the Lipschitz and sup bounds on φ k we have
(1bii) = c_{n,σ} ∫_{A_k} ([φ_k(x) − φ_k(y)][u^−_{φ_k}(x) − u^−_{φ_k}(y)]/|x − y|^{n+2−2σ}) dx dy
≤ η ∫_{A_k} ([u^−_{φ_k}(x) − u^−_{φ_k}(y)]^{2+γ}/|x − y|^{n+2−2σ}) dx dy
+ C ∫_{A_k} ([φ_k(x) − φ_k(y)]^{(2+γ)/(1+γ)}/|x − y|^{n+2−2σ}) X_k(x, y) dx dy
≤ η ∫_{A_k} ([u^−_{φ_k}(x) − u^−_{φ_k}(y)]^{2+γ}/|x − y|^{n+2−2σ}) dx dy
+ C^k ∫_{R^n} χ_{{u<φ_k}} dx.
The first term is absorbed into the left hand side and the second term is controlled exactly as before.
We now consider the integration over A_k^c:
(1bii) = ∫_{A_k^c} ([φ_k(x) − φ_k(y)][u^−_{φ_k}(x) − u^−_{φ_k}(y)]/|x − y|^{n+2−2σ}) dx dy ≤ ∫_{A_k^c} ([φ_k(x) − φ_k(y)]²/|x − y|^{n+2−2σ}) X_k(x, y) dx dy ≤ C^k ∫_{R^n} χ_{{u<φ_k}} dx.
We now turn our attention to the term (1c). By Lemma 6.1 we have
B(χ {u<φ} ((u + d) γ+1 /(φ k + d) γ − φ k ), φ k ) ≤ B(χ {u<φ} (u + d) γ+1 /(φ k + d) γ , φ k ) + B(χ {u<φ} φ k ), φ k )
Both of the above terms are handled exactly as before by using Lemma 6.1 and splitting the region of integration over A_k and A_k^c. The term (1a) is
(6.10) (1a) = B(χ_{{u<φ_k}} ((u + d)^{γ+1}/(φ_k + d)^γ − (φ_k + d)), u^+_{φ_k}) = 2 ∫_{R^n} ∫_{R^n} χ_{{u(x)<φ_k(x)}} ((φ_k(x) + d) − (u + d)^{γ+1}(x)/(φ_k + d)^γ(x)) (u^+_{φ_k}(y)/|x − y|^{n+2−2σ}) dx dy ≥ 0.
The factor of 2 comes from the symmetry of the kernel. We will utilize this nonnegative term shortly.
We now consider the term (2), which we recall is
−∫_{R^n} ∫_{R^n} (γ/(γ + 1)) χ_{{u<φ_k}} (((u + d)/(φ_k + d))^{γ+1} − 1) ∇φ_k ∇L(x − y)[u(y) − u(x)] dx dy.
In the above L = ∇(−∆) −σ and we have
∇L(x − y) ≈ |x − y| −(n+1−2σ) .
We again write u = u^+_{φ_k} − u^−_{φ_k} + φ_k.
To control the term involving φ k we integrate over the two sets {|x − y| ≤ 8} and {|x − y| > 8}. We use that |φ k (x) − φ k (y)| ≤ C k |x − y| when |x − y| ≤ 8 and |φ k (x) − φ k (y)| ≤ |x − y| τ when |x − y| > 8 as well as the bound |∇φ k | ≤ C k to obtain
R n R n γ γ + 1 χ {u<φ k } u + d φ k + d γ+1 − 1 ∇φ k ∇L(x − y)[φ k (y) − φ k (x)] dx dy ≤ C k |x−y|≤8 χ {u<φ k } |x − y| −(n−2σ) dx dy + C k |x−y|>8 χ {u<φ k } |x − y| −(n+1−2σ−τ ) dx dy ≤ C k R n χ {u<φ k } dx.
We now use the same set decomposition with −u − φ k , the inequality |u − φ k | ≤ 1 as well as Hölder's inequality
R n R n γ γ + 1 χ {u<φ k } u + d φ k + d γ+1 − 1 ∇φ k ∇L(x − y)[u − φ k (y) − u − φ k (x)] dx dy ≤ C k |x−y|≤8 χ {u<φ k } |x − y| −(n−2σ+γ/(1+γ)) dx dy + ζ |x−y|≤8 [u − φ k (y) − u − φ k (x)] 2+γ |x − y| n+2−2σ dx dy C k + |x−y|>8 χ {u<φ k } |x − y| −(n+1−2σ−τ ) dx dy
The third term is bounded by
C ∫_{R^n} χ_{{u<φ_k}} dx
provided τ < 1 − 2σ, and so is the first term, provided again that 2σ > γ/(1 + γ).
The second term can be bounded as before by splitting the region of integration over A k and A c k and absorbing the region over A k into the left hand side. We now turn our attention to the last term involving u + φ k . We first remark that the integral becomes
R n R n χ {u(x)<φ k (x)} (((u + d)/(φ k + d)) γ+1 − 1)∇φ k ∇L(x − y)u + φ k (y) dx dy.
We first consider the set |x − y| > 8. Since
u + φ k ≤ Ψ, |x−y|>8 χ {u(x)<φ k (x)} (((u + d)/(φ k + d)) γ+1 − 1)∇φ k ∇L(x − y)u + φ k (y) dx dy ≤ C k |x−y|>8 χ {u(x)<φ k (x)} |x − y| −(n+1−2σ+τ ) dx dy ≤ C k R n χ {u<φ k } dx.
When |x − y| < 8, we make the further decomposition
|∇φ k (x)| φ k (x) |x − y| ≤ η
to absorb the integral by the nonnegative quantity (6.10). In the complement when
|∇φ k (x)| φ k (x) |x − y| > η we use φ −1/m k C k ≥ |∇φ k |/φ k and integrate in y B8 ∇L(x − y)u + φ k (y)dy ≤ 8 ηφ 1/m k C −k r n−1 r n+1−2σ ≤ max{C, (ηC k ) 2σ−1 φ (2σ−1)/m k }.
The remainder of the terms are bounded by |∇φ_k| ≤ C^k φ_k^{1−1/m}. By multiplying by the term χ_{{u<φ}} and integrating, we end up in the worst case with
C^k ∫_{R^n} χ_{{u<φ_k}} φ_k^{1−1/m+(2σ−1)/m} dx ≤ C^k ∫_{R^n} χ_{{u<φ_k}} dx,
since m ≥ 2. The last term to consider is the local spatial term. We use Cauchy-Schwarz
δ BR χ {u<φ k } ∇ u + d φ k + d γ − 1 (u + d)∇u = δ BR γ u + d φ k + d γ |∇u| 2 φ k + d − δ BR γ u + d φ k + d γ+1 ∇u∇φ k ≥ (1 − η)δγ BR u + d φ k + d γ |∇u| 2 − Cδ BR u + d φ k + d γ+2 |∇φ k | 2 χ {u<φ k } ≥ (1 − η)δγ BR u + d φ k + d γ |∇u| 2 − C k δ BR χ {u<φ k } dx
Retaining the energy from (6.8) on the left hand side and moving everything else to the right hand side which is bounded by (6.9), our energy inequality becomes
(6.11) c ∫_{V_k} ( ∫_{−2−2^{−k}}^T (u − φ)_−^{2/(1−α)} )^{1−α} + c ∫_{−2−2^{−k}}^T ( ∫_{V_k} (u − φ_k)_−^{n(2+γ)/(n−2+2σ)} )^{(n−2+2σ)/n} ≤ C^k ∫_{R^n} ∫_{−∞}^T χ_{{u<φ_k}} ≤ C^k ∫_{U_{k−1}} χ_{{u<φ_k}}.

Fourth Step: The nonlinear recursion relation. We now (as in [1]) use Hölder's inequality twice with the relations
β/p_1 + (1 − β)/p_2 = 1/p = β/p_3 + (1 − β)/p_4,
for a function v, to obtain
∫∫ v^p ≤ ∫ ( ∫ v^{p_1} )^{pβ/p_1} ( ∫ v^{p_2} )^{p(1−β)/p_2} ≤ [ ∫ ( ∫ v^{p_1} )^{p_3/p_1} ]^{βp/p_3} [ ∫ ( ∫ v^{p_2} )^{p_4/p_2} ]^{(1−β)p/p_4},
where the inner integrals are in space and the outer ones in time.
We now choose
p_1 = 2, p_2 = n(2 + γ)/(n − (2 − 2σ)), p_3 = 2/(1 − α), p_4 = 2 + γ,
so that, with r = 2 − 2σ,
p = 2 (r + αn(2 + γ)/2)/((1 − α)r + αn) and β = r/(r + αn(2 + γ)/2).
We now use Hölder's inequality one more time to obtain
v p b/p ≤ v p1 p3/p1 βb/p3 v p2 p4/p2 (1−β)b/p4 ≤ 1 ω v p1 p3/p1 βbω/p3 + ω − 1 ω v p2 p4/p2 (1−β)bω p 4 (ω−1)
We choose ω = (2 + γβ) (2 + γ)β and b = 2(2 + γ) 2 + γβ < p so that
(6.12) v p b/p ≤ 1 ω v 2 1/(1−α) 1−α + ω − 1 ω v n(2+γ) n−r n−r n ≤ 1 ω v 2 1−α 1−α + ω − 1 ω v n(2+γ) n−r n−r n ,
where we used Minkowski's inequality in the last step. Substituting u − φ_k for v in (6.12) and utilizing (6.8) we obtain
(6.13) ( ∫_{U_k} (u − φ_k)_−^p )^{b/p} ≤ C^k ∫_{U_{k−1}} χ_{{u<φ_k}}.
We first recall that φ_{k−1} ≥ φ_k + (1 − µ_0)2^{−k}. We now utilize Tchebychev's inequality:
∫_{U_k} χ_{{u<φ_k}} ≤ (2^k/(1 − µ_0))^p ∫_{U_{k−1}} (u − φ_{k−1})_−^p.
Combining the above inequality with (6.13) we conclude
∫_{U_k} (u − φ_k)_−^p ≤ C^k ( ∫_{U_{k−1}} (u − φ_{k−1})_−^p )^{p/b}.
If we define
M_k := ∫_{U_k} (u − φ_k)_−^p,
then M_k ≤ C^k M_{k−1}^{p/b}. Since p > b, if M_0 is sufficiently small (depending on C and p/b) we obtain that M_k → 0, and hence u ≥ µ_0.
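The final convergence step is the standard fast-geometric-convergence lemma behind De Giorgi-type iterations; for the reader's convenience we record one version of the short induction (our own rewriting, with β := p/b > 1).

```latex
% Suppose M_k \le C^k M_{k-1}^{\beta} with C > 1 and \beta = p/b > 1.
% Claim: if M_0 \le C^{-\beta/(\beta-1)^2}, then M_k \le M_0\,C^{-k/(\beta-1)} \to 0.
% Induction step, assuming the bound for M_{k-1}:
M_k \;\le\; C^{k}\,\big(M_0\,C^{-(k-1)/(\beta-1)}\big)^{\beta}
    \;=\; \underbrace{M_0^{\beta-1}\,C^{\beta/(\beta-1)}}_{\le\,1
          \text{ by the smallness of } M_0}\;
          M_0\,C^{-k/(\beta-1)}
    \;\le\; M_0\,C^{-k/(\beta-1)} .
```

This is the precise sense in which the smallness of M_0 is used above.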
We now prove Lemma 5.1 in the case when D(x) = (d 1 x + d 2 ) with d 1 ≤ 2 and d 2 ≥ 1/2. This is actually much simpler because we can utilize the test function −(u − φ k ) − as when dealing with a linear equation.
Proof. We choose as the test function −(u − φ k ) − . Notice that
∇u^−_{φ_k} (d_1 u + d_2) = ∇u^−_{φ_k} d_1 u + ∇u^−_{φ_k} d_2.
The fact that d 2 ≥ 1/2 gives a nondegenerate linear term which we utilize. From the computations in [1] we then have
R n 0 −∞ u − φ k D α t u ≥ c R n 0 −∞ u − φ k 2 1−α 1−α − C k R n 0 −∞ χ {u<φ k } dt,
and even more importantly
(6.14) ∫_{−∞}^0 ∫_{R^n} ∇u^−_{φ_k} ∇(−∆)^{−σ} u ≥ c ∫_{−∞}^0 ∫_{R^n} ∫_{R^n} ([u^−_{φ_k}(x) − u^−_{φ_k}(y)]²/|x − y|^{n+2−2σ}) dx dy dt − C^k ∫_{−∞}^0 ∫_{R^n} χ_{{u<φ_k}} dx dt.
Notice that in the above inequality we have the power | · |² rather than | · |^{2+γ}. We now show how to bound the terms involving d_1 u.
−∇u^−_{φ_k} [u^+_{φ_k} − u^−_{φ_k} + φ_k] = ∇[(u^−_{φ_k})²/2 − u^−_{φ_k} φ_k] + u^−_{φ_k} ∇φ_k = (1) + (2).
Then multiplying (1) by ∇(−∆) −σ u and integrating over R n we have
B((u^−_{φ_k})²/2 − u^−_{φ_k} φ_k, u^+_{φ_k} − u^−_{φ_k} + φ_k). Now
[(u^−_{φ_k})²/2 − u^−_{φ_k} φ_k](x) − [(u^−_{φ_k})²/2 − u^−_{φ_k} φ_k](y)
= (1/2)[u^−_{φ_k}(y) − u^−_{φ_k}(x)][φ_k(x) + φ_k(y) − u^−_{φ_k}(x) − u^−_{φ_k}(y)] − (1/2)[u^−_{φ_k}(y) + u^−_{φ_k}(x)][φ_k(x) − φ_k(y)]
= (1a) + (1b).
We write u = u^+_{φ_k} − u^−_{φ_k} + φ_k.
We break up our set into the two regions
F_k := {(x, y) : |u^−_φ(x) − u^−_φ(y)| ≥ 2|φ(x) − φ(y)|}
and its complement F_k^c. We notice that on the set F_k we have φ(x) + φ(y) − u^−_φ(x) − u^−_φ(y) ≥ |u^−_φ(x) − u^−_φ(y)|/2. Then, integrating over F_k, we have for the term (1a) paired with the right-hand factor −u^−_{φ_k}:
∫_{F_k} ([(u^−_{φ_k})²/2 − u^−_{φ_k} φ_k](x) − [(u^−_{φ_k})²/2 − u^−_{φ_k} φ_k](y)) (u^−_{φ_k}(x) − u^−_{φ_k}(y))/|x − y|^{n+2−2σ} ≥ ∫_{F_k} |u^−_{φ_k}(x) − u^−_{φ_k}(y)|³/|x − y|^{n+2−2σ} ≥ 0.
This is the nonnegative energy piece which we actually do not need having obtained a better piece in (6.14). All of the remaining terms in (1) can be bounded by breaking up the region of integration over F k , F c k . Over F k we use Hölder's inequality with p = 2 rather than with p = 2 + γ and absorb the small pieces by the term in (6.14). We use the same methods as before to bound the integration over F c k . Bounding the term (2) is done as before with slightly easier computations.
The local spatial term is bounded in the usual manner.
Pull-down
In this section we prove Lemma 5.2. We will need the following estimate that is analogous to Lemma 6.1.
Lemma 7.1. Let u, φ be two functions such that 1/2 ≤ φ ≤ u ≤ 1, and let 0 < γ < 1 be a constant. If |u(x) − u(y)| ≥ 4|φ(x) − φ(y)|, then
(7.1) c_1 |u^+_φ(x) − u^+_φ(y)|^{1+γ} ≤ u^{γ+1}(y)/φ^γ(y) − u^{γ+1}(x)/φ^γ(x) ≤ c_2 |u^+_φ(x) − u^+_φ(y)|,
where u^+_φ := (u − φ)_+. Also, if 0 ≤ u^{γ+1}(y)/φ^γ(y) − u^{γ+1}(x)/φ^γ(x), then 0 ≤ u^+_φ(y) − u^+_φ(x).
If instead we assume |u(x) − u(y)| ≤ 4|φ(x) − φ(y)|, then
(7.2) u^{γ+1}(y)/φ^γ(y) − u^{γ+1}(x)/φ^γ(x) ≤ 14|φ(x) − φ(y)|.
The proof is similar to the proof of Lemma 6.1. In this case, since u > φ, one uses the upper bound on u and the fact that φ is bounded from below.
Proof of Lemma 5.2. The proof is nearly identical. We mention the differences. We only consider D(u) = u since the modifications for handling D(u) = d 1 u + d 2 have already been shown in the proof of Lemma 5.1. We consider a similar test function
F(x) = (1/(γ + 1))(1 + x)^{γ+1} − x − 1/(γ + 1),
We then utilize
(7.3) F′(u^+_{φ_k}/φ_k) = (1 + (u − φ_k)_+/φ_k)^γ − 1.
This time we consider the same test functions θ k (x) in the space variable, but this time we multiply only by a single cut-off in time ξ 0 (t). We define our φ k as
φ k := Ψ(x, t) − ξ 0 (t)θ k (x/2).
To obtain the same estimate in time we only need to recognize that φ_k is now decreasing in time and bounded from below by 1/2.
(7.4) F ′ u + φ (t) φ(t) u + φ (t) − u + φ (s) = φ(t)F ′ u + φ (t) φ(t) u + φ (t) − u + φ (s) φ(t) = φ(t)F ′ u + φ (t) φ(t) u + φ (t) φ(t) − u + φ (s) φ(s) + φ(t)u s φ F ′ u + φ (t) φ(t) 1 φ(t) − 1 φ(s) ≥ 1 2 F (u + φ (t)/φ(t)) − F (u + φ (s)/φ(s)) + γ 2 [u + φ (t) − u + φ (s)] 2 − Cu + φ (s)(t − s)
The negative constant comes from the fact that φ^{−1} is Lipschitz. Then everything proceeds as before. Since our cut-off is bounded from below, our L^p norm in time occurs over all of (−∞, 0). We obtain as before
c R n 0 −∞ (u − φ) 2 1−α + 1−α + 0 −∞ R n u∇F ′ u + φ φ ∇(−∆) −σ u ≤ R n T −∞ χ {u>φ} dt dx
The spatial portion of the problem is handled exactly as before.
Decrease in Oscillation
We define
F(t, x) := (1/4) sup{−1, inf{0, |x|² − 9}} + (1/4) sup{−1, inf{0, |t|² − 9}}.
We point out that F is Lipschitz, compactly supported in [−3, 0] × B 3 and equal to −1/2 in [−2, 0] × B 2 . We also define for 0 < λ < 1/4,
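These properties of F follow by inspecting each summand; we record a quick check (our own, for convenience).

```latex
% Support: |x| \ge 3 \Rightarrow |x|^2 - 9 \ge 0 \Rightarrow
% \tfrac14\sup\{-1,\inf\{0,|x|^2-9\}\} = 0, and likewise in t,
% so F vanishes where |x| \ge 3 or |t| \ge 3.
% Value on [-2,0]\times B_2: there |x|^2-9 \le -5 \le -1 and |t|^2-9 \le -5 \le -1, so
F = \tfrac14(-1) + \tfrac14(-1) = -\tfrac12 .
% Lipschitz bound: each summand is piecewise smooth and non-constant only where
% 2\sqrt{2} < |x| < 3 (resp. in t), with gradient \tfrac14\,|2x| \le \tfrac{3}{2} there.
```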
ψ_λ(t, x) := ((|x| − λ^{−1/ν})^ν − 1)_+ + ((|t| − λ^{−1/ν})^ν − 1)_+ for |t|, |x| ≥ λ^{−1/ν},
and zero otherwise. The value of ν will be determined later. Finally, we define for i ∈ {0, 1, 2, 3, 4}
φ_i = 1 + ψ_{λ^3} + λ^i F.
Then 1/2 ≤ φ_0 ≤ . . . ≤ φ_4 ≤ 1 in Γ_4. There exists a small constant ρ > 0 depending only on n, σ, α and λ_0 depending only on n, σ, α, ρ, δ such that for any solution u defined in (a, 0) × R^n with a < −4 and −ψ_{λ^3} ≤ u(t, x) ≤ 1 + ψ_{λ^3} in (a, 0) × R^n with λ ≤ λ_0, and f ≤ λ^3, if
|{u < φ_0} ∩ (B_1 × (−4, −2))| ≥ ρ,
then
|{u > φ_4} ∩ (R^n × (−2, 0))| ≤ κ.
Proof. We will show the computations for D(u) = u. The general situation is handled as in Lemma 5.1.

First Step: Revisiting the energy inequality. We return again to the energy inequality. This time, however, we will make use of the nonnegative terms. We seek to obtain a bound on the right hand side of the form Cλ^{(2+γ)/(1+γ)}.
We now consider the test function as in (7.3), but with cut-off φ 1 . If u > φ i , then 1/2 ≤ φ i ≤ u ≤ 1, and so
F ′ (u/φ i ) = χ {u>φi} u γ − φ γ φ γ ≤ 2γu + φi ≤ 2γλ 2i .
To take care of the piece in time we first note that φ 1 is Lipschitz in time for t ∈ [0, 4] with Lipschitz constant 2λ. Then as before
F′(u/φ_1) D_t^α u dt = F′(u/φ_1) D_t^α (u^+_{φ_1} − u^−_{φ_1} + φ_1) = (T1) + (T2) + (T3).
For (T 1) we return to the inequality (7.4) and utilize the Lipschitz nature of φ −1 1 , to obtain
F ′ (u/φ 1 )D α t u + φ1 ≥ c 0 −∞ (u − φ 1 ) 2 1−α + 1−α + 0 −∞ t −∞ φ 1 (t)u s φ1 F ′ (u + φ1 (t)/φ(t))(φ −1 1 (t) − φ −1 1 (s)) dt ≥ c 0 −∞ (u − φ 1 ) 2 1−α + 1−α − C 0 −4 λ 2 .
The nonnegative piece (T2) will be utilized in the second step of this proof. For (T 3) we note that since φ i is decreasing, we have
0 ≤ −D α t φ i ≤ −Λ −1 D α t φ i = −D α t ψ λ 3 − D α t λ i F. Clearly, −D α t λ i F ≤ Cλ i for t ≤ 0 from the Lipschitz nature of F . For −4 ≤ t ≤ 0 we have −D α t ψ λ 3 ≤ λ −1/ν −∞ |s| |s| 1+α ≤ C α λ (α−ν)/(ν) .
We therefore pick ν small enough that (α − ν)/ν > 2.
∫_{−∞}^0 F′(u/φ_1) D_t^α φ_1 ≤ C ∫_{−4}^0 λ u^+_φ ≤ Cλ².
Our energy inequality becomes
c 0 −∞ (u − φ) 2 1−α + 1−α + c R n [u + φ1 (t) − u + φ1 (s)][u − φ1 (t) − u − φ1 (s)] (t − s) 1+α + "Spatial Terms" ≤ Cλ 2 + f u + φ1 .
Since f ≤ λ 3 , everything is bounded on the right hand side by Cλ 2 . We now turn our attention to the elliptic portion. We consider the terms (1a), (1bi), (1bii), (1c), (2) as the analogous terms for those defined in the proof of Lemma 5.1. As before we obtain a nonnegative energy from the term (1bi). Everything else we will absorb into this energy or bound by Cλ (2+γ)/(1+γ) . The term from (1bi) over A c 1 is bounded as follows:
c n,γ,σ A c 1 [φ 1 (x) − φ 1 (y)][u + φ1 (x) − u + φ1 (y)] |x − y| n+2−2σ dx dy ≤ C A c 1 [φ 1 (x) − φ 1 (y)] 2 X x,y |x − y| n+2−2σ dx dy
We have the following inequality from the computations given in [3]:
(8.1) ∫ ([φ_1(x) − φ_1(y)]^{(2+γ)/(1+γ)} X_{x,y}/|x − y|^{n+2−2σ}) dx dy ≤ Cλ^{(2+γ)/(1+γ)},
with (2 − 2σ − 2ν)/ν ≥ 2. In particular, (8.1) will hold for γ = 0. For the term (1bii) we break up the region of integration into A 1 and A c 1 . On A 1 we use Hölder's inequality as before
c n,γ,σ A1 [φ 1 (x) − φ 1 (y)][u φ1 (x) − u φ1 (y)] |x − y| n+2−2σ dx dy ≤ C A1 [φ 1 (x) − φ 1 (y)] (2+γ)/(1+γ) |x − y| n+2−2σ X x,y dx dy + η A1 [u φ1 (x) − u φ1 (y)] 2 |x − y| n+2−2σ dx dy ≤ Cλ (2+γ)/(1+γ) + η A1 [u φ1 (x) − u φ1 (y)] 2 |x − y| n+2−2σ dx dy
The last term is absorbed into the left hand side. The other term is bounded again from (8.1). The term (1c) is bounded in exactly the same way. (1a) is nonnegative and will be utilized later. We now turn our attention to the term (2). We rewrite u = u + φ1 − u − φ1 + φ 1 . The term involving u + φ1 with |x − y| ≤ η is absorbed by the nonnegative term (1a) on the left hand side. We now utilize the inequalities:
• u + φ1 ≤ λ • |∇L(x − y)| ≈ 1/|x − y| n+1−2σ • |∇φ 1 | ≤ C for all x • |∇φ 1 | ≤ Cλ in the support of u + φ1 • χ {u>φ1} [(u/φ 1 ) 1+γ − 1] ≤ 4u + φ1 ≤ 4λ. Then R n R n γ γ + 1 χ {u<φ} ((u/φ) γ+1 − 1)∇φ 1 ∇L(x − y)[u(y) − u(x)] dx dy ≤ Cλ 2 R n R n χ {u<φ} ∇L(x − y)[u(y) − u(x)] dx dy
These terms are all bounded as before. Notice that we have λ² on the outside. Then all nonlocal terms on the right hand side are bounded by Cλ^{(2+γ)/(1+γ)}. The local term div(D(u)∇u) is handled in the usual manner by use of Cauchy-Schwarz. Second
Step: Using the "good" spatial piece. We now utilize the two nonnegative pieces. From Proposition 10.1 we have
u φ 1 γ+1 − φ 1 + ≥ 4(u + φ1 ) 1+γ .
Then we conclude that
(8.2) ∫_{R^n} ∫_{−4}^0 ∫_{−4}^0 (u^+_{φ_1}(t) u^−_{φ_1}(s)/|t − s|^{1+α}) + ∫_{−4}^0 ∫_{R^n} ∫_{R^n} ((u^+_{φ_1})^{γ+1}(x) u^−_{φ_1}(y)/|x − y|^{n+2−2σ}) ≤ Cλ^{(2+γ)/(1+γ)}.
Since we used ψ_{λ^3}, replacing φ_1 with φ_3 we have the same inequality but with the bound Cλ^{3(2+γ)/(1+γ)}.
We now show how the inequality (8.2) and its analogue for φ_3 are enough to prove the remainder of the lemma as in [1]. We note that for the proof as written in [1] to work we need 3(2 + γ)/(1 + γ) > 5, which is achieved for γ small enough. We first utilize
∫_{−4}^0 ∫_{R^n} ∫_{R^n} ((u^+_{φ_1})^{γ+1}(x) u^−_{φ_1}(y)/|x − y|^{n+2−2σ}) ≤ Cλ^{(2+γ)/(1+γ)}.
From our hypothesis,
|{u < φ_0} ∩ ((−4, −2) × B_1)| ≥ ρ.
Then the set of times Σ ⊂ (−4, −2) for which |{u(t, ·) < φ_0} ∩ B_1| ≥ ρ/4 has measure at least ρ/(2|B_1|). And so
Cλ^{(2+γ)/(1+γ)} ≥ cρ ∫_Σ ∫_{R^n} (u^+_{φ_1})^{1+γ} dx dt.
Now ({u − φ_2 > 0} ∩ (Σ × B_2)) ⊂ ({u − φ_1 > λ/2} ∩ (Σ × B_2)), and so from Tchebychev's inequality
|{u − φ_2 > 0} ∩ (Σ × B_2)| ≤ (C/ρ) λ^{(2+γ)/(1+γ) − (1+γ)}.
The exponent on λ is positive for γ small enough. We write this as
|{u ≤ φ_2} ∩ (Σ × B_2)| ≥ |Σ × B_2| − (C/ρ) λ^{(2+γ)/(1+γ) − (1+γ)} ≥ ρ/2 − (C/ρ) λ^{(2+γ)/(1+γ) − (1+γ)}.
This will be positive for λ small enough depending on n, σ, α, γ, ρ.
The proof then proceeds just as in [1] where we then utilize 3(2 + γ)/(1 + γ) > 5 as well as the analogue of (8.2) for φ 3 .
This next lemma will imply Lemma 5.3. For this next lemma we define
ψ_{τ,λ} = ((|x| − 1/λ^{4/σ})^τ − 1)_+ + ((|t| − 1/λ^{4/α})^τ − 1)_+.

Lemma 8.2. Given ρ > 0 there exist τ > 0 and µ_1 such that for any solution to (5.2) in R^n × (a, 0) with a < −4 and |f| ≤ λ^3 satisfying −ψ_{τ,λ} ≤ u ≤ 1 + ψ_{τ,λ}. Then from Lemma 5.2 we conclude that w ≤ 1 − µ_1 on (−1, 0) × B_1, and so u ≤ 1 − λ^4 µ_1 = 1 − µ_2.
Proof of Regularity
With Lemmas 5.1, 5.2, and 5.3 we are ready to finish the proof of Theorem 1.4. We first mention that solutions of (2.4) satisfy the following scaling property: if u is a solution on (a, 0) × R^n, then v(t, x) = Au(Bt, Cx) is a solution on (a/B, 0) × R^n provided A = B^α C^{2σ−2}. The method of proof is given in [4], which we now briefly outline. We take any point p = (x, t) ∈ R^n × (a, T) and prove that u is Hölder continuous around p. The Hölder continuity exponent will depend only on α, σ, n. The constant will depend on the L^∞ norm of u, f and on the C² norm of u(a, x). By translation we assume that p = (0, 0). By scaling we assume that 0 ≤ u(t, x) ≤ Ψ(t, x) and |f| ≤ λ^3 for λ as defined in
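The exponent relation can be checked on the top-order nonlocal term of the model equation D_t^α u = div(u ∇(−∆)^{−σ}u); the following is our own computation (the δ-regularized local term is treated separately in the paper).

```latex
% Let v(t,x) = A\,u(Bt,Cx). For the Caputo-type time derivative,
D_t^{\alpha} v(t,x) = A\,B^{\alpha}\,(D_t^{\alpha} u)(Bt,Cx),
% while (-\Delta)^{-\sigma}[u(Bt,C\,\cdot\,)](x)
%   = C^{-2\sigma}\big((-\Delta)^{-\sigma}u\big)(Bt,Cx),
% so the gradient contributes one factor of C, each copy of v a factor of A,
% and the divergence another factor of C:
\operatorname{div}\!\big(v\,\nabla(-\Delta)^{-\sigma}v\big)(t,x)
  = A^2\,C^{2-2\sigma}\,\operatorname{div}\!\big(u\,\nabla(-\Delta)^{-\sigma}u\big)(Bt,Cx).
% Matching the two sides forces A\,B^{\alpha} = A^2\,C^{2-2\sigma}, i.e.
A = B^{\alpha}\,C^{2\sigma-2}.
```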
We now take a positive constant M < 1/4 such that for 0 < K ≤ M
(1/(1 − µ_2/2)) ψ_{τ,λ^3}(Kt, Mx).
M will depend only on λ, µ_2 and τ > 0. During the iteration we have the following alternative.

Alternative 1. Suppose that we can apply Lemma 5.3 repeatedly. We then consider the rescaled functions u_j. Notice that M_1 < M. All the u_j satisfy the same equation. If we can apply Lemma 5.3 at every step, then u_j ≤ 1 − µ_2 on the cylinder Γ_1. This implies Hölder regularity around p and also implies u(p) = 0.

Alternative 2. If at some point the assumption (5.3) fails, then we are in the situation of Lemma 5.1 and 0 < µ_0 ≤ u_j(t, x) ≤ 1.
Scaling the above situation our equations will have D(u) = d 1 u + d 2 with d 2 > 0. We may then repeat the procedure since Lemma 5.1 and 5.3 apply also in this situation.
Appendix
Proof of Lemma 6.1. Since throughout the paper we only require γ small when σ is small, we will prove the lemma for γ = 1/k with k ∈ N. We now assume without loss of generality that
u^{γ+1}(y)/φ^γ(y) − u^{γ+1}(x)/φ^γ(x) ≥ 0.
We first assume that |u(x) − u(y)| ≤ 4|φ(x) − φ(y)|. We need to bound
(10.1) u^{γ+1}(y)/φ^γ(y) − u^{γ+1}(x)/φ^γ(x).
We first notice that the term in (10.1) becomes larger if we assume that u(y) ≥ u(x) and φ(x) ≥ φ(y), without changing |u(x) − u(y)| and |φ(x) − φ(y)|. Furthermore, the term in (10.1) becomes larger still if we set u(y) = φ(y), without changing |u(y) − u(x)|.
We are then looking for the bound
u(y) − u^{1+γ}(x)/φ^γ(x) ≤ c_2 |φ(x) − φ(y)| = c_2 (φ(x) − u(y)).
Thus, writing l = u(y) = φ(y), υ = u(y) − u(x) and µ = φ(x) − φ(y), we need for a constant C the bound
(10.2) l − (l − υ)^{1+γ}/(l + µ)^γ ≤ Cµ.
Recalling that we are assuming 4|φ(x) − φ(y)| ≥ |u(x) − u(y)| (that is, υ ≤ 4µ), the above term is maximized when υ is largest, that is, when υ = 4µ. Now
l − (l − 4µ)^{1+γ}/(l + µ)^γ = l[(l + µ)^γ − (l − 4µ)^γ]/(l + µ)^γ + 4µ(l − 4µ)^γ(l + µ)^{−γ} := L_1 + L_2.
It is clear that L 2 ≤ 4µ. To control L 1 we first consider when l − 4µ ≤ l/2. Then l ≤ 8µ and it is clearly true that L 1 ≤ 8µ. Now when l − 4µ ≥ l/2, from the concavity of x γ we have
L_1 ≤ (l/(l − 4µ)) ((l − 4µ)^γ/(l + µ)^γ) 5µ ≤ 10µ.
Proof. For fixed h > 0,
(d/dx)[(F(x + h) − F(x))/h] = (F′(x + h) − F′(x))/h ≥ 0.
Then for x, h ≥ 0,
(F(x + h) − F(x))/h ≥ (F(0 + h) − F(0))/h = F(h)/h.
Let h = y − x and multiply both sides by y − x; since F(0) = 0, this gives F(y) − F(x) ≥ F(y − x) for y ≥ x ≥ 0.
Proposition 10.2. Let 0 < γ < 1 and let x, d ≥ 0. Then
(x + d)^γ − d^γ ≤ 2^γ x^γ.
Proof. First assume x ≤ d. From the concavity of x γ we have
(x + d)^γ − d^γ ≤ γ d^{γ−1} x = γ d^{γ−1} x^{1−γ} x^γ ≤ γ x^γ.
If on the other hand x > d, then
(x + d)^γ − d^γ ≤ (x + d)^γ ≤ (2x)^γ.
Lemma 3.1. Assume g(a) = 0 and define g(t) := 0 for t < a. Assume relation (1.4) on K. Then
Lemma 3.2. Let u(0) = 0 and assume u ≥ 0. For fixed 0 < ε < 1, let ũ be the extension defined as in Lemma 3.1. Let K satisfy conditions (1.3) and (1.4). Then there exist two constants c_1, c_2 depending only on α such that
Lemma 4.4. Let u be a solution to (4.6) with right hand side f and u_0 both satisfying the exponential bound (4.4). Then
∫_0^T ‖(−∆)^{−σ} u(t, ·)‖^{2+γ}_{W^{(2−2σ)/(2+γ)+2σ, 2+γ}} dt ≤ C,
with the constant C depending only on the exponential bounds in (4.4), n, γ, T.
Lemma 8.1. Let κ be the constant defined in Lemma 5.2. Let u be a solution to (5.2).

u ≤ 1 − µ_1.

Proof. We consider the rescaled function w(t, x) = (u − (1 − λ^4))/λ^4. We fix τ small enough such that
(|x|^τ − 1)_+ λ^4 ≤ (|x|^{σ/4} − 1)_+ and (|t|^τ − 1)_+ λ^4 ≤ (|t|^{σ/4} − 1)_+.
Then w satisfies equation (5.2) with D_2(x) = D_1(λ^4 x + (1 − λ^4)), where D_1 is the coefficient of the equation that u satisfies. From our hypothesis and Lemma 8.1,
|{w > 1/2} ∩ (B_2 × [−2, 0])| = |{u > φ_4} ∩ (B_2 × [−2, 0])| ≤ ρ.
u_{j+1}(t, x) = (1/(1 − µ_2/4)) u_j(M_1 t, M x), with M_1 = (M^{2−2σ}/(1 − µ_2/4))^{1/α}.
Remark 1.2. Our constructions are made via recursion over a finite time interval (0, T_1). Since our constructions are made via recursion, if T_2
Then
(10.3) l − (l − υ)^{1+γ}/(l + µ)^γ ≤ 14µ,
and (6.2) is proven with constant c_2 = 14. We now assume 4|φ(x) − φ(y)| ≤ |u(x) − u(y)|, and the left hand side of (10.2) is maximized again when 4µ = υ, and so we have
which is just (10.3) rewritten with the substitution µ = υ/4. Then
and the right hand side of (6.1) is shown. Now
We suppose γ = 1/k. By factoring we have
Thus, M_1 is the dominant term. We then have from the convexity of x^{γ+1}.
References

[1] Mark Allen, Luis Caffarelli, and Alexis Vasseur, A parabolic problem with a fractional time derivative, preprint available on arxiv.org, 2015.
[2] A. Bernardis, F. J. Martin-Reyes, P. R. Stinga, and J. Torrea, Maximum principles, extension problem and inversion for nonlocal one-sided equations, preprint available on arxiv.org, 2015.
[3] Luis Caffarelli, Chi Hin Chan, and Alexis Vasseur, Regularity theory for parabolic nonlinear integral operators, J. Amer. Math. Soc. 24 (2011), no. 3, 849-869. MR 2784330
[4] Luis Caffarelli, Fernando Soria, and Juan Luis Vázquez, Regularity of solutions of the fractional porous medium flow, J. Eur. Math. Soc. (JEMS) 15 (2013), no. 5, 1701-1746. MR 3082241
[5] Luis Caffarelli and Juan Luis Vázquez, Nonlinear porous medium flow with fractional potential pressure, Arch. Ration. Mech. Anal. 202 (2011), no. 2, 537-565. MR 2847534
[6] Michele Caputo, Diffusion of fluids in porous media with memory, Geothermics 28 (1999), no. 1, 113-130.
[7] D. del Castillo-Negrete, B. A. Carreras, and V. E. Lynch, Fractional diffusion in plasma turbulence, Physics of Plasmas (2004).
[8] D. del Castillo-Negrete, B. A. Carreras, and V. E. Lynch, Nondiffusive transport in plasma turbulence: A fractional diffusion approach, Physical Review Letters (2005).
[9] Eleonora Di Nezza, Giampiero Palatucci, and Enrico Valdinoci, Hitchhiker's guide to the fractional Sobolev spaces, Bull. Sci. Math. 136 (2012), no. 5, 521-573. MR 2944369
[10] Kai Diethelm, The analysis of fractional differential equations: An application-oriented exposition using differential operators of Caputo type, Lecture Notes in Mathematics, vol. 2004, Springer-Verlag, Berlin, 2010. MR 2680847
[11] David Gilbarg and Neil S. Trudinger, Elliptic partial differential equations of second order, Classics in Mathematics, Springer-Verlag, Berlin, 2001. Reprint of the 1998 edition. MR 1814364
[12] Ralf Metzler and Joseph Klafter, The random walk's guide to anomalous diffusion: a fractional dynamics approach, Phys. Rep. 339 (2000), no. 1, 77 pp. MR 1809268
[13] Hans Triebel, Theory of function spaces, Modern Birkhäuser Classics, Birkhäuser/Springer Basel AG, Basel, 2010. Reprint of the 1983 edition. MR 3024598
[14] Juan Luis Vázquez, The porous medium equation: Mathematical theory, Oxford Mathematical Monographs, The Clarendon Press, Oxford University Press, Oxford, 2007. MR 2286292
[15] Rico Zacher, Global strong solvability of a quasilinear subdiffusion problem, J. Evol. Equ. 12 (2012), no. 4, 813-831. MR 3000457
[16] G. M. Zaslavsky, Chaos, fractional kinetics, and anomalous transport, Phys. Rep. 371 (2002), no. 6, 461-580. MR 1937584
[17] Xuhuan Zhou, Weiliang Xiao, and Jiecheng Chen, Fractional porous medium and mean field equations in Besov spaces, Electron. J. Differential Equations (2014), no. 199, 14 pp. MR 3273082
Supersymmetric monopoles of the heterotic string theory associated with an arbitrary non-negative number of the left-moving modes of the string states are presented. They include H monopoles and their T-dual partners, F monopoles (ALE instantons). Massive F = H monopoles are T-self-dual. The solutions also include an infinite tower of generic T-duality-covariant F&H monopoles, non-singular in the stringy frame, with a bottomless-throat geometry. The massless F = -H monopoles are invariant under the combined T duality and charge conjugation converting a monopole into an antimonopole. All F&H monopoles can be promoted to exact supersymmetric solutions of the heterotic string theory since the holonomy group is the compact SO(9). The sigma models for M 8 monopoles, which admit constant complex structures, have enhanced world-sheet supersymmetry: (4,1) in general and (4,4) for the left-right symmetric monopoles. The space-time supersymmetric GS light-cone action in a monopole background is directly convertible into the world-sheet supersymmetric NSR action.
doi: 10.1016/0550-3213(96)00121-6
arXiv: hep-th/9601010
https://arxiv.org/pdf/hep-th/9601010v1.pdf
F & H MONOPOLES
4 Jan 1996
Klaus Behrndt
Renata Kallosh
Institut für Physik
Physics Department
Humboldt-Universität
10115BerlinGermany
Stanford University
94305-4060StanfordCAUSA
Introduction
Supersymmetric gravity has been studied extensively over a number of years. The properties of the electrically charged supersymmetric solutions have been compared with the properties of the states in string theories [1], [2]. The results of such comparisons indicate that many of the electrically charged solutions have an interpretation as string states. Much less is known about the magnetically charged solutions. From the point of view of the low-energy effective actions of supergravity theories, electric as well as magnetic and dyonic configurations come out as solutions of semi-classical non-linear equations. None of these soliton-type solutions is directly and unambiguously related to a linear system of excitations describing the quantum states of the string theory. However, the supersymmetric string theories and electrically charged solutions seem to have a particular knowledge about each other. Predictions about the properties of BPS string states were sometimes obtained in the framework of soliton solutions, and sometimes vice versa. One of the most striking examples was a prediction from the string theory: the massless states with N L = 0 (where N L is the number of left-moving modes) were expected to describe the T-self-dual point of the theory [3]. Indeed, the corresponding "solitons" with vanishing ADM mass were found to be T-self-dual solutions of the supergravity theory [4]. In addition, they were identified with N L = 0 states of the heterotic string theory [4], [5], [6].
The massless magnetic monopoles are also very interesting solutions; however, not much is known about them since the magnetic solutions have not been identified directly with excitations of any kind of linear system. The best information comes from the conjectured S-duality, which tells us that S-dual partners of massless electric solutions exist. The asymptotic form of magnetic solutions of the heterotic string with N L ≥ 1 was found in [1]. The complete multi-center magnetic solutions have been found recently [7] in the form of T-covariant magnetically charged solutions of the heterotic string, defined by 28 harmonic functions. They preserve one half of the spacetime supersymmetry of the heterotic string theory.
The interplay between the space-time supersymmetries and the world-sheet supersymmetries for the BPS states was first studied in connection with the heterotic instantons and solitons by Callan, Harvey and Strominger [8]. The analysis was performed for the five-branes and the related H monopoles [9], [10]. Now a large variety of more general magnetic solutions is available for which the interplay between the space-time and the world-sheet supersymmetries has not yet been studied.
The purpose of this paper is to study the generic class of the monopole solutions of the heterotic string theory. This means that we would like to find the exact ten-dimensional supersymmetric solutions which become monopoles of the four dimensional theory upon dimensional reduction. Some of them are expected to be massive, some massless.
One purpose of such an uplifting of the four-dimensional monopoles was to study the issue of anomaly-related α ′ corrections to these monopoles and the corresponding world-sheet supersymmetric sigma models. Another reason was to understand the massless monopoles better. There was no information available about the behavior of the massless monopoles under T duality. Moreover, a T-self-dual solution in this class is already known [11]: the uplifted a = 1 extreme massive magnetic black holes have this property. It seemed unlikely that both the a = 1 and the massless black holes could be T-self-dual. Thus, we wanted to clarify what happens to the massless monopoles under T duality.
So far three types of monopole solutions of the heterotic string theory with half of unbroken supersymmetry were known to be exact. For all of them the non-trivial part is a 4-dimensional Euclidean manifold or the 10-dimensional manifold with 5 flat spatial directions.
i) The first type, known in the literature as H-monopoles, was first discovered by Khuri [9]. The embedding of the spin connection into the gauge group required the presence of a non-Abelian field in the solution. At the time of their discovery these solutions were interpreted as non-Abelian monopoles. Soon after this work Gauntlett, Harvey and Liu [10] established the relation between these monopole solutions and the five-brane solutions. They also observed that the exact stringy monopoles of [9] are actually not non-Abelian monopoles, since the non-Abelian vector fields fall off faster than a monopole field would. Rather, they are monopoles of a U(1) group resulting from the compactification of the antisymmetric tensor field B, which explains why these solutions are called H-monopoles. The world-sheet supersymmetry of this solution, including the non-Abelian fields, is known to be (4,4), which provides the proof of the absence of α ′ corrections [8], [12]. The relation between H-monopoles and extreme a = √ 3 magnetic black holes was realized in [13].
ii) The second type of known monopoles [14], if treated in the same spirit, has the right to be called F-monopoles. Those solutions are stringy instantons with a constant dilaton, vanishing 3-form H and self-dual curvature in the four-dimensional Euclidean subspace of the five-dimensional Minkowski geometry. They represent the Asymptotically Locally Euclidean (ALE) gravitational instantonic backgrounds coupled to gauge instantons through the so-called "standard embedding". These solutions were found to be T-dual partners of the H-monopoles [14]. The non-Abelian fields are required to be present in the solution for exactness. The relevant non-Abelian fields also fall off as a dipole rather than a monopole field; only the U(1) field F = dA has a magnetic charge. The U(1) field A originates not in the antisymmetric tensor field but in the non-diagonal component of the metric in the uplifted solution. The world-sheet supersymmetry of this solution, including the non-Abelian fields, was found to be (4,4). From the point of view of the four-dimensional geometry these solutions may also be associated with the extreme magnetic a = √ 3 black holes.
iii) The third type of exact magnetic solutions of the heterotic string is the uplifted a = 1 magnetic extreme black holes [15], [16], supplemented by the proper non-Abelian field for exactness [17]. They were called "exact SU(2) × U(1) stringy black holes". Besides one Abelian U(1) vector field they have a non-Abelian SU(2) vector field. These solutions were found to be T-self-dual [11]. In the spirit of naming monopoles after the gauge fields that carry magnetic charges, this solution can be called an F = H monopole. The world-sheet supersymmetry of this solution, including the non-Abelian fields, was found to be (4,1) [17], [18], which is sufficient to prove the absence of α ′ corrections [12].
In short, the first and the second types of monopoles are related by T duality,
\[
(F_{magn} = 0,\; H_{magn}) \;\Longleftarrow T \Longrightarrow\; (F_{magn},\; H_{magn} = 0) \tag{1}
\]
while the third one is T-self-dual,
\[
(F_{magn} = H_{magn}) \;\Longleftarrow T \Longrightarrow\; (H_{magn} = F_{magn}) \tag{2}
\]
This picture of heterotic monopoles is obviously incomplete. One may expect to find supersymmetric monopoles with 6 U(1) fields H and 6 U(1) fields F for the heterotic string compactified on a 6-dimensional torus. These are the solutions which we have found. Since they have both F and H fields with the proper fall-off at infinity, and they interpolate between all three types of monopoles presented above, we call them F & H monopoles. Under S duality our F & H monopoles transform into electrically charged solutions. The electric charge of the F fields originates in the magnetic charge of the H fields and vice versa. In particular, solutions with only electric F fields become H monopoles, and electric solutions with only H fields become F monopoles.
\[
(F_{el},\; H_{el} = 0) \;\Longleftarrow S \Longrightarrow\; (F_{magn} = 0,\; H_{magn})
\]
\[
(F_{el} = 0,\; H_{el}) \;\Longleftarrow S \Longrightarrow\; (F_{magn},\; H_{magn} = 0) \tag{3}
\]
Having these solutions with unbroken supersymmetry in the leading approximation, we may address the problem: which of these solutions are exact? We will find a simple answer: all of them, with the SO(9) gauge group used for the embedding of the spin connection. (For all of those admitting the SO(8) gauge group, enhanced world-sheet supersymmetry takes place.) For all solutions which we will find, the non-Abelian part always falls off faster than a monopole. Far away from the core the SO(9) vector field falls off as V_{YM} ∼ 1/r², and hence the corresponding field strength as ∼ 1/r³, whereas the Abelian field strengths F_{ij} and H_{ij} fall off as ∼ ε_{ijk} x^k/r³. Therefore the name F & H monopoles remains valid for these solutions even after they have been promoted to exact ones. The world-sheet supersymmetry of F & H monopoles will be found to be at least (4,1), which is sufficient to prove the absence of α ′ corrections.
Many new features of the F & H monopoles, with none of F or H vanishing or equal to each other, can be seen already at the level of one F and one H field, i.e. at the level of solutions which are non-trivial only on the Euclidean four-manifold. In particular, in this case we will study the massless monopoles upon uplifting, the issues of T duality, and the structure of the non-Abelian vector fields.
The paper is organized as follows. In Sec. 2 we present the most general solution of the heterotic string theory in ten dimensions known to us which is supersymmetric and becomes a magnetically charged asymptotically flat solution upon dimensional reduction to four dimensions. At this stage we consider only the leading-order string equations and do not study the α ′ corrections; no non-Abelian fields are present. However, we have 6 U(1) fields H and 6 U(1) fields F as promised. In Sec. 3 we study the issue of exactness of the general solution. We calculate the spin connection for the uplifted monopole solution and find how to promote it to an exact one. We find that the holonomy group of the spin connections of the monopole solutions is SO(9). This comes as a nice surprise, since the electric partners of some of our monopoles have a holonomy group in the non-compact part of the Lorentz group [17], [19], and therefore the issue of exactness of these electric solutions is not clear. However, all magnetic solutions are fine and can be made exact by supplementing them with non-Abelian fields. In Sec. 4 we study the world-sheet supersymmetric sigma models. For the most general M 9 monopoles we find (1,1) supersymmetry. To get the extended ones we study the M 8 monopoles and find (4,1) or (4,4) supersymmetry. In Sec. 5 the M 4 monopoles are studied in detail. Finally, Appendix A contains the spin connections and the details about the holonomy group of the monopoles. In Appendix B we focus on subtleties of multi-monopole solutions with more than two centers.
Heterotic monopoles
The leading order heterotic string equations can be derived from the following Lagrangian (we write the 10d fields with a hat):
\[
S \sim \int d^{10}x\, \sqrt{-\hat G}\, e^{-2\hat\phi}\Big[\hat R + 4(\partial\hat\phi)^2 - \tfrac{1}{12}\hat H^2\Big] \tag{4}
\]
This action is the bosonic part of the pure d = 10, N = 1 supergravity. We do not add any Abelian vector fields, which would be responsible for the 16 vector multiplets in the toroidal compactification of the heterotic string on T 6 . From this action we get only 6 vector multiplets in d = 4. The reason for not working from the beginning with Abelian vector multiplets in addition to the gravitational multiplet is that we will need the non-Abelian vector multiplets in d = 10 to keep supersymmetry once quantum corrections are taken into account. Let us start with the solution of this 10-dimensional theory; later we will discuss the corresponding 4-dimensional theory.
A. Solution in D = 10
We assume that the fields depend only on three coordinates (x i ) and denote the internal 6 coordinates by x α . Thus we are looking for the static solutions with isometries in all 6 internal directions. All fields are constructed out of 12 harmonic functions.
The 10d metric is then given by³
\[
ds^2 = -dt^2 + e^{-4U} dx^i dx^i + (dx^\alpha + A^{(1)\alpha}_i dx^i)\, G_{\alpha\beta}\, (dx^\beta + A^{(1)\beta}_j dx^j),
\]
\[
e^{-4U} = 2(|\vec\chi_R|^2 - |\vec\chi_L|^2)\,, \qquad G_{\alpha\beta} = \delta_{\alpha\beta} - \frac{2\chi_{L(\alpha}\chi_{R\beta)}}{|\vec\chi_R|^2 + (\vec\chi_R \vec\chi_L)} \tag{5}
\]
where \vec\chi_R and \vec\chi_L define the 12-dimensional harmonic O(6,6) vector
\[
\chi(x) = \begin{pmatrix} \vec\chi_L(x) \\ \vec\chi_R(x) \end{pmatrix}, \qquad \partial_i \partial_i \chi(x) = 0\,. \tag{6}
\]
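As an illustration (ours, not part of the paper), one can check numerically that the one-center building blocks used below, a constant plus a 1/|x| pole, satisfy the harmonicity condition ∂_i∂_i χ = 0 of eq. (6) away from their centers. The function `chi`, its parameters, and the finite-difference step are illustrative choices:

```python
import numpy as np

def chi(x, c=1.0, p=0.5, x0=np.zeros(3)):
    # One-center harmonic function of the type entering eq. (6):
    # a constant asymptotic value plus a monopole term p/(2|x - x0|).
    return c + p / (2.0 * np.linalg.norm(x - x0))

def laplacian(f, x, h=1e-3):
    # Central-difference Laplacian  sum_i d^2 f / dx_i^2.
    lap = 0.0
    for i in range(3):
        e = np.zeros(3); e[i] = h
        lap += (f(x + e) - 2.0 * f(x) + f(x - e)) / h**2
    return lap

x = np.array([0.7, 0.5, 0.3])       # any point away from the pole at x0
assert abs(laplacian(chi, x)) < 1e-4
print("chi is harmonic away from its center")
```

Multi-center superpositions of such poles remain harmonic, which is what allows the multi-monopole solutions of Appendix B.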
For the dilaton we find
\[
e^{-2\phi} = e^{2U}\, \frac{1}{\sqrt{\det G}} = \sqrt{2}\, e^{4U}\, \frac{|\vec\chi_R|^2 + (\vec\chi_R \vec\chi_L)}{|\vec\chi_R|} \tag{7}
\]
³ We use the notation \chi_{L(\alpha}\chi_{R\beta)} = \frac12(\chi_{L\alpha}\chi_{R\beta} + \chi_{L\beta}\chi_{R\alpha}) and \chi_{L[\alpha}\chi_{R\beta]} = \frac12(\chi_{L\alpha}\chi_{R\beta} - \chi_{L\beta}\chi_{R\alpha}).
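As a numerical sanity check (ours, not the paper's), the two expressions for the dilaton in eq. (7) can be compared directly: for any χ_L, χ_R with |χ_R|² > |χ_L|², the determinant of the G_{αβ} of eq. (5) works out so that e^{2U}/√det G equals the closed form √2 e^{4U}(|χ_R|² + χ_R·χ_L)/|χ_R|. The sample values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
chi_L = 0.3 * rng.standard_normal(6)              # illustrative magnetic potentials
chi_R = chi_L + np.array([1.0, 0, 0, 0, 0, 0])    # ensures |chi_R|^2 > |chi_L|^2 here

l2, r2, s = chi_L @ chi_L, chi_R @ chi_R, chi_L @ chi_R
D = r2 + s

# Internal metric G_{ab} of eq. (5): delta - 2 chi_{L(a} chi_{R b)} / D
G = np.eye(6) - (np.outer(chi_L, chi_R) + np.outer(chi_R, chi_L)) / D

e4U = 1.0 / (2.0 * (r2 - l2))                     # e^{4U} from eq. (5)

# The two forms of the dilaton in eq. (7):
lhs = np.sqrt(e4U) / np.sqrt(np.linalg.det(G))    # e^{2U} / sqrt(det G)
rhs = np.sqrt(2.0) * e4U * D / np.sqrt(r2)        # sqrt(2) e^{4U} (r2 + s) / |chi_R|
assert np.isclose(lhs, rhs)
print("eq. (7): closed form agrees with e^{2U}/sqrt(det G)")
```

Behind this check is the rank-2 determinant identity det G = |χ_R|²(|χ_R|² − |χ_L|²)/D², which the reader can also verify symbolically.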
The 10d antisymmetric tensor components are given by⁴
\[
\tilde B_{\alpha\beta} = \frac{2\chi_{L[\alpha}\chi_{R\beta]}}{|\vec\chi_R|^2 + (\vec\chi_R \vec\chi_L)}\,, \qquad B_{\alpha\mu} = A^{(2)}_{\mu\alpha} + \tilde B_{\alpha\beta} A^{(1)\beta}_\mu \tag{8}
\]
Both Kaluza-Klein gauge fields are purely magnetic and given by
\[
\begin{pmatrix} F^{(1)}_{ij} \\ F^{(2)}_{ij} \end{pmatrix} = \sqrt{2}\,\epsilon_{ijm}\,\partial_m \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} \vec\chi_L \\ \vec\chi_R \end{pmatrix} = \sqrt{2}\,\epsilon_{ijm}\,\partial_m \begin{pmatrix} \vec\chi_L + \vec\chi_R \\ -\vec\chi_L + \vec\chi_R \end{pmatrix} \tag{9}
\]
The first six vector field strengths
\[
F^{(1)\alpha}_{ij} = \partial_i A^{(1)\alpha}_j - \partial_j A^{(1)\alpha}_i \equiv F \tag{10}
\]
are built out of the non-diagonal component of the metric, g_{i\alpha} = A^{(1)\alpha}_i. When this magnetic field is present in the solution we will call it the F part of the monopole solution. The second set of magnetic vector fields is based on the antisymmetric tensor fields
\[
F^{(2)}_{ij\,\alpha} = \partial_i A^{(2)}_{j\,\alpha} - \partial_j A^{(2)}_{i\,\alpha} \equiv H\,. \tag{11}
\]
In the simplest case, when the \tilde B_{\alpha\beta} or A^{(1)\alpha}_i fields are absent,
\[
F^{(2)}_{ij\,\alpha} = \partial_i \tilde B_{\alpha j} - \partial_j \tilde B_{\alpha i}\,, \tag{12}
\]
as follows from eq. (8). Thus we will call the non-vanishing magnetic charges in the F^{(2)}_{ij} sector the H part of the solution.
All solutions above have one half of the d = 10, N = 1 supersymmetry unbroken. This follows from the fact that we have found these solutions by performing the supersymmetric uplifting of the BPS solutions of the d = 4, N = 4 theory [7]. For the asymptotically flat configurations which we adopt in this paper, the supersymmetric uplifting can be performed using the procedure and notation of Maharana-Schwarz [20] as developed by Sen [1]. If one were interested in configurations which are not asymptotically flat, one would have to switch to the more general case of dimensional reduction and use the work by Chamseddine [21]. This would give the correct assignment of the fields to the various supersymmetric multiplets.
As was already explained, we call the solutions (5) F & H monopoles because they become magnetically charged solutions with two sets of vector fields, F and H, upon dimensional reduction. Specifically, the O(6,6)-invariant bosonic action in the form of Maharana-Schwarz [20] and Sen [1] is
\[
S = \frac{1}{16\pi}\int d^4x\, \sqrt{-\det G}\, e^{-2\phi}\Big[ R_G + 4\,G^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi + \frac{1}{8}\, G^{\mu\nu}\,\mathrm{Tr}(\partial_\mu ML\,\partial_\nu ML) - \frac{1}{12}(H_{\mu\nu\rho})^2 - \frac{1}{4}\, G^{\mu\mu'}G^{\nu\nu'} F^a_{\mu\nu}(LML)_{ab}F^b_{\mu'\nu'} \Big]\,. \tag{13}
\]
This is the bosonic part of N = 4 supergravity interacting with six N = 4 vector multiplets. There is one vector field in each vector supermultiplet and 6 vector fields in the supergravity multiplet. We are looking for solutions in which all vector fields are magnetic and there are no axions, H_{\mu\nu\rho} = 0. The magnetic potentials for the 6 graviphotons are \chi_{R\alpha} and the magnetic potentials for the six vector multiplets are \chi_{L\alpha}. They are all harmonic functions, as shown in eq. (6).
B. Solution in D = 4
The four-dimensional supersymmetric solution corresponding to the uplifted supersymmetric solution in eq. (5) is given by [7]
\[
ds^2_{str} = -dt^2 + e^{-4U} d\vec x^{\,2}\,, \qquad e^{-4U} = 2\,\chi^T L \chi = e^{4\phi} = 2(|\vec\chi_R|^2 - |\vec\chi_L|^2)\,,
\]
\[
M = \mathbf{1}_{12} + 4\,e^{4U} \begin{pmatrix} \chi_{L\alpha}\chi_{L\beta} & \chi_{L\alpha}\chi_{R\beta} \\ \chi_{R\alpha}\chi_{L\beta} & \xi\,\chi_{R\alpha}\chi_{R\beta} \end{pmatrix}, \qquad \begin{pmatrix} F^{(L)}_{ij} \\ F^{(R)}_{ij} \end{pmatrix} = 2\,\epsilon_{ijm}\,\partial_m \begin{pmatrix} \vec\chi_L \\ \vec\chi_R \end{pmatrix} \tag{14}
\]
where \xi = |\vec\chi_L|^2 / |\vec\chi_R|^2. The canonical four-dimensional metric is
\[
ds^2_{can} = -e^{2U} dt^2 + e^{-2U} d\vec x^{\,2}\,. \tag{15}
\]
The one-center solution can be taken in the simplest form
\[
\vec\chi_L = \frac{\vec P_{vec}}{2|\vec x|}\,, \qquad \vec\chi_R = \frac{\vec n}{\sqrt{2}} + \frac{\vec P_{gr}}{2|\vec x|}\,, \qquad \vec n^{\,2} = 1\,, \quad \vec n \cdot \vec P_{gr} \ge 0 \tag{16}
\]
For this spherically symmetric solution |\vec x| = \sqrt{(x^1)^2 + (x^2)^2 + (x^3)^2} = r. It may be useful to rewrite the expressions for F and H explicitly in terms of the magnetic charges.
\[
\begin{pmatrix} F_{ij} \\ H_{ij} \end{pmatrix} = \frac{1}{\sqrt{2}}\,\epsilon_{ijm}\,\frac{x^m}{|\vec x|^3}\begin{pmatrix} \vec P_{gr} + \vec P_{vec} \\ \vec P_{gr} - \vec P_{vec} \end{pmatrix} = \epsilon_{ijm}\,\frac{x^m}{|\vec x|^3}\begin{pmatrix} \vec P_F \\ \vec P_H \end{pmatrix} \tag{17}
\]
where we have defined
\[
\vec P_F \equiv \tfrac{1}{\sqrt{2}}\,(\vec P_{gr} + \vec P_{vec}) \tag{18}
\]
\[
\vec P_H \equiv \tfrac{1}{\sqrt{2}}\,(\vec P_{gr} - \vec P_{vec})\,. \tag{19}
\]
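From the definitions (18), (19) one gets, by direct expansion, the identity relating the two charge bases (a one-line check we spell out since it is used repeatedly below, e.g. in eq. (27)):

```latex
\vec P_F \cdot \vec P_H
 = \tfrac12\,(\vec P_{gr} + \vec P_{vec})\cdot(\vec P_{gr} - \vec P_{vec})
 = \tfrac12\,(\vec P^{\,2}_{gr} - \vec P^{\,2}_{vec})\,,
\qquad
\vec P_F^{\,2} + \vec P_H^{\,2} = \vec P^{\,2}_{gr} + \vec P^{\,2}_{vec}\,.
```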
Using S duality [1], [7] one can convert the monopole solutions into electrically charged ones. The electric solution is given by the following formula:
\[
ds^2_{str} = -e^{4U} dt^2 + d\vec x^{\,2}\,, \qquad e^{4U} = e^{4\phi}\,, \qquad E^{(a)}_i = \tfrac12\, e^{4U} (ML)_{ab}\, \partial_i \chi^b\,, \tag{20}
\]
where U and M are defined in eq. (14) and E^{(a)}_i = F^{(a)}_{ti} is the electric field. One can see from this formula that the reason why the vector multiplet charge changes sign under S duality is the following. The asymptotic value of the matrix ML is
\[
ML = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}. \tag{21}
\]
Therefore the upper part, related to the vector multiplets \vec\chi_L, changes sign, whereas the lower part, related to \vec\chi_R, does not. It follows that the magnetic charges of the graviphotons become the electric charges of the graviphotons,
\[
\vec P_{gr} \;\Longleftarrow S \Longrightarrow\; \vec Q_{gr} \tag{22}
\]
and the magnetic charges of the vector multiplets become electric charges of the vector multiplets with the opposite sign,
\[
\vec P_{vec} \;\Longleftarrow S \Longrightarrow\; -\vec Q_{vec} \tag{23}
\]
Thus S duality trades F for H fields and vice versa:
\[
F_{magn} \;\Longleftarrow S \Longrightarrow\; H_{el} \tag{24}
\]
\[
H_{magn} \;\Longleftarrow S \Longrightarrow\; F_{el} \tag{25}
\]
We have presented in [7] the classification of the monopoles via the classification of their S-dual electric partners, for which the relation to the elementary string excitations is available [1], [2]. Since all solutions which we consider are supersymmetric, the right-moving oscillator modes have N_R = 1/2. For the left-moving part we obtain
\[
N_L - 1 = \tfrac12\,(\vec Q^{\,2}_{gr} - \vec Q^{\,2}_{vec})\,, \qquad M^2 = \tfrac12\, \vec Q^{\,2}_{gr}\,. \tag{26}
\]
The electric solution (20) describes the following states:
1) N_L = 0: massive and massless white holes;
2) N_L = 1: extremal a = √3 black holes;
3) N_L ≥ 2: a discrete set of extremal black holes (for M² = N_L − 1 they reduce to a = 1 black holes).
Our magnetic configurations (5) can also be associated with various values of N_L via the relation (we consider the one-center solution here)
\[
N_L - 1 = \tfrac12\,(\vec Q^{\,2}_{gr} - \vec Q^{\,2}_{vec}) = \tfrac12\,(\vec P^{\,2}_{gr} - \vec P^{\,2}_{vec}) = (\vec P_F \cdot \vec P_H)\,, \qquad M^2 = \tfrac12\, \vec P^{\,2}_{gr}\,. \tag{27}
\]
1) N_L = 0: massive and massless monopoles,
\[
\vec P_F \cdot \vec P_H = -1\,, \qquad \vec P^{\,2}_{gr} - \vec P^{\,2}_{vec} = -2\,, \tag{28}
\]
where the massless limit is given by
\[
\vec P_F + \vec P_H = 0\,, \qquad \vec P_{gr} = 0\,, \qquad \vec P^{\,2}_{vec} = 2\,. \tag{29}
\]
2) N_L = 1: monopoles with
\[
\vec P_F \cdot \vec P_H = 0\,, \qquad \vec P^{\,2}_{gr} = \vec P^{\,2}_{vec}\,. \tag{30}
\]
Obviously H monopoles satisfy this constraint, since for them F = 0. However, any solution with \vec F \cdot \vec H = 0 also represents an N_L = 1 state. In particular, H = 0, or any solution with non-vanishing but orthogonal F and H, also belongs to the N_L = 1 class.
3) N_L ≥ 2: monopoles with
\[
\vec P_F \cdot \vec P_H \ge 1\,. \tag{31}
\]
In the special case M² = N_L − 1 they reduce to a = 1 extreme magnetic black holes with
\[
\vec P_F - \vec P_H = 0\,, \qquad \vec P_{vec} = 0\,. \tag{32}
\]
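The three cases above can be illustrated with a small script (ours; the charge vectors are made up, and `NL` simply evaluates eq. (27)):

```python
import numpy as np

def NL(P_gr, P_vec):
    # N_L from eq. (27):  N_L - 1 = (P_gr^2 - P_vec^2)/2 = P_F . P_H
    return 1.0 + 0.5 * (P_gr @ P_gr - P_vec @ P_vec)

def PF_PH(P_gr, P_vec):
    # Charge bases of eqs. (18), (19).
    return (P_gr + P_vec) / np.sqrt(2.0), (P_gr - P_vec) / np.sqrt(2.0)

e = np.eye(6)

# massless N_L = 0 monopole, eq. (29): P_gr = 0, P_vec^2 = 2
assert np.isclose(NL(0.0 * e[0], np.sqrt(2.0) * e[0]), 0.0)

# H monopole (F = 0, i.e. P_vec = -P_gr): an N_L = 1 state, eq. (30)
PF, PH = PF_PH(e[0], -e[0])
assert np.allclose(PF, 0.0) and np.isclose(NL(e[0], -e[0]), 1.0)

# a = 1 monopole, eq. (32): P_vec = 0, so N_L - 1 = P_gr^2/2 = M^2
assert np.isclose(NL(2.0 * e[0], 0.0 * e[0]), 3.0)
print("charge classification consistent with eqs. (28)-(32)")
```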
A remarkable feature of all F & H monopoles with N_L ≥ 2 was observed in [7]: in the stringy frame the four-dimensional geometry is completely non-singular. For the metric at r → 0 we get
\[
ds^2_{str} = -dt^2 + 2(|\vec\chi_R|^2 - |\vec\chi_L|^2)\, d\vec x^{\,2} \;\to\; -dt^2 + d\rho^2 + (\vec P_F \cdot \vec P_H)\, d^2\Omega \tag{33}
\]
where d\rho = \sqrt{\vec P_F \cdot \vec P_H}\; dr/r. Hence, in this limit the 4-dimensional solution is given by a bottomless throat (M² × S²), with the radius squared given by the scalar product of the two charge vectors. By identification with the string excitations we find that the radius squared has to be quantized, \vec P_F \cdot \vec P_H = N_L − 1 ≥ 1, and we get the limiting metric in the form
\[
ds^2_{str} \to -dt^2 + d\rho^2 + (N_L - 1)\, d^2\Omega \tag{34}
\]
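The throat form of eqs. (33), (34) follows from the small-r behavior of the conformal factor; with the one-center functions (16) one finds (a one-step check we spell out):

```latex
e^{-4U} = 2\,(|\vec\chi_R|^2 - |\vec\chi_L|^2)
 \;\xrightarrow{\;r\to 0\;}\;
 \frac{\vec P^{\,2}_{gr} - \vec P^{\,2}_{vec}}{2r^2}
 = \frac{\vec P_F\cdot\vec P_H}{r^2}\,,
\qquad
e^{-4U}\, d\vec x^{\,2} \;\to\; (\vec P_F\cdot\vec P_H)\Big(\frac{dr^2}{r^2} + d^2\Omega\Big)\,,
```

so the change of variable dρ² = (P_F·P_H) dr²/r² turns the spatial part into dρ² + (P_F·P_H) d²Ω, a cylinder over a sphere of fixed radius.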
Obviously, for any N_L = 1 (\vec P_F \cdot \vec P_H = 0) excitation the radius of the throat vanishes (singularity), and for N_L = 0 the throat shrinks to zero already at finite r = r_c > 0. The expression for the scalar curvature was calculated in [7]. Using our new notation, which focuses on the presence of the F and H charges defined in eqs. (18), (19), we have
\[
R_{str} = \frac{2\,(4\vec P_F \cdot \vec P_H)^2 - 8M(4\vec P_F \cdot \vec P_H)\,r + 4\big(6M^2 - (4\vec P_F \cdot \vec P_H)\big)\,r^2}{\big[(4\vec P_F \cdot \vec P_H) + 4Mr + r^2\big]^3}\,. \tag{35}
\]
Looking at the denominator of this expression, one can see that the solution is non-singular for all positive \vec P_F \cdot \vec P_H. In terms of dual string states this scalar product has to be a positive integer, (\vec P_F \cdot \vec P_H) = (N_L − 1) for N_L ≥ 2. The maximum curvature for these solutions is reached at r = 0 and equals
\[
R_{str}(0) = \frac{1}{2(N_L - 1)}\,. \tag{36}
\]
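Equation (36) can be checked directly from (35): at r = 0 the numerator is 2(4 P_F·P_H)² and the denominator (4 P_F·P_H)³, giving 1/(2 P_F·P_H), independently of the mass. A quick numerical sketch (ours) of this check:

```python
def R_str(r, X, M):
    # Scalar curvature of eq. (35); X stands for P_F . P_H.
    num = 2.0 * (4.0 * X) ** 2 - 8.0 * M * (4.0 * X) * r \
          + 4.0 * (6.0 * M ** 2 - 4.0 * X) * r ** 2
    den = ((4.0 * X) + 4.0 * M * r + r ** 2) ** 3
    return num / den

for n_left in (2, 5, 10):
    X = float(n_left - 1)            # quantization  P_F . P_H = N_L - 1
    for M in (0.0, 1.3):             # the r = 0 value is independent of the mass
        assert abs(R_str(0.0, X, M) - 1.0 / (2.0 * (n_left - 1))) < 1e-12
print("R_str(0) = 1/(2(N_L - 1)), as in eq. (36)")
```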
Stringy α ′ corrections to F & H monopoles
There is a strong belief that supersymmetry established at the classical level will be preserved when quantum corrections are taken into account in the absence of anomalies. However, when anomalies are present, the preservation of supersymmetry and the issue of BPS states are in general not clear. In some particular situations one can study the quantum corrections to the supersymmetry transformations which are due to anomalies. This has been worked out specifically for the anomaly-related α ′ corrections in the heterotic string theory. The relevant supplement to the 10-dimensional action (4) includes the Yang-Mills field and the generalized curvature coupling (with torsion)
\[
\alpha' \big[\mathrm{Tr}(R_-)^2 - \mathrm{Tr}\,F^2\big] \tag{37}
\]
where the trace in \mathrm{Tr}(R_-)^2 is over the non-compact Lorentz group SO(1,9) and the one in the Yang-Mills term \mathrm{Tr}\,F^2 is over the compact SO(32) or E_8 × E_8. The BPS configuration which solves the leading-order equations supplies the information about the generalized curvature, which gives the first term in eq. (37). When the generalized ten-dimensional curvature R_{-AB}{}^{CD} has a non-vanishing value in the non-compact direction of the Lorentz group, i.e.
\[
R_{-AB\,0D} \neq 0\,, \tag{38}
\]
there is no possibility to use the standard procedure of embedding the spin connection into the gauge group. However, when
\[
R_{-AB\,0D} = 0\,, \tag{39}
\]
the spin connection can have at most the holonomy group SO(9), which is compact. In such a situation one can use the standard procedure of embedding the spin connection into the gauge group. The advantages of this are:
1. The anomaly-related α ′ corrections to the equations of motion of the effective theory, to the action, and to the space-time supersymmetry rules vanish. A detailed description of the procedure can be found in [22].
2. In terms of the world-sheet supersymmetry, embedding the spin connection into the gauge group means the following: one can start with the (1,0) supersymmetric model and enhance this supersymmetry to (1,1) world-sheet supersymmetry. This is believed to be necessary for avoiding chiral anomalies in the left-right asymmetric case. The details of this procedure can be found in [8].
The procedure of correcting the supersymmetric solutions via embedding the spin connection into the gauge group was applied before to many BPS solutions, starting with the symmetric version of the five-brane [8]. The same procedure was also applied to H monopoles [9], to F monopoles [14] and to T-self-dual monopoles [17]. Typically, for all these solutions the Yang-Mills field to be added was part of an SO(4) gauge theory. In application to gravitational waves [22] and to the generalized fundamental strings [23] the corresponding gauge group was found to be SO(8).
It was known, however, that the uplifted electrically charged a = 1 black holes have a non-compact holonomy group of the generalized spin connections [17]. In more general models, the chiral null models of Horowitz and Tseytlin [19], the holonomy group of the generalized spin connections is also non-compact [19], since it includes the non-compact Abelian subgroup of the Lorentz group. For these solutions the possibility of restoring the unbroken supersymmetry of the classical solution in the presence of α ′ corrections is not clear.
For the generic F & H monopoles the non-Abelian group was not known before, since to find the gauge field one has to calculate the spin connection with account of the torsion. We will first study the general solution and find that the holonomy group of the generalized spin connection is the compact group SO(9). This solution has a non-trivial metric in all 9 directions but time; therefore we will sometimes call it M 9. We will describe this most general solution and the corresponding (1,1) supersymmetric sigma model. Afterwards we will focus on a slightly less general solution which can be embedded into SO(8) and whose geometry is non-trivial on an M 8 Euclidean manifold. For this solution we will study the extended supersymmetries on the world-sheet. The reason is that to enhance the single world-sheet supersymmetry (1,0) to (2,0) one has to consider a manifold of even dimension, and to enhance it to (4,0) one needs a manifold of dimension 4n with integer n.
Finally we will perform a detailed study of F & H monopoles, taking into account the α ′ corrections, in the simplest case of M 4 monopoles, which have a non-trivial geometry in the four-dimensional Euclidean space. The relevant gauge group will again be SO(4).
In this section we introduce a set of notations suitable for dealing with vielbeins and spin connections. We will use the base manifold coordinates on M 10 and denote them x^M = \{t;\; x^i = (x^1, x^2, x^3);\; x^\alpha = (x^4, \ldots, x^9)\}.
The tangent space will be introduced via the zehnbeins
\[
\hat E^A = \hat E^A{}_M\, dx^M\,, \qquad \hat E^A = \{e^0;\; e^i;\; E^a\}\,, \qquad i = 1, 2, 3;\;\; a = 4, \ldots, 9 \tag{40}
\]
The uplifted monopole metric (5) can be rewritten in the form:
\[
ds^2 = -(e^0)^2 + (e^i)^2 + (E^a)^2 = \hat E^A \eta_{AB} \hat E^B \tag{41}
\]
where
\[
e^0 = dt \tag{42}
\]
\[
e^i = e^{-2U}\, dx^i \tag{43}
\]
\[
E^a = E^a{}_\alpha\, (dx^\alpha + A^{(1)\alpha}_i dx^i) \tag{44}
\]
and the tangent space metric is \eta_{AB} = \{-1, +1, \ldots, +1\}. The explicit expressions for the six-dimensional vielbein E^a{}_\alpha and its inverse E^\alpha{}_a are defined in terms of our 12 harmonic functions and can be found in Appendix A. Thus it is clear from eq. (42) that the most general monopole solution in this class has a non-trivial nine-dimensional Euclidean manifold M 9. The spin connection one-form will be defined as
\[
W^{AB} \equiv W^{AB}{}_{,C}\, \hat E^C \tag{45}
\]
with the standard definition of W^{AB}{}_{,C} in terms of the zehnbeins and their derivatives. To build the generalized spin connections we also need the tangent-space 3-form H_{ABC}.
The objects of interest are the torsionful spin connections
\[
\Omega_{\pm AB} = (W_{AB,C} \pm H_{ABC})\, E^C \equiv W_{AB} \pm H_{AB} \tag{46}
\]
Our tangent space group is the Lorentz group SO(1,9). For the monopole solutions the boost components of the torsionful connections vanish, while the rotation components do not:
\[
\Omega_{\pm 0I} = 0\,, \qquad \Omega_{\pm IJ} \neq 0\,. \tag{48}
\]
The holonomy algebra of the generalized spin connections is the algebra generated by the M_{IJ}, and the holonomy group is SO(9). We have performed the explicit calculation of the metric part of the spin connections, adapting to our case the well-known formulas of Scherk and Schwarz [24]. The SO(9) Yang-Mills one-form field is given by the non-vanishing components of the \Omega_{-IJ} spin connection:
\[
V_{IJ} = V_{IJ,K}\, E^K = \Omega_{-IJ}
\]
The details of the calculation and the expression for Ω ±IJ can be found in the Appendix A. The net result for the Yang-Mills vector field is
V_{IJ,0} = 0 ,   V_{IJ,K} = W_{IJ,K} − H_{IJK}
The first two indices are the indices of the SO(9) gauge group and the third one is a space-time index (in the tangent frame). The zero space-time component of the tangent space vector field vanishes for all solutions: this means that the non-Abelian field is also of a magnetic nature. It is therefore tempting to find out if the SO(9) field V IJ,K carries any magnetic charge. For this purpose we note that far from the core of the monopole (we study the one-center solution here) the vielbeins behave as
E^A_M ∼ c + c_1/r + c_2/r^2 + . . .
Therefore the metric spin connections, which depend on the vielbeins and their derivatives, behave as W_{IJ,K} ∼ d_1/r^2 + d_2/r^3 + . . . The same large-distance behavior can be observed for the torsion part of the spin connection. Indeed, the curved space 3-form consists of the derivative of a 2-form field which behaves as
B_{MN} ∼ f + f_1/r + f_2/r^2 + . . .
Thus all F & H monopoles have a non-Abelian vector field which comes from the embedding of the spin connection into the gauge group SO(9) and at large distances falls off as
V_{IJ,K} ∼ a_2/r^2 + a_3/r^3 + . . .
The Yang-Mills field strength would fall off as 1/r^3. The situation is exactly the same as described in [10] for the H monopoles: there is no magnetic charge associated with the non-Abelian part of the solution; we have only the F & H magnetic charges associated with the Abelian vector fields, which we have discussed above.
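The contrast between a 1/r^2 Abelian field strength, which integrates to a constant flux (a magnetic charge), and the 1/r^3 non-Abelian falloff, which does not, can be illustrated with a small numerical sketch; the radial profiles below are illustrative stand-ins, not the actual monopole fields:

```python
import math

def flux_through_sphere(B_radial, r, n=400):
    """Flux of a spherically symmetric radial field through a sphere of radius r,
    by midpoint quadrature in the polar angle (the azimuthal integral is trivial)."""
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * math.pi / n
        total += B_radial(r) * math.sin(theta) * (math.pi / n) * 2 * math.pi * r ** 2
    return total

# Abelian (Dirac-monopole-like) field strength ~ 1/r^2: flux is r-independent, a charge.
abelian = lambda r: 1.0 / r ** 2
# Non-Abelian field strength falling off as 1/r^3: the flux decays like 1/r -> no charge.
non_abelian = lambda r: 1.0 / r ** 3

for r in (1.0, 10.0, 100.0):
    print(r, flux_through_sphere(abelian, r), flux_through_sphere(non_abelian, r))
```

For the 1/r^2 profile the integral reproduces 4π at every radius, while for the 1/r^3 profile it tends to zero, mirroring the statement that the non-Abelian part carries no magnetic charge.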
Since we have established that all F & H monopoles can be supplemented by the non-Abelian SO(9) field via spin embedding one can address the issue of the world-sheet supersymmetry of the heterotic string theory in the generic monopole background. After the four dimensional monopole solutions have been interpreted as solutions with unbroken supersymmetries of the effective action of the heterotic string theory in critical dimension, one can construct a supersymmetric sigma model in an uplifted monopole target space. The details for the special case of the uplifted a = 1 black holes can be found in [18]. The most general monopole solutions with SO(9) non-Abelian gauge field defined by 12 magnetic charges suggest the following (1, 1) supersymmetric sigma model [12].
I (1,1) = d 2 zd 2 θ(G IJ + B IJ )D + X I D − X J ,(49)
where the unconstrained (1, 1) superfield is given by
X I (x I , θ + , θ − ) = x I (z) + θ + λ + I (z)E I I (x) − θ − λ − I (z)E I I (x) − θ + θ − F I (z) .(50)
This theory is defined in the Euclidean 9 dimensional manifold M 9 given by the components of the 9 × 9 sector of the monopole solution (5), (8).
To find out the class of monopole solutions for which the extended supersymmetry can be established we will limit ourselves to the case where one of the directions in the internal 6-manifold is flat. Let χ^9_R = χ^9_L = 0. This solution is the one defined in eq. (5) but characterized by ten harmonic functions instead of twelve. This monopole solution has very special properties. The non-trivial background is in M 8 only
G_{MN} dx^M dx^N = −dt^2 + (dx^9)^2 + e^{−4U} dx^i dx^i + Σ_{α,β=4}^{8} (dx^α + A^{(1)α}_i dx^i) G_{αβ} (dx^β + A^{(1)β}_i dx^i)   (51)
The 10d antisymmetric tensor components B M N = (B αβ , B iα ) , α = 4, . . . 8 are given in eq. (8).
We may rewrite the geometry (51) of the M 8 monopoles as a function of ten harmonic functions. To prove the equivalence of the Green-Schwarz and Ramond-Neveu-Schwarz formulations in this background we will study the conditions of such equivalence, as given by Hull in [25].
ds^2 = du dv + G_{IJ}(χ^α_L , χ^α_R) dx^I dx^J ,   I, J = 1, . . . , 8 ,   u = −t + x^9 ,  v = t + x^9   (52)
B_{IJ} = B_{IJ}(χ^α_L , χ^α_R) ,   α = 4, . . . , 8
i) The non-trivial part of the background in the stringy frame has to describe an 8 dimensional Euclidean manifold
ii) The background has to admit a sufficient number of supercovariantly constant Killing spinors. This allows one to construct at least 3 almost complex structures in the right- and/or left-moving sectors of the theory in which Killing spinors exist.
The equivalence theorem proved by Hull [25], and also the study of supersymmetric sigma models by Hull and Witten [26] and Howe and Papadopoulos [12], indicate that for backgrounds for which, in addition, iii) the Nijenhuis tensor vanishes, one may find an enhancement of supersymmetry from 1 up to 4 in each of the right- or left-moving sectors of the theory where these complex structures have been found.
All three conditions are met for M 8 monopoles. We will find that the world-sheet action in the M 8 monopole background has an extended (4, 4) supersymmetry for all solutions with N L = 1 and (4, 1) for the rest. We may again use the (1, 1) action as given in eq. (49), however now I, J = 1, . . . , 8. Upon integration over the fermionic variables and elimination of the auxiliary fields we get
S = ∫ dτ dσ [ (G_{IJ} + B_{IJ}) ∂_z x^I ∂_z̄ x^J + i λ_+^I (∇_z^{(+)} λ_+)_I − i λ_−^I (∇_z̄^{(−)} λ_−)_I − (1/4) R^{(+)}_{IJ,KL} λ_+^I λ_+^J λ_−^K λ_−^L + (1/4) R^{(−)}_{IJ,KL} λ_−^I λ_−^J λ_+^K λ_+^L ] .   (53)
The right(left)-handed fermions λ_+^I (λ_−^I) have covariant derivatives with respect to the torsionful spin connections Ω_+ (Ω_−). The torsionful curvatures R_± = dΩ_± + Ω_± ∧ Ω_± have the exchange property (54). This action admits further extended supersymmetries for our monopole solutions. Indeed, one can find a set of three almost complex structures by building them in terms of bilinear combinations of Killing spinors of our background. The corresponding commuting normalized spinors are α^ṗ , β^{(m)ṗ} , m ≤ 7, where ṗ is the spinorial index of SO(8). The complex structure J_{KL} with all required properties was found in [25] to be
J^{(m)}_{KL} = β^{(m)}_ṗ σ^{ṗq̇}_{KL} α_q̇ .   (55)
Although the number of such almost complex structures can be as large as 7, we only need to find at least two, the third one being defined by the other two. If there are more than three, the target manifold is reducible. The counting of complex structures proceeds as follows. For solutions with N L = 1, for which the square of the right-handed magnetic charge equals the square of the left-handed magnetic charge, the theory can be embedded into the type II superstring theory (or into N = 8 supergravity at the level of the effective four-dimensional action). This solution has left-right symmetry and therefore it has one half of unbroken supersymmetry in both left- and right-handed spinors, i.e. for our P^F · P^H = 0 monopoles we have a double set of almost complex structures. Thus for N L = 1 monopoles one can expect the enhancement of world-sheet supersymmetry from the manifest (1, 1) in eq. (49) up to (4, 4), if all necessary properties of the complex structures can be established.
For the infinite tower of other solutions with N L = 0, 2, 3, . . . , n the left-right symmetry is broken, since P^2_R = P^2_L + 2(N_L − 1) and therefore P^2_R ≠ P^2_L. The space-time Killing spinors exist only in the right-moving sector; the left-moving one does not have unbroken supersymmetries. These solutions have one half of unbroken supersymmetry of the heterotic string (and only one quarter of the type II string). Therefore one can construct the complex structures out of space-time Killing spinors according to Hull's prescription (55) only for the right-moving modes. Therefore, for these solutions the expected world-sheet supersymmetry enhancement is going to be (4, 1), under the condition that the algebra of these extended supersymmetries closes. Now that we have established that we do have enough almost complex structures, since we deal with configurations with unbroken space-time supersymmetries, the crucial question remains whether the commutator of two supersymmetries closes for some of our monopoles or for all of them. This does not seem to be a property of an arbitrary background, even one with unbroken supersymmetries. The right-hand side of the commutator of the first supersymmetry with the one induced by the existence of a covariantly constant almost complex structure J depends on the Nijenhuis tensor
N K IJ = J L I J K [J,L] − J L J J K [I,L] .(56)
In this expression a comma means a derivative. For a generic solution with unbroken supersymmetry the complex structure J^K_J is covariantly constant, but not necessarily constant, and only constancy would force terms like J^K_{[J,L]} to vanish. Therefore the Nijenhuis tensor does not vanish in general, and one does not find the enhancement of supersymmetry for an arbitrary supersymmetric background. However, we have found that for our M 8 monopole solution the Nijenhuis tensor does vanish, which provides the closure of the algebra of the extended supersymmetries on the world-sheet. The reason is that all magnetic solutions of the heterotic string have Killing spinors in the stringy frame which are constant. This property of monopoles provides constant complex structures with J^K_{[J,L]} = 0. Indeed, in the canonical frame
ǫ can (x) = e U (x) 2 ǫ 0 ,(57)
where ǫ_0 is a constant spinor. For all magnetic solutions with U + φ = 0 this means that the covariantly constant spinor in the stringy frame is a constant spinor!
ǫ_str = ǫ_0   (58)
This property was observed for the five-branes and H monopoles before [8], [9], where it was also used for the enhancement of world-sheet supersymmetry. For a = 1 magnetic black holes it was also found [27] that in the stringy frame the Killing spinors exist globally since they are constant. Now we have verified that for the total family of pure magnetic solutions in the stringy frame the Killing spinors are constant. This can be done, for example, by observing that the Killing spinor for the O(6, 22) covariant electrically charged solutions was found by Peet [28] to be of the form (57). By S duality it follows that for all magnetic solutions the Killing spinors are constant in the stringy frame.
Thus M 8 monopoles (51) with SO(8) non-Abelian gauge fields formulated in the stringy frame have the following properties: i) non-trivial 8 dimensional Euclidean geometry and unbroken space-time supersymmetry which exists globally: the Killing spinors as well as the complex structures are constant ii) N L = 1 solutions correspond to sigma models with (4, 4) world-sheet supersymmetry; the GS formulation of the type II superstring theory with manifest space-time supersymmetry is equivalent to the NSR form with the world-sheet supersymmetry: the world-sheet (1,1) spinor λ I + , λ I − , which is also an SO(8) vector is converted into the space-time SO(8) spinors S q , Sq using the normalized commuting Killing spinors αṗ, α p of the monopole background:
λ I + = αṗγ İ pq S q , λ I − = α p γ I pq Sq .(59)
iii) N L = 0, N L = 2, 3, . . . , n solutions correspond to the sigma models with (4, 1) worldsheet supersymmetry; the GS formulation of the heterotic string theory with manifest space-time supersymmetry is equivalent to the NSR form with the world-sheet supersymmetry. However, for these backgrounds only the right-handed space-time supersymmetry is available. All left-handed supersymmetries are broken. Therefore only the right-moving world-sheet spinor can be converted into the space-time spinor SO(8) vector
λ I + = αṗγ İ pq S q .(60)
All solutions in the group N L = 2, 3, . . . , n are described by the non-singular bottomless throat geometry.
The M 8 monopoles seem to provide the best laboratory for the exploration of the space-time supersymmetry versus world-sheet supersymmetry.
Exact F & H monopoles on M 4
In this section we are going to discuss special examples of our solution for the case where the spatial part is 4-dimensional, i.e. when we have only one non-trivial internal direction, x^4. Most of the relevant features of F & H monopoles are already present in this case. As in the M 8 case, for special configurations we can expect here an enhancement of the world-sheet supersymmetry.
We start with the general 5-dimensional solution and then consider special examples. For that it is convenient to rotate our harmonic functions into a new basis by performing the O(1, 1) transformation
χ (1) χ (2) = 1 √ 2 1 1 −1 1 χ L χ R(61)
The metric and the dilaton are then given by (y = x 5 , . . . x 9 )
ds^2 = −dt^2 + d⃗y^2 + 4χ^{(1)}χ^{(2)} (dx^i)^2 + (χ^{(2)}/χ^{(1)}) (dx^4 + A^{(1)}_i dx^i)^2 ,   e^{4φ} = 4(χ^{(2)})^2 .   (62)
The non-diagonal term in the metric is defined as
F = F^{(1)}_{ij} ≡ ∂_i A^{(1)}_j − ∂_j A^{(1)}_i = 2 ǫ_{ijm} ∂_m χ^{(1)}   (63)
The nontrivial part of the 3-form field defines the second gauge field
H = F^{(2)}_{ij} = ∂_i B̃_{4j} − ∂_j B̃_{4i} = 2 ǫ_{ijm} ∂_m χ^{(2)}   (64)
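As a sanity check on the Abelian charges, one can integrate the magnetic field B_m = 2 ∂_m χ implied by (63)/(64) over spheres of different radii; the charge value below is illustrative and the one-center harmonic function is taken without its constant part (our normalization):

```python
import math

P = 3.0  # illustrative magnetic charge; the normalization here is ours, not the paper's

def chi(x, y, z):
    # one-center harmonic function chi = P/(sqrt(2) r), with the constant part dropped
    return P / (math.sqrt(2.0) * math.sqrt(x * x + y * y + z * z))

def B(x, y, z, h=1e-6):
    # F_ij = 2 eps_ijm d_m chi   <=>   B_m = (1/2) eps_mij F_ij = 2 d_m chi
    grad = [
        (chi(x + h, y, z) - chi(x - h, y, z)) / (2 * h),
        (chi(x, y + h, z) - chi(x, y - h, z)) / (2 * h),
        (chi(x, y, z + h) - chi(x, y, z - h)) / (2 * h),
    ]
    return [2.0 * g for g in grad]

def flux(r, n=60):
    """Flux of B through a sphere of radius r (midpoint rule in theta and phi)."""
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * math.pi / n
        for j in range(n):
            ph = (j + 0.5) * 2.0 * math.pi / n
            nx = math.sin(th) * math.cos(ph)
            ny = math.sin(th) * math.sin(ph)
            nz = math.cos(th)
            b = B(r * nx, r * ny, r * nz)
            total += (b[0] * nx + b[1] * ny + b[2] * nz) * r * r * math.sin(th) \
                     * (math.pi / n) * (2.0 * math.pi / n)
    return total

# The flux is radius-independent: with this normalization the Abelian field
# carries magnetic charge -4*sqrt(2)*pi*P.
print(flux(1.0), flux(2.5), -4.0 * math.sqrt(2.0) * math.pi * P)
```

The radius-independence of the flux is exactly what distinguishes these Abelian fields from the 1/r^3 non-Abelian part discussed in the previous section.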
We would like to understand the T duality properties of the uplifted F & H monopoles. For this purpose we will perform the Buscher transformation [29] on the solution. The result is very simple: one has to change χ^{(1)} into χ^{(2)} and back:
χ (1) ⇐= T =⇒ χ (2)(65)
i.e. the compactification radius g 44 is inverted and the two gauge fields F&H are exchanged.
The Yang-Mills fields which have to be added to the solutions to avoid α′ corrections to supersymmetry take values in an SO(4). We will put the space-time indices in the lower position and the Yang-Mills indices in the upper position. In the tangent space there are no zero components of the vector field, which means also that there are no time components in curved space.
V^0_{ij} = V^0_{i4} = V^t_{ij} = V^t_{i4} = 0
The vector components are
V^l_{ij} = e^{2U} δ^l_{[i} ∂_{j]} ln(χ^{(1)} χ^{(2)}) ,   V^l_{i4} = (1/2) e^{2U} ǫ_{lim} ∂_m ln(χ^{(1)}/χ^{(2)})   (66)
The fourth components of the vector fields which become scalars in four dimensions are
V^4_{ij} ≡ Φ_{ij} = −(1/2) e^{2U} ǫ_{ijm} ∂_m ln(χ^{(1)} χ^{(2)}) ,   V^4_{i4} ≡ Φ_{i4} = (1/2) e^{2U} ∂_i ln(χ^{(1)}/χ^{(2)})   (67)
Under T duality we have again to exchange χ (1) and χ (2) which means that only V l i4 and Φ i4 change the sign.
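This sign rule can be traced to the logarithms in (66), (67): reading the flattened arguments as the product χ^{(1)}χ^{(2)} for the ij-components and the ratio χ^{(1)}/χ^{(2)} for the i4-components (the reading the sign rule requires), the product-log is symmetric under the swap while the ratio-log is odd. A quick check with illustrative one-center harmonic functions:

```python
import math

def chi1(r):
    # illustrative one-center harmonic functions (our sample charges)
    return 0.5 + 1.3 / (math.sqrt(2.0) * r)

def chi2(r):
    return 0.5 + 0.4 / (math.sqrt(2.0) * r)

for r in (0.7, 1.0, 5.0):
    prod = math.log(chi1(r) * chi2(r))     # enters V^l_ij and Phi_ij
    ratio = math.log(chi1(r) / chi2(r))    # enters V^l_i4 and Phi_i4
    # T duality: chi^(1) <-> chi^(2)
    prod_T = math.log(chi2(r) * chi1(r))
    ratio_T = math.log(chi2(r) / chi1(r))
    assert abs(prod_T - prod) < 1e-12      # ij-components unchanged
    assert abs(ratio_T + ratio) < 1e-12    # i4 (and scalar 4) components flip sign

print("only the i4 and Phi_i4 components flip under T duality")
```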
Let us establish the connection with the previously known exact heterotic monopoles 5 .
A. H monopoles (N L = 1)
For this example the F field (63) is absent,
F = 0 .(68)
We choose for the harmonic functions
χ (1) = 1/2 , χ (2) = 1/2 + 1/ √ 2 s P s | x − x s | .(69)
We have taken here the general ansatz as a multi center solution. This, however, is only consistent if we restrict the positions or the charges at the centers. We will come back to this point at the end of this section. Our solution on M 5 becomes then
ds^2 = −dt^2 + V^{−2} (dx^i dx^i + dx^4 dx^4) ,   F = 0 ,
e^{2φ} = 2χ^{(2)} = V^{−2} ,   ∂_i B̃_{4j} − ∂_j B̃_{4i} = ǫ_{ijm} ∂_m e^{2φ}   (70)
and only M 4 is non-trivial. The Yang-Mills fields are
V^l_{ij} = −2 δ^l_{[i} ∂_{j]} V ,   V^l_{i4} = ǫ_{lim} ∂_m V   (71)
Φ_{ij} = ǫ_{ijm} ∂_m V ,   Φ_{i4} = ∂_i V   (72)
This solution with N L = 1 has self-dual Yang-Mills fields, as distinct from the general case (66), (67):
V^l_{i4} = (1/2) ǫ_{ikm} V^l_{km} ,   Φ_{i4} = (1/2) ǫ_{ikm} Φ_{km}   (73)
This self-duality is the source of the enhancement of the left-handed supersymmetry, since the integrability condition for the Killing spinors is satisfied. This allows one to promote this N L = 1 solution to a supersymmetric solution of the type II string with one half of supersymmetry unbroken. It results also in (4, 4) world-sheet supersymmetry for the corresponding sigma model.
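The self-duality (73) of the fields (71), (72) can be checked componentwise; the sketch below evaluates V^l_ij and V^l_i4 for a one-center V = P/r (our normalization) at a sample point:

```python
def grad_V(x, y, z, P=1.0):
    # one-center harmonic function V = P/r (our normalization), analytic gradient
    r3 = (x * x + y * y + z * z) ** 1.5
    return [-P * x / r3, -P * y / r3, -P * z / r3]

_eps = {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
        (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}
def eps(i, j, k):
    return _eps.get((i, j, k), 0)

dV = grad_V(0.3, -0.7, 0.5)

# eq. (71): V^l_ij = -2 delta^l_[i d_j] V   and   V^l_i4 = eps_lim d_m V
V3 = [[[-((l == i) * dV[j] - (l == j) * dV[i]) for j in range(3)]
       for i in range(3)] for l in range(3)]
V4 = [[sum(eps(l, i, m) * dV[m] for m in range(3)) for i in range(3)]
      for l in range(3)]

# eq. (73): V^l_i4 = (1/2) eps_ikm V^l_km  -- self-duality of the N_L = 1 fields
for l in range(3):
    for i in range(3):
        dual = 0.5 * sum(eps(i, k, m) * V3[l][k][m]
                         for k in range(3) for m in range(3))
        assert abs(V4[l][i] - dual) < 1e-12

print("H-monopole Yang-Mills field verified self-dual")
```

The same check with the flipped signs of (79), (80) reproduces the anti-self-duality (81) of the F monopole below.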
B. F monopoles (N L = 1)
This is another example with the same left-handed oscillation number. Now, the H field (64) is absent
H = 0 .(74)
We take the harmonic functions
χ (2) = 1/2 , χ (1) = ǫ/2 + 1/ √ 2 s P s | x − x s |(75)
Now the 3-form and the dilaton are absent and the non-trivial metric on M 4 is the self-dual multi-center metric
ds^2 = −dt^2 + V^{−2} dx^i dx^i + V^2 (dx^4 + ω_i dx^i)^2 ,   H = 0 ,  e^{2φ} = 1   (76)
where
V^{−2} ≡ χ^{(1)}/χ^{(2)} = ǫ + Σ_s √2 P_s / |⃗x − ⃗x_s|   (77)
∇(V^{−2}) = ∇ × ⃗ω   (78)
This is the multi-center Gibbons-Hawking metric [32]. Special cases are: ǫ = 0, s = 1 (one center), which is flat Minkowski space; ǫ = 0, s = 2 (two centers), which is the Eguchi-Hanson instanton; and for ǫ = 1 this metric corresponds to the multi-Taub-NUT spaces. Again we have certain restrictions on the charges or positions of the centers (see below).
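The defining relation (78), ∇(V^{−2}) = ∇ × ⃗ω, can be verified numerically for one center with a Dirac-type potential for ⃗ω (our sign conventions and illustrative charge; the potential is valid away from its string singularity):

```python
import math

k = math.sqrt(2.0) * 1.7   # sqrt(2)*P for one center, illustrative charge
eps0 = 1.0                 # the constant part (the Taub-NUT case)

def V_m2(x, y, z):
    # eq. (77) for one center: V^-2 = eps + sqrt(2) P / r
    return eps0 + k / math.sqrt(x * x + y * y + z * z)

def omega(x, y, z):
    # Dirac-type potential omega = -k (1 - cos th)/(r sin th) phi_hat,
    # valid away from the negative z-axis (the string)
    r = math.sqrt(x * x + y * y + z * z)
    rho = math.sqrt(x * x + y * y)
    a_phi = -k * (1.0 - z / r) / rho
    return (-a_phi * y / rho, a_phi * x / rho, 0.0)

def grad(f, p, h=1e-5):
    return tuple((f(*[p[j] + h * (j == i) for j in range(3)])
                  - f(*[p[j] - h * (j == i) for j in range(3)])) / (2 * h)
                 for i in range(3))

def curl(F, p, h=1e-5):
    comp = [lambda *q, c=c: F(*q)[c] for c in range(3)]
    d = lambda f, i: (f(*[p[j] + h * (j == i) for j in range(3)])
                      - f(*[p[j] - h * (j == i) for j in range(3)])) / (2 * h)
    return (d(comp[2], 1) - d(comp[1], 2),
            d(comp[0], 2) - d(comp[2], 0),
            d(comp[1], 0) - d(comp[0], 1))

p = (0.6, -0.3, 0.8)
assert all(abs(c - g) < 1e-6 for c, g in zip(curl(omega, p), grad(V_m2, p)))
print("grad(V^-2) = curl(omega) verified at", p)
```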
Under Buscher duality transformations [29] F monopoles are transformed into the H monopoles and back. This is obvious from the fact that under this duality transformation both gauge fields are exchanged.
The F monopoles Yang-Mills fields are
V^l_{ij} = −2 δ^l_{[i} ∂_{j]} V ,   V^l_{i4} = −ǫ_{lim} ∂_m V   (79)
Φ_{ij} = ǫ_{ijm} ∂_m V ,   Φ_{i4} = −∂_i V   (80)
This solution with N L = 1 has anti-self-dual Yang-Mills fields, as distinct from the general case (66), (67):
V l i4 = − 1 2 ǫ ikm V l km , Φ i4 = − 1 2 ǫ ikm Φ km(81)
Here again we are dealing with enhanced supersymmetries. The solution has one half of unbroken supersymmetries in the type II string and on the world-sheet we have a (4,4) supersymmetric sigma model.
C. T self dual monopoles (N L ≥ 2)
We know that under T-duality both gauge fields get exchanged. Therefore to have a self-dual solution we assume that both gauge fields (63) and (64) are equal (χ^{(1)} = χ^{(2)}):
F = H .   (82)
It corresponds to the uplifted a = 1 extreme massive magnetic black holes [11]. In the notation of this paper they have χ_L = 0 and
χ (1) = χ (2) = 1/2 + 1/ √ 2 s P s | x − x s | ≡ χ(83)
The non-trivial metric on M 4 and the dilaton are
ds^2 = −dt^2 + e^{4φ} (dx^i)^2 + (dx^4 + A^{(1)}_i dx^i)^2 ,   e^{4φ} = 4χ^2 .   (84)
The Yang-Mills part of the solution is
V l ij = −2δ l [i ∂ j] e −2φ V l i4 = 0 (85)
The fourth components of the vector fields which become scalars in four dimensions are
Φ ij = ǫ ijm ∂ m e −2φ Φ i4 = 0(86)
This is an SU(2) non-Abelian field, in agreement with [17]. The Yang-Mills field here is not self-dual. Therefore this solution can be embedded into the type II string only as a solution with one quarter of supersymmetry unbroken, since all left-handed supersymmetries are broken. This leads to (4, 1) supersymmetry on the world-sheet, i.e. the enhancement of supersymmetries takes place only in the right-handed sector of the theory.
D. Massless monopoles (N L = 0)

Thus, we define our harmonic functions as
χ^{(1)} = 1/2 + (1/√2) Σ_s (P_vec)_s / |⃗x − ⃗x_s| ,   χ^{(2)} = 1/2 − (1/√2) Σ_s (P_vec)_s / |⃗x − ⃗x_s|   (88)
The metric and the dilaton are now
ds^2 = −dt^2 + 4χ^{(1)}χ^{(2)} (dx^i)^2 + (χ^{(2)}/χ^{(1)}) (dx^4 + A^{(1)}_i dx^i)^2 ,   e^{4φ} = 4(χ^{(2)})^2 .   (89)
This solution describes in four dimensions a massless monopole that was uplifted into the 5dimensional stringy geometry.
Under the T duality transformation the massless uplifted monopoles are not invariant, since χ^{(1)} ≠ χ^{(2)}. If we perform the Buscher transformation [29] on the solution and combine it with charge conjugation⁶
χ^{(1)} ⇐= T_d =⇒ χ^{(2)}   (90)
(P_vec)_s ⇐= C =⇒ −(P_vec)_s   (91)
we would find that the solution is invariant. Indeed, for the massless monopoles
χ (1) ⇐= T d × C =⇒ χ (1) (92) χ (2) ⇐= T d × C =⇒ χ (2)(93)
This property of the massless monopoles was not predicted before and the fact that it involves T duality and changing the monopole into the anti-monopole T d × C does not seem to follow from any known principles. It is observed here as a property of the explicit solution.
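The observed T_d × C invariance is easy to confirm directly from (88): charge conjugation maps (χ^{(1)}, χ^{(2)}) to (χ^{(2)}, χ^{(1)}), and the subsequent T-duality swap undoes it. A sketch with illustrative charges and centers:

```python
import math

def chis(P, centers, x):
    # eq. (88): the charges enter chi^(1) and chi^(2) with opposite signs
    s = sum(p / (math.sqrt(2.0) * math.dist(x, c)) for p, c in zip(P, centers))
    return 0.5 + s, 0.5 - s

P = [0.7, 0.7]                                   # illustrative (P_vec)_s
centers = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]

for x in [(1.0, 0.2, 0.3), (-0.5, 2.0, 0.1)]:
    c1, c2 = chis(P, centers, x)
    # C: (P_vec)_s -> -(P_vec)_s, then T duality: chi^(1) <-> chi^(2)
    c1_C, c2_C = chis([-p for p in P], centers, x)
    c1_TC, c2_TC = c2_C, c1_C
    assert abs(c1_TC - c1) < 1e-12 and abs(c2_TC - c2) < 1e-12

print("massless monopole invariant under T_d x C")
```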
The YM fields for the massless monopoles are given in eqs. (85), (86) with the harmonic functions defined in eq. (88). There are no special simplifications: the non-Abelian fields belong to the SO(4) gauge group. This solution breaks one half of the supersymmetry of the heterotic string, and corresponds to a (4, 1) supersymmetric sigma model.
E. The moduli matrix M
Thus we have listed here four special types of F & H monopoles. Two of them, the H monopoles and the F monopoles, have a four-dimensional projection in which the metric and the dilaton are the same for both solutions and coincide with those of the extreme magnetic a = √3 black holes
ds^2_str = −dt^2 + e^{−4U} d⃗x^2 ,   e^{−4U} = χ = e^{4φ} = 1 + 2M/r ,   M = P/√2   (94)
The difference comes in the moduli M and in the vector fields, since the right-handed harmonic function χ_R is the same for both solutions but the left-handed one differs by the sign of the charge. For our case here the moduli matrix M is given by
M = ( χ^{(1)}/χ^{(2)}      0
      0      χ^{(2)}/χ^{(1)} )   (95)
For the H monopole we have therefore for the non-vanishing component of moduli and vector fields
M_H = ( (1 + 2M/r)^{−1}      0
        0      1 + 2M/r ) ,
( F^{(L)}_{ij} ; F^{(R)}_{ij} ) = ǫ_{ijm} ∂_m (P/r) ( −1 ; +1 )   (96)
For the F monopoles the non-vanishing components of moduli and vector fields are
M_F = M_H^{−1} = ( 1 + 2M/r      0
                   0      (1 + 2M/r)^{−1} ) ,
( F^{(L)}_{ij} ; F^{(R)}_{ij} ) = ǫ_{ijm} ∂_m (P/r) ( +1 ; +1 )   (97)
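That M_F = M_H^{−1}, i.e. that T duality inverts the moduli matrix, can be checked with a two-line computation (illustrative values of M and r):

```python
# Moduli matrices of eqs. (96), (97) at an illustrative mass and radius
M_mass, r = 0.8, 2.0
h = 1.0 + 2.0 * M_mass / r
M_H = [[1.0 / h, 0.0], [0.0, h]]
M_F = [[h, 0.0], [0.0, 1.0 / h]]

prod = [[sum(M_F[i][k] * M_H[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert all(abs(prod[i][j] - (i == j)) < 1e-12 for i in range(2) for j in range(2))
print("M_F M_H = 1, i.e. M_F = M_H^{-1}")
```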
The T self dual solution in four dimensional form becomes equal to the extreme a = 1 dilaton black hole:
ds^2_str = −dt^2 + e^{−4U} d⃗x^2 ,   e^{−4U} = e^{4φ} = 4χ^2 = (1 + 2M/r)^2 ,   M = P/√2   (98)
The moduli fields are constant and vector fields are only the right-handed ones.
M = 1 (the unit matrix) ,   ( F^{(L)}_{ij} ; F^{(R)}_{ij} ) = ǫ_{ijm} ∂_m (P/r) ( 0 ; +1 )   (99)
Finally, the massless monopoles in the four-dimensional world are the ones whose metric and the dilaton are:
ds^2_str = −dt^2 + e^{−4U} d⃗x^2 ,   e^{−4U} = e^{4φ} = 4χ^{(1)}χ^{(2)} = 1 − 2 (P_vec/r)^2 ,   M = 0   (100)
There are some non-vanishing components of the moduli fields, and the vector fields are only the left-handed ones.
M = ( (r + 2M)/(r − 2M)      0
      0      (r − 2M)/(r + 2M) ) ,
( F^{(L)}_{ij} ; F^{(R)}_{ij} ) = ǫ_{ijm} ∂_m (P_vec/r) ( +1 ; 0 )   (101)
F. Remark about the multi center case
All our magnetic (Abelian) gauge fields are given by Dirac monopole solutions. As a consequence in order to formulate the solution consistently one has to remove the so-called Dirac-singularities. This is in principle not difficult, one has only to take care that it can be done for all centers simultaneously. We are describing this procedure in detail in the Appendix B.
Discussion
The large family of monopole solutions described in this paper provides an interesting arena for the study of non-perturbative effects in supersymmetric gravity. The fact that these configurations have a non-trivial Euclidean geometry on an eight-dimensional manifold M 8 has made these new solutions most interesting objects, realizing the relations between unbroken space-time supersymmetry and world-sheet supersymmetry. The role of the SO(8) gauge group, with its particularly remarkable relations between vector and spinor representations, acquires a new and beautiful aspect when applied to supersymmetric stringy monopoles. The unique property of such monopoles, that they admit complex structures in the corresponding sigma models, is related directly to the fact that the space-time Killing spinors for magnetic configurations are constant in the stringy frame. Therefore we have observed the enhancement of world-sheet supersymmetries for all stringy monopoles which live on the Euclidean M 8 manifold. These monopoles, although discovered as soliton-type solutions of the classical field equations of supergravity, seem to have the most stable properties with respect to quantum corrections. The supersymmetry in space-time as well as the supersymmetry on the world-sheet are free of anomalies. The supersymmetric non-renormalization theorems do not have any obvious obstructions, and therefore the M 8 monopoles give us an example of rather reliable non-perturbative BPS states of superstring theory.
Appendix A: Spin connections
Now we define the vielbeins. The zehnbein can be written as
Ê^A_M = ( e^0_t      0      0
          0      e^i_i      0
          0      E^a_β A^{(1)β}_i      E^a_α )   (102)
The curved three-dimensional indices are underlined. The same can be rewritten as
Ê^A = Ê^A_M dX^M = { e^0 = dt ,  e^i = e^i_i dx^i ,  E^a = E^a_β A^{(1)β}_i dx^i + E^a_α dx^α }   (103)
The corresponding vierbeine are given by
e 0 t = 1 , e i j = e −2U δ i j(104)
The six-dimensional vielbein is defined in terms of our 12 harmonic functions as
E^a_α = δ^{aβ} [ δ_{βα} − (1/(|⃗χ_R|^2 + ⃗χ_R·⃗χ_L)) (1 − √(1 − |⃗χ_L|^2/|⃗χ_R|^2)) χ_{Rβ} χ_{Rα} + χ_{Lβ} χ_{Rα} ]   (105)
The inverse quantities are:
G^{αβ} = δ^{αβ} + 2 e^{4U} [ χ_{Lα} χ_{Lβ} + (|⃗χ_L|^2/|⃗χ_R|^2) χ_{Rα} χ_{Rβ} + 2 χ_{L(α} χ_{Rβ)} ]
E^α_a = [ δ^{αβ} + (√2 e^{2U}/|⃗χ_R|) (1 − √(1 − |⃗χ_L|^2/|⃗χ_R|^2)) χ_{Rα} χ_{Rβ} − χ_{Rα} χ_{Lβ} ] δ_{βa}
The results for the metric spin connection one-form consist of three types of terms, representing the SO(3) rotation M_[ij], the SO(6) rotation M_[ab] and the non-diagonal terms M_[ai].
W_{ij} = 2 (∂_{[i} e^{2U}) e_{j]} − (1/2) F_{ij,a} E^a   (106)
W_{ab} = E_{[a}^α δ^i_i (∂_i E_{α b]}) e^i   (107)
W_{ai} = F_{ik}^α E_{αa} e^k − (1/2) E_a^α e^{2U} δ^i_i (∂_i G_{αβ}) E^β_c E^c   (108)
All components of spin connection related to the boosts are vanishing:
W 0A = 0(109)
To calculate the tangent space three form H ABC we will use the expression for the curved space three form and the zehnbeins
H_{ABC} = E_A^M E_B^N E_C^L H_{MNL}   (110)
H_{0BC} = 0 ,   H_{0B} ≡ H_{0BC} E^C = 0   (111)
Since none of the indices in H_{ABC} E^C takes the value 0, we can rewrite it as H_{IJK} E^K. We will get 3 types of terms for it.
H_{ij} = H_{ijk} e^k + H_{ija} E^a   (112)
H_{ab} = H_{abk} e^k + H_{abc} E^c   (113)
H_{ai} = H_{aik} e^k + H_{aic} E^c   (114)
For the one center solution only the term H ijk vanishes, for the multi center solution the situation is more complicated. Thus in general all components of H IJ are non-vanishing.
There are two combinations of spin connection and three form which we need to know
Ω ±IJ = W IJ ± H IJ(115)
They both take values in the SO(9) group, in general. For some particular solutions they may take values in much smaller groups which are subgroups of SO(9), but they never need more than SO(9): since there are no boost components in either the metric spin connections or the three-forms, this concerns both generalized spin connections Ω_{±IJ}.
where r 2 s = (x − x s ) 2 + (y − y s ) 2 + (z − z s ) 2 and θ s and φ s are the angular variables of the center s. For every gauge field we have a "+" and a "-" part which are singular for θ s = 0 or π and we have to take the non-singular one in the different patches. In the overlapping region both parts are connected by a gauge transformation which is equivalent to a shift in the x 4 coordinates (see (62) and also [32])
x 4 (s) → x 4 (s) ∓ p F s φ s .(119)
(the "∓" ambiguity comes from the fact that one can approach the overlapping region from two different patches). Since we have to identify the field configuration in the overlapping region we find that x 4 s has to be periodic with the period of 2πp F s which means that the x 4 direction is a circle with the radius p F s . This has to be done for every center. This procedure can be done only if the compactification radius of x 4 is the same for all centers. Otherwise one could not put together all the the different coordinate patches for all centers. But since the period or radius of the x 4 differs if the magnetic charges differs from center to center we get the result that all magnetic charges are the same up to a sign or zero, i.e.
p F s = p F η F s , η F s = 0, ±1(120)
There is a second possibility to remove all Dirac singularities consistently, namely if all centers are on a line: e.g. if all x_s and y_s are equal, the centers line up parallel to the z-direction. Then we can introduce for all centers the same coordinate system with the same angular variable φ, ⃗A^{(1)} · d⃗x ≡ A^{(1)}_φ dφ. Consequently, the periodicity condition for x^4 is now given by
x 4 ≃ x 4 + P F φ(121)
where P^F = Σ_{s=1}^{k+1} p^F_s is the total magnetic charge. Again the x^4 direction is compactified on a circle with radius proportional to the total magnetic charge.
These results were related to the fact that A^{(1)} is the KK gauge field in the metric, and thus restrict the p^F_s. A simple T duality, however, exchanges both gauge fields A^{(1)} and A^{(2)} and thus yields the same restrictions for the p^H_s as well.
To summarize, in order not to have Dirac singularities in the M 4 metric we have two possibilities i) all centers have up to the sign the same magnetic charges (or zero) p F and p H , or ii) all centers have to line-up. For the harmonic functions for the multi center solutions this means i)
i)  χ^{(1)} = a^{(1)} + p^F Σ_{s=1}^{k+1} η^F_s / |⃗x − ⃗x_s| ,   χ^{(2)} = a^{(2)} + p^H Σ_{s=1}^{k+1} η^H_s / |⃗x − ⃗x_s|
ii)  χ^{(1)} = a^{(1)} + Σ_{s=1}^{k+1} p^F_s / |⃗x − ⃗z_s| ,   χ^{(2)} = a^{(2)} + Σ_{s=1}^{k+1} p^H_s / |⃗x − ⃗z_s|   (122)
where η s = 0, ±1. In both cases the x 4 coordinate has to be compact. Obviously, the two-center case falls into the second possibility. The first non-trivial case are three centers.
Now we turn to the question of what these restrictions mean for the torsion or antisymmetric tensor. We start with the calculation of the Chern-Simons term, which is part of the torsion in D = 4.
Since our D = 4 torsion has to vanish there are two possibilities: either the Chern-Simons term and the antisymmetric tensor vanish separately, or they cancel against each other⁷. The Chern-Simons term vanishes under the two conditions:
i)  p^F_s p^H_t = −p^F_t p^H_s ,
ii)  all centers line up, i.e.  x_s = x_t ,  y_s = y_t  ∀ s, t .
The first condition means that either p^F_s or p^H_s is identically zero for all centers, i.e. we have only F or only H monopoles. The line-up condition ii) coincides with the condition ii) in (122). If these conditions are fulfilled we find that the D = 4 antisymmetric tensor has to be zero, too. Also the D = 5 antisymmetric tensor (see footnote) is zero, since in the line-up case the gauge fields have only one unique φ-component, and in the other case one gauge field vanishes. Thus,
B ij = B ij = 0 , i, j = 1, 2, 3(125)
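Both mechanisms that kill the Chern-Simons obstruction can be seen in the structure of the density quoted in the Appendix: the geometric bracket (x − x_s)(y − y_t) − (y − y_s)(x − x_t) vanishes identically when all centers share the same (x, y) (line-up), and the charge factor p^F_s p^H_t + p^H_s p^F_t vanishes for pure F or pure H monopoles. A sketch with illustrative charge assignments:

```python
import random

def bracket(x, y, cs, ct):
    # geometric factor of the Chern-Simons density (see Appendix B)
    (xs, ys), (xt, yt) = cs, ct
    return (x - xs) * (y - yt) - (y - ys) * (x - xt)

# case ii): centers lined up along z share the same (x_s, y_s)
lined_up = [(0.3, -0.2)] * 4
rng = random.Random(0)
for _ in range(100):
    x, y = rng.uniform(-5, 5), rng.uniform(-5, 5)
    s, t = rng.randrange(4), rng.randrange(4)
    assert abs(bracket(x, y, lined_up[s], lined_up[t])) < 1e-12

# case i): pure F monopoles (all p^H_s = 0) kill the factor p^F_s p^H_t + p^H_s p^F_t
pF, pH = [1.0, -1.0, 1.0], [0.0, 0.0, 0.0]
assert all(pF[s] * pH[t] + pH[s] * pF[t] == 0.0 for s in range(3) for t in range(3))

print("Chern-Simons obstruction vanishes in both cases")
```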
Finally, we have to investigate the possibility that the antisymmetric tensor part cancels the Chern-Simons part. This is in our case possible since for our gauge fields F (1) ∧ F (2) = 0.
3 ∂_{[m} B_{np]} = 6 ( A^{(1)}_{[m} F^{(2)}_{np]} + A^{(2)}_{[m} F^{(1)}_{np]} ) = ǫ_{mnp} ( A^{(1)}_l ∂_l χ^{(2)} + A^{(2)}_l ∂_l χ^{(1)} ) .
This procedure is possible independently of the positions of the centers, but it is somewhat unsatisfactory since it is gauge dependent (it works only in the Coulomb gauge). The situation becomes even worse when we remember that the gauge invariance of KK gauge fields is related to general covariance of the 5-dimensional theory (translations in the fourth coordinate). From this point of view it seems that the multicenter solution is consistent in 4 as well as embedded in 5 dimensions only i) if one gauge field vanishes (i.e. F or H monopoles) or ii) if all centers line up. In the last case the charges can be arbitrary, whereas in the case of F or H monopoles it is necessary that all charges differ only by a sign (to remove all Dirac singularities). 7 Note, the 4-dimensional torsion is given by [20] H_{µνρ} = ∂_µ B_{νρ} − 2( A^{(1)}_µ F^{(2)}_{νρ} + A^{(2)}_µ F^{(1)}_{νρ} ) + cycl. perm., with B_{µν} = B̃_{µν} + 2( A^{(1)}_µ A^{(2)}_ν − A^{(1)}_ν A^{(2)}_µ )
with 45 generators, 9 of which are boosts B_I and 36 of which are SO(9) rotations M_IJ:
M_AB = { B_I ≡ M_[0I] , M_IJ } ,   I, J = 1, . . . , 9 .   (47)
The boost generators B_I = {M_[0i] , M_[0a]} are responsible for the non-compact directions, whereas M_IJ = {M_[ij] , M_[ab] , M_[ia]} are responsible for the SO(9) rotations. In principle the generalized spin connections may take values in any part of the Lorentz group, in the compact part as well as in the non-compact one. We have found that for all F & H monopoles both the metric spin connection W_AB and the torsion part of the spin connection H_AB ≡ H_ABC E^C take values only in the SO(9) rotation part of the Lorentz group.
The 3-form therefore behaves as H_{MNL} ∼ e_1/r^2 + e_2/r^3 + . . .
4 World-sheet actions for M 9 and M 8 string monopoles
One may also see that only the B_IJ are non-vanishing; they are given in eq. (8). The main property of the M 8 monopole background is that one can prove that the Green-Schwarz and Ramond-Neveu-Schwarz formulations of the superstring are equivalent in this background. The light-cone action of one theory can be transformed into the light-cone action of the other theory by converting SO(8) spinors into SO(8) vectors.
R^{(+)}_{IJ,KL} = R^{(−)}_{KL,IJ} .   (54)
For our monopoles with the non-Abelian SO(8) fields the torsionful curvatures are given by the Yang-Mills field strength, which due to spin embedding is equal to R^{(−)}_{IJ,KL}, and by the gravitational torsionful curvature R^{(+)}_{IJ,KL}; these are related to each other by eq. (54).
The massless monopoles are given by χ_R = const, which means that both gauge fields (63) and (64) differ only by a sign: F = −H .
As a result we find that for all F & H monopoles i) the positions of the centers can be arbitrary, but all charges may differ only by a sign, and for the other cases ii) all centers have to line up. For the second case the charges are arbitrary. By fixing the gauge one could also relax the line-up restriction, but not in a gauge invariant way. The two-center solutions can always be placed on a line, therefore there are no restrictions; the subtlety is relevant starting from three centers. As we have already pointed out above, we have listed here four particular special cases of F & H monopoles. The singularities of these solutions have been studied before. The N L = 0 massless monopoles and the N L = 1 F & H monopoles are singular in the four-dimensional geometry. The majority of solutions, i.e. the generic N L = 2, 3, 4, . . . solutions, have arbitrary non-vanishing values of F and H and are non-singular bottomless holes from the point of view of the four-dimensional stringy geometry.
where $H_{MNL} = 3\,\partial_{[M}B_{NL]}$ and the components of the two-form $B_{MN}$ are defined in eq. (8). For all monopoles $H_{tNL} = 0$. It follows that
$$(\dots_{jl} + \text{cycl. perm.}) \sim \epsilon_{ijl}\left(A^{(1)}_m \partial_m \chi^{(2)} + A^{(2)}_m \partial_m \chi^{(1)}\right) = \epsilon_{ijl} \sum_{s,t} \left(p^F_s p^H_t + p^H_s p^F_t\right) \frac{(x - x_s)(y - y_t) - (y - y_s)(x - x_t)}{r_s\,(z - z_s \pm r_s)\, r_t^3}\,.$$
has a solution if the gauge fields are in the Coulomb gauge ($\partial_l A_l = 0$). Inserting this expression into the $D = 5$ antisymmetric tensor (see footnote) yields
$$\hat B_{np} = \epsilon_{npl}\,(A\dots$$
For the multi-monopole solution of the generic type with more than two centers there is a subtlety concerning the status of the $B_{ik}$ terms. This will be discussed in Appendix B.
The $M_4$ magnetic solution without the non-Abelian part was presented recently in [30].
In what follows we will use the notation $T_d$ for T-duality, since in the context of charge conjugation $C$ one may expect the symbol $T$ to be associated with time reflection.
Acknowledgements

The work of K.B. is supported by the DFG and by a grant of the DAAD. He would also like to thank the Physics Department of Stanford University for its hospitality. The work of R.K. is supported by the NSF grant PHY-8612280.

Appendix B: Multi-center solutions

Here we are going to discuss possible restrictions on the parameters of the multi-center solution. We consider for simplicity the $M_4$ monopoles presented in Sec. 5. The solution is defined in terms of two harmonic functions. To describe a multi-center solution we take these harmonic functions in the multi-center form, where $x_s$ are the positions and $p_s$ are the charges of the centers. For the gauge fields in the multi-center case we can also make an ansatz as a sum over different one-center solutions. We are describing here Dirac monopole solutions which are globally not defined. To remove the Dirac singularities one has to introduce for every center two different coordinate patches. In each of them one defines the gauge field without a singularity, and finally one has to glue together all patches. For one center the different gauge fields are then given by [31] $A^{(1/2)}\dots$
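The explicit display of the multi-center harmonic functions was lost in extraction. As a sanity check, the sketch below verifies symbolically that the standard two-center ansatz $V = 1 + \sum_s p_s/|x - x_s|$ (an assumed illustrative form, not necessarily the exact formula of the paper) is harmonic away from the centers:

```python
# Sketch: verify that the usual multi-center ansatz V = 1 + sum_s p_s/|x - x_s|
# is harmonic away from the centers. The two-center form below is an assumed
# illustration; the paper's explicit formula did not survive extraction.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
p1, p2 = sp.symbols('p1 p2', positive=True)

def inv_r(x0, y0, z0):
    # 1/|x - x_s| for a center at (x0, y0, z0)
    return 1 / sp.sqrt((x - x0)**2 + (y - y0)**2 + (z - z0)**2)

V = 1 + p1 * inv_r(1, 0, 0) + p2 * inv_r(-1, 0, 0)

laplacian = sp.diff(V, x, 2) + sp.diff(V, y, 2) + sp.diff(V, z, 2)
print(sp.simplify(laplacian))
```

The Laplacian simplifies to zero everywhere except at the centers themselves, where the delta-function sources sit.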
References

[1] A. Sen, Int. J. Mod. Phys. A9, 3707 (1994), hep-th/9402002; Nucl. Phys. B440, 421 (1995), hep-th/9411187.
[2] M. J. Duff and J. Rahmfeld, Phys. Lett. B345, 441 (1995), hep-th/9406105.
[3] C. Hull and P. Townsend, Nucl. Phys. B438, 109 (1995); Nucl. Phys. B451, 525 (1995).
[4] K. Behrndt, Nucl. Phys. B455, 188 (1995), hep-th/9506106.
[5] R. Kallosh, Phys. Rev. D52, 6020 (1995), hep-th/9506113.
[6] R. Kallosh and A. Linde, Phys. Rev. D52, 7137 (1995), hep-th/9507022.
[7] K. Behrndt and R. Kallosh, Phys. Rev. D53, to appear, hep-th/9509102.
[8] C. Callan, J. Harvey and A. Strominger, Nucl. Phys. B359, 611 (1991); ibid. B367, 60 (1991).
[9] R. R. Khuri, Nucl. Phys. B387, 315 (1992).
[10] J. Gauntlett, J. Harvey and J. Liu, Nucl. Phys. B409, 363 (1993).
[11] E. Bergshoeff, R. Kallosh and T. Ortín, Phys. Rev. D51, 3009 (1995), hep-th/9410230.
[12] P. S. Howe and G. Papadopolous, Nucl. Phys. B289, 264 (1987).
[13] M. Duff, R. Khuri, R. Minasian and J. Rahmfeld, Nucl. Phys. B418, 195 (1994).
[14] M. Bianchi, F. Fucito, G. C. Rossi and M. Martellini, Nucl. Phys. B440, 129 (1995).
[15] G. Gibbons, Nucl. Phys. B207, 337 (1982).
[16] D. Garfinkle, G. T. Horowitz and A. Strominger, Phys. Rev. D43, 3140 (1991).
[17] R. Kallosh and T. Ortín, Phys. Rev. D50, 7123 (1994), hep-th/9410230.
[18] G. Gibbons and R. Kallosh, Phys. Rev. D51, 2839 (1995), hep-th/9407118.
[19] G. T. Horowitz and A. A. Tseytlin, Phys. Rev. Lett. 73, 3351 (1994); Phys. Rev. D51, 2896 (1995).
[20] J. Maharana and J. H. Schwarz, Nucl. Phys. B390, 3 (1993), hep-th/9207016.
[21] A. H. Chamseddine, Nucl. Phys. B185, 403 (1981).
[22] E. Bergshoeff, R. Kallosh and T. Ortín, Phys. Rev. D47, 5444 (1993).
[23] E. Bergshoeff, I. Entrop and R. Kallosh, Phys. Rev. D49, 6663 (1994).
[24] J. Scherk and J. H. Schwarz, Nucl. Phys. B153, 61 (1979).
[25] C. Hull, in: Superunification and Extra Dimensions, eds. R. D'Auria and P. Fré (World Scientific, Singapore, 1986).
[26] C. M. Hull and E. Witten, Phys. Lett. 160B, 398 (1985).
[27] R. Kallosh, A. Linde, T. Ortín, A. Peet and A. van Proeyen, Phys. Rev. D46, 5278 (1992), hep-th/9205027.
[28] A. Peet, preprint PUPT-1548, hep-th/9506200.
[29] T. Buscher, Phys. Lett. 159B (1985).
[30] M. Cvetič and A. A. Tseytlin, preprints IASSNS-HEP-95/102 and Imperial/TP/95-96.14, hep-th/9512031.
[31] T. Eguchi, P. B. Gilkey and A. J. Hanson, Phys. Rep. 66, 213 (1980).
[32] G. W. Gibbons and S. W. Hawking, Phys. Lett. 78B, 430 (1978).
FUKUSHIMA'S DECOMPOSITION FOR DIFFUSIONS ASSOCIATED WITH SEMI-DIRICHLET FORMS

Li Ma (Department of Mathematics, Hainan Normal University, Haikou 571158, China), Zhi-Ming Ma (Institute of Applied Mathematics, AMSS, Chinese Academy of Sciences, Beijing 100190, China), Wei Sun (Department of Mathematics and Statistics, Concordia University, Montreal H3G 1M8, Canada; [email protected])

15 Aug 2011. arXiv:1104.2951. DOI: 10.1142/s0219493712500037.

Keywords: Fukushima's decomposition; semi-Dirichlet form; diffusion; transformation formula. AMS Subject Classification: 31C25, 60J60.

Abstract. Diffusion processes associated with semi-Dirichlet forms are studied in the paper. The main results are Fukushima's decomposition for the diffusions and a transformation formula for the corresponding martingale part of the decomposition. The results are applied to some concrete examples.
Introduction
It is well known that the Doob-Meyer decomposition and Itô's formula are essential in the study of stochastic dynamics. In the framework of Dirichlet forms, the celebrated Fukushima decomposition and the corresponding transformation formula play the roles of the Doob-Meyer decomposition and Itô's formula; they are available for a large class of processes that are not semi-martingales. The classical decomposition of Fukushima was originally established for regular symmetric Dirichlet forms (cf. [4] and [5, Theorem 5.2.2]). Later it was extended to the non-symmetric and quasi-regular cases, respectively (cf. [16, Theorem 5.1.3] and [12, Theorem VI.2.5]). Suppose that $(\mathcal{E}, D(\mathcal{E}))$ is a quasi-regular Dirichlet form on $L^2(E; m)$ with associated Markov process $((X_t)_{t\ge0}, (P_x)_{x\in E_\Delta})$ (we refer the reader to [12], [5] and [13] for the notations and terminology of this paper). If $u \in D(\mathcal{E})$, then there exist a unique martingale additive functional (MAF in short) $M^{[u]}$ of finite energy and a unique continuous additive functional (CAF in short) $N^{[u]}$ of zero energy such that
$$\tilde u(X_t) - \tilde u(X_0) = M^{[u]}_t + N^{[u]}_t,$$
where $\tilde u$ is an $\mathcal{E}$-quasi-continuous $m$-version of $u$ and the energy of an AF $A := (A_t)_{t\ge0}$ is defined to be
$$e(A) := \lim_{t\downarrow0} \frac{1}{2t}\, E_m(A_t^2). \qquad (1.1)$$

The aim of this paper is to establish Fukushima's decomposition for some Markov processes associated with semi-Dirichlet forms. Note that the assumption of the existence of a dual Markov process plays a crucial role in all the Fukushima-type decompositions known up to now. In fact, without that assumption, the usual definition (1.1) of the energy of AFs is questionable. To tackle this difficulty, we employ the notion of local AFs (cf. Definition 2.2 below) introduced in [5] and introduce a localization method to obtain Fukushima's decomposition for a class of diffusions associated with semi-Dirichlet forms. Roughly speaking, we prove that for any $u \in D(\mathcal{E})_{loc}$ there exists a unique decomposition
$$\tilde u(X_t) - \tilde u(X_0) = M^{[u]}_t + N^{[u]}_t,$$
where $M^{[u]} \in \mathcal{M}^{[[0,\zeta[[}_{loc}$ and $N^{[u]} \in \mathcal{N}_{c,loc}$. See Theorem 2.4 below for the involved notations and a rigorous statement of the above decomposition.
Next, we develop a transformation formula for local MAFs. Here we encounter the difficulty that no LeJan-type transformation rule is available for semi-Dirichlet forms. Also, we cannot replace a $\gamma$-co-excessive function $g$ in the Revuz correspondence (cf. (2.2) below) by an arbitrary $g \in \mathcal{B}^+(E) \cap D(\mathcal{E})$ when the corresponding smooth measure is not of finite energy integral. Borrowing some ideas of [9, Theorem 5.4] and [16, Theorem 5.3.2], but with considerable extra effort, we are able to build up an analog of LeJan's formula (cf. Theorem 3.4 below). By virtue of the formula developed in Theorem 3.4, and employing again the localization method developed in this paper, we finally obtain a transformation formula for local MAFs in Theorem 3.10.

The main results derived in this paper rely heavily on the potential theory of semi-Dirichlet forms. Although they are more or less parallel to those for symmetric Dirichlet forms, we cannot find explicit statements in the literature. For the solidity of our results, and also because of their independent interest, we checked and derived in detail some results on potential theory and positive continuous AFs (PCAFs in short) for semi-Dirichlet forms. These results are presented in Section 5 at the end of this paper as an Appendix. In particular, we would like to draw the attention of the readers to two new results, Theorem 5.3 and Lemma 5.9.

The rest of the paper is organized as follows. In Section 2, we derive Fukushima's decomposition. Section 3 is devoted to the transformation formula. In Section 4, we apply our main results to some concrete examples. In these examples the usual Doob-Meyer decomposition and Itô's formula for semi-martingales are not available; nevertheless, we can use our results to perform Fukushima's decomposition and apply the transformation formula in the semi-Dirichlet forms setting. Section 5 is the Appendix, consisting of some results on potential theory and PCAFs in the semi-Dirichlet forms setting.
Fukushima's decomposition
We consider a quasi-regular semi-Dirichlet form $(\mathcal{E}, D(\mathcal{E}))$ on $L^2(E; m)$, where $E$ is a metrizable Lusin space (i.e., topologically isomorphic to a Borel subset of a complete separable metric space) and $m$ is a $\sigma$-finite positive measure on its Borel $\sigma$-algebra $\mathcal{B}(E)$. Denote by $(T_t)_{t\ge0}$ and $(G_\alpha)_{\alpha\ge0}$ (resp. $(\hat T_t)_{t\ge0}$ and $(\hat G_\alpha)_{\alpha\ge0}$) the semigroup and resolvent (resp. co-semigroup and co-resolvent) associated with $(\mathcal{E}, D(\mathcal{E}))$. Let $M = (\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge0}, (X_t)_{t\ge0}, (P_x)_{x\in E_\Delta})$ be an $m$-tight special standard process which is properly associated with $(\mathcal{E}, D(\mathcal{E}))$ in the sense that $P_t f$ is an $\mathcal{E}$-quasi-continuous $m$-version of $T_t f$ for all $f \in \mathcal{B}_b(E) \cap L^2(E; m)$ and all $t > 0$, where $(P_t)_{t\ge0}$ denotes the semigroup associated with $M$ (cf. [13, Theorem 3.8]). Below, for notations and terminology related to quasi-regular semi-Dirichlet forms we refer to [13] and Section 5 of this paper.

Recall that a positive measure $\mu$ on $(E, \mathcal{B}(E))$ is called smooth (w.r.t. $(\mathcal{E}, D(\mathcal{E}))$), denoted by $\mu \in S$, if $\mu(N) = 0$ for each $\mathcal{E}$-exceptional set $N \in \mathcal{B}(E)$ and there exists an $\mathcal{E}$-nest $\{F_k\}$ of compact subsets of $E$ such that $\mu(F_k) < \infty$ for all $k \in \mathbb{N}$.
A family $(A_t)_{t\ge0}$ of functions on $\Omega$ is called an additive functional (AF in short) of $M$ if:

(i) $A_t$ is $\mathcal{F}_t$-measurable for all $t \ge 0$.

(ii) There exist a defining set $\Lambda \in \mathcal{F}$ and an exceptional set $N \subset E$ which is $\mathcal{E}$-exceptional such that $P_x[\Lambda] = 1$ for all $x \in E \setminus N$, $\theta_t(\Lambda) \subset \Lambda$ for all $t > 0$; and for each $\omega \in \Lambda$, $t \mapsto A_t(\omega)$ is right continuous on $(0, \infty)$ and has left limits on $(0, \zeta(\omega))$, $A_0(\omega) = 0$, $|A_t(\omega)| < \infty$ for $t < \zeta(\omega)$, $A_t(\omega) = A_\zeta(\omega)$ for $t \ge \zeta(\omega)$, and
$$A_{t+s}(\omega) = A_t(\omega) + A_s(\theta_t\omega), \quad \forall\, s, t \ge 0. \qquad (2.1)$$

Two AFs $A = (A_t)_{t\ge0}$ and $B = (B_t)_{t\ge0}$ are said to be equivalent, denoted by $A = B$, if they have a common defining set $\Lambda$ and a common exceptional set $N$ such that $A_t(\omega) = B_t(\omega)$ for all $\omega \in \Lambda$ and $t \ge 0$.

An AF $A = (A_t)_{t\ge0}$ is called a continuous AF (CAF in short) if $t \mapsto A_t(\omega)$ is continuous on $(0, \infty)$. It is called a positive continuous AF (PCAF in short) if in addition $A_t(\omega) \ge 0$ for all $t \ge 0$, $\omega \in \Lambda$.
In the theory of Dirichlet forms, it is well known that there is a one-to-one correspondence between the family of all equivalence classes of PCAFs and the family $S$ (cf. [5]). In [3], Fitzsimmons extended the smooth-measure characterization of PCAFs from the Dirichlet forms setting to the semi-Dirichlet forms setting. Applying [3, Proposition 4.12] and following the arguments of [5, Theorems 5.1.3 and 5.1.4] (with slight modifications by virtue of [13, 14, 10] and [1, Theorem 3.4]), we can also obtain a one-to-one correspondence between the family of all equivalence classes of PCAFs and the family $S$. The correspondence, which is referred to as the Revuz correspondence, is described in the following lemma.
Lemma 2.1. Let $A$ be a PCAF. Then there exists a unique $\mu \in S$, referred to as the Revuz measure of $A$ and denoted by $\mu_A$, such that for any $\gamma$-co-excessive function $g$ ($\gamma \ge 0$) in $D(\mathcal{E})$ and $f \in \mathcal{B}^+(E)$,
$$\lim_{t\downarrow0} \frac{1}{t}\, E_{g\cdot m}\big((fA)_t\big) = \langle f\cdot\mu,\ \tilde g\,\rangle. \qquad (2.2)$$
Conversely, let $\mu \in S$; then there exists a unique (up to equivalence) PCAF $A$ such that $\mu = \mu_A$.
See Theorem 5.8 in the Appendix at the end of this paper for more descriptions of the Revuz correspondence (2.2).
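As a numerical illustration of the Revuz correspondence (2.2) — not taken from the paper — consider the simplest symmetric toy case: Brownian motion on the unit circle, with $m$ the normalized uniform measure. Since $m$ is invariant and the process is conservative, the constant function 1 can play the role of the co-excessive weight $g$. For the PCAF $A_t = \int_0^t g(X_s)\,ds$ the Revuz measure is $g\cdot m$, and by stationarity $(1/t)\,E_m[(fA)_t] = \int fg\,dm$ for every $t$. A Monte-Carlo sketch (all concrete choices of $f$, $g$ and the discretization are assumptions made only for this illustration):

```python
# Monte-Carlo illustration (assumed toy model, not from the paper):
# Brownian motion on the circle, m = uniform measure, PCAF A_t = int_0^t g(X_s) ds.
# Revuz correspondence: (1/t) E_m[(f.A)_t] = int f g dm.
import numpy as np

rng = np.random.default_rng(0)
f = lambda th: np.cos(th)**2
g = lambda th: 1.0 + 0.5*np.sin(th)

n_paths, n_steps, t = 20000, 200, 1.0
dt = t / n_steps
theta = rng.uniform(0.0, 2*np.pi, n_paths)      # X_0 ~ m (stationary start)
fA = np.zeros(n_paths)                          # (f.A)_t = int_0^t f(X_s) dA_s
for _ in range(n_steps):
    fA += f(theta) * g(theta) * dt              # dA_s = g(X_s) ds
    theta = (theta + np.sqrt(dt)*rng.normal(size=n_paths)) % (2*np.pi)

lhs = fA.mean() / t                             # (1/t) E_m[(f.A)_t]
grid = np.linspace(0.0, 2*np.pi, 200000, endpoint=False)
rhs = (f(grid) * g(grid)).mean()                # int f g dm  (= 1/2 here)
print(lhs, rhs)
```

The two quantities agree up to Monte-Carlo error; here the exact value of the right-hand side is $1/2$.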
From now on we suppose that $(\mathcal{E}, D(\mathcal{E}))$ is a quasi-regular local semi-Dirichlet form on $L^2(E; m)$. Here "local" means that $\mathcal{E}(u, v) = 0$ for all $u, v \in D(\mathcal{E})$ with $\mathrm{supp}[u] \cap \mathrm{supp}[v] = \emptyset$. Then $(\mathcal{E}, D(\mathcal{E}))$ is properly associated with a diffusion process $M = (\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge0}, (X_t)_{t\ge0}, (P_x)_{x\in E_\Delta})$ (cf. [10, Theorem 4.5]). Here "diffusion" means that $M$ is a right process satisfying
$$P_x\big[t \mapsto X_t \text{ is continuous on } [0, \zeta)\big] = 1 \quad \text{for all } x \in E.$$
Throughout this paper, we fix a function $\phi \in L^2(E; m)$ with $0 < \phi \le 1$ $m$-a.e. and set $h = G_1\phi$, $\hat h = \hat G_1\phi$. Denote $\tau_B := \inf\{t > 0 \mid X_t \notin B\}$ for $B \subset E$.

Let $V$ be a quasi-open subset of $E$. We denote by $X^V = (X^V_t)_{t\ge0}$ the part process of $X$ on $V$ and by $(\mathcal{E}^V, D(\mathcal{E})_V)$ the part form of $(\mathcal{E}, D(\mathcal{E}))$ on $L^2(V; m)$. It is known that $X^V$ is a diffusion process and $(\mathcal{E}^V, D(\mathcal{E})_V)$ is a quasi-regular local semi-Dirichlet form (cf. [10]). Denote by $(T^V_t)_{t\ge0}$, $(\hat T^V_t)_{t\ge0}$, $(G^V_\alpha)_{\alpha\ge0}$ and $(\hat G^V_\alpha)_{\alpha\ge0}$ the semigroup, co-semigroup, resolvent and co-resolvent associated with $(\mathcal{E}^V, D(\mathcal{E})_V)$, respectively. One can easily check that $\hat h|_V$ is 1-co-excessive w.r.t. $(\mathcal{E}^V, D(\mathcal{E})_V)$. Define $\tilde h_V := \hat h|_V \wedge \hat G^V_1\phi$. Then $\tilde h_V \in D(\mathcal{E})_V$ and $\tilde h_V$ is 1-co-excessive. For an AF $A = (A_t)_{t\ge0}$ of $X^V$, we define
$$e_V(A) := \lim_{t\downarrow0} \frac{1}{2t}\, E_{\tilde h_V\cdot m}(A_t^2) \qquad (2.3)$$
whenever the limit exists in $[0, \infty]$. Define
$$\dot{\mathcal{M}}^V := \{M \mid M \text{ is an AF of } X^V,\ E_x(M_t^2) < \infty,\ E_x(M_t) = 0 \text{ for all } t \ge 0 \text{ and } \mathcal{E}\text{-q.e. } x \in V,\ e_V(M) < \infty\},$$
$$\mathcal{N}^V_c := \{N \mid N \text{ is a CAF of } X^V,\ E_x(|N_t|) < \infty \text{ for all } t \ge 0 \text{ and } \mathcal{E}\text{-q.e. } x \in V,\ e_V(N) = 0\},$$
$$\Theta := \{\{V_n\} \mid V_n \text{ is } \mathcal{E}\text{-quasi-open},\ V_n \subset V_{n+1}\ \mathcal{E}\text{-q.e.},\ \forall n \in \mathbb{N},\ \text{and } E = \cup_{n=1}^\infty V_n\ \mathcal{E}\text{-q.e.}\},$$
$$D(\mathcal{E})_{loc} := \{u \mid \exists\, \{V_n\} \in \Theta \text{ and } \{u_n\} \subset D(\mathcal{E}) \text{ such that } u = u_n\ m\text{-a.e. on } V_n,\ \forall n \in \mathbb{N}\}.$$
For our purpose we shall employ the notion of local AFs introduced in [5] as follows.
Two local AFs $A^{(1)}, A^{(2)}$ are said to be equivalent if for $\mathcal{E}$-q.e. $x \in E$ it holds that
$$P_x\big(A^{(1)}_t = A^{(2)}_t;\ t < \zeta\big) = P_x(t < \zeta), \quad \forall t \ge 0.$$
Define
$$\dot{\mathcal{M}}_{loc} := \{M \mid M \text{ is a local AF of } M,\ \exists\, \{V_n\}, \{E_n\} \in \Theta \text{ and } \{M^n \mid M^n \in \dot{\mathcal{M}}^{V_n}\} \text{ such that } E_n \subset V_n,\ M_{t\wedge\tau_{E_n}} = M^n_{t\wedge\tau_{E_n}},\ t \ge 0,\ n \in \mathbb{N}\}$$
and
$$\mathcal{N}_{c,loc} := \{N \mid N \text{ is a local AF of } M,\ \exists\, \{V_n\}, \{E_n\} \in \Theta \text{ and } \{N^n \mid N^n \in \mathcal{N}^{V_n}_c\} \text{ such that } E_n \subset V_n,\ N_{t\wedge\tau_{E_n}} = N^n_{t\wedge\tau_{E_n}},\ t \ge 0,\ n \in \mathbb{N}\}.$$
We use $\mathcal{M}^{[[0,\zeta[[}_{loc}$ to denote the class of those $M \in \dot{\mathcal{M}}_{loc}$ which are martingales on $[[0, \zeta[[$. We put the following assumption:

Assumption 2.3. There exists $\{V_n\} \in \Theta$ such that, for each $n \in \mathbb{N}$, there exist a Dirichlet form $(\eta^{(n)}, D(\eta^{(n)}))$ on $L^2(V_n; m)$ and a constant $C_n > 1$ such that $D(\eta^{(n)}) = D(\mathcal{E})_{V_n}$ and for any $u \in D(\mathcal{E})_{V_n}$,
$$\frac{1}{C_n}\, \eta^{(n)}_1(u, u) \le \mathcal{E}_1(u, u) \le C_n\, \eta^{(n)}_1(u, u).$$

Theorem 2.4. Suppose that Assumption 2.3 holds. Then, for any $u \in D(\mathcal{E})_{loc}$, there exist $M^{[u]} \in \dot{\mathcal{M}}_{loc}$ and $N^{[u]} \in \mathcal{N}_{c,loc}$ such that
$$\tilde u(X_t) - \tilde u(X_0) = M^{[u]}_t + N^{[u]}_t, \quad t \ge 0,\ P_x\text{-a.s. for } \mathcal{E}\text{-q.e. } x \in E. \qquad (2.4)$$
Decomposition (2.4) is unique up to the equivalence of local AFs. Moreover, $M^{[u]} \in \mathcal{M}^{[[0,\zeta[[}_{loc}$.
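To see what such a decomposition asserts in the simplest possible case — a symmetric one, so the dual-process difficulties of the semi-Dirichlet setting do not arise — take Brownian motion on $\mathbb{R}$ and $u(x) = x^2$. Itô's formula gives the decomposition explicitly: the martingale part is $M_t = \int_0^t 2X_s\,dB_s$ and the zero-energy part is $N_t = t$ (indeed $(1/2t)E[N_t^2] = t/2 \to 0$ as $t \downarrow 0$). A numerical sketch, purely illustrative and not from the paper:

```python
# Illustration (assumed toy case): Fukushima/Ito decomposition for Brownian
# motion and u(x) = x^2:  u(X_t) - u(X_0) = M_t + N_t,
# with M_t = int_0^t 2 X_s dB_s (martingale) and N_t = t (zero energy).
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, t = 50000, 400, 1.0
dt = t / n_steps

x0 = rng.normal(size=n_paths)     # square-integrable initial law
x = x0.copy()
M = np.zeros(n_paths)
for _ in range(n_steps):
    dB = np.sqrt(dt) * rng.normal(size=n_paths)
    M += 2.0 * x * dB             # Ito (left-endpoint) stochastic integral
    x += dB
N = t                             # N_t = t: energy (1/2t)E[N_t^2] = t/2 -> 0

resid = (x**2 - x0**2) - (M + N)  # decomposition error, vanishes as dt -> 0
print(np.abs(resid).mean(), M.mean())
```

The residual is the discretization error of the Itô integral, and the empirical mean of $M_t$ is close to zero, as the martingale property requires.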
Before proving Theorem 2.4, we present some lemmas.
We fix a $\{V_n\} \in \Theta$ satisfying Assumption 2.3. Without loss of generality, we assume that $\hat h$ is bounded on each $V_n$; otherwise we may replace $V_n$ by $V_n \cap \{\hat h < n\}$.
To simplify notation, we write $\tilde h_n := \tilde h_{V_n}$.

Next we give Fukushima's decomposition for the part process $X^{V_n}$: for suitable $u$ there exist $M^{n,[u]} \in \dot{\mathcal{M}}^{V_n}$ and $N^{n,[u]} \in \mathcal{N}^{V_n}_c$ such that
$$\tilde u(X^{V_n}_t) - \tilde u(X^{V_n}_0) = M^{n,[u]}_t + N^{n,[u]}_t. \qquad (2.5)$$
To obtain the existence of decomposition (2.5), we start with the special case that $u = R^{V_n}_1 f$ for some bounded Borel function $f \in L^2(V_n; m)$, where $(R^{V_n}_t)_{t\ge0}$ is the resolvent of $X^{V_n}$. Set
$$N^{n,[u]}_t = \int_0^t \big(u(X^{V_n}_s) - f(X^{V_n}_s)\big)\,ds, \qquad M^{n,[u]}_t = u(X^{V_n}_t) - u(X^{V_n}_0) - N^{n,[u]}_t, \quad t \ge 0. \qquad (2.6)$$
Then $N^{n,[u]} \in \mathcal{N}^{V_n}_c$ and $M^{n,[u]} \in \dot{\mathcal{M}}^{V_n}$. In fact,
$$\begin{aligned}
e_{V_n}(N^{n,[u]}) &= \lim_{t\downarrow0} \frac{1}{2t}\, E_{\tilde h_n\cdot m}\Big[\Big(\int_0^t (u-f)(X^{V_n}_s)\,ds\Big)^2\Big] \\
&\le \lim_{t\downarrow0} \frac{1}{2}\, E_{\tilde h_n\cdot m}\Big[\int_0^t (u-f)^2(X^{V_n}_s)\,ds\Big] \\
&= \lim_{t\downarrow0} \frac{1}{2} \Big[\int_0^t \int_{V_n} \tilde h_n\, T^{V_n}_s (u-f)^2\, dm\, ds\Big] \\
&= \lim_{t\downarrow0} \frac{1}{2} \Big[\int_0^t \int_{V_n} (u-f)^2\, \hat T^{V_n}_s \tilde h_n\, dm\, ds\Big] \\
&\le \|u-f\|_\infty \lim_{t\downarrow0} \frac{1}{2} \Big[\int_0^t \int_{V_n} |u-f|\, \hat T^{V_n}_s \tilde h_n\, dm\, ds\Big] \\
&\le \|u-f\|_\infty \lim_{t\downarrow0} \frac{1}{2} \Big[\int_0^t \Big(\int_{V_n} (u-f)^2\, dm\Big)^{1/2} \Big(\int_{V_n} (\hat T^{V_n}_s \tilde h_n)^2\, dm\Big)^{1/2} ds\Big] \\
&\le \|u-f\|_\infty \Big(\int_{V_n} (u-f)^2\, dm\Big)^{1/2} \Big(\int_{V_n} \tilde h_n^2\, dm\Big)^{1/2} \lim_{t\downarrow0} \frac{t}{2} = 0. \qquad (2.7)
\end{aligned}$$
By Assumption 2.3, $u^2 \in D(\mathcal{E})_{V_n,b}$ and $u\tilde h_n \in D(\mathcal{E})_{V_n,b}$. Then, by (2.6), (2.7), [1, Theorem 3.4] and Assumption 2.3, we get
$$\begin{aligned}
e_{V_n}(M^{n,[u]}) &= \lim_{t\downarrow0} \frac{1}{2t}\, E_{\tilde h_n\cdot m}\big[(u(X^{V_n}_t) - u(X^{V_n}_0))^2\big] \\
&= \lim_{t\downarrow0} \Big\{\frac{1}{t}\big(u\tilde h_n,\ u - T^{V_n}_t u\big) - \frac{1}{2t}\big(\tilde h_n,\ u^2 - T^{V_n}_t u^2\big)\Big\} \\
&= \mathcal{E}^{V_n}(u, u\tilde h_n) - \frac{1}{2}\,\mathcal{E}^{V_n}(u^2, \tilde h_n) \\
&\le \mathcal{E}^{V_n}_1(u, u\tilde h_n) \\
&\le K\, \mathcal{E}^{V_n}_1(u, u)^{1/2}\, \mathcal{E}^{V_n}_1(u\tilde h_n, u\tilde h_n)^{1/2} \\
&\le K C_n^{1/2}\, \mathcal{E}^{V_n}_1(u, u)^{1/2}\, \eta^{(n)}_1(u\tilde h_n, u\tilde h_n)^{1/2} \\
&\le K C_n^{1/2}\, \mathcal{E}^{V_n}_1(u, u)^{1/2} \big(\|u\|_\infty\, \eta^{(n)}_1(\tilde h_n, \tilde h_n)^{1/2} + \|\tilde h_n\|_\infty\, \eta^{(n)}_1(u, u)^{1/2}\big) \\
&\le K C_n\, \mathcal{E}^{V_n}_1(u, u)^{1/2} \big(\|u\|_\infty\, \mathcal{E}^{V_n}_1(\tilde h_n, \tilde h_n)^{1/2} + \|\tilde h_n\|_\infty\, \mathcal{E}^{V_n}_1(u, u)^{1/2}\big), \qquad (2.8)
\end{aligned}$$
where $K$ is the continuity constant of $(\mathcal{E}, D(\mathcal{E}))$ (cf. (5.1) in the Appendix).
Next, take any bounded Borel function $u \in D(\mathcal{E})_{V_n}$. Define
$$u_l = l R^{V_n}_{l+1} u = R^{V_n}_1 g_l, \qquad g_l = l\,\big(u - l R^{V_n}_{l+1} u\big).$$
By the uniqueness of decomposition (2.5) for the $u_l$'s, we have $M^{n,[u_l]} - M^{n,[u_k]} = M^{n,[u_l - u_k]}$. Then, by (2.8), we get
$$e_{V_n}\big(M^{n,[u_l]} - M^{n,[u_k]}\big) = e_{V_n}\big(M^{n,[u_l - u_k]}\big) \le K C_n\, \mathcal{E}^{V_n}_1(u_l - u_k, u_l - u_k)^{1/2} \big(\|u_l - u_k\|_\infty\, \mathcal{E}^{V_n}_1(\tilde h_n, \tilde h_n)^{1/2} + \|\tilde h_n\|_\infty\, \mathcal{E}^{V_n}_1(u_l - u_k, u_l - u_k)^{1/2}\big).$$
Since the $u_l \in D(\mathcal{E})_{V_n}$ are bounded by $\|u\|_\infty$ and $\mathcal{E}^{V_n}_1$-convergent to $u$, we conclude that $\{M^{n,[u_l]}\}$ is an $e_{V_n}$-Cauchy sequence in the space $\dot{\mathcal{M}}^{V_n}$. Define
$$M^{n,[u]} = \lim_{l\to\infty} M^{n,[u_l]} \ \text{ in } (\dot{\mathcal{M}}^{V_n}, e_{V_n}), \qquad N^{n,[u]}_t = \tilde u(X^{V_n}_t) - \tilde u(X^{V_n}_0) - M^{n,[u]}_t.$$
Then $M^{n,[u]} \in \dot{\mathcal{M}}^{V_n}$ by Lemma 2.5.
It only remains to show that $N^{n,[u]} \in \mathcal{N}^{V_n}_c$. By Lemma 5.6 in the Appendix and Lemma 2.5, there exists a subsequence $\{l_k\}$ such that for $\mathcal{E}$-q.e. $x \in V_n$,
$$P_x\big(N^{n,[u_{l_k}]} \text{ converges to } N^{n,[u]} \text{ uniformly on each compact interval of } [0, \infty)\big) = 1.$$
From this and (2.6), we know that $N^{n,[u]}$ is a CAF. On the other hand, writing $A^{n,[u-u_l]}_t := \widetilde{(u-u_l)}(X^{V_n}_t) - \widetilde{(u-u_l)}(X^{V_n}_0)$, we have
$$N^{n,[u]}_t = A^{n,[u-u_l]}_t - \big(M^{n,[u]}_t - M^{n,[u_l]}_t\big) + N^{n,[u_l]}_t,$$
so that
$$e_{V_n}(N^{n,[u]}) \le 3\, e_{V_n}(A^{n,[u-u_l]}) + 3\, e_{V_n}\big(M^{n,[u]} - M^{n,[u_l]}\big),$$
which can be made arbitrarily small by taking $l$ large, by (2.8). Therefore $e_{V_n}(N^{n,[u]}) = 0$ and $N^{n,[u]} \in \mathcal{N}^{V_n}_c$.
We now fix a $u \in D(\mathcal{E})_{loc}$. Then there exist $\{V^1_n\} \in \Theta$ and $\{u_n\} \subset D(\mathcal{E})$ such that $u = u_n$ $m$-a.e. on $V^1_n$. By [13, Proposition 3.6], we may assume without loss of generality that each $u_n$ is $\mathcal{E}$-quasi-continuous. By [13, Proposition 2.16], there exists an $\mathcal{E}$-nest $\{F^2_n\}$ of compact subsets of $E$ such that $\{u_n\} \subset C\{F^2_n\}$. Denote by $V^2_n$ the fine interior of $F^2_n$ for $n \in \mathbb{N}$. Then $\{V^2_n\} \in \Theta$. Define $V'_n = V^1_n \cap V^2_n$. Then $\{V'_n\} \in \Theta$ and each $u_n$ is bounded on $V'_n$. To simplify notation, we still use $V_n$ to denote $V_n \cap V'_n$ for $n \in \mathbb{N}$.

For $n \in \mathbb{N}$, we define $E_n = \{x \in E \mid h_n(x) > \frac{1}{n}\}$, where $h_n := G^{V_n}_1\phi$. Then $\{E_n\} \in \Theta$ and satisfies $\bar E^{\mathcal{E}}_n \subset E_{n+1}$ $\mathcal{E}$-q.e. and $E_n \subset V_n$ $\mathcal{E}$-q.e. for each $n \in \mathbb{N}$ (cf. [10, Lemma 3.8]). Here $\bar E^{\mathcal{E}}_n$ denotes the $\mathcal{E}$-quasi-closure of $E_n$. Define $f_n = (n h_n) \wedge 1$. Then $f_n = 1$ on $E_n$ and $f_n = 0$ on $V_n^c$. Since $f_n$ is a 1-excessive function of $(\mathcal{E}^{V_n}, D(\mathcal{E})_{V_n})$ and $f_n \le n h_n \in D(\mathcal{E})_{V_n}$, we have $f_n \in D(\mathcal{E})_{V_n}$ by [14, Remark 3.4(ii)]. Denote by $Q_n$ the bound of $|u_n|$ on $V_n$. Then
$$u_n f_n = \big(((-Q_n) \vee u_n) \wedge Q_n\big) f_n \in D(\eta^{(n)})_{V_n,b} = D(\mathcal{E})_{V_n,b}.$$
For $n \in \mathbb{N}$, we denote by $\{\mathcal{F}^n_t\}$ the filtration of $X^{V_n}$; one checks that $\{M^{l,[u_nf_n]}_t\}$ is a $\{\Upsilon^{n,l}_t\}$-martingale. By the assumption that $M$ is a diffusion, the fact that $f_n$ is quasi-continuous and $f_n = 1$ on $E_n$, we get $f_n(X_{s\wedge\tau_{E_n}}) = 1$ if $0 < s\wedge\tau_{E_n} < \zeta$. Hence $X_{s\wedge\tau_{E_n}} \in V_n$ if $0 < s\wedge\tau_{E_n} < \zeta$, since $f_n = 0$ on $V_n^c$. Therefore
$$X^{V_l}_{s\wedge\tau_{E_n}} = X_{s\wedge\tau_{E_n}} = X^{V_n}_{s\wedge\tau_{E_n}}, \quad P_x\text{-a.s. for } \mathcal{E}\text{-q.e. } x \in V_n, \qquad (2.9)$$
which implies that $\{M^{l,[u_nf_n]}_{t\wedge\tau_{E_n}}\}$ is a $\{\Upsilon^n_t\}$-martingale. Let $N \in \mathcal{N}^{V_j}_c$ for some $j \in \mathbb{N}$.
Then, for any $T > 0$,
$$\sum_{k=1}^{[rT]} E_{\tilde h_j\cdot m}\big[(N_{\frac{k+1}{r}} - N_{\frac{k}{r}})^2\big] \le \sum_{k=1}^{[rT]} e^T \big(E_\cdot(N^2_{1/r}),\ e^{-k/r}\,\hat T^{V_j}_{k/r} \tilde h_j\big) \le \sum_{k=1}^{[rT]} e^T \big(E_\cdot(N^2_{1/r}),\ \tilde h_j\big) \le rT\, e^T E_{\tilde h_j\cdot m}\big(N^2_{1/r}\big) \to 0 \ \text{ as } r \to \infty.$$
Hence
$$\sum_{k=1}^{[rT]} \big(N_{\frac{k+1}{r}} - N_{\frac{k}{r}}\big)^2 \to 0, \quad r \to \infty, \ \text{ in } P_m,$$
which implies that the quadratic variation process of $N$ w.r.t. $P_m$ is 0.

By [10, Proposition 3.3], $(\widehat{G_1\phi})^{V_n^c} = \hat G_1\phi - \hat G^{V_n}_1\phi$. Since $V_n^c \supset V_l^c$, $(\widehat{G_1\phi})^{V_n^c} \ge (\widehat{G_1\phi})^{V_l^c}$. Then $\hat G^{V_n}_1\phi \le \hat G^{V_l}_1\phi$ and thus
$$\tilde h_n \le \tilde h_l. \qquad (2.10)$$
Therefore
$$e_{V_n}(A) \le e_{V_l}(A) \qquad (2.11)$$
for any AF $A = (A_t)_{t\ge0}$ of $X^{V_n}$. Note that
$$N^{l,[u_nf_n]}_{t\wedge\tau_{E_n}} = \widetilde{u_nf_n}(X^{V_l}_{t\wedge\tau_{E_n}}) - \widetilde{u_nf_n}(X^{V_l}_0) - M^{l,[u_nf_n]}_{t\wedge\tau_{E_n}} \in \Upsilon^{n,l}_t = \Upsilon^n_t \subset \mathcal{F}^n_{t\wedge\tau_{E_n}},$$
and
$$e_{V_n}\big((N^{l,[u_nf_n]}_{t\wedge\tau_{E_n}})_{t\ge0}\big) \le e_{V_l}\big((N^{l,[u_nf_n]}_{t\wedge\tau_{E_n}})_{t\ge0}\big) = 0.$$
Hence $(N^{l,[u_nf_n]}_{t\wedge\tau_{E_n}})_{t\ge0} \in \mathcal{N}^{V_n}_c$, which implies that the quadratic variation process of $\{N^{l,[u_nf_n]}_{t\wedge\tau_{E_n}}\}$ w.r.t. $P_m$ is 0. Since for $\mathcal{E}$-q.e. $x \in V_n$, by (2.9),
$$M^{n,[u_nf_n]}_{t\wedge\tau_{E_n}} + N^{n,[u_nf_n]}_{t\wedge\tau_{E_n}} = \widetilde{u_nf_n}(X^{V_n}_{t\wedge\tau_{E_n}}) - \widetilde{u_nf_n}(X^{V_n}_0) = \widetilde{u_nf_n}(X^{V_l}_{t\wedge\tau_{E_n}}) - \widetilde{u_nf_n}(X^{V_l}_0) = M^{l,[u_nf_n]}_{t\wedge\tau_{E_n}} + N^{l,[u_nf_n]}_{t\wedge\tau_{E_n}}, \quad P_x\text{-a.s.},$$
the difference $M^{n,[u_nf_n]}_{\cdot\wedge\tau_{E_n}} - M^{l,[u_nf_n]}_{\cdot\wedge\tau_{E_n}}$ is a continuous martingale with vanishing quadratic variation, $P_x$-a.s. for $m$-a.e. $x \in V_n$. This implies that
$$E_m\big(\langle M^{n,[u_nf_n]}_{\cdot\wedge\tau_{E_n}} - M^{l,[u_nf_n]}_{\cdot\wedge\tau_{E_n}}\rangle_t\big) = 0, \quad \forall t \ge 0.$$
Then, by Theorem 5.8(i) in the Appendix, $M^{n,[u_nf_n]}_{t\wedge\tau_{E_n}} = M^{l,[u_nf_n]}_{t\wedge\tau_{E_n}}$ for all $t \ge 0$, $P_x$-a.s. for $\mathcal{E}$-q.e. $x \in V_n$. Hence also $N^{n,[u_nf_n]}_{t\wedge\tau_{E_n}} = N^{l,[u_nf_n]}_{t\wedge\tau_{E_n}}$ for all $t \ge 0$, $P_x$-a.s. for $\mathcal{E}$-q.e. $x \in V_n$.

Since $u_nf_n = u_lf_l = u$ on $E_n$, similarly to [11, Lemma 2.4] we can show that $M^{l,[u_nf_n]}_t = M^{l,[u_lf_l]}_t$ for $t < \tau_{E_n}$, $P_x$-a.s. for $\mathcal{E}$-q.e. $x \in V_l$. By the quasi-continuity of $u_nf_n$, $u_lf_l$ and the assumption that $M$ is a diffusion, one finds that $M^{l,[u_nf_n]}$ and $M^{l,[u_lf_l]}$ are continuous on $[0, \zeta)$, $P_x$-a.s. for $\mathcal{E}$-q.e. $x \in V_l$; if $\tau_{E_n} = \zeta$ we use in addition the fact that $\widetilde{u_nf_n}(X^{V_l}_\zeta) = \widetilde{u_lf_l}(X^{V_l}_\zeta)$. Hence, if $\tau_{E_n} < \zeta$ we have $M^{l,[u_nf_n]}_{\tau_{E_n}} = M^{l,[u_lf_l]}_{\tau_{E_n}}$. Therefore
$$M^{n,[u_nf_n]}_{t\wedge\tau_{E_n}} = M^{l,[u_lf_l]}_{t\wedge\tau_{E_n}} \quad \text{and} \quad N^{n,[u_nf_n]}_{t\wedge\tau_{E_n}} = N^{l,[u_lf_l]}_{t\wedge\tau_{E_n}}, \quad t \ge 0,\ P_x\text{-a.s. for } \mathcal{E}\text{-q.e. } x \in V_n.$$

Proof of Theorem 2.4. We define $M^{[u]}_{t\wedge\tau_{E_n}} := \lim_{l\to\infty} M^{l,[u_lf_l]}_{t\wedge\tau_{E_n}}$, and set $M^{[u]}_t := 0$ for $t > \zeta$ if there exists some $n$ such that $\tau_{E_n} = \zeta$ and $\zeta < \infty$, and $M^{[u]}_t := 0$ for $t \ge \zeta$ otherwise. By Lemma 2.7, $M^{[u]}$ is well defined. Define $M^n_t := M^{n+1,[u_{n+1}f_{n+1}]}_{t\wedge\tau_{E_n}}$ for $t \ge 0$ and $n \in \mathbb{N}$. Then $M^{[u]}_{t\wedge\tau_{E_n}} = M^n_{t\wedge\tau_{E_n}}$ $P_x$-a.s. for $\mathcal{E}$-q.e. $x \in V_{n+1}$ by Lemma 2.7.
Since $\bar E^{\mathcal{E}}_n \subset E_{n+1} \subset V_{n+1}$ $\mathcal{E}$-q.e. implies that $P_x(\tau_{E_n} = 0) = 1$ for $x \notin V_{n+1}$, we get $M^{[u]}_{t\wedge\tau_{E_n}} = M^n_{t\wedge\tau_{E_n}}$ $P_x$-a.s. for $\mathcal{E}$-q.e. $x \in E$. Similarly to (2.10) and (2.11), we can show that $e_{V_n}(M^n) \le e_{V_{n+1}}(M^n)$ for each $n \in \mathbb{N}$. Then $M^n \in \dot{\mathcal{M}}^{V_n}$ and hence $M^{[u]} \in \dot{\mathcal{M}}_{loc}$. Define $N^{[u]}_t = \tilde u(X_t) - \tilde u(X_0) - M^{[u]}_t$. Then we have $N^{[u]}_{t\wedge\tau_{E_n}} = \lim_{l\to\infty} N^{l,[u_lf_l]}_{t\wedge\tau_{E_n}}$, and moreover $N^{[u]} \in \mathcal{N}_{c,loc}$. Next we show that $M^n$ is also an $\{\mathcal{F}_t\}$-martingale, which implies that $M^{[u]} \in \mathcal{M}^{[[0,\zeta[[}_{loc}$.
In fact, since $\tau_{E_n}$ is an $\{\mathcal{F}^{n+1}_t\}$-stopping time, $I_{\{\tau_{E_n} \le s\}}$ is $\mathcal{F}^{n+1}_{s\wedge\tau_{E_n}}$-measurable for any $s \ge 0$. Let $0 \le s_1 < \dots < s_k \le s < t$ and $g \in \mathcal{B}_b(\mathbb{R}^k)$. Then we obtain, by (2.9) and the fact that $M^{n+1,[u_{n+1}f_{n+1}]} \in \dot{\mathcal{M}}^{V_{n+1}}$, that for $\mathcal{E}$-q.e. $x \in V_{n+1}$,
$$\begin{aligned}
\int_\Omega M^n_t\, g(X_{s_1}, \dots, X_{s_k})\, dP_x &= \int_{\{\tau_{E_n} \le s\}} M^n_t\, g(X_{s_1}, \dots, X_{s_k})\, dP_x + \int_{\{\tau_{E_n} > s\}} M^n_t\, g(X_{s_1}, \dots, X_{s_k})\, dP_x \\
&= \int_{\{\tau_{E_n} \le s\}} M^n_s\, g(X_{s_1}, \dots, X_{s_k})\, dP_x + \int_\Omega M^{n+1,[u_{n+1}f_{n+1}]}_{t\wedge\tau_{E_n}}\, g\big(X^{V_{n+1}}_{s_1\wedge\tau_{E_n}}, \dots, X^{V_{n+1}}_{s_k\wedge\tau_{E_n}}\big)\, I_{\{\tau_{E_n} > s\}}\, dP_x \\
&= \int_{\{\tau_{E_n} \le s\}} M^n_s\, g(X_{s_1}, \dots, X_{s_k})\, dP_x + \int_\Omega M^{n+1,[u_{n+1}f_{n+1}]}_{s\wedge\tau_{E_n}}\, g\big(X^{V_{n+1}}_{s_1\wedge\tau_{E_n}}, \dots, X^{V_{n+1}}_{s_k\wedge\tau_{E_n}}\big)\, I_{\{\tau_{E_n} > s\}}\, dP_x \\
&= \int_{\{\tau_{E_n} \le s\}} M^n_s\, g(X_{s_1}, \dots, X_{s_k})\, dP_x + \int_{\{\tau_{E_n} > s\}} M^n_s\, g(X_{s_1}, \dots, X_{s_k})\, dP_x \\
&= \int_\Omega M^n_s\, g(X_{s_1}, \dots, X_{s_k})\, dP_x.
\end{aligned}$$
Obviously, the equality also holds for $x \notin V_{n+1}$. Therefore, $M^n$ is an $\{\mathcal{F}_t\}$-martingale.
Finally, we prove the uniqueness of decomposition (2.4). Suppose that $M^1 \in \dot{\mathcal{M}}_{loc}$ and $N^1 \in \mathcal{N}_{c,loc}$ are such that
$$\tilde u(X_t) - \tilde u(X_0) = M^1_t + N^1_t, \quad t \ge 0,\ P_x\text{-a.s. for } \mathcal{E}\text{-q.e. } x \in E.$$
Then there exists $\{E_n\} \in \Theta$ such that, for each $n \in \mathbb{N}$, $(M^{[u]} - M^1)I_{[[0,\tau_{E_n}]]}$ is a continuous martingale with vanishing quadratic variation, i.e.
$$\big\langle (M^{[u]} - M^1)I_{[[0,\tau_{E_n}]]} \big\rangle_t = 0, \quad \forall t \in [0, \infty),\ P_x\text{-a.s. for } \mathcal{E}\text{-q.e. } x \in E.$$
Therefore $M^{[u]}_t = M^1_t$ for $0 \le t \le \tau_{E_n}$, $P_x$-a.s. for $\mathcal{E}$-q.e. $x \in E$. Since $n$ is arbitrary, we obtain the uniqueness of decomposition (2.4) up to the equivalence of local AFs.
Transformation formula
In this section, we adopt the setting of Section 2. Suppose that $(\mathcal{E}, D(\mathcal{E}))$ is a quasi-regular local semi-Dirichlet form on $L^2(E; m)$ satisfying Assumption 2.3. We fix a $\{V_n\} \in \Theta$ satisfying Assumption 2.3 and such that $\hat h$ is bounded on each $V_n$. Let $X^{V_n}$, $(\mathcal{E}^{V_n}, D(\mathcal{E})_{V_n})$, $\tilde h_n$, etc. be the same as in Section 2. For $u \in D(\mathcal{E})_{V_n,b}$, we denote by $\mu^{(n)}_{\langle u \rangle}$ the Revuz measure of $\langle M^{n,[u]} \rangle$, and for $u, v \in D(\mathcal{E})_{V_n,b}$ we set
$$\mu^{(n)}_{\langle u,v \rangle} := \frac{1}{2}\big(\mu^{(n)}_{\langle u+v \rangle} - \mu^{(n)}_{\langle u \rangle} - \mu^{(n)}_{\langle v \rangle}\big). \qquad (3.1)$$

Lemma 3.1. Let $u, v, f \in D(\mathcal{E})_{V_n,b}$. Then
$$\int_{V_n} \tilde f\, d\mu^{(n)}_{\langle u,v \rangle} = \mathcal{E}(u, vf) + \mathcal{E}(v, uf) - \mathcal{E}(uv, f). \qquad (3.2)$$
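Before turning to the proof, it may help to see what (3.2) says in the classical symmetric case. For the Dirichlet form of one-dimensional Brownian motion, $\mathcal{E}(u,v) = \frac{1}{2}\int u'v'\,dx$, the energy measure is $d\mu_{\langle u,v\rangle} = u'v'\,dx$, and the identity can be checked symbolically for concrete rapidly decaying functions (an illustration under these assumptions, not part of the paper's argument):

```python
# Symbolic check of identity (3.2) for the Dirichlet form of 1-d Brownian motion,
# E(u,v) = (1/2) int u'v' dx, whose energy measure is d mu_<u,v> = u'v' dx.
# The Gaussian test functions below are assumed illustrative choices.
import sympy as sp

x = sp.symbols('x', real=True)
u = sp.exp(-x**2)
v = sp.exp(-2*x**2)
f = sp.exp(-x**2)

def E(a, b):
    # Dirichlet form of Brownian motion on R
    return sp.Rational(1, 2) * sp.integrate(sp.diff(a, x) * sp.diff(b, x), (x, -sp.oo, sp.oo))

lhs = E(u, v*f) + E(v, u*f) - E(u*v, f)
rhs = sp.integrate(f * sp.diff(u, x) * sp.diff(v, x), (x, -sp.oo, sp.oo))
print(sp.simplify(lhs - rhs))  # 0
```

Both sides evaluate to $\sqrt{\pi}/2$ for these choices, confirming the identity in this special case.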
Proof. By the polarization identity, the validity of (3.2) for all $u, v, f \in D(\mathcal{E})_{V_n,b}$ is equivalent to
$$\int_{V_n} \tilde f\, d\mu^{(n)}_{\langle u \rangle} = 2\,\mathcal{E}(u, uf) - \mathcal{E}(u^2, f), \quad \forall u, f \in D(\mathcal{E})_{V_n,b}. \qquad (3.3)$$
Below, we will prove (3.3). Without loss of generality, we assume that $f \ge 0$.

For $k, l \in \mathbb{N}$, we define $f_k := f \wedge (k\tilde h_n)$ and $f_{k,l} := l\hat G^{V_n}_{l+1} f_k$. By [14, (3.9)],
$$f_k \in D(\mathcal{E})_{V_n,b} \quad \text{and} \quad \mathcal{E}_1(f_k, f_k) \le \mathcal{E}_1(f, f_k). \qquad (3.4)$$
By [12, Proposition III.1.2], $f_{k,l}$ is $(l+1)$-co-excessive. Since $\tilde h_n$ is 1-co-excessive,
$$0 \le f_{k,l} \le k\tilde h_n. \qquad (3.5)$$
Hence $f_{k,l} \in D(\mathcal{E})_{V_n,b}$, noting that $\tilde h_n$ is bounded.
Note that by (3.5),
$$\lim_{t\downarrow0} \frac{1}{t}\, E_{f_{k,l}\cdot m}\big[(N^{n,[u]}_t)^2\big] \le k \lim_{t\downarrow0} \frac{1}{t}\, E_{\tilde h_n\cdot m}\big[(N^{n,[u]}_t)^2\big] = 2k\, e_{V_n}(N^{n,[u]}) = 0. \qquad (3.6)$$
Then, by Theorem 5.8(i) in the Appendix and (3.6), we get
$$\begin{aligned}
\int_{V_n} f_{k,l}\, d\mu^{(n)}_{\langle u \rangle} &= \lim_{t\downarrow0} \frac{1}{t}\, E_{f_{k,l}\cdot m}\big[\langle M^{n,[u]} \rangle_t\big] = \lim_{t\downarrow0} \frac{1}{t}\, E_{f_{k,l}\cdot m}\big[(\tilde u(X^{V_n}_t) - \tilde u(X^{V_n}_0))^2\big] \\
&= \lim_{t\downarrow0} \frac{2}{t}\big(uf_{k,l},\ u - P^{V_n}_t u\big) - \lim_{t\downarrow0} \frac{1}{t}\big(f_{k,l},\ u^2 - P^{V_n}_t u^2\big) = 2\,\mathcal{E}(u, uf_{k,l}) - \mathcal{E}(u^2, f_{k,l}).
\end{aligned}$$
Letting $l \to \infty$ and then $k \to \infty$, we obtain (3.3).

We write $M^{n,[u]} = M^{n,[u],c} + M^{n,[u],k}$ for the decomposition of $M^{n,[u]}$ into its continuous and killing parts, where
$$M^{n,[u],k}_t = -\tilde u(X^{V_n}_{\zeta^{(n)}-})\, I_{\{\zeta^{(n)} \le t\}} - \big(-\tilde u(X^{V_n}_{\zeta^{(n)}-})\, I_{\{\zeta^{(n)} \le t\}}\big)^p,$$
where $\zeta^{(n)}$ denotes the life time of $X^{V_n}$ and $p$ denotes the dual predictable projection, and correspondingly
$$\mu^{(n)}_{\langle u \rangle} = \mu^{n,c}_{\langle u \rangle} + \mu^{n,k}_{\langle u \rangle}. \qquad (3.9)$$
Let $(N^{(n)}(x, dy), H^{(n)})$ be a Lévy system of $X^{V_n}$ and let $\nu^{(n)}$ be the Revuz measure of $H^{(n)}$. Define $K^{(n)}(dx) := N^{(n)}(x, \Delta)\,\nu^{(n)}(dx)$. Then
$$\langle M^{n,[u],k} \rangle_t = \big(\tilde u^2(X^{V_n}_{\zeta^{(n)}-})\, I_{\{\zeta^{(n)} \le t\}}\big)^p = \int_0^t \tilde u^2(X^{V_n}_s)\, N^{(n)}(X^{V_n}_s, \Delta)\, dH^{(n)}_s \qquad (3.10)$$
and
$$\mu^{n,k}_{\langle u \rangle}(dx) = \tilde u^2(x)\, K^{(n)}(dx).$$
By a Banach-Saks argument we can choose a subsequence $\{v_{k_l}\}_{l\in\mathbb{N}}$ of $\{v_k\}_{k\in\mathbb{N}}$ such that $u_kw \to uw$ in $D(\mathcal{E})_{V_n}$ as $k \to \infty$, where $u_k := \frac{1}{k}\sum_{l=1}^k v_{k_l}$. Note that $u_k \to u$ in $D(\mathcal{E})_{V_n}$ as $k \to \infty$ and $\|u_k\|_\infty \le \|u\|_\infty$ for $k \in \mathbb{N}$. Moreover, $\|L^{V_n} u_k\|_\infty < \infty$ for $k \in \mathbb{N}$, and
$$\sup_{k\ge1}\big[\mathcal{E}(u_kfw, u_kfw) + \mathcal{E}(u_k^2f, u_k^2f) + \mathcal{E}(u_kf, u_kf)\big] < \infty.$$
Then we obtain by [12, Lemma I.2.12] that $u_kfw \to ufw$, $u_k^2f \to u^2f$ and $u_kf \to uf$ weakly in $D(\mathcal{E})_{V_n}$ as $k \to \infty$. Hence, by (3.2) and the fact that $\sup_{k\ge1}[\mathcal{E}(u_kfw, u_kfw) + \mathcal{E}(u_kf, u_kf)] < \infty$, we get
$$\begin{aligned}
\int_{V_n} \tilde f\tilde u\, d\mu^{(n)}_{\langle u,w \rangle} &= \mathcal{E}(u, ufw) + \mathcal{E}(w, u^2f) - \mathcal{E}(uw, uf) \\
&= \lim_{k\to\infty}\big[\mathcal{E}(u, u_kfw) + \mathcal{E}(w, u_k^2f) - \mathcal{E}(uw, u_kf)\big] \\
&= \lim_{k\to\infty}\big[\mathcal{E}(u_k, u_kfw) + \mathcal{E}(w, u_k^2f) - \mathcal{E}(u_kw, u_kf)\big] \\
&= \lim_{k\to\infty} \int_{V_n} \tilde f\tilde u_k\, d\mu^{(n)}_{\langle u_k,w \rangle}. \qquad (3.16)
\end{aligned}$$
Similarly,
$$\sup_{k\ge1}\big[\mathcal{E}(u_k^2, u_k^2) + \mathcal{E}(u_k^2f, u_k^2f) + \mathcal{E}(u_k^2w, u_k^2w)\big] < \infty,$$
so we obtain by [12, Lemma I.2.12] that $u_k^2 \to u^2$, $u_k^2f \to u^2f$ and $u_k^2w \to u^2w$ weakly in $D(\mathcal{E})_{V_n}$ as $k \to \infty$. Hence by (3.2) we get
$$\begin{aligned}
\int_{V_n} \tilde f\, d\mu^{(n)}_{\langle u^2,w \rangle} &= \mathcal{E}(u^2, fw) + \mathcal{E}(w, u^2f) - \mathcal{E}(u^2w, f) \\
&= \lim_{k\to\infty}\big[\mathcal{E}(u_k^2, fw) + \mathcal{E}(w, u_k^2f) - \mathcal{E}(u_k^2w, f)\big] \\
&= \lim_{k\to\infty} \int_{V_n} \tilde f\, d\mu^{(n)}_{\langle u_k^2,w \rangle}. \qquad (3.17)
\end{aligned}$$
By (3.16), (3.17) and the dominated convergence theorem, to prove (3.15) we may assume without loss of generality that $u$ equals some $u_k$. Moreover, we assume without loss of generality that $f \ge 0$.
For $k, l \in \mathbb{N}$, we define $f_k := f \wedge (k\tilde h_n)$ and $f_{k,l} := l\hat G^{V_n}_{l+1} f_k$. By [14, (3.9)], $f_k \in D(\mathcal{E})_{V_n,b}$; by [12, Proposition III.1.2], $f_{k,l}$ is $(l+1)$-co-excessive. Since $\tilde h_n$ is 1-co-excessive, $0 \le f_{k,l} \le k\tilde h_n$. Hence $f_{k,l} \in D(\mathcal{E})_{V_n,b}$, noting that $\tilde h_n$ is bounded. By the dominated convergence theorem, to prove that (3.15) holds for any $f \in D(\mathcal{E})_{V_n,b}$, it suffices to prove that (3.15) holds for any $f_{k,l}$.

Below, we will prove (3.15) for $u = u_k$ and $f = f_{k,l}$.
Note that
$$\begin{aligned}
\lim_{t\downarrow0} \frac{1}{t}\, E_{f_{k,l}\cdot m}\big[\langle M^{n,[u_k^2]}, M^{n,[w]} \rangle_t\big] &= \lim_{t\downarrow0} \frac{1}{t}\, E_{f_{k,l}\cdot m}\big[\big(\widetilde{u_k^2}(X^{V_n}_t) - \widetilde{u_k^2}(X^{V_n}_0)\big)\big(\tilde w(X^{V_n}_t) - \tilde w(X^{V_n}_0)\big)\big] \\
&= \lim_{t\downarrow0} \frac{2}{t}\, E_{(f_{k,l}\tilde u_k)\cdot m}\big[\big(\tilde u_k(X^{V_n}_t) - \tilde u_k(X^{V_n}_0)\big)\big(\tilde w(X^{V_n}_t) - \tilde w(X^{V_n}_0)\big)\big] \\
&\quad + \lim_{t\downarrow0} \frac{1}{t}\, E_{f_{k,l}\cdot m}\big[\big(\tilde u_k(X^{V_n}_t) - \tilde u_k(X^{V_n}_0)\big)^2\big(\tilde w(X^{V_n}_t) - \tilde w(X^{V_n}_0)\big)\big] \\
&:= \lim_{t\downarrow0}\,[\,I(t) + II(t)\,].
\end{aligned}$$
For the first term,
$$I(t) = \frac{2}{t}\, E_{(f_{k,l}\tilde u_k)\cdot m}\big(\langle M^{n,[u_k]}, M^{n,[w]} \rangle_t\big) = \frac{2}{t} \int_0^t \big\langle \mu^{(n)}_{\langle u_k,w \rangle},\ \hat T^{V_n}_s(f_{k,l}\tilde u_k) \big\rangle\, ds = \frac{2}{t} \int_0^t \big[\mathcal{E}\big(u_k,\ w\,\hat T^{V_n}_s(f_{k,l}u_k)\big) + \mathcal{E}\big(w,\ u_k\,\hat T^{V_n}_s(f_{k,l}u_k)\big) - \mathcal{E}\big(u_kw,\ \hat T^{V_n}_s(f_{k,l}u_k)\big)\big]\, ds,$$
while
$$II(t) = \frac{1}{t}\, E_{f_{k,l}\cdot m}\big[(M^{n,[u_k],c}_t)^2 M^{n,[w],c}_t\big] + \frac{1}{t}\, E_{f_{k,l}\cdot m}\big[(M^{n,[u_k],k}_t)^2 M^{n,[w],k}_t\big] + \dots,$$
and a typical term of the continuous part is estimated by the Cauchy-Schwarz inequality as
$$\dots \le \Big(\lim_{t\downarrow0}\frac{1}{t}\, E_{f_{k,l}\cdot m}\big[(M^{n,[u_k],c}_t)^4\big]\Big)^{1/2} \Big(\lim_{t\downarrow0} \frac{1}{t}\, E_{f_{k,l}\cdot m}\big[\langle M^{n,[v],c} \rangle_t\big]\Big)^{1/2} \le C\big(2k\, e_{V_n}(M^{n,[v]})\big)^{1/2} \Big(\lim_{t\downarrow0} \frac{1}{t}\, E_{f_{k,l}\cdot m}\big[\langle M^{n,[u_k],c} \rangle_t^2\big]\Big)^{1/2} \qquad (3.23)$$
for some constant $C > 0$ which is independent of $t$.

By Theorem 5.8(i) in the Appendix, for any $\delta > 0$ we get
$$\begin{aligned}
\lim_{t\downarrow0} \frac{1}{t}\, E_{f_{k,l}\cdot m}\big[\langle M^{n,[u_k],c} \rangle_t^2\big] &= \lim_{t\downarrow0} \frac{2}{t}\, E_{f_{k,l}\cdot m}\Big[\int_0^t \langle M^{n,[u_k],c} \rangle_{(t-s)} \circ \theta_s\, d\langle M^{n,[u_k],c} \rangle_s\Big] \\
&= \lim_{t\downarrow0} \frac{2}{t}\, E_{f_{k,l}\cdot m}\Big[\int_0^t E_{X^{V_n}_s}\big[\langle M^{n,[u_k],c} \rangle_{(t-s)}\big]\, d\langle M^{n,[u_k],c} \rangle_s\Big] \\
&\le 2\,\big\langle E_\cdot\big[\langle M^{n,[u_k]} \rangle_\delta\big] \cdot \mu^{(n)}_{\langle u_k \rangle},\ f_{k,l} \big\rangle. \qquad (3.24)
\end{aligned}$$
Note that by our choice of u_k, there exists a constant C_k > 0 such that

E_x(< M^{n,[u_k]} >_δ) = E_x[(M^{n,[u_k]}_δ)²] = E_x[(ũ_k(X^{V_n}_δ) − ũ_k(X^{V_n}_0) − ∫_0^δ L^{V_n}u_k(X^{V_n}_s) ds)²] ≤ C_k.

For the killing part,

IV(t) = (1/t) E_{f_{k,l}·m}[I_{ζ^{(n)}≤t}{−(ũ_k²w̃)(X^{V_n}_{ζ^{(n)}−}) + 2(ũ_kw̃)(X^{V_n}_{ζ^{(n)}−})(ũ_k(X^{V_n}_{ζ^{(n)}−})I_{ζ^{(n)}≤t})^p + (ũ_k²)(X^{V_n}_{ζ^{(n)}−})(w̃(X^{V_n}_{ζ^{(n)}−})I_{ζ^{(n)}≤t})^p}]
  = (1/t) E_{f_{k,l}·m}[−((ũ_k²w̃)(X^{V_n}_{ζ^{(n)}−})I_{ζ^{(n)}≤t})^p + 2((ũ_kw̃)(X^{V_n}_{ζ^{(n)}−})I_{ζ^{(n)}≤t})^p M^{n,[u_k],k}_t + (ũ_k²(X^{V_n}_{ζ^{(n)}−})I_{ζ^{(n)}≤t})^p M^{n,[w],k}_t]
  ≤ (1/t) E_{f_{k,l}·m}[−((ũ_k²w̃)(X^{V_n}_{ζ^{(n)}−})I_{ζ^{(n)}≤t})^p]
    + (2/t) E^{1/2}_{f_{k,l}·m}[{((ũ_kw̃)(X^{V_n}_{ζ^{(n)}−})I_{ζ^{(n)}≤t})^p}²] E^{1/2}_{f_{k,l}·m}[< M^{n,[u_k],k} >_t]
    + (1/t) E^{1/2}_{f_{k,l}·m}[{(ũ_k²(X^{V_n}_{ζ^{(n)}−})I_{ζ^{(n)}≤t})^p}²] E^{1/2}_{f_{k,l}·m}[< M^{n,[w],k} >_t].

Moreover,

lim_{t↓0} (1/t) E_{f_{k,l}·m}[< M^{n,[ψ₁],k} >_t] = ∫_{V_n} f_{k,l} dµ^{n,k}_{<ψ₁>}.   (3.28)
Furthermore, for any δ > 0,
lim_{t↓0} (1/t) E_{f_{k,l}·m}[{((ψ̃₁ψ̃₂)(X^{V_n}_{ζ^{(n)}−})I_{ζ^{(n)}≤t})^p}²]
  = lim_{t↓0} (2/t) E_{f_{k,l}·m}[∫_0^t ((ψ̃₁ψ̃₂)(X^{V_n}_{ζ^{(n)}−})I_{ζ^{(n)}≤(t−s)})^p ∘ θ_s d((ψ̃₁ψ̃₂)(X^{V_n}_{ζ^{(n)}−})I_{ζ^{(n)}≤s})^p]
  = lim_{t↓0} (2/t) E_{f_{k,l}·m}[∫_0^t E_{X^{V_n}_s}[((ψ̃₁ψ̃₂)(X^{V_n}_{ζ^{(n)}−})I_{ζ^{(n)}≤(t−s)})^p] d((ψ̃₁ψ̃₂)(X^{V_n}_{ζ^{(n)}−})I_{ζ^{(n)}≤s})^p]
  ≤ < E_·[(|ψ̃₁ψ̃₂|(X^{V_n}_{ζ^{(n)}−})I_{ζ^{(n)}≤δ})^p] · µ^{n,k}_{<|ψ₁|,|ψ₂|>}, f_{k,l} >
  = < E_·[|ψ̃₁ψ̃₂|(X^{V_n}_{ζ^{(n)}−})I_{ζ^{(n)}≤δ}] · µ^{n,k}_{<|ψ₁|,|ψ₂|>}, f_{k,l} >.   (3.29)
Letting δ → 0, by (3.29) and the dominated convergence theorem, we get

lim_{t↓0} (1/t) E_{f_{k,l}·m}[{((ψ̃₁ψ̃₂)(X^{V_n}_{ζ^{(n)}−})I_{ζ^{(n)}≤t})^p}²] = 0.   (3.30)

Remark 3.3. When deriving formula (3.13) for non-symmetric Markov processes, we cannot apply Theorem 5.8(vi) or (vii) in the Appendix of this paper to smooth measures which are not of finite energy integral. To overcome that difficulty and obtain (3.13) in the semi-Dirichlet forms setting, we have to make some extra efforts as shown in the above proof. The proof uses some ideas of [9, Theorem 5.4] and [16, Theorem 5.3.2].

Theorem 3.4. Let m ∈ N, Φ ∈ C¹(R^m) with Φ(0) = 0, and u = (u₁, u₂, ..., u_m) with u_i ∈ D(E)_{V_n,b}, 1 ≤ i ≤ m. Then Φ(u) ∈ D(E)_{V_n,b} and for any v ∈ D(E)_{V_n,b},

dµ^{n,c}_{<Φ(u),v>} = Σ_{i=1}^m Φ_{x_i}(ũ) dµ^{n,c}_{<u_i,v>}.   (3.32)

Let A be the family of all Φ ∈ C¹(R^m) satisfying (3.32). If Φ, Ψ ∈ A, then ΦΨ ∈ A by Theorem 3.2. Hence A contains all polynomials vanishing at the origin. Let O be a finite cube containing the range of ũ(x) = (ũ₁(x), ..., ũ_m(x)). We take a sequence {Φ_k} of polynomials vanishing at the origin such that

Φ_k → Φ, Φ_{k,x_i} → Φ_{x_i}, 1 ≤ i ≤ m,

uniformly on O. Then

|∫_{V_n} f̃ĥ_n dµ^{n,c}_{<Φ(u)−Φ_k(u),v>}| ≤ ‖f‖_∞ |∫_{V_n} ĥ_n dµ^{n,c}_{<Φ(u)−Φ_k(u)>}|^{1/2} |∫_{V_n} ĥ_n dµ^{n,c}_{<v>}|^{1/2}
  ≤ ‖f‖_∞ |∫_{V_n} ĥ_n dµ^{(n)}_{<Φ(u)−Φ_k(u)>}|^{1/2} |∫_{V_n} ĥ_n dµ^{(n)}_{<v>}|^{1/2}
  = 2‖f‖_∞ e_{V_n}(M^{n,[Φ(u)−Φ_k(u)]})^{1/2} e_{V_n}(M^{n,[v]})^{1/2}
  ≤ 2‖f‖_∞ e_{V_n}(M^{n,[v]})^{1/2} [KC_n E^{V_n}_1(Φ(u)−Φ_k(u), Φ(u)−Φ_k(u))^{1/2} · (‖Φ(u)−Φ_k(u)‖_∞ E^{V_n}_1(ĥ_n, ĥ_n)^{1/2} + ‖ĥ_n‖_∞ E^{V_n}_1(Φ(u)−Φ_k(u), Φ(u)−Φ_k(u))^{1/2})]^{1/2}.

Hence ∫_{V_n} f̃ĥ_n dµ^{n,c}_{<Φ(u),v>} = lim_{k→∞} ∫_{V_n} f̃ĥ_n dµ^{n,c}_{<Φ_k(u),v>}.
It is easy to see that
∫_{V_n} f̃ĥ_n Φ_{x_i}(ũ) dµ^{n,c}_{<u_i,v>} = lim_{k→∞} ∫_{V_n} f̃ĥ_n Φ_{k,x_i}(ũ) dµ^{n,c}_{<u_i,v>}, 1 ≤ i ≤ m.
Therefore (3.33) holds.
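The chain rule (3.33) has a familiar classical shadow. For standard Brownian motion (the Dirichlet form (1/2)∫u′v′dx on R), the identity dµ_{<Φ(u),v>} = Φ′(ũ)dµ_{<u,v>} with u = v the identity corresponds to the pathwise statement [Φ(B), B]_t = ∫₀ᵗ Φ′(B_s) ds. The Python sketch below is our own illustration (not from the paper) checking this numerically for Φ(x) = x³.

```python
import numpy as np

# Hypothetical numerical illustration: for Brownian motion B the chain rule
# for energy measures reduces to  [Phi(B), B]_t = \int_0^t Phi'(B_s) ds,
# the probabilistic counterpart of d mu_{<Phi(u),v>} = Phi'(u) d mu_{<u,v>}.
rng = np.random.default_rng(0)
T, n = 1.0, 200_000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))

phi = lambda x: x**3          # Phi(0) = 0, as required in Theorem 3.4
dphi = lambda x: 3 * x**2

# discrete quadratic covariation of Phi(B) and B
cov = np.sum((phi(B[1:]) - phi(B[:-1])) * dB)
# Riemann sum for \int_0^T Phi'(B_s) ds
integral = np.sum(dphi(B[:-1])) * dt

print(cov, integral)  # the two numbers agree up to discretization error
```

The agreement is of order √dt; refining the grid tightens it, which is exactly the discrete shadow of the energy-measure identity.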
For M, L ∈Ṁ Vn , there exists a unique CAF < M, L > of bounded variation such that
E x (M t L t ) = E x (< M, L > t ), t ≥ 0, E-q.e. x ∈ V n .
Denote by µ^{(n)}_{<M,L>} the Revuz measure of < M, L >. The mapping f → f · M is continuous and linear from L²(V_n; µ^{(n)}_{<M>}) into the Hilbert space (Ṁ_{V_n}, e_{V_n}).
Proof. Let L ∈Ṁ Vn . Then, by Lemma 3.5, we get
|(1/2) ∫_{V_n} f̃ĥ_n dµ^{(n)}_{<M,L>}| ≤ (1/√2) (∫_{V_n} f̃²ĥ_n dµ^{(n)}_{<M>})^{1/2} ((1/2) ∫_{V_n} ĥ_n dµ^{(n)}_{<L>})^{1/2} ≤ (‖ĥ_n‖_∞^{1/2}/√2) ‖f‖_{L²(V_n;µ^{(n)}_{<M>})} e_{V_n}(L)^{1/2}.
Therefore, the proof is completed by Lemma 2.5.
Similar to [5, Lemma 5.6.2, Corollary 5.6.1 and Lemma 5.6.3], we can prove the following two lemmas.
(i) dµ^{(n)}_{<f·M,L>} = f dµ^{(n)}_{<M,L>} for f ∈ L²(V_n; µ^{(n)}_{<M>}).

(ii) g · (f · M) = (gf) · M for f ∈ L²(V_n; µ^{(n)}_{<M>}) and g ∈ L²(V_n; f²dµ^{(n)}_{<M>}).

(iii) e_{V_n}(f · M, g · L) = (1/2) ∫_{V_n} fg ĥ_n dµ^{(n)}_{<M,L>} for f ∈ L²(V_n; µ^{(n)}_{<M>}) and g ∈ L²(V_n; µ^{(n)}_{<L>}).

M^{n,[Φ(u)],c} = Σ_{i=1}^m Φ_{x_i}(u) · M^{n,[u_i],c}, P_x-a.s. for E-q.e. x ∈ V_n.   (3.34)
Proof. Let v ∈ D(E) Vn,b and f, g ∈ D(E) Vn,b . Then, by Lemma 3.7(iii) and Theorem 3.4, we get
e_{V_n}(f · M^{n,[Φ(u)],c}, g · M^{n,[v]}) = (1/2) ∫_{V_n} f̃g̃ĥ_n dµ^{(n)}_{<M^{n,[Φ(u)],c},M^{n,[v]}>} = (1/2) ∫_{V_n} f̃g̃ĥ_n dµ^{n,c}_{<Φ(u),v>}
  = (1/2) Σ_{i=1}^m ∫_{V_n} f̃g̃ĥ_n Φ_{x_i}(ũ) dµ^{n,c}_{<u_i,v>} = (1/2) Σ_{i=1}^m ∫_{V_n} f̃g̃ĥ_n Φ_{x_i}(ũ) dµ^{(n)}_{<M^{n,[u_i],c},M^{n,[v]}>}
  = e_{V_n}(Σ_{i=1}^m (fΦ_{x_i}(u)) · M^{n,[u_i],c}, g · M^{n,[v]}).
By Lemma 3.8, we get
f · M n,[Φ(u)],c = m i=1 (f Φ x i (u)) · M n,[u i ],c , P x -a.s. for E-q.e. x ∈ V n .
Therefore, (3.34) is satisfied by Lemma 3.7(ii), since f ∈ D(E) Vn,b is arbitrary.
Let M ∈Ṁ loc . Then, there exist {V n }, {E n } ∈ Θ and {M n | M n ∈Ṁ Vn } such that E n ⊂ V n , M t∧τ En = M n t∧τ En , t ≥ 0, n ∈ N. We define
< M > t∧τ En :=< M n > t∧τ En ; < M > t := lim s↑ζ < M > s for t ≥ ζ.
Then, we can see that < M > is well-defined and < M > is a PCAF. Denote by µ <M > the Revuz measure of < M >. We define
L²_loc(E; µ_{<M>}) := {f | ∃ {V_n}, {E_n} ∈ Θ and {M^n | M^n ∈ Ṁ_{V_n}} such that E_n ⊂ V_n, M_{t∧τ_{E_n}} = M^n_{t∧τ_{E_n}}, f · I_{E_n} ∈ L²(E_n; µ^{(n)}_{<M^n>}), t ≥ 0, n ∈ N}.

For f ∈ L²_loc(E; µ_{<M>}), we define f · M on [[0, ζ[[ by (f · M)_{t∧τ_{E_n}} := ((f · I_{E_n}) · M^n)_{t∧τ_{E_n}}, t ≥ 0, n ∈ N.

M^{[Φ(u)],c} = Σ_{i=1}^m Φ_{x_i}(u) · M^{[u_i],c} on [0, ζ), P_x-a.s. for E-q.e. x ∈ E.   (3.35)
Proof. Since 1 ∈ D(E) loc , Φ(u) ∈ D(E) loc by Theorem 3.4. Hence (3.35) is a direct consequence of (3.34).
Examples
In this section we investigate some concrete examples.
u(X^V_t) − u(X^V_0) = M^{V,[u]}_t + N^{V,[u]}_t, by Lemma 2.6, where M^{V,[u]} is an MAF of finite energy and N^{V,[u]} is a CAF of zero energy w.r.t. X^V.
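To make Example 4.1 concrete, one can discretize the associated diffusion. Under the sign convention that the form E(u,v) = ∫u′v′ + ∫bu′v corresponds to a generator of the type Lu = u″ − bu′ (an assumption of ours, not spelled out in the text), the process solves the SDE dX_t = −b(X_t)dt + √2 dB_t. The sketch below checks Fukushima's decomposition pathwise in its classical Itô form, u(X_T) − u(X_0) ≈ M_T + N_T, with M the stochastic-integral (martingale) part and N = ∫Lu(X_s)ds the zero-energy part; the killing upon leaving (0,1) is ignored for simplicity.

```python
import numpy as np

# Illustrative only: Euler-Maruyama for dX = -b(X) dt + sqrt(2) dB with
# b(x) = x^2 as in Example 4.1(i); hypothetical toy check of the
# decomposition u(X_T) - u(X_0) = M_T + N_T up to discretization error.
rng = np.random.default_rng(1)
T, n = 0.25, 50_000
dt = T / n
b = lambda x: x**2
u = np.sin                                 # a smooth test function
du, d2u = np.cos, lambda x: -np.sin(x)

x = 0.5                                    # start inside (0,1)
x0 = x
M = N = 0.0
for _ in range(n):
    dB = rng.normal(0.0, np.sqrt(dt))
    M += np.sqrt(2.0) * du(x) * dB         # martingale (MAF) increment
    N += (d2u(x) - b(x) * du(x)) * dt      # zero-energy (drift) increment
    x += -b(x) * dt + np.sqrt(2.0) * dB    # Euler step

residual = u(x) - u(x0) - (M + N)
print(residual)   # small: the decomposition holds up to discretization error
```

The residual shrinks like √(T·dt), so refining the time grid exhibits the exactness of the decomposition.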
Example 4.2. Let d ≥ 3, U be an open subset of R^d, σ, ρ ∈ L¹_loc(U; dx), σ, ρ > 0 dx-a.e. For u, v ∈ C^∞_0(U), we define E_ρ(u, v) = Σ_{i=1}^d ∫_U (∂u/∂x_i)(∂v/∂x_i) ρ dx.
Assume that (E ρ , C ∞ 0 (U)) is closable on L 2 (U; σdx).
Let a_ij, b_i, d_i ∈ L¹_loc(U; dx), 1 ≤ i, j ≤ d. For u, v ∈ C^∞_0(U), we define E(u, v) = Σ_{i,j=1}^d ∫_U (∂u/∂x_i)(∂v/∂x_j) a_ij dx + Σ_{i=1}^d ∫_U (∂u/∂x_i) v b_i dx + Σ_{i=1}^d ∫_U u (∂v/∂x_i) d_i dx + ∫_U uvc dx.
Set ã_ij := (1/2)(a_ij + a_ji), ǎ_ij := (1/2)(a_ij − a_ji), b := (b₁, ..., b_d), and d := (d₁, ..., d_d). Define F to be the set of all functions g ∈ L¹_loc(U; dx) such that the distributional derivatives ∂g/∂x_i, 1 ≤ i ≤ d, are in L¹_loc(U; dx) and such that ‖∇g‖(gσ)^{−1/2} ∈ L^∞(U; dx) or ‖∇g‖^p(g^{p+1}σ^{p/q})^{−1/2} ∈ L^d(U; dx) for some p, q ∈ (1, ∞) with 1/p + 1/q = 1, p < ∞,
where ‖·‖ denotes the Euclidean norm in R^d. We say that a B(U)-measurable function f has property (A_{ρ,σ}) if one of the following conditions holds:
(i) f (ρσ) − 1 2 ∈ L ∞ (U; dx).
(ii) f p (ρ p+1 σ p/q ) − 1 2 ∈ L d (U, dx) for some p, q ∈ (1, ∞) with 1 p + 1 q = 1, p < ∞, and ρ ∈ F .
Suppose that
(C.I) There exists η > 0 such that Σ_{i,j=1}^d ã_ij ξ_iξ_j ≥ η|ξ|², ∀ξ = (ξ₁, ..., ξ_d) ∈ R^d.

(C.II) ǎ_ijρ^{−1} ∈ L^∞(U; dx) for 1 ≤ i, j ≤ d.

(C.III) For all K ⊂ U, K compact, 1_K‖b + d‖ and 1_K c^{1/2} have property (A_{ρ,σ}), and (c + α₀σ)dx − Σ_{i=1}^d ∂d_i/∂x_i is a positive measure on B(U) for some α₀ ∈ (0, ∞).

(C.IV) ‖b − d‖ has property (A_{ρ,σ}).

(C.V) b = β + γ such that ‖β‖, ‖γ‖ ∈ L¹_loc(U, dx), (α₀σ + c)dx − Σ_{i=1}^d ∂γ_i/∂x_i is a positive measure on B(U) and ‖β‖ has property (A_{ρ,σ}).
Then, by [18, Theorem 1.2], there exists α > 0 such that (E_α, C^∞_0(U)) is closable on L²(U; dx) and its closure (E_α, D(E_α)) is a regular local semi-Dirichlet form on L²(U; dx). Define η_α(u, u) := E_α(u, u) − ∫_U ⟨∇u, β⟩u dx for u ∈ D(E_α). By [18, Theorem 1.2 (ii) and (1.28)], we know (η_α, D(E_α)) is a Dirichlet form and there exists C > 1 such that for any u ∈ D(E_α),
1 C η α (u, u) ≤ E α (u, u) ≤ Cη α (u, u).
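Condition (C.I) is a uniform ellipticity requirement on the symmetric part ã of the coefficient matrix; for constant coefficients the optimal η is just the smallest eigenvalue of ã. The sketch below uses a made-up 2×2 coefficient matrix (not one taken from the paper) to show the ã/ǎ splitting and the quadratic-form bound.

```python
import numpy as np

# Hypothetical coefficients: a non-symmetric a = (a_ij) split into its
# symmetric part a_sym (the \tilde{a} of the text) and antisymmetric part
# a_asym (the \check{a}); (C.I) asks for a_sym >= eta * Id.
a = np.array([[2.0, 1.5],
              [0.5, 3.0]])
a_sym = 0.5 * (a + a.T)        # \tilde{a}_{ij}
a_asym = 0.5 * (a - a.T)       # \check{a}_{ij}

eta = np.linalg.eigvalsh(a_sym).min()   # best constant in (C.I)
print(eta)

# spot-check the quadratic form bound on random vectors; the antisymmetric
# part contributes nothing to the quadratic form
rng = np.random.default_rng(2)
for _ in range(100):
    xi = rng.normal(size=2)
    assert xi @ a_sym @ xi >= eta * (xi @ xi) - 1e-9
    assert abs(xi @ a_asym @ xi) < 1e-12
```

Only ã enters coercivity; ǎ is controlled separately through (C.II), which is why the two parts are kept apart in the hypotheses.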
Let X be the diffusion process associated with (E α , D(E α )). Then, by Theorem 2.4, Fukushima's decomposition holds for any u ∈ D(E) loc . Moreover, the transformation formula (3.35) holds for local MAFs. Denote by F C ∞ b the family of all functions on E with the following expression:
u(µ) = ϕ(µ(f 1 ), . . . , µ(f k )), f i ∈ C b (S), 1 ≤ i ≤ k, ϕ ∈ C ∞ 0 (R k ), k ∈ N.
Let m be a finite positive measure on (E, B(E)), where B(E) denotes the Borel σ-algebra of E. We suppose that supp[m] = E. Let b : S × E → R be a measurable function such that sup_{µ∈E} ‖b(µ)‖_µ < ∞, where b(µ)(x) := b(x, µ). For u, v ∈ FC^∞_b, we define

E^b(u, v) := ∫_E (⟨∇u(µ), ∇v(µ)⟩_µ + ⟨b(µ), ∇u(µ)⟩_µ v(µ)) m(dµ),
where ∇u(µ) := (∇_x u(µ))_{x∈S} := ((d/ds) u(µ + sε_x)|_{s=0})_{x∈S}.
We suppose that (E⁰, FC^∞_b) is closable on L²(E; m). Then, by [15, Theorem 3.5], there exists α > 0 such that (E^b_α, FC^∞_b) is closable on L²(E; m) and its closure (E^b_α, D(E^b_α)) is a quasi-regular local semi-Dirichlet form on L²(E; m). Moreover, by [15, Lemma 2.5], there exists C > 1 such that for any u ∈ D(E^b_α),
1 C E 0 α (u, u) ≤ E b α (u, u) ≤ CE 0 α (u, u).
Let X be the diffusion process associated with (E b α , D(E b α )), which is a Fleming-Viot type process with interactive selection. Then, by Theorem 2.4, Fukushima's decomposition holds for any u ∈ D(E b ) loc . Moreover, the transformation formula (3.35) holds for local MAFs.
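The directional derivative defining ∇u(µ) can be computed by hand for cylinder functions: if u(µ) = φ(µ(f₁), ..., µ(f_k)), then u(µ + sε_x) = φ(µ(f₁) + sf₁(x), ...), so ∇_x u(µ) = Σᵢ ∂ᵢφ(µ(f₁), ..., µ(f_k)) f_i(x). The finite-state sketch below (made-up data of ours, not from [15]) compares this formula with a central finite difference.

```python
import numpy as np

# Discrete S = {0, 1, 2}; a measure mu is a weight vector, mu(f) = f @ mu.
mu = np.array([0.2, 0.5, 0.3])
f1 = np.array([1.0, -1.0, 2.0])
f2 = np.array([0.5, 0.0, -1.5])
phi = lambda a, b: np.sin(a) * np.exp(b)          # smooth, as in F C_b^infty
dphi = (lambda a, b: np.cos(a) * np.exp(b),       # partial derivatives of phi
        lambda a, b: np.sin(a) * np.exp(b))

def u(m):
    return phi(f1 @ m, f2 @ m)

a, b = f1 @ mu, f2 @ mu
grad_exact = np.array([dphi[0](a, b) * f1[x] + dphi[1](a, b) * f2[x]
                       for x in range(3)])

# central finite difference of s -> u(mu + s * eps_x)
s = 1e-6
grad_fd = np.array([(u(mu + s * np.eye(3)[x]) - u(mu - s * np.eye(3)[x])) / (2 * s)
                    for x in range(3)])

print(grad_exact, grad_fd)   # the two gradients agree closely
```

Note that µ + sε_x need not be a probability measure; as in the definition above, the derivative is taken formally along the direction ε_x.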
5 Appendix: some results on potential theory and PCAFs for semi-Dirichlet forms
Let E be a metrizable Lusin space and m be a σ-finite positive measure on its Borel σ-algebra B(E). Suppose that (E, D(E)) is a quasi-regular semi-Dirichlet form on L 2 (E; m). Let K > 0 be a continuity constant of (E, D(E)), i.e.,
|E 1 (u, v)| ≤ KE 1 (u, u) 1/2 E 1 (v, v) 1/2 , ∀u, v ∈ D(E). (5.1)
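A finite-dimensional toy model (our own illustration, not from the paper) makes (5.1) concrete: for E(u, v) = vᵀAu on R² with coercive symmetric part, one may take K = 1 + ‖S^{−1/2}TS^{−1/2}‖₂, where S and T are the symmetric and antisymmetric parts of A + I, since E₁(u, u) = uᵀSu and the antisymmetric contribution is controlled in the S-inner product.

```python
import numpy as np

# Toy check of |E_1(u,v)| <= K E_1(u,u)^{1/2} E_1(v,v)^{1/2} for
# E(u,v) = v^T A u with a made-up non-symmetric A whose symmetric part
# is positive definite (an assumption mirroring coercivity).
A = np.array([[1.0, 2.0],
              [-1.0, 1.5]])
B = A + np.eye(2)                  # matrix of E_1
S = 0.5 * (B + B.T)                # symmetric part: E_1(u,u) = u^T S u
T = 0.5 * (B - B.T)                # antisymmetric part

w, V = np.linalg.eigh(S)
assert w.min() > 0                 # coercivity of the symmetric part
S_inv_half = V @ np.diag(w ** -0.5) @ V.T
K = 1.0 + np.linalg.norm(S_inv_half @ T @ S_inv_half, 2)

E1 = lambda u, v: float(v @ B @ u)
rng = np.random.default_rng(3)
ok = all(
    abs(E1(u, v)) <= K * np.sqrt(E1(u, u) * E1(v, v)) + 1e-9
    for u, v in (rng.normal(size=(2, 2)) for _ in range(500))
)
print(K, ok)
```

The estimate behind the choice of K is |vᵀ(S+T)u| ≤ ‖u‖_S‖v‖_S + ‖S^{−1/2}TS^{−1/2}‖₂‖u‖_S‖v‖_S, the two-line analogue of the weak sector condition.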
Denote by (T t ) t≥0 and (G α ) α≥0 (resp. (T t ) t≥0 and (Ĝ α ) α≥0 ) the semigroup and resolvent (resp. co-semigroup and co-resolvent) associated with (E, D(E)). Then there exists an m-tight special standard process M = (Ω, F , (F t ) t≥0 , (X t ) t≥0 , (P x ) x∈E ∆ ) which is properly associated with (E, D(E)) (cf. [13,Theorem 3.8]). It is known that any quasi-regular semi-Dirichlet form is quasi-homeomorphic to a regular semi-Dirichlet form (cf. [7, Theorem 3.8]). By quasi-homeomorphism and the transfer method (cf. [2] and [12, VI, especially, Theorem VI.1.6]), without loss of generality we can restrict to Hunt processes when we discuss the AFs of M.
Let A ⊂ E and f ∈ D(E). Denote by f_A (resp. f̂_A) the 1-balayaged (resp. 1-cobalayaged) function of f on A. We fix φ ∈ L²(E; m) with 0 < φ ≤ 1 m-a.e. and set h = G₁φ, ĥ = Ĝ₁φ. Define for U ⊂ E, U open,

cap_φ(U) := (h_U, φ)
and for any A ⊂ E,
cap φ (A) := inf{cap φ (U) | A ⊂ U, U open}.
Hereafter, (·, ·) denotes the usual inner product of L²(E; m). By [13, Theorem 2.20], we have cap_φ(A) = (h_A, φ) = E₁(h_A, Ĝ₁φ).

(ii) Let µ ∈ S₀ and α > 0. Then there exist unique U_αµ ∈ D(E) and Û_αµ ∈ D(E) such that

E_α(U_αµ, v) = ∫_E ṽ(x) µ(dx) = E_α(v, Û_αµ).   (5.2)
We call U α µ andÛ α µ α-potential and α-co-potential, respectively.
Let u ∈ D(E). By quasi-homeomorphism and similar to [5, Theorem 2.2.1] (cf. [8, Lemma 1.2]), one can show that the following conditions are equivalent to each other: (i) u is α-excessive (resp. α-co-excessive).
(ii) u is an α-potential (resp. α-co-potential).
(iii) E_α(u, v) ≥ 0 (resp. E_α(v, u) ≥ 0), ∀v ∈ D(E), v ≥ 0.

Theorem 5.3. Define

Ŝ*₀₀ := {µ ∈ S₀ | Û₁µ ≤ cĜ₁φ for some constant c > 0}.
Let A ∈ B(E). If µ(A) = 0 for all µ ∈Ŝ * 00 , then cap φ (A) = 0.
Proof. By quasi-homeomorphism, without loss of generality, we suppose that (E, D(E)) is a regular semi-Dirichlet form. Assume that A ∈ B(E) satisfies µ(A) = 0 for all µ ∈ Ŝ*₀₀. We will prove that cap_φ(A) = 0.
Step 1. We first show that µ(A) = 0 for all µ ∈ S₀. Suppose that µ ∈ S₀. By [14, Proposition 4.13], there exists an E-nest {F_k} of compact subsets of E such that Ĝ₁φ, Û₁µ ∈ C({F_k}) and Ĝ₁φ > 0 on F_k for each k ∈ N. Then, there exists a sequence of positive constants {a_k} such that Û₁µ ≤ a_kĜ₁φ on F_k for each k ∈ N.
Define u k =Û 1 (I F k · µ) and set v k = u k ∧ a kĜ1 φ for k ∈ N. Then u k ≤ Û 1 µ ≤ a k Ĝ 1 φ E-q.e. on F k . By (5.2), we get
E 1 (v k , u k ) = F k v k (x)µ(dx) = F k u k (x)µ(dx) = E 1 (u k , u k ).
Since v_k is a 1-co-potential and v_k ≤ u_k m-a.e., E₁(v_k − u_k, v_k − u_k) = E₁(v_k − u_k, v_k) − E₁(v_k − u_k, u_k) ≤ 0, proving that u_k = v_k ≤ a_kĜ₁φ m-a.e. Hence I_{F_k} · µ ∈ Ŝ*₀₀. Therefore µ(A) = 0 by the assumption that A is not charged by any measure in Ŝ*₀₀.
Step 2. Suppose that cap_φ(A) > 0. By [13, Corollary 2.22], there exists a compact set K ⊂ A such that cap_φ(K) > 0. Note that (Ĝ₁φ)_K ∈ D(E) is 1-co-excessive. By Remark 5.2(ii), there exists µ_{(Ĝ₁φ)_K} ∈ S₀ such that

cap_φ(K) = E₁((G₁φ)_K, Ĝ₁φ) = E₁(G₁φ, (Ĝ₁φ)_K) = ∫_E G̃₁φ dµ_{(Ĝ₁φ)_K} ≤ µ_{(Ĝ₁φ)_K}(E).   (5.3)
For any v ∈ C₀(K^c) ∩ D(E), we have ∫ ṽ dµ_{(Ĝ₁φ)_K} = E₁(v, (Ĝ₁φ)_K) = 0. Since C₀(K^c) ∩ D(E) is dense in C₀(K^c), the support of µ_{(Ĝ₁φ)_K} is contained in K. Thus, by (5.3), we get µ_{(Ĝ₁φ)_K}(K) > 0, contradicting Step 1. Therefore cap_φ(A) = 0.
Theorem 5.4. The following conditions are equivalent for a positive measure µ on (E, B(E)).
(i) µ ∈ S.
(ii) There exists an E-nest {F k } satisfying I F k · µ ∈ S 0 for each k ∈ N.
Proof. (ii) ⇒ (i) is clear. We only prove (i) ⇒ (ii). Let (Ẽ, D(E)) be the symmetric part of (E, D(E)). Then (Ẽ, D(E)) is a symmetric positivity preserving form. Denote by (G̃_α)_{α≥0} the resolvent associated with (Ẽ, D(E)) and set h̄ := G̃₁φ. Then (Ẽ^h̄₁, D(Ẽ^h̄)) is a quasi-regular symmetric Dirichlet form on L²(E; h̄²·m) (the h̄-transform of (Ẽ₁, D(E))).

By [10, pages 838-839], for an increasing sequence {F_k} of closed sets, {F_k} is an E-nest if and only if it is an Ẽ^h̄₁-nest. We select a compact Ẽ^h̄₁-nest {F_k} such that h̄ is bounded on each F_k. Let µ ∈ S(E), the family of smooth measures w.r.t. (E, B(E)). Then µ ∈ S(Ẽ^h̄₁), the family of smooth measures w.r.t. (Ẽ^h̄₁, D(Ẽ^h̄)). By [5, Theorem 2.2.4] and quasi-homeomorphism, we know that there exists a compact Ẽ^h̄₁-nest (hence E-nest) {J_k} such that I_{J_k} · µ ∈ S₀(Ẽ^h̄₁). Then, there exists …

Lemma 5.6. Let {u_n} be a sequence of E-quasi continuous functions in D(E). If {u_n} is an E₁-Cauchy sequence, then there exists a subsequence {u_{n_k}} satisfying the condition that for E-q.e. x ∈ E, P_x(u_{n_k}(X_t) converges uniformly in t on each compact interval of [0, ∞)) = 1.
In [3], Fitzsimmons extended the smooth measure characterization of PCAFs from the Dirichlet forms setting to the semi-Dirichlet forms setting (see [3, Theorem 4.22]). In particular, the following proposition holds.

where U^α_A f(x) := E_x(∫_0^∞ e^{−αt} f(X_t) dA_t).

(iii) For any t > 0, g ∈ B⁺(E) ∩ L²(E; m) and f ∈ B⁺(E),
E g·m ((f A) t ) = t 0 < f · µ, T s g > ds.
(iv) For any α > 0, g ∈ B + (E) ∩ L 2 (E; m) and f ∈ B + (E),
(g, U α A f ) =< f · µ, Ĝ α g > .
When µ ∈ S 0 , each of the above four conditions is also equivalent to each of the following three conditions:
(v) U¹_A 1 is an E-quasi-continuous version of U₁µ.

(vi) For any g ∈ B⁺(E) ∩ D(E) and f ∈ B⁺_b(E), lim_{t↓0} (1/t) E_{g·m}((fA)_t) = < f · µ, g̃ >.

(vii) For any g ∈ B⁺(E) ∩ D(E) and f ∈ B⁺_b(E), lim_{α→∞} α(g, U^α_A f) = < f · µ, g̃ >.
The family of all equivalent classes of PCAFs and the family S are in one to one correspondence under the Revuz correspondence (5.4).
Given a PCAF A, we denote by µ A the Revuz measure of A. Lemma 5.9. Let A be a PCAF and ν ∈Ŝ * 00 . Then there exists a positive constant C ν such that for any t > 0,
E_ν(A_t) ≤ C_ν(1 + t) ∫_E ĥ dµ_A.
Proof. By Theorem 5.4, we may assume without loss of generality that µ_A ∈ S₀. Set c_t(x) = E_x(A_t). Similar to [16, page 137], we can show that c_t ∈ D(E) and for any v ∈ D(E),

E(c_t, v) = < µ_A, ṽ − T̂_tv >.
Let ν ∈Ŝ * 00 . Then
E ν (A t ) = < ν, c t > = E 1 (c t ,Û 1 ν) ≤ < µ A ,Û 1 ν > + < c t ,Û 1 ν > ≤ C ν [< µ A ,ĥ > +Eĥ ·m (A t )]
for some constant C ν > 0. Therefore the proof is completed by (5.4).
limit exists in [0, ∞].
and D(E)_loc := {u | ∃ {V_n} ∈ Θ and {u_n} ⊂ D(E) such that u = u_n m-a.e. on V_n for each n ∈ N}.
Definition 2.2. (cf. [5, page 226]) A family A = (A_t)_{t≥0} of functions on Ω is called a local additive functional (local AF in short) of M, if A satisfies all the requirements for an AF as stated in above (i) and (ii), except that the additivity property (2.1) is required only for s, t ≥ 0 with t + s < ζ(ω).
We use M^{[[0,ζ[[}_loc to denote the family of all local martingales on [[0, ζ[[ (cf. [6, §8.3]).
ĥ_n := ĥ_{V_n}, and D(E)_{V_n,b} := B_b(E) ∩ D(E)_{V_n}. By the definition (2.3), employing some potential theory developed in the Appendix of this paper (cf. Lemma 5.9, Theorem 5.8 and Theorem 5.3), following the argument of [5, Theorem 5.2.1], we can prove the following lemma.
Lemma 2.5.Ṁ Vn is a real Hilbert space with inner product e Vn . Moreover, if {M l } ⊂Ṁ Vn is e Vn -Cauchy, then there exist a unique M ∈Ṁ Vn and a subsequence {l k } such that lim k→∞ e Vn (M l k − M) = 0 and for E-q.e. x ∈ V n , P x ( lim k→∞ M l k (t) = M(t) uniformly on each compact interval of [0, ∞)) = 1.
Lemma 2.6. Let u ∈ D(E)_{V_n,b}. Then there exist unique M^{n,[u]} ∈ Ṁ_{V_n} and N^{n,[u]} ∈ N^{V_n}_c such that for E-q.e. x ∈ V_n,

ũ(X^{V_n}_t) − ũ(X^{V_n}_0) = M^{n,[u]}_t + N^{n,[u]}_t, t ≥ 0.   (2.5)

Note that if an AF A ∈ Ṁ_{V_n} with e_{V_n}(A) = 0 then µ^{(n)}_{<A>}(ĥ_n) = 2e_{V_n}(A) = 0 by Theorem 5.8 in the Appendix and (2.3). Here µ^{(n)}_{<A>} denotes the Revuz measure of < A > w.r.t. X^{V_n}. Hence < A > = 0 since ĥ_n > 0 E-q.e. on V_n. Therefore Ṁ_{V_n} ∩ N^{V_n}_c = {0} and the proof of the uniqueness of decomposition (2.5) is complete.
is a square integrable martingale and a zero quadratic variation process w.r.t. P m . This implies that P m (< (M [u] − M 1 )I [[0,τ En ]] > t = 0, ∀t ∈ [0, ∞)) = 0. Consequently by the analog of [5, Lemma 5.1.10] in the semi-Dirichlet forms setting, P x (< (M [u] − M 1
the Revuz measure of < M^{n,[u]} > (cf. Lemma 2.6 and Theorem 5.8 in the Appendix). For u, v ∈ D(E)_{V_n,b}, we define µ^{(n)}_{<u,v>} := (1/2)(µ^{(n)}_{<u+v>} − µ^{(n)}_{<u>} − µ^{(n)}_{<v>}).
(3.7)

By [12, Theorem I.2.13], for each k ∈ N, f_{k,l} → f_k in D(E)_{V_n} as l → ∞. Furthermore, by Assumption 2.3, [12, Corollary I.4.15] and (3.5), we can show that sup_{l≥1} E(uf_{k,l}, uf_{k,l}) < ∞. Thus, we obtain by [12, Lemma I.2.12] that uf_{k,l} → uf_k weakly in D(E)_{V_n} as l → ∞. Note that ∫_{V_n} ĥ_n dµ^{(n)}_{<u>} = 2e_{V_n}(M^{n,[u]}) < ∞ for any u ∈ D(E)_{V_n,b}. Therefore, we obtain by (3.7), (3.5) and the dominated convergence theorem that

∫_{V_n} f_k dµ^{(n)}_{<u>} = 2E(u, uf_k) − E(u², f_k), ∀u ∈ D(E)_{V_n,b}.   (3.8)

By (3.4) and the weak sector condition, we get sup_{k≥1} E₁(f_k, f_k) < ∞. Furthermore, by Assumption 2.3 and [12, Corollary I.4.15], we can show that sup_{k≥1} E(uf_k, uf_k) < ∞. Thus, we obtain by [12, Lemma I.2.12] that f_k → f and uf_k → uf weakly in D(E)_{V_n} as k → ∞. Therefore (3.3) holds by (3.8) and the monotone convergence theorem.

For u ∈ D(E)_{V_n,b}, we denote by M^{n,[u],c} and M^{n,[u],k} the continuous and killing parts of M^{n,[u]}, respectively; denote by µ^{n,c}_{<u>} and µ^{n,k}_{<u>} the Revuz measures of < M^{n,[u],c} > and < M^{n,[u],k} >, respectively. Then M^{n,[u]} = M^{n,[u],c} + M^{n,[u],k} with
Theorem 3.2. Let u, v, w ∈ D(E)_{V_n,b}. Then

dµ^{n,c}_{<uv,w>} = ũ dµ^{n,c}_{<v,w>} + ṽ dµ^{n,c}_{<u,w>}.   (3.13)

Proof. By quasi-homeomorphism and the polarization identity, that (3.13) holds for u, v, w ∈ D(E)_{V_n,b} is equivalent to

∫_{V_n} f̃ dµ^{n,c}_{<u²,w>} = 2 ∫_{V_n} f̃ũ dµ^{n,c}_{<u,w>}, ∀f, u, w ∈ D(E)_{V_n,b},

and to

… , ∀f, u, w ∈ D(E)_{V_n,b}.   (3.15)

For k ∈ N, we define v_k := kR^{V_n}_{k+1}u. Then v_k → u in D(E)_{V_n} as k → ∞. By Assumption 2.3 and [12, Corollary I.4.15], we can show that sup_{k≥1} E(v_kw, v_kw) < ∞. Then, by [12, Lemma I.2.12], there exists a subsequence {(
… (f_{k,l}ũ_k)) − E(u_kw, T̂^{V_n}_s(f_{k,l}ũ_k))] ds.   (3.20)

By [1, Theorem 3.4], T̂^{V_n}_s(f_{k,l}ũ_k) → f_{k,l}ũ_k in D(E)_{V_n} as s → 0. Furthermore, by Assumption 2.3, [12, Corollary I.4.15] and the fact that |e^{−s}T̂^{V_n}_s(f_{k,l}ũ_k)| ≤ k‖u_k‖_∞ ĥ_n, s > 0, we can show that sup_{s>0} E(wT̂^{V_n}_s(f_{k,l}ũ_k), wT̂^{V_n}_s(f_{k,l}ũ_k)) < ∞. Thus, we obtain by [12, Lemma I.2.12] that wT̂^{V_n}_s(f_{k,l}ũ_k) → wf_{k,l}ũ_k weakly in D(E)_{V_n} as s → 0. Similarly, we get u_kT̂^{V_n}_s(f_{k,l}ũ_k) → u_kf_{k,l}ũ_k weakly in D(E)_{V_n} as s → 0. Therefore, by (3.20) and (
III(t) ≤ (lim t↓0 1 t E f k,l ·m [(M n,[u k ],c t
By Theorem 5.8(i) in the Appendix and (3.10)-(3.12), we obtain that for ψ₁, ψ₂ ∈ D(E)_{V_n,b},

lim_{t↓0} (1/t) E_{f_{k,l}·m}[((ψ̃₁ψ̃₂)(X^{V_n}_{ζ^{(n)}−})I_{ζ^{(n)}≤t})^p] = ∫_{V_n} f_{k,l} dµ^{n,k}_{<ψ₁,ψ₂>} = ∫_{V_n} f_{k,l} ψ̃₁ψ̃₂ dK^{(n)},

and hence … ∫_{V_n} f_{k,l} ũ_k²w̃ dK^{(n)}.   (3.31)

Therefore, the proof is completed by (3.19), (3.21), (3.22), (3.25) and (3.31).
(3.32)

Proof. Φ(u) ∈ D(E)_{V_n,b} is a direct consequence of Assumption 2.3 and the corresponding property of Dirichlet forms. Below we only prove (3.32). Let v ∈ D(E)_{V_n,b}. Then (3.32) is equivalent to

∫_{V_n} f̃ĥ_n dµ^{n,c}_{<Φ(u),v>} = Σ_{i=1}^m ∫_{V_n} f̃ĥ_n Φ_{x_i}(ũ) dµ^{n,c}_{<u_i,v>}, ∀f ∈ D(E)_{V_n,b}.   (3.33)
Denote by µ^{(n)}_{<M,L>} the Revuz measure of < M, L >. Then, similar to [5, Lemma 5.6.1], we can prove the following lemma.

Lemma 3.5. If f ∈ L²(V_n; µ^{(n)}_{<M>}) and g ∈ L²(V_n; µ^{(n)}_{<L>}), then fg is integrable w.r.t. |µ^{(n)}_{<M,L>}|.
Theorem 3.6. Let M ∈ Ṁ_{V_n} and f ∈ L²(V_n; µ^{(n)}_{<M>}). Then there exists a unique element f · M ∈ Ṁ_{V_n} such that

e_{V_n}(f · M, L) = (1/2) ∫_{V_n} f̃ĥ_n dµ^{(n)}_{<M,L>}, ∀L ∈ Ṁ_{V_n}.
Lemma 3.7. Let M, L ∈ Ṁ_{V_n}. Then
Lemma 3.8. The family {f · M^{n,[u]} | f, u ∈ D(E)_{V_n,b}} is dense in (Ṁ_{V_n}, e_{V_n}).
Theorem 3.9. Let m ∈ N, Φ ∈ C¹(R^m) with Φ(0) = 0, and u = (u₁, u₂, ..., u_m) with u_i ∈ D(E)_{V_n,b}, 1 ≤ i ≤ m. Then
Then, we can see that f · M is well-defined and f · M ∈ M^{[[0,ζ[[}_loc. Denote by M^c the continuous part of M. Finally, we obtain the main result of this section.
Theorem 3.10. Suppose that (E, D(E)) is a quasi-regular local semi-Dirichlet form on L²(E; m) satisfying Assumption 2.3. Let m ∈ N, Φ ∈ C¹(R^m), and u = (u₁, u₂, ..., u_m) with u_i ∈ D(E)_loc, 1 ≤ i ≤ m. Then Φ(u) ∈ D(E)_loc and
Example 4.1. We consider the following bilinear form

E(u, v) = ∫_0^1 u′v′ dx + ∫_0^1 bu′v dx, u, v ∈ D(E) := H^{1,2}_0(0, 1).

(i) Suppose that b(x) = x². Then one can show that (E, D(E)) is a regular local semi-Dirichlet form (but not a Dirichlet form) on L²((0, 1); dx) (cf. [13, Remark 2.2(ii)]). Note that any u ∈ D(E) is bounded and (1/2)-Hölder continuous by the Sobolev embedding theorem. Then we obtain Fukushima's decomposition, u(X_t) − u(X_0) = M^{[u]}_t + N^{[u]}_t, by Lemma 2.6, where X is the diffusion process associated with (E, D(E)), M^{[u]} is an MAF of finite energy and N^{[u]} is a CAF of zero energy.

(ii) Suppose that b(x) = √x. By [13, Remark 2.2(ii)], (E, D(E)) is a regular local semi-Dirichlet form but not a Dirichlet form. Let u ∈ D(E)_loc. Then we obtain Fukushima's decomposition (2.4) by Theorem 2.4. If u ∈ D(E) satisfies supp[u] ⊂ (0, 1), then we may choose an open subset V of (0, 1) such that supp[u] ⊂ V ⊂ (0, 1). Let X^V be the part process of X w.r.t. V. Then we obtain Fukushima's decomposition,
Example 4.3. Let S be a Polish space. Denote by B(S) the Borel σ-algebra of S. Let E := M₁(S) be the space of probability measures on (S, B(S)). For bounded B(S)-measurable functions f, g on S and µ ∈ E, we define µ(f) := ∫_S f dµ, ⟨f, g⟩_µ := µ(fg) − µ(f)·µ(g), ‖f‖_µ := ⟨f, f⟩_µ^{1/2}.
Definition 5.1. A positive measure µ on (E, B(E)) is said to be of finite energy integral, denoted by µ ∈ S₀, if µ(N) = 0 for each E-exceptional set N ∈ B(E) and there exists a positive constant C such that

∫_E |ṽ(x)| µ(dx) ≤ C E₁(v, v)^{1/2}, ∀v ∈ D(E).

Remark 5.2. (i) Assume that (E, D(E)) is a regular semi-Dirichlet form. Let µ be a positive Radon measure on E satisfying

∫_E |v(x)| µ(dx) ≤ C E₁(v, v)^{1/2}, ∀v ∈ C₀(E) ∩ D(E)

for some positive constant C, where C₀(E) denotes the set of all continuous functions on E with compact supports. Then one can show that µ charges no E-exceptional set (cf. [8, Lemma 3.5]) and thus µ ∈ S₀.
Proposition 5.7. (cf. [3, Proposition 4.12]) For any µ ∈ S₀, there is a unique finite PCAF A such that E_x(∫_0^∞ e^{−t} dA_t) is an E-quasi-continuous version of U₁µ. By Proposition 5.7 and Theorem 5.4, following the arguments of [5, Theorems 5.1.3 and 5.1.4] (with slight modifications by virtue of [13, 14, 10] and [1, Theorem 3.4]), we can obtain the following theorem.
Theorem 5.8. Let µ ∈ S and A be a PCAF. Then the following conditions are equivalent to each other:

(i) For any γ-co-excessive function g (γ ≥ 0) in D(E) and f ∈ B⁺(E),

lim_{t↓0} (1/t) E_{g·m}((fA)_t) = < f · µ, g̃ >.   (5.4)

(ii) For any γ-co-excessive function g (γ ≥ 0) in D(E) and f ∈ B⁺(E),

α(g, U^{α+γ}_A f) ↑ < f · µ, g̃ >, α ↑ ∞,
Now we can state the main result of this section.

Theorem 2.4. Suppose that (E, D(E)) is a quasi-regular local semi-Dirichlet form on L²(E; m) satisfying Assumption 2.3. Then, for any u ∈ D(E)_loc, there exist M^{[u]} ∈ Ṁ_loc and N^{[u]} ∈ N_{c,loc} such that

ũ(X_t) − ũ(X_0) = M^{[u]}_t + N^{[u]}_t, t ∈ [0, ζ), P_x-a.s. for E-q.e. x ∈ E.   (2.4)
By the analog of [5, Lemma 5.5.2] in the semi-Dirichlet forms setting, {N^{l,[u_nf_n]}_{t∧τ_{E_n}}} is a CAF of X^{V_n}. By (2.11), e_{V_n}(N^{l,[u_nf_n]} … -a.s., and both {M^{n,[u_nf_n]}_{t∧τ_{E_n}}} and {M^{l,[u_nf_n]}_{t∧τ_{E_n}}} are {Υ^n_t}-martingales, hence

M^{n,[u_nf_n]}_{t∧τ_{E_n}} = M^{l,[u_nf_n]}_{t∧τ_{E_n}} and N^{n,[u_nf_n]}_{t∧τ_{E_n}} = N^{l,[u_nf_n]}_{t∧τ_{E_n}}.
where L Vn is the generator of X Vn . By Assumption 2.3 and [12, Corollary I.4.15], we can show that sup k≥1
for any δ ≤ 1 and E-q.e. x ∈ V_n. Letting δ → 0, by (3.24), the dominated convergence theorem and (3.23), we get

lim_{t↓0} III(t) = 0.   (3.25)
By [17, Theorem II.33, integration by parts (page 68) and Theorem II.28], we get
t } the minimum completed admissible filtration of X Vn . For n < l, F n t ⊂ F l t ⊂ F t . Since E n ⊂ V n , τ En is an {F n t }-stopping time.Lemma 2.7. For n < l, we have M n,[unfn] t∧τ En = M l,[u l f l ] t∧τ En and N n,[unfn] t∧τ En = N l,[u l f l ] t∧τ En , t ≥ 0, P x -a.s. for E-q.e. x ∈ V n .Proof. Let n < l.Since M n,[unfn] ∈Ṁ Vn , M n,[unfn] is an {F n t }-martingale by the Markov property. Since τ En is an {F n t }-stopping time, {M n,[unfn] t∧τ En } is an {F n t∧τ En }martingale. Denote Υ n t = σ{X Vn s∧τ En | 0 ≤ s ≤ t}. Then {M n,[unfn] t∧τ En } is a {Υ n t }martingale. Denote Υ n,l t = σ{X V l s∧τ En | 0 ≤ s ≤ t}. Similarly, we can show that {M l,[unfn] t∧τ En
Acknowledgments
… dµ ≤ C_k Ẽ^h̄₁(g, g)^{1/2}, ∀g ∈ D(Ẽ^h̄).
[1] S. Albeverio, R.Z. Fan, M. Röckner and W. Stannat, A remark on coercive forms and associated semigroups, Partial Differential Operators and Mathematical Physics, Operator Theory Advances and Applications 78 (1995) 1-8.
[2] Z.Q. Chen, Z.M. Ma and M. Röckner, Quasi-homeomorphisms of Dirichlet forms, Nagoya Math. J. 136 (1994) 1-15.
[3] P.J. Fitzsimmons, On the quasi-regularity of semi-Dirichlet forms, Potential Anal. 15 (2001) 158-185.
[4] M. Fukushima, A decomposition of additive functionals of finite energy, Nagoya Math. J. 74 (1979) 137-168.
[5] M. Fukushima, Y. Oshima and M. Takeda, Dirichlet Forms and Symmetric Markov Processes, Walter de Gruyter, first edition, 1994; second revised and extended edition, 2011. (In the present paper, the sections and pages are quoted from the first edition.)
[6] S.W. He, J.G. Wang and J.A. Yan, Semimartingale Theory and Stochastic Calculus, Science Press, Beijing, 1992.
[7] Z.C. Hu, Z.M. Ma and W. Sun, Extensions of Lévy-Khintchine formula and Beurling-Deny formula in semi-Dirichlet forms setting, J. Funct. Anal. 239 (2006) 179-213.
[8] Z.C. Hu and W. Sun, Balayage of semi-Dirichlet forms, to appear in Can. J. Math.
[9] J.H. Kim, Stochastic calculus related to non-symmetric Dirichlet forms, Osaka J. Math. 24 (1987) 331-371.
[10] K. Kuwae, Maximum principles for subharmonic functions via local semi-Dirichlet forms, Can. J. Math. 60 (2008) 822-874.
[11] K. Kuwae, Stochastic calculus over symmetric Markov processes without time reversal, Ann. Probab. 38 (2010) 1532-1569.
[12] Z.M. Ma and M. Röckner, Introduction to the Theory of (Non-Symmetric) Dirichlet Forms, Springer-Verlag, 1992.
[13] Z.M. Ma, L. Overbeck and M. Röckner, Markov processes associated with semi-Dirichlet forms, Osaka J. Math. 32 (1995) 97-119.
[14] Z.M. Ma and M. Röckner, Markov processes associated with positivity preserving coercive forms, Can. J. Math. 47 (1995) 817-840.
[15] L. Overbeck, M. Röckner and B. Schmuland, An analytic approach to Fleming-Viot processes with interactive selection, Ann. Probab. 23 (1995) 1-36.
[16] Y. Oshima, Lecture on Dirichlet Spaces, Univ. Erlangen-Nürnberg, 1988.
[17] P.E. Protter, Stochastic Integration and Differential Equations, Springer, Berlin-Heidelberg-New York, 2005.
[18] M. Röckner and B. Schmuland, Quasi-regular Dirichlet forms: examples and counterexamples, Can. J. Math. 47 (1995) 165-200.
AN IMPRIMITIVITY THEOREM FOR PARTIAL ACTIONS

Damián Ferraro

18 Sep 2012, arXiv:1209.4092 (https://arxiv.org/pdf/1209.4092v1.pdf)

We define proper, free and commuting partial actions on upper semicontinuous bundles of C*-algebras. With such, we construct the C*
Properties of Partial Actions
Through this work the letters G, H and K will denote LCH topological groups and X, Y topological spaces. When any additional topological property is required it will be explicitly mentioned (this will never happen for the groups).
This section is a brief résumé of some results contained in [2] and in the PhD Thesis [1]; for that reason some proofs will be omitted. We start by recalling the definition of partial action. (1) X_t is a subset of X and X_e = X (e being the identity of H).
(2) α t : X t → X t −1 is a bijection and α e = id X (the identity on X).
(3) If x ∈ X t −1 and α t (x) ∈ X s −1 , then x ∈ X (st) −1 and α st (x) = α s • α t (x).
The domain of α is the set Γ α := {(t, x) ∈ H × X | x ∈ X t −1 }. Recall α is continuous if Γ α is open in H × X and the function, also called α, Γ α → X, (t, x) → α t (x), is continuous. The graph of the partial action α, Gr(α), is the graph of the function α : Γ α → X. We say α has closed graph if Gr(α) is closed in H × X × X.
Take two continuous partial actions of H, α and β, on the spaces X and Y respectively. A morphism f : α → β is a continuous function f : X → Y such that for every t ∈ H : f (X t ) ⊂ Y t and the restriction of β t • f to X t −1 equals f • α t .
Given β as before and a non empty open set Z ⊂ Y, the restriction of β to Z is the continuous partial action of H on Z given by γ t : Z ∩ β t −1 (Z) → Z ∩ β t (Z), z → β t (z).
Up to isomorphism of partial actions, every continuous partial action can be obtained as a restriction of a global action. That is, given α as before there exists a global and continuous action of H on a topological space Y, β, and an open set Z ⊂ Y such that α is isomorphic to the restriction of β to Z. If in addition Y = ∪{β_t(Z) | t ∈ H}, we say β is an enveloping action of α. Enveloping actions exist and are unique up to isomorphism of (partial) actions [2,1]. The enveloping action of α is denoted α^e and the space where it acts X^e; we also regard X as an open subset of X^e and α as the restriction of α^e to X.
The orbit of a subset U ⊂ X by α is the set HU := ∪{α_t(U ∩ X_{t^{-1}}) | t ∈ H}. The orbit of a point x ∈ X is the orbit of the set {x} and is denoted Hx. If we want to emphasize the name of the action we write αHx. The orbits of two points are equal or disjoint and the union of all of them is equal to X. With this partition of X we construct the quotient space X/H with the quotient topology; this is the orbit space of α. The canonical projection X → X/H is continuous, surjective and open. The function X/H → X^e/H, αHx ↦ α^eHx, is a homeomorphism.
Raeburn's Symmetric Imprimitivity Theorem involves free, proper and commuting actions. We now give the corresponding definitions for partial actions. We refer the reader to [1] for a more detailed exposition of these concepts.
The stabilizer of a point x ∈ X is the set H_x := {t ∈ H | x ∈ X_{t^{-1}}, α_t(x) = x}. It is easy to see that H_x is a subgroup of H, not necessarily closed if the action is not global. A partial action is free if the stabilizer of every point is the set {e}. A partial action is free if and only if its enveloping action is free.
The next concept we define is commutativity. We will have two continuous partial actions, α and β, of H and K, on X. To avoid confusion we will use the notation α_s : X^H_{s^{-1}} → X^H_s and β_t : X^K_{t^{-1}} → X^K_t, for s ∈ H and t ∈ K. We say α and β commute if for every (s,t) ∈ H × K:

(i) α_s(X^H_{s^{-1}} ∩ X^K_t) = β_t(X^K_{t^{-1}} ∩ X^H_s), and

(ii) α_s ∘ β_t(x) = β_t ∘ α_s(x), for every x ∈ α_{s^{-1}}(X^H_s ∩ X^K_{t^{-1}}).

This definition expresses the fact that we can compute α_s β_t(x) if and only if we can compute β_t α_s(x), and in that case α_s β_t(x) = β_t α_s(x). As we can see, if both actions are global, this is the usual notion of commuting actions.
Recall from [6] that a subset U ⊂ X is said to be α-invariant if α_t(X^H_{t^{-1}} ∩ U) ⊂ U for every t ∈ H. Condition (i) of the previous definition implies X^K_s is α-invariant for every s ∈ K.
An important property of commuting global actions is that we can define an action of the product group; this is also true for partial actions.

Lemma 1.2. If α and β commute then there is a continuous partial action, α × β, of G := H × K on X such that, for every (s,t) ∈ G,

(1) X_{(s,t)} = β_t(X^K_{t^{-1}} ∩ X^H_s) = α_s(X^H_{s^{-1}} ∩ X^K_t), and

(2) (α × β)_{(s,t)} = α_s ∘ β_t.
Proof. The fact that µ := α × β is a partial action (not necessarily continuous) is an easy consequence of the fact that α and β are commuting partial actions ([1] Proposição 4.35). We just have to deal with the continuity.
To show Γ_µ is open in G × X notice that the set Γ^{-1}_β := {(t,x) ∈ K × X | x ∈ X^K_t} is open in K × X, and define:

π_H : H × K × X → H × X, (h,k,x) ↦ (h,x),
π_K : H × K × X → K × X, (h,k,x) ↦ (k,x), and
F : π_K^{-1}(Γ_β) → π_K^{-1}(Γ^{-1}_β), (h,k,x) ↦ (h,k,β_k(x)).

It is easy to see that the three functions are continuous. So, the domain and range of F are open in H × K × X and Γ_µ = F^{-1}(π_K^{-1}(Γ^{-1}_β) ∩ π_H^{-1}(Γ_α)) is open in G × X.
Finally, the continuity of µ : Γ µ → X follows from that of α : Γ α → X and β : Γ β → X.
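For illustration (this example is not part of the original exposition), coordinate translations on the open square commute in the above sense, and Lemma 1.2 produces the expected partial action of the product group:

```latex
% Illustration: two commuting partial actions and their product action.
\begin{example}
Let $H=K=\mathbb{R}$ act on $X=(0,1)^2$ by
$\alpha_s(x,y)=(x+s,\,y)$ and $\beta_t(x,y)=(x,\,y+t)$, with domains
$X^H_s = X\cap\bigl(X+(s,0)\bigr)$ and $X^K_t = X\cap\bigl(X+(0,t)\bigr)$.
A direct computation gives
\[
\alpha_s\bigl(X^H_{-s}\cap X^K_t\bigr)
  = X^H_s\cap X^K_t
  = \beta_t\bigl(X^K_{-t}\cap X^H_s\bigr),
\]
so (i) holds, and (ii) is clear. Lemma~1.2 then yields the partial action
of $\mathbb{R}^2$ on the square given by
$X_{(s,t)} = X\cap\bigl(X+(s,t)\bigr)$ and
$(\alpha\times\beta)_{(s,t)}(x,y)=(x+s,\,y+t)$.
\end{example}
```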
Here is another property of partial actions we will use.

Lemma 1.3. If α and β commute then there is a partial action of H on X/K, called ᾱ, such that for every s ∈ H: (1) (X/K)_s := KX^H_s and (2) ᾱ_s(Kx) = Kα_s(x) for any x ∈ X^H_{s^{-1}}.

Proof. The first step is to show we can define ᾱ as in (1) and (2). Define the function F : Γ_α → H × X/K as F(s,x) = (s,Kx). This map is open and continuous. The domain of ᾱ will be the image of F, Γ_ᾱ := Im(F), which is an open set. Now define S : Γ_α → X/K in such a way that (s,x) ↦ Kα_s(x), and consider (on Γ_α) the equivalence relation u ∼ v if F(u) = F(v). The function S is constant on the classes of ∼, and the quotient space Γ_α/∼ is homeomorphic to Γ_ᾱ through the map induced by F. So, there is a unique continuous map Γ_ᾱ → X/K such that (s,Kx) ↦ Kα_s(x). This is the partial action ᾱ we are looking for.

It remains to be shown that ᾱ is a partial action. Properties (1) and (2) of Definition 1.1 are easy to prove; for (3) recall every X^H_s is β-invariant.
Assume for a moment we have a continuous global action of H on X. It is immediate that its domain, being equal to H × X, is a closed and open (clopen) subset of H × X. That is not always the case for partial actions.
Definition 1.4. A partial action, α, of H on X has closed domain if Γ α is closed in H × X.
Lemma 1.5. If X is Hausdorff and α is continuous, the following conditions are equivalent:
(1) α has closed domain.
(2) The enveloping space X e is Hausdorff and X is closed in X e .
Proof. We start by proving (1)⇒(2). Recall from [2] that X^e is Hausdorff if α has closed graph. Consider the function F : H × X × X → H × X, (s,x,y) ↦ (s,x). The set F^{-1}(Γ_α) is closed in H × X × X. Now, Gr(α) is closed in F^{-1}(Γ_α) because it is the pre-image of the diagonal {(x,x) | x ∈ X} ⊂ X × X by the continuous function F^{-1}(Γ_α) → X × X, (t,x,y) ↦ (α_t(x), y). This implies α has closed graph.
To show X is closed in X^e take a net contained in X, {x_i}_{i∈I}, converging to a point x ∈ X^e. There exists t ∈ H such that α^e_t(x) ∈ X. By the continuity of α^e there is an i_0 such that (t^{-1}, α^e_t(x_i)) ∈ Γ_α for i ≥ i_0. Then (t^{-1}, α^e_t(x)), being the limit of {(t^{-1}, α^e_t(x_i))}_{i ≥ i_0}, belongs to Γ_α. Finally x = α_{t^{-1}} α^e_t(x) ∈ X.
For the converse notice three facts: the topology of H × X is the topology relative to H × X^e, (α^e)^{-1}(X) is closed in H × X^e and Γ_α = (H × X) ∩ (α^e)^{-1}(X). So we clearly have that Γ_α is closed in H × X.
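As an illustration of Lemma 1.5 (this example is not from the original text), the restriction of the translation action of ℝ to (0,1) does not have closed domain:

```latex
% Illustration: a continuous partial action whose domain is not closed.
\begin{example}
Let $\alpha$ be the restriction of the translation action of $\mathbb{R}$
to $X=(0,1)$, so that
$\Gamma_\alpha=\{(t,x)\in\mathbb{R}\times(0,1) : x+t\in(0,1)\}$.
For fixed $x\in(0,1)$ the points $(1-x-\tfrac1n,\,x)\in\Gamma_\alpha$
converge in $\mathbb{R}\times X$ to $(1-x,\,x)\notin\Gamma_\alpha$, so
$\Gamma_\alpha$ is not closed. Consistently with Lemma~1.5, $X=(0,1)$ is
open but not closed in the enveloping space $X^e=\mathbb{R}$.
\end{example}
```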
The previous lemma characterizes the continuous partial actions with closed domain on Hausdorff spaces as those arising as the restriction of a global action on a Hausdorff space to a clopen set.

Lemma 1.6. Given two continuous and commuting partial actions, both with closed domain, the partial action of the product group (as defined on Lemma 1.2) has closed domain.
Proof. In the proof of Lemma 1.2 we showed Γ α×β is open, use the same arguments changing the word "open" for "closed".
A dynamical system (DS for short) is a triple (Y, H, β) where β is a continuous action of H on Y, and H and Y are LCH. The natural extension to partial actions is the following one.

Definition 1.7. A partial dynamical system (PDS for short) is a triple (X, H, α) where α is a continuous partial action of H on X, and H and X are LCH.

Lemma 1.8. Let (X, H, α) be a PDS. Then (X^e, H, α^e) is a DS if and only if α has closed graph.

Proof. By Theorem 1.1. of [2] every point of X^e has a neighbourhood homeomorphic to X. So, every point of X^e has a local basis of compact neighbourhoods. As α is continuous and H is LCH, (X^e, H, α^e) is a DS if and only if X^e is Hausdorff. By Proposition 1.2. of [2] X^e is Hausdorff if and only if α has closed graph.
A PDS (X, H, α) is proper if the function F_α : Γ_α → X × X, (t,x) ↦ (x, α_t(x)), is proper (the pre-image of a compact set is compact). This definition, and part of the next Lemma, are taken from [1].

Lemma 1.9. Given a PDS (X, H, α), the following statements are equivalent:
(1) The system is proper.
(2) Every net contained in Γ α , {(t i , x i )} i∈I , such that {(x i , α ti (x i ))} i∈I converges to some point of X × X, has a subnet converging to a point of Γ α . (3) α has closed graph and the enveloping DS (X e , H, α e ) is proper.
Proof. The equivalence between (1) and (3) is proved in [1] (Proposição 4.62). The equivalence between (1) and (2) is proved as in Lemma 3.42 of [13].
It is a known fact that the orbit space of a proper DS is a LCH space, this is also true for PDS.
Lemma 1.10. If (X, H, α) is a proper PDS then X/H is a LCH space.
Proof. By the previous Lemma (X e , H, α e ) is a proper DS. So, X e /H is LCH. But X/H is homeomorphic to X e /H and so is LCH.
The next result follows immediately from the previous ones.
Lemma 1.11. Let (X, H, α) and (X, K, β) be commuting PDS (that is, α and β commute). If β is proper then (X/K, H, ᾱ) is a PDS, where ᾱ is the partial action defined on Lemma 1.3.
2. Partial actions on bundles of C*-algebras

The definition of upper semicontinuous C*-bundle we are going to use is Definition C.16 of [13] (notice that we do not require the base space to be Hausdorff). From now on B = {B_x}_{x∈X} and C = {C_y}_{y∈Y} will be upper semicontinuous C*-bundles. The projections of B and C will be denoted p : B → X and q : C → Y, respectively.
The set of continuous and bounded sections of the bundle B will be denoted C b (B) (this notation differs from that of [13]). Similarly, C 0 (B) is the set of continuous sections vanishing at infinity (C.21 [13]) and C c (B) the set of continuous sections of compact support. When X is a LCH space, C b (B) and C 0 (B) are C * −algebras with the supremum norm and C c (B) is a dense * −sub algebra of C 0 (B). Definition 2.1. A partial action of H on B is a pair (α, ·), where α and · are continuous partial actions of H on B and X, respectively, satisfying
(1) p −1 (X t ) = t B for every t ∈ H. Here t B is the range of α t .
(2) p is a morphism of partial actions (Definition 1.1 of [2]).
(3) The restriction of α t to a fiber is a morphism of C * −algebras, for each t ∈ H. Notice that · is determined by α. For that reason, with abuse of notation, we name α the pair (α, ·). We say α is global if the partial action on the total space is a global action or, what is the same, if · is global.
The domain of the partial action on the total and base space will be denoted Γ(B, α) and Γ(X, α), respectively.

Example 2.2. Let (X, H, ·) be a PDS and (A, H, γ) a C*-DS, that is, A is a C*-algebra and γ : H → Aut(A) is a strongly continuous action. With these, define the trivial bundle p : A × X → X, where p(a,x) = x. All the fibers of this bundle, called B, are isomorphic to A by the maps A → B_x, a ↦ (a,x). We define a partial action of H on B by setting α_t : A × X_{t^{-1}} → A × X_t, (a,x) ↦ (γ_t(a), t · x).
I would like to emphasize that, from now on, we are going to use the letters α and β for actions on total spaces. The actions on the base spaces will be denoted · and ⋆. We will write α t (a) and t · x, similarly with β and ⋆.
If β = (β, ⋆) is a partial action of H on C, a morphism (F, f ) : α → β is a pair of continuous functions, F : B → C and f : X → Y , such that: both are morphism of partial actions, q • F = f • p and the restriction of F to each fiber is a morphism of C * −algebras. Naturally, the composition of morphisms is the composition of functions (on each coordinate).
Following [2] we can define the restriction of actions. Let β = (β, ⋆) be a global action of H on B and U an open subset of X. Consider the restriction bundle B_U = {B_u}_{u∈U} with the partial action β_U, which is the pair formed by the restrictions of the actions of H to p^{-1}(U) and U. Notice we have obtained a partial action because p^{-1}(U) ∩ β_t(p^{-1}(U)) = p^{-1}(U ∩ t ⋆ U), for every t ∈ H.

Theorem 2.3. Let α be a partial action of H on the upper semicontinuous C*-bundle B = {B_x}_{x∈X}. Then there exist an upper semicontinuous C*-bundle B^e = {B^e_x}_{x∈X^e}, a global action α^e of H on B^e and a morphism (ι_X, ι_B) : α → α^e with the following universal property: for every global action β of H on an upper semicontinuous C*-bundle and every morphism ψ : α → β there is a unique morphism ψ^e : α^e → β such that ψ^e ∘ (ι_X, ι_B) = ψ.
Moreover, the pair (ι X , ι B , α e ) is unique up to canonical isomorphisms, and
(1) ι X (X) is open in X e . (2) (ι X , ι B ) : α → (α e ) ιX (X) is an isomorphism. (3) X e is the orbit of ι X (X). (4) B e is a continuous C * −bundle if and only if B is.
Proof. Let (ι_X, ·^e) and (ι_B, α^e) be the pairs given by Theorem 1.1 of [2] for · and α. We also have a morphism p^e : α^e → ·^e. Notice p^e is surjective because it is a morphism and the orbit of ι_X(X) equals X^e. Again, as B^e is the orbit of ι_B(B) and p^e is a morphism, to prove p^e is open we only have to see that
p^e ∘ α^e_t ∘ ι_B is open, for every t ∈ H. But this is true because, if U is open in B, then p^e ∘ α^e_t ∘ ι_B(U) = t ·^e ι_X(p(U)), the last being an open set.
We have proved B^e fibers over X^e. We now give a structure of C*-algebra to each fiber of B^e. Let x be an element of X^e, take t ∈ H such that t ·^e x ∈ ι_X(X) and define the C*-structure on B^e_x as the unique one making

α^e_{t^{-1}} ∘ ι_B : B_{ι_X^{-1}(t ·^e x)} → B^e_x

an isomorphism of C*-algebras. This is independent of the choice of t because α acts as isomorphism of C*-algebras on the fibers of B.
To prove the norm of B^e is upper semicontinuous notice that, given ε > 0, the set {b ∈ B^e : ‖b‖ < ε} equals the open set ⋃_{t∈H} α^e_t ∘ ι_B({b ∈ B : ‖b‖ < ε}).
In fact, a similar argument shows the norm of B e is continuous if and only if the norm of B is continuous. This suffices to prove property (4) of the thesis.
We now indicate how to prove the continuity of the product; for the other operations there are analogous proofs. Set D^e := {(a,b) ∈ B^e × B^e : p^e(a) = p^e(b)}. We prove the continuity of D^e → B^e, (a,b) ↦ ab, locally. Fix (a,b) ∈ D^e; we may assume p^e(a) = t ·^e x for some x ∈ X and t ∈ H. The product is continuous on (a,b) because U := (α^e_t ∘ ι_B(B) × α^e_t ∘ ι_B(B)) ∩ D^e is open in D^e, and the restriction of the product to U is the continuous function

(c,d) ↦ α^e_t ∘ ι_B( ((ι_B)^{-1} ∘ α^e_{t^{-1}}(c)) ((ι_B)^{-1} ∘ α^e_{t^{-1}}(d)) ).

Up to here we have constructed an upper semicontinuous C*-bundle B^e = {B^e_x}_{x∈X^e}. By the previous construction we also have that (α^e, ·^e) is a global action of H on B^e, and (ι_X, ι_B) : α → α^e is a morphism. Except for property (2) of the thesis, everything follows immediately from the previous constructions and Theorem 1.1 of [2].
To prove property (2) it suffices to see that (p e ) −1 (ι X (X)) = ι B (B). We clearly have the inclusion ⊃, for the other one let b ∈ (p e ) −1 (ι X (X)). We may suppose b = α e t (ι B (c)) for some c ∈ B and t ∈ H.
As p^e(b) = p^e(α^e_t(ι_B(c))) = t ·^e ι_X(p(c)) ∈ ι_X(X), we have p(c) ∈ X_{t^{-1}}. This implies b = ι_B(α_t(c)) ∈ ι_B(B).
The non commutative analogues of PDS's are the C*-PDS's; they are triples (A, G, γ) formed by a C*-algebra A, a LCH group G and a partial action γ of G on A (Definition 2.2 of [2]; for a more general definition see [5]).
We know every PDS gives us a C*-PDS with commutative algebra [2]. Following that construction, we are going to use partial actions on upper semicontinuous C*-bundles over LCH spaces to construct partial actions on the C*-algebras C_0(B). The ideals are of the form C_0(B, U) := {f ∈ C_0(B) | f(x) = 0_x if x ∉ U}, for open sets U ⊂ X.

Theorem 2.4. Let α be a partial action of H on the upper semicontinuous C*-bundle B = {B_x}_{x∈X}, with X a LCH space. Then (C_0(B), H, α) is a C*-PDS, where:

(1) C_0(B)_t = C_0(B, X_t), for every t ∈ H.

(2) If f ∈ C_0(B)_{t^{-1}} then α_t(f)(x) = α_t(f(t^{-1} · x)) if x ∈ X_t and 0_x otherwise.
Proof. First of all we have to show that, given t ∈ H and f ∈ C_0(B)_{t^{-1}}, the function α_t(f) belongs to C_0(B)_t. It is clear that α_t(f) is a section that vanishes outside X_t. Besides, the function X_t → ℝ, x ↦ ‖α_t(f)(x)‖, being equal to X_t → ℝ, x ↦ ‖f(t^{-1} · x)‖, vanishes at infinity.
Clearly α_t(f) is continuous on X_t and in the interior of the complement of X_t. To prove the continuity of α_t(f) it suffices to show that given a net {x_i}_{i∈I} ⊂ X_t converging to a point x ∉ X_t, we have ‖α_t(f)(x_i)‖ → 0. Notice that the function X_{t^{-1}} → ℝ, y ↦ ‖f(y)‖, vanishes at infinity and the net {t^{-1} · x_i}_{i∈I} is eventually outside every compact subset of X_{t^{-1}}; we conclude ‖α_t(f)(x_i)‖ = ‖f(t^{-1} · x_i)‖ → 0.
The next step is to show α is a partial action (Definition 1.1). We omit the proof of this fact because it is an easy task.
To prove {C_0(B)_t}_{t∈H} is a continuous family [5], let U be an open set of C_0(B) and fix t ∈ H such that C_0(B)_t ∩ U ≠ ∅. By the Urysohn Lemma we can find g ∈ C_0(B)_t ∩ U with compact support. As the domain of the partial action on X is an open set, there is an open set containing t, V, such that X_r contains the support of g for every r ∈ V. Then V is an open set containing t and contained in {r ∈ H | C_0(B)_r ∩ U ≠ ∅}.
Now we deal with the continuity of α. Let {(t_i, f_i)}_{i∈I} be a net contained in Γ_α converging to (t,f) ∈ Γ_α. Given ε > 0 there exists g ∈ C_c(B), with support contained in X_{t^{-1}}, such that ‖f − g‖ < ε/3 (by the Urysohn Lemma). We can find an i_0 ∈ I such that supp(g) ⊂ X_{t_i^{-1}} and ‖f_i − g‖ < ε/3, for every i ≥ i_0. Then, for every i ≥ i_0,

‖α_t(f) − α_{t_i}(f_i)‖ ≤ ‖α_{t_i}(f_i − g)‖ + ‖α_{t_i}(g) − α_t(g)‖ + ‖α_t(f − g)‖ < 2ε/3 + ‖α_{t_i}(g) − α_t(g)‖.
To complete the proof it suffices to see that lim_i ‖α_{t_i}(g) − α_t(g)‖ = 0. To this purpose let D be a compact set containing t · supp(g) in its interior, and contained in X_t. We may find i_1 (larger than i_0) such that t_i · supp(g) ⊂ D and D ⊂ X_{t_i}, for every i ≥ i_1. Given i ≥ i_1 we have

‖α_{t_i}(g) − α_t(g)‖ = sup{‖α_{t_i}(g(t_i^{-1} · x)) − α_t(g(t^{-1} · x))‖ : x ∈ D}.

As D is compact, it suffices to prove α_{t_i}(g) converges pointwise to α_t(g), which is an easy consequence of Lemma C.18 of [13] and the continuity of α and g.
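As a sanity check (added for illustration; it is a special case, not a new result), in the commutative setting Theorem 2.4 recovers the C*-partial dynamical system associated to a PDS in [2]:

```latex
% Illustration: Theorem 2.4 in the commutative (trivial line bundle) case.
\begin{example}
Let $(X,H,\cdot)$ be a PDS and take the trivial bundle
$B=\mathbb{C}\times X$ of Example~2.2 with $A=\mathbb{C}$ and
$\gamma_t=\operatorname{id}_{\mathbb{C}}$. Then $C_0(B)\cong C_0(X)$, and
Theorem~2.4 yields
\[
C_0(B)_t\cong C_0(X_t),\qquad
\alpha_t(f)(x)=
\begin{cases}
 f(t^{-1}\cdot x) & \text{if } x\in X_t,\\[2pt]
 0 & \text{otherwise,}
\end{cases}
\]
which is the $C^*$-partial dynamical system associated to $(X,H,\cdot)$
in [2].
\end{example}
```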
The definitions of proper, free and commuting partial actions on bundles are the following ones.
Definition 2.5. Given an upper semicontinuous C * −bundle over a LCH space and a partial action of a LCH group on the bundle, we say the partial action is free, proper, has closed graph or has closed domain if the partial action on the base space has the respective property. Similarly, given two partial actions on an upper semicontinuous C * −bundle we say they commute if the partial actions on the total and base space commute.
Relating the concepts of enveloping action, in the contexts of C * −algebras and bundles, we have the following result.
Theorem 2.6. Let α be a partial action of H on the upper semicontinuous C*-bundle B = {B_x}_{x∈X}. If X is LCH, α has closed graph, α^e is the enveloping action of α and B^e the enveloping bundle, then (C_0(B^e), H, α^e) is the enveloping system of (C_0(B), H, α) (Definition 2.3 of [2]). So C_0(B) ⋊_α H is a hereditary and full sub-C*-algebra of C_0(B^e) ⋊_{α^e} H. In particular those crossed products are strongly Morita equivalent.
Proof. By Theorem 2.3 we may suppose X ⊂ X^e, B ⊂ B^e, p^e(B) = X and that p is the restriction of p^e to B. Now, by Lemma 1.8, X^e is LCH. These considerations allow us to identify the bundle B with the restriction of B^e to X, which gives C_0(B) = C_0(B^e, X). We have identified C_0(B) with an ideal of C_0(B^e).
We also have, for every t ∈ H,

α^e_t(C_0(B^e, X)) ∩ C_0(B^e, X) = C_0(B^e, t · X) ∩ C_0(B^e, X) = C_0(B^e, X ∩ t · X) = C_0(B^e, X_t) = C_0(B)_t;
and clearly the restriction of α^e_t to C_0(B)_{t^{-1}} equals α_t. So, using Corollary 1.3 of [3], the only thing that remains to be shown is that the space generated by the α^e-orbit of C_0(B) is dense in C_0(B^e). We show every continuous section with compact support of B^e is a finite sum of points in the orbit of C_0(B). Fix f ∈ C_c(B^e). The support of f has an open cover by sets of the form t ·^e X, t varying in H. We can find t_1, …, t_n ∈ H and h_1, …, h_n ∈ C_c(X^e) such that: 0 ≤ h_1 + ··· + h_n ≤ 1, the support of h_i is contained in t_i · X (i = 1, …, n) and h_1(x) + ··· + h_n(x) = 1 if x ∈ supp(f). Defining f_i(x) = h_i(x)f(x) (i = 1, …, n), we have f = f_1 + ··· + f_n and g_i := α^e_{t_i^{-1}}(f_i) ∈ C_c(B) for every i = 1, …, n. Besides, f = α^e_{t_1}(g_1) + ··· + α^e_{t_n}(g_n), which gives the desired result.
We can reproduce most of the results of Section 1 in this context. For example, the next Theorem is a direct consequence of Lemma 1.2.
Theorem 2.7. Given an upper semicontinuous C*-bundle B and commuting partial actions, (α, ·) and (β, ⋆), of H and K on B, respectively, the pair (α, ·) × (β, ⋆) := (α × β, · × ⋆) is a partial action of H × K on B.
Writing α = (α, ·) and β = (β, ⋆), the product α × β is the one defined in the previous Theorem.
2.1. Orbit bundle.
There is a notion of "orbit bundle", analogous to the notion of "orbit space", but to construct it we have to consider proper and free partial actions.
Fix an upper semicontinuous C * −bundle over a LCH space, B = {B x } x∈X , and a proper and free partial action, α, of H on B. Let B/H and X/H be the orbit spaces and π B : B → B/H and π X : X → X/H be the orbit maps. As p is a morphism of partial actions, there is a unique continuous (also open and surjective) function p α :
B/H → X/H such that π X • p = p α • π B .
We want to equip B/H with operations making B/H := (B/H, X/H, p_α) an upper semicontinuous C*-bundle. To do this first notice that, given Hx ∈ X/H, the fiber (B/H)_{Hx} is homeomorphic to B_x through the restriction of π_B to B_x (because the partial action on X is free). Call that map h_x : B_x → (B/H)_{Hx}. Define the structure of C*-algebra of (B/H)_{Hx} in such a way that h_x is an isomorphism of C*-algebras. This definition is independent of the choice of x because, if Hy = Hx, then h_y^{-1} ∘ h_x : B_x → B_y is an isomorphism, as it is the restriction of α_t to B_x, t ∈ H being the unique element such that x ∈ X_{t^{-1}} and t · x = y.
The operations of B/H are defined using the maps S and P below. Set D := {(a,b) ∈ B × B : Hp(a) = Hp(b)}. If (a,b) ∈ D there is a unique t ∈ H, which we name t(p(a), p(b)), such that p(a) ∈ X_{t^{-1}} and t · p(a) = p(b). Hence, α_t(a) and b are in the same fiber, and we define S(a,b) := α_t(a) + b and P(a,b) := α_t(a)b.
To prove the continuity of S and P we only have to prove the continuity of the function
F : {(x, y) ∈ X × X : Hx = Hy} → Γ(X, α), F (x, y) = (t(x, y), x).
Call D X the domain of F .
Consider the function R : Γ(X, α) → X × X given by R(t,x) = (x, t · x); this is a continuous, proper and injective function between LCH spaces. Such functions are homeomorphisms onto their images, but the image of R is D_X and F = R^{-1}. So, F is continuous.
Once we have proved the continuity of S and P, using the freeness of the partial action on X, we prove they are constant on the classes of the equivalence relation induced on D by π_B × π_B; hence they define the sum and product on the fibers of B/H. The last step to show B/H is an upper semicontinuous C*-bundle is to prove it satisfies the following property: for every net {b_i}_{i∈I} ⊂ B/H such that ‖b_i‖ → 0 and p_α(b_i) → z, for some z ∈ X/H, we have b_i → 0_z.
Let {b_i}_{i∈I} be a net as before; it suffices to show it has a subnet converging to 0_z. There is a net in B, {a_i}_{i∈I}, such that b_i = Ha_i for every i ∈ I. We have that Hp(a_i) = p_α(b_i) → Hx, where x ∈ X is such that Hx = z. As the orbit map X → X/H is open and surjective, Proposition 13.2 Chapter II of [7] implies there is a subnet {a_{i_j}}_{j∈J} and a net {t_j}_{j∈J} ⊂ H such that p(a_{i_j}) ∈ X_{t_j^{-1}} and t_j · p(a_{i_j}) → x. This implies a_{i_j} ∈ t_j^{-1}B and p(α_{t_j}(a_{i_j})) → x. But also ‖α_{t_j}(a_{i_j})‖ = ‖b_{i_j}‖ → 0, so α_{t_j}(a_{i_j}) → 0_x. Finally, as b_{i_j} = Hα_{t_j}(a_{i_j}), π_B(0_x) = H0_x = 0_z and π_B is continuous, 0_z is a limit point of {b_{i_j}}_{j∈J}.
Definition 2.8. The orbit bundle of B by α is the upper semicontinuous C * −bundle B/H constructed before.
Example 2.9. Consider the situation of Example 2.2 where the action of H on A is the trivial one (γ_t = id_A for every t ∈ H) and the system (X, H, ·) is free and proper. Then the quotient bundle B/H is isomorphic to the trivial bundle A × X/H. Notice that C_0(B/H) is isomorphic to C_0(X/H, A), the set of continuous functions from X/H to A vanishing at infinity.

Our next goal is to identify C_0(B/H) with a C*-subalgebra of C_b(B). Every f ∈ C_b(B) which is also a morphism of partial actions induces a continuous and bounded section Ind_b(f) : X/H → B/H, given by Hx ↦ Hf(x).
The induced algebra Ind_b(B, α) is the subset of C_b(B) formed by all the sections which are also morphisms of partial actions. There is a natural map
Ind b : Ind b (B, α) → C b (B/H), f → Ind b (f ).
Similarly, the algebra Ind 0 (B, α) is the pre image of C 0 (B/H) under Ind b . The function Ind 0 is simply the restriction of Ind b to Ind 0 (B, α).
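In the situation of Example 2.9 the induced algebras can be computed directly; the following sketch (added for illustration) makes the identification explicit:

```latex
% Illustration: the induced algebras for the trivial bundle of Example 2.9.
\begin{example}
Let $B=A\times X$ with $\gamma$ trivial, as in Example~2.9. Every
$f\in C_b(B)$ has the form $f(x)=(g(x),x)$ for a bounded continuous
$g\colon X\to A$, and $f$ is a morphism of partial actions exactly when
$g(t\cdot x)=g(x)$ for all $(t,x)\in\Gamma(X,\alpha)$, that is, when $g$
is constant on $H$-orbits. Hence
\[
\operatorname{Ind}_b(B,\alpha)\cong C_b(X/H,A),\qquad
\operatorname{Ind}_0(B,\alpha)\cong C_0(X/H,A)\cong C_0(B/H),
\]
in agreement with Example~2.9.
\end{example}
```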
In fact, the induced algebras are C*-subalgebras of C_b(B). To prove this it suffices to show Ind_b(B, α) is a C*-subalgebra and to notice that Ind_b is a morphism of C*-algebras.
The non trivial fact is that Ind_b(B, α) is closed in C_b(B). Assume {f_n}_{n∈ℕ} is a sequence contained in Ind_b(B, α) converging to f. Choose some t ∈ H and x ∈ X_{t^{-1}}. Even if B is not Hausdorff, B_x and B_{t·x} are, so we have the following equalities:

α_t(f(x)) = lim_n α_t(f_n(x)) = lim_n f_n(t · x) = f(t · x).

Hence f ∈ Ind_b(B, α).

Lemma 2.10. The maps Ind_b : Ind_b(B, α) → C_b(B/H) and Ind_0 : Ind_0(B, α) → C_0(B/H) are isomorphisms of C*-algebras.

Proof. The only thing to prove is that Ind_b is surjective (it is injective because it is an isometry). Fix g ∈ C_b(B/H); we will construct f ∈ Ind_b(B, α) such that Ind_b(f) = g.
As the action on the base space is free, for every x ∈ X there is a unique f (x) ∈ B x such that Hf (x) = g(Hx). Clearly f is a bounded section.
To prove f is continuous, let {x_i}_{i∈I} be a net contained in X converging to x ∈ X. It suffices to find a subnet {x_{i_j}}_{j∈J} such that f(x_{i_j}) → f(x). By the continuity of g the net {Hf(x_i)}_{i∈I} has Hf(x) as a limit point. As the orbit map B → B/H is open, there is a subnet {x_{i_j}}_{j∈J} and a net {t_j}_{j∈J} such that {(t_j, f(x_{i_j}))}_{j∈J} ⊂ Γ(B, α) and α_{t_j}(f(x_{i_j})) → f(x). This implies {(t_j, x_{i_j})}_{j∈J} ⊂ Γ(X, α) and t_j · x_{i_j} → x. Then t_j = t(x_{i_j}, t_j · x_{i_j}) → t(x, x) = e (see the construction of the orbit bundle in Section 2.1). Finally, the net {f(x_{i_j})}_{j∈J}, being equal to {α_{t_j^{-1}} α_{t_j}(f(x_{i_j}))}_{j∈J}, has f(x) as a limit point.

It remains to prove f is a morphism of partial actions. Clearly f(X_t) ⊂ ^tB for every t ∈ H. Now take t ∈ H and x ∈ X_{t^{-1}}. The points f(t · x) and α_t(f(x)) are, both, the unique point of B_{t·x} in the class of g(Hx), so they are equal.
Theorem 2.11. Let B = {B_x}_{x∈X} be an upper semicontinuous C*-bundle over a LCH space and α and β be partial actions of H and K on B, respectively. If α is free and proper, then (Ind_0(B, α), K, β) is a C*-PDS where

(1) Ind_0(B, α)_t := {f ∈ Ind_0(B, α) : x ↦ ‖f(x)‖ vanishes outside X^K_t}.

(2) For every f ∈ Ind_0(B, α)_{t^{-1}}, β_t(f)(x) = β_t(f(t^{-1} ⋆ x)) if x ∈ X^K_t and 0_x otherwise.
Proof. Let B/H be the orbit bundle. As α commutes with β, using Lemmas 1.3 and 1.11, we define a partial action, µ, of K on B/H. By Theorem 2.4, µ defines a C * −PDS (C 0 (B/H), K, µ). Lemma 2.10 ensures the map Ind 0 : Ind 0 (B, α) → C 0 (B/H) is an isomorphism. Notice Ind 0 (B, α) t is the pre image of C 0 (B/H) t . The partial action of the thesis is the unique making Ind 0 : β → µ an isomorphism of partial actions.
3. Morita equivalence
In our last section we prove our main theorem, which is a generalization of Raeburn's and Green's Symmetric Imprimitivity Theorems [10,11]. The first task is to translate Raeburn's result to the language of actions on bundles.
Consider two C * −DS (A, H, γ) and (A, K, δ), and two proper and free DS (X, H, ·) and (X, K, ⋆). Assume also that the actions on A and X commute. On the trivial bundle B = A × X define the actions of H and K as in Example 2.2, call them α and β, respectively.
Let Ind γ be the induced C * -algebra defined as in [10]. We have an isomorphism ρ : Ind γ → Ind 0 (B, α), given by ρ(f )(x) = (f (x), x). This isomorphism takes the action of K on Ind γ (as defined on [10]) into the action β. By using Raeburn's Theorem we conclude that Ind 0 (B, α) ⋊ β K is strongly Morita equivalent to Ind 0 (B, β) ⋊ α H. Our purpose is to give a version of this result for partial actions. We will write A ∼ M B whenever A and B are strongly Morita equivalent C * −algebras [12].
3.1. The main Theorem.

From now on we work with two LCH topological groups, H and K, an upper semicontinuous C*-bundle with LCH base space, B = {B_x}_{x∈X}, and two continuous, free, proper and commuting partial actions, α and β, of H and K on B, respectively.
We want to give conditions under which we can say that Ind_0(B, α) ⋊_β K is strongly Morita equivalent to Ind_0(B, β) ⋊_α H. For global actions, with some additional hypotheses on the group and the base space, this is proved in [4], [9] or [8]. In fact, the proof of the next Theorem is a minor modification of Raeburn's proof of the Symmetric Imprimitivity Theorem [10].

Theorem 3.1. If α and β are global then

Ind_0(B, α) ⋊_β K ∼_M Ind_0(B, β) ⋊_α H.
Proof. Define E := C_c(H, Ind_0(B, β)) and F := C_c(K, Ind_0(B, α)), viewed as dense *-subalgebras of the respective crossed products. Define also Z := C_c(B), which will be an E−F−bimodule with E− and F−valued inner products; its completion implements the equivalence between Ind_0(B, α) ⋊_β K and Ind_0(B, β) ⋊_α H.
For f, g ∈ Z, b ∈ E and c ∈ F define

b · f(x) := ∫_H b(s)(x) α_s(f)(x) Δ_H(s)^{1/2} ds,   (3.1)

f · c(x) := ∫_K β_t(f c(t^{-1}))(x) Δ_K(t)^{-1/2} dt,   (3.2)

_E⟨f, g⟩(s)(x) := Δ_H(s)^{-1/2} ∫_K β_t(f α_s(g*))(x) dt,   (3.3)

⟨f, g⟩_F(t)(x) := Δ_K(t)^{-1/2} ∫_H α_s(f* β_t(g))(x) ds.   (3.4)
The integration is with respect to left invariant Haar measures; ∆ H and ∆ K are the modular functions of the groups. Here α and β are the partial actions defined on Theorem 2.4.
We now justify the fact that b · f ∈ Z. The function H → C_0(B), given by s ↦ b(s) α_s(f) Δ_H(s)^{1/2}, is continuous (Theorem 2.4). Besides, its support is contained in the support of b and so we can integrate it. This integral is exactly b · f. Finally, notice supp(b · f) ⊂ {s · x : (s,x) ∈ supp(b) × supp(f)}, the last being a compact set.
To prove (3.3) defines an element of E we proceed as follows. Fix s ∈ H and x ∈ X; the function K → B_x, given by t ↦ β_t(f α_s(g*))(x), is continuous with support contained in the compact set {t ∈ K : t^{-1} ⋆ x ∈ supp(f)}. So, the function is integrable. The value of that integral is _E⟨f, g⟩(s)(x).
We now prove _E⟨f, g⟩ is continuous, which we do locally. Fix some s_0 ∈ H and x_0 ∈ X. Take compact neighbourhoods, V of s_0 and W of x_0. The bundle B_W will be the restriction of B to W. Define the function F : V × K → C(B_W) by F(s,t)(x) = β_t(f α_s(g*))(x). As the action of K on X is proper, F has compact support. By integrating with respect to the second coordinate, we get a continuous function R ∈ C(V, C(B_W)), defined by R(s) = ∫_K F(s,t) dµ_K(t) ([7] II.15.19). Fixing (s,x) ∈ V × W, we have R(s)(x) = _E⟨f, g⟩(s)(x). From this follows the continuity of _E⟨f, g⟩.
An easy calculation shows _E⟨f, g⟩(s)(t ⋆ x) = β_t(_E⟨f, g⟩(s)(x)), for every t ∈ K and x ∈ X. Besides, if _E⟨f, g⟩(s)(x) ≠ 0_x, then x belongs to the K-orbit of supp(f), and s to the compact set {s ∈ H : s · supp(g) ∩ supp(f) ≠ ∅}. We have proved that _E⟨f, g⟩ ∈ E.
The computations needed to prove that equations (3.1)-(3.4) define an equivalence bimodule are the same as in [10] or [13]. For the construction of the approximate unit, analogous to that of Lemma 1.2 of [10], follow the proof of Proposition 4.5 of [13], recalling that Ind^P_H(A, β) plays the role of our Ind_0(B, β).

Our next step is to let α and β be partial, but to have the same result we need additional hypotheses, which are trivially satisfied in the previous case.
Let α × β be the partial action given by Theorem 2.7. Now, by Theorem 2.3, we have an enveloping action (α × β) e and an enveloping bundle B e . We can assume B is the restriction of B e to X ⊂ X e .
For the action given by (α × β) e on X e we will use the notation (s, t)x, for (s, t) ∈ H × K and x ∈ X e .
Define σ and τ as the restriction of (α × β) e to H and K, respectively (identify H with H ×{e} ⊂ H ×K). It is immediate that (α×β) e = σ ×τ , σ and τ commute, and that α (β) is the restriction of σ (τ ) to B.
The next is the main Theorem of this article.
Theorem 3.2. If α × β has closed graph and σ and τ are proper then
Ind 0 (B, α) ⋊ β K ∼ M Ind 0 (B, β) ⋊ α H.
Proof. We first show that σ (and also τ ) is free. Assume (s, e)x = x for some s ∈ H and x ∈ X e . As X e is the H × K−orbit of X, there exists (h, k) ∈ H × K such that (h, k)x ∈ X. Notice that (hsh −1 , e)(h, k)x = (h, k)x ∈ X ∩ (hsh −1 , e) −1 X, so hsh −1 · (h, k)x = (h, k)x and hsh −1 = e. We conclude s = e. As α × β has closed graph, B e is an upper semicontinuous C * −bundle over a LCH space. The hypotheses, together with Theorems 2.11 and 3.1, imply Ind 0 (B e , σ) ⋊ τ K is strongly Morita equivalent to Ind 0 (B e , τ ) ⋊ σ H. The proof of our Theorem will be completed if we show that Ind 0 (B, α) ⋊ β K is strongly Morita equivalent to Ind 0 (B e , σ) ⋊ τ K, because, by symmetry, the same will hold exchanging α with β, σ with τ , and H with K.
Tracking back the construction of β and τ to Theorem 2.11, we notice that Ind 0 (B, α) ⋊ β K is isomorphic to C 0 (B/H) ⋊ µ K, and Ind 0 (B e , σ) ⋊ τ K is isomorphic to C 0 (B e /H) ⋊ ν K. Here µ and ν are the partial actions of K on B/H and B e /H given by Lemma 1.3, respectively; these are the actions given by Theorem 2.4. Putting all this together, by Theorem 2.6 it suffices to prove that ν is the enveloping action of µ.
Consider the map B → B e /H given by b → Hb. This map is open and continuous, and it is constant on the α−orbits, so it defines a unique map F : B/H → B e /H, given by Hb → Hb (note this is not the identity map). It turns out this function is continuous, open, injective, and maps fibers into fibers. In an analogous way we define f : X/H → X e /H, which has the same topological properties.
Recalling the construction of µ and ν, it is easy to show that (F, f ) : µ → ν is a morphism. To show that µ e = ν it suffices to prove two things: first, that f ((X/H) t ) = f (X/H) ∩ tf (X/H) for every t ∈ K (we adopt the notation tz for the action of t ∈ K on z ∈ X e /H); second, that the K−orbit of f (X/H) is X e /H.
For the first one notice that f ((X/H) t ) = HX K t = HX ∩ H(e, t)X = HX ∩ tHX = f (X/H) ∩ tf (X/H). The second equality of this chain is not immediate, although the inclusion ⊂ is. For the other inclusion assume y ∈ HX ∩ H(e, t)X. Then there exist x, z ∈ X such that y = Hx = H(e, t)z, and there is some s ∈ H such that x = (s, e)(e, t)z = (s, t)z. So x ∈ X ∩ (s, t)X = s · (X H s ∩ X K t ), (s −1 , e)x ∈ X K t and y = H(s −1 , e)x ∈ HX K t .
To show X e /H is the K−orbit of f (X/H), notice that ∪ t∈K tf (X/H) = ∪ t∈K tHX = H ∪ s∈H ∪ t∈K (s, t)X = HX e = X e /H.
We have proved that ν is the enveloping action of µ, so by Theorem 2.6 C 0 (B/H) ⋊ µ K is strongly Morita equivalent to C 0 (B e /H) ⋊ ν K. This completes the proof of our main theorem.
The next Theorem is a consequence of the previous one; it has the advantage of making no mention of σ or τ . Proof. We check that the hypotheses of the previous theorem are satisfied. To show that α × β has closed graph, notice it has closed domain (Lemma 1.6) and use Lemma 1.5. Finally, we only have to show σ and τ are proper. It is enough to show σ is proper, and for that purpose we use Lemma 1.9.
Let {(s i , x i )} be a net in H × X e such that {(x i , (s i , e)x i )} converges to the point (x, y) ∈ X e × X e . It is enough to show that {s i } i∈I has a convergent subnet. We may assume (s, t)x ∈ X and (h, k)y ∈ X, for some (h, k), (s, t) ∈ H × K.
There is an i 0 such that, for i ≥ i 0 , (s, t)x i and (h, k)(s i , e)x i belong to X. For i ≥ i 0 define u i = (s, t)x i . By the construction of α × β and because (hs i s −1 , kt −1 )u i ∈ X, we have that u i is an element of the clopen set tk −1 ⋆ (X K k −1 t ∩ X H ss i h −1 ). Defining v i := (e, k −1 t)u i for every i ≥ i 0 , we have that v i ∈ X. So the limit lim i v i is an element of X (recall X is clopen in X e ).
The net {(hs i s −1 , v i )} i is contained in Γ(X, α) and {(v i , hs i s −1 · v i )} i has a limit point. Then {hs i s −1 } i has a convergent subnet, and so {s i } i has a convergent subnet. We conclude σ is proper, and we are done.
A pair α = ({X t } t∈H , {α t } t∈H ) is a partial action of H on X if, for every t, s ∈ H:
Definition 1.7. The triple (X, H, α) is a partial dynamical system (PDS) if α is a continuous partial action of H on X and both H and X are LCH.
Lemma 1.8. If (X, H, α) is a PDS then (X e , H, α e ) is a DS if and only if α has closed graph.
Theorem 2.4. Let X be a LCH space, B = {B x } x∈X an upper semicontinuous C * −bundle and α a continuous partial action of H on B. Then (C 0 (B), H, α) is a C * −PDS, where
The norm of B/H is the function ∥ · ∥ : B/H → R, Ha → ∥a∥. To prove it is upper semicontinuous let ε be a positive number. The set {Hb ∈ B/H : ∥Hb∥ < ε} is open because it equals the open set π B ({b ∈ B : ∥b∥ < ε}). Similarly, ∥ · ∥ : B/H → R is continuous if ∥ · ∥ : B → R is continuous. To prove the continuity of the product and the sum, let D be the set of points (a, b) ∈ B × B such that Ha = Hb, and define the equivalence relation ∼ on D: (a, b) ∼ (c, d) if Ha = Hc and Hb = Hd. The space D/ ∼ is (homeomorphic to) D ′ := {(a, b) ∈ B/H × B/H : p α (a) = p α (b)}, and the functions defined by S and P on D ′ are exactly the sum and product of B/H. We have proved they are continuous; for the rest of the operations we proceed in a similar way.
Theorem 2.10. The functions Ind b : Ind b (B, α) → C b (B/H) and Ind 0 : Ind 0 (B, α) → C 0 (B/H) are isomorphisms of C * −algebras.
Theorem 3.1. If α and β are global actions then
Theorem 3.3. If α and β have closed domain then Ind 0 (B, α) ⋊ β K ∼ M Ind 0 (B, β) ⋊ α H.
for every t ∈ H. Rephrasing Theorem 1.1 of [2] we get Theorem 2.3: For every continuous partial action α of H on an upper semicontinuous C * −bundle B, there exists a triple (ι X , ι B , α e ) such that α e is an action of H on an upper semicontinuous C * −bundle B e , (ι X , ι B ) : α → α e is a morphism, and for any morphism ψ : α → β, where β is an action of H (on an upper semicontinuous C * −bundle), there exists a unique morphism ψ e
We say "the" enveloping action because it is unique up to isomorphisms.
F. Abadie, Sobre Ações Parciais, Fibrados de Fell, e Grupóides, PhD thesis, Universidade de São Paulo, September 1999.
F. Abadie, Enveloping actions and Takai duality for partial actions, J. Funct. Anal., 197 (2003), pp. 14-67.
F. Abadie and L. Martí Pérez, On the amenability of partial and enveloping actions, Proc. Amer. Math. Soc., 137 (2009), pp. 3689-3693.
A. an Huef, I. Raeburn, and D. P. Williams, An equivariant Brauer semigroup and the symmetric imprimitivity theorem, Trans. Amer. Math. Soc., 352 (2010), pp. 4759-4787.
R. Exel, Twisted partial actions: a classification of regular C * -algebraic bundles, Proc. London Math. Soc. (3), 74 (1997), pp. 417-443.
R. Exel, M. Laca, and J. Quigg, Partial dynamical systems and C * -algebras generated by partial isometries, J. Operator Theory, 47 (2002), pp. 169-186.
J. M. G. Fell and R. S. Doran, Representations of * -algebras, locally compact groups, and Banach * -algebraic bundles, vol. 125-126 of Pure and Applied Mathematics, Academic Press Inc., Boston, MA, 1988.
S. Kaliszewski, P. S. Muhly, J. Quigg, and D. P. Williams, Fell bundles and imprimitivity theorems, eprint arXiv:1201.5035, 2012.
G. Kasparov, Equivariant KK-theory and the Novikov conjecture, Invent. Math., 91 (1988), pp. 147-201.
I. Raeburn, Induced C * -algebras and a symmetric imprimitivity theorem, Math. Ann., 280 (1988), pp. 369-387.
M. A. Rieffel, Applications of strong Morita equivalence to transformation group C * -algebras, in Operator algebras and applications, Part I (Kingston, Ont., 1980), vol. 38 of Proc. Sympos. Pure Math., Amer. Math. Soc., Providence, R.I., 1982, pp. 299-310.
M. A. Rieffel, Morita equivalence for operator algebras, in Operator algebras and applications, Part I (Kingston, Ont., 1980), vol. 38 of Proc. Sympos. Pure Math., Amer. Math. Soc., Providence, R.I., 1982, pp. 285-298.
D. P. Williams, Crossed products of C * -algebras, vol. 134 of Mathematical Surveys and Monographs, American Mathematical Society, Providence, RI, 2007.
|
[] |
[
"The Hopf algebra isomorphism between κ−Poincaré algebra in the case g 00 = 0 and \"null plane\" quantum Poincaré algebra",
"The Hopf algebra isomorphism between κ−Poincaré algebra in the case g 00 = 0 and \"null plane\" quantum Poincaré algebra"
] |
[
"Karol Przanowski \nDepartment of Field\nTheory University of Lódź ul\nPomorska 149/15390-236LódźPoland\n"
] |
[
"Department of Field\nTheory University of Lódź ul\nPomorska 149/15390-236LódźPoland"
] |
[] |
The Hopf algebra isomorphism between κ−Poincaré algebra defined by P.Kosiński and P.Maślanka in the case g 00 = 0 [2] and "null plane" quantum Poincaré algebra by A.Ballesteros, F.J.Herranz and M.A.del Olmo [1] is defined.
| null |
[
"https://arxiv.org/pdf/q-alg/9611025v1.pdf"
] | 18,917,665 |
q-alg/9611025
|
b762661d9501cde32a2c023a9536016a0837d5cf
|
The Hopf algebra isomorphism between κ−Poincaré algebra in the case g 00 = 0 and "null plane" quantum Poincaré algebra
Nov 1996
Karol Przanowski
Department of Field
Theory University of Lódź ul
Pomorska 149/15390-236LódźPoland
The Hopf algebra isomorphism between κ−Poincaré algebra defined by P.Kosiński and P.Maślanka in the case g 00 = 0 [2] and "null plane" quantum Poincaré algebra by A.Ballesteros, F.J.Herranz and M.A.del Olmo [1] is defined.
Introduction
Recently, considerable interest has been paid to the deformations of groups and algebras of space-time symmetries [8]. An interesting deformation of the Poincaré algebra [5], [2], as well as of the Poincaré group [7], has been introduced which depends on a dimensional deformation parameter κ; the relevant objects are called the κ−Poincaré algebra and the κ−Poincaré group, respectively. Their structure was studied in some detail and many of their properties are now well understood.
In the paper [1], using the so-called deformation embedding method, A. Ballesteros, F. J. Herranz and M. A. del Olmo obtained the "null plane" quantum Poincaré algebra. On the other hand, in [2] P. Kosiński and P. Maślanka presented a method for obtaining the κ−Poincaré algebra for an arbitrary metric tensor. It is interesting to ask whether the "null plane" quantum Poincaré algebra is a particular case (for a special choice of the metric tensor g µν ) of the κ−deformation presented in [2]. In this paper we solve this problem.
In Section 2 we rewrite the κ−Poincaré algebra in a new basis. In Section 3 we describe the "null plane" quantum Poincaré algebra [1] and define an isomorphism between these two algebras. Finally, in Section 4 we define the isomorphism between the κ−Poincaré algebra defined below and the "null plane" quantum Poincaré algebra.
Let us recall the definition of the κ−Poincaré algebra. The κ−Poincaré algebra P κ [2] (in the Majid and Ruegg basis [3]) is a quantized universal enveloping algebra in the sense of Drinfeld [4] described by the following relations:
The commutation rules:
$$
\begin{aligned}
&[M_{ij}, P_0] = 0, \qquad
[M_{ij}, P_k] = i\kappa\,(\delta^j_k g_{0i} - \delta^i_k g_{0j})\,(1 - e^{-P_0/\kappa}) + i\,(\delta^j_k g_{is} - \delta^i_k g_{js})P_s, \\
&[M_{i0}, P_0] = i\kappa\, g_{i0}\,(1 - e^{-P_0/\kappa}) + i\, g_{ik} P_k, \\
&[M_{i0}, P_k] = -\tfrac{i\kappa}{2}\, g_{00}\,\delta^i_k\,(1 - e^{-2P_0/\kappa}) - i\,\delta^i_k g_{0s} P_s\, e^{-P_0/\kappa} + i\, g_{0i} P_k\,(e^{-P_0/\kappa} - 1) + \tfrac{i}{2\kappa}\,\delta^i_k g_{rs} P_r P_s - \tfrac{i}{\kappa}\, g_{is} P_s P_k, \\
&[P_\mu, P_\nu] = 0, \qquad
[M_{\mu\nu}, M_{\lambda\sigma}] = i\,(g_{\mu\sigma} M_{\nu\lambda} - g_{\nu\sigma} M_{\mu\lambda} + g_{\nu\lambda} M_{\mu\sigma} - g_{\mu\lambda} M_{\nu\sigma}).
\end{aligned}
\tag{1.1}
$$
The coproducts, counit and antipode:
$$
\begin{aligned}
&\Delta P_0 = I \otimes P_0 + P_0 \otimes I, \qquad
\Delta P_k = P_k \otimes e^{-P_0/\kappa} + I \otimes P_k, \\
&\Delta M_{ij} = M_{ij} \otimes I + I \otimes M_{ij}, \qquad
\Delta M_{i0} = I \otimes M_{i0} + M_{i0} \otimes e^{-P_0/\kappa} - \tfrac{1}{\kappa}\, M_{ij} \otimes P_j, \\
&\varepsilon(M_{\mu\nu}) = 0, \qquad \varepsilon(P_\nu) = 0, \\
&S(P_0) = -P_0, \quad S(P_i) = -e^{P_0/\kappa} P_i, \quad S(M_{ij}) = -M_{ij}, \quad S(M_{i0}) = -\big(M_{i0} + \tfrac{1}{\kappa}\, M_{ij} P_j\big)\, e^{P_0/\kappa},
\end{aligned}
\tag{1.2}
$$
where i, j, k = 1, 2, 3 and the metric tensor g µν (µ, ν = 0, 1, 2, 3) is represented by an arbitrary nondegenerate symmetric 4×4 matrix (not necessarily diagonal).
The κ−Poincaré algebra in the new basis
Note that in this paper we write
$$A_i B_i = \sum_{n=1}^{3} A_n B_n$$
for any tensors A µ , B ν , and we set ε 123 = −1.
We put:
$$M_i = \tfrac{1}{2}\,\varepsilon_{ijk} M_{jk} \quad (\text{so that } M_{ij} = \varepsilon_{ijk} M_k), \qquad N_i = M_{i0}.$$
We define the isomorphism on the generators of the κ−Poincaré algebra [3] (calligraphic letters denote the new-basis generators):
$$\mathcal{P}_0 = -P_0, \qquad \mathcal{P}_i = -P_i\, e^{P_0/2\kappa}, \qquad \mathcal{M}_i = M_i, \qquad \mathcal{N}_i = \Big(N_i - \tfrac{1}{2\kappa}\,\varepsilon_{ijk} M_j P_k\Big)\, e^{P_0/2\kappa} = N_i\, e^{P_0/2\kappa} + \tfrac{1}{2\kappa}\,\varepsilon_{ijk}\,\mathcal{M}_j \mathcal{P}_k .$$
After some calculation we get the following relations defining the κ−Poincaré algebra in the new basis:
The commutation rules:
$$
\begin{aligned}
&[\mathcal{P}_\mu, \mathcal{P}_\nu] = 0, \qquad [\mathcal{M}_i, \mathcal{P}_0] = 0, \qquad
[\mathcal{M}_i, \mathcal{P}_k] = i\,\varepsilon_{ijl}\,\delta^l_k\Big(2\kappa\, g_{0j}\sinh\tfrac{\mathcal{P}_0}{2\kappa} + g_{js}\mathcal{P}_s\Big), \\
&[\mathcal{N}_i, \mathcal{P}_0] = 2i\kappa\, g_{i0}\sinh\tfrac{\mathcal{P}_0}{2\kappa} + i\, g_{ik}\mathcal{P}_k, \qquad
[\mathcal{N}_i, \mathcal{P}_k] = -i\kappa\, g_{00}\,\delta^i_k\sinh\tfrac{\mathcal{P}_0}{\kappa} - i\,\delta^i_k\, g_{0s}\mathcal{P}_s\cosh\tfrac{\mathcal{P}_0}{2\kappa}, \\
&[\mathcal{M}_i, \mathcal{M}_j] = -i\,\varepsilon_{ijk}\, g_{ks}\mathcal{M}_s, \qquad
[\mathcal{N}_i, \mathcal{M}_j] = i\,\varepsilon_{jrs}\, g_{ir}\mathcal{N}_s + i\, g_{i0}\mathcal{M}_j\cosh\tfrac{\mathcal{P}_0}{2\kappa} - i\,\delta_{ij}\, g_{k0}\mathcal{M}_k\cosh\tfrac{\mathcal{P}_0}{2\kappa}, \\
&[\mathcal{N}_i, \mathcal{N}_j] = i\, g_{j0}\mathcal{N}_i\cosh\tfrac{\mathcal{P}_0}{2\kappa} - i\, g_{i0}\mathcal{N}_j\cosh\tfrac{\mathcal{P}_0}{2\kappa} - \tfrac{i}{4\kappa^2}\,\varepsilon_{ijs}\, g_{kr}\mathcal{M}_r\mathcal{P}_s\mathcal{P}_k + \tfrac{i}{2\kappa}\,\varepsilon_{jrs}\, g_{i0}\mathcal{M}_r\mathcal{P}_s\sinh\tfrac{\mathcal{P}_0}{2\kappa} \\
&\qquad - \tfrac{i}{2\kappa}\,\varepsilon_{irs}\, g_{j0}\mathcal{M}_r\mathcal{P}_s\sinh\tfrac{\mathcal{P}_0}{2\kappa} - i\,\varepsilon_{ijk}\, g_{00}\mathcal{M}_k\cosh\tfrac{\mathcal{P}_0}{\kappa} - \tfrac{i}{\kappa}\,\varepsilon_{ijk}\,\mathcal{M}_k\, g_{s0}\mathcal{P}_s\sinh\tfrac{\mathcal{P}_0}{2\kappa}.
\end{aligned}
\tag{2.1}
$$
The coproducts, counit and antipode:
$$
\begin{aligned}
&\Delta \mathcal{P}_0 = \mathcal{P}_0 \otimes I + I \otimes \mathcal{P}_0, \qquad
\Delta \mathcal{P}_i = \mathcal{P}_i \otimes e^{\mathcal{P}_0/2\kappa} + e^{-\mathcal{P}_0/2\kappa} \otimes \mathcal{P}_i, \qquad
\Delta \mathcal{M}_i = \mathcal{M}_i \otimes I + I \otimes \mathcal{M}_i, \\
&\Delta \mathcal{N}_i = \mathcal{N}_i \otimes e^{\mathcal{P}_0/2\kappa} + e^{-\mathcal{P}_0/2\kappa} \otimes \mathcal{N}_i - \tfrac{1}{2\kappa}\,\varepsilon_{ijk}\big( e^{-\mathcal{P}_0/2\kappa}\mathcal{M}_j \otimes \mathcal{P}_k - \mathcal{P}_k \otimes \mathcal{M}_j\, e^{\mathcal{P}_0/2\kappa} \big), \\
&\varepsilon(\mathcal{X}) = 0 \quad \text{for } \mathcal{X} = \mathcal{N}_i, \mathcal{M}_i, \mathcal{P}_\mu, \qquad
S(\mathcal{X}) = -\mathcal{X} \quad \text{for } \mathcal{X} = \mathcal{M}_i, \mathcal{P}_\mu, \\
&S(\mathcal{N}_i) = -\mathcal{N}_i + 3i\Big( g_{i0}\sinh\tfrac{\mathcal{P}_0}{2\kappa} + \tfrac{1}{2\kappa}\, g_{ik}\mathcal{P}_k \Big)
\quad \Big(\text{or } S(\mathcal{X}) = -e^{3\mathcal{P}_0/2\kappa}\,\mathcal{X}\, e^{-3\mathcal{P}_0/2\kappa} \text{ for } \mathcal{X} = \mathcal{M}_i, \mathcal{N}_i, \mathcal{P}_\mu\Big).
\end{aligned}
\tag{2.2}
$$
Note that if we take the diagonal metric tensor g µν = diag(1, −1, −1, −1), we obtain the κ−Poincaré algebra considered in [3], [5], [6].
The "null plane" quantum Poincaré algebra and isomorphism
The "null plane" quantum Poincaré algebra is a Hopf *-algebra generated by ten elements P + , P − , P 1 , P 2 , E 1 , E 2 , F 1 , F 2 , J 3 , K 3 and a deformation parameter z, with the following relations [1]:
The commutation rules:
$$
\begin{aligned}
&[K_3, P_+] = \frac{\sinh(zP_+)}{z}, \qquad [K_3, P_-] = -P_- \cosh(zP_+), \qquad [K_3, E_i] = E_i \cosh(zP_+), \\
&[K_3, F_1] = -F_1 \cosh(zP_+) + zE_1 P_- \sinh(zP_+) - z^2 P_2 W^q_+, \\
&[K_3, F_2] = -F_2 \cosh(zP_+) + zE_2 P_- \sinh(zP_+) - z^2 P_1 W^q_+, \\
&[J_3, P_i] = -\varepsilon_{ij3} P_j, \qquad [J_3, E_i] = -\varepsilon_{ij3} E_j, \qquad [J_3, F_i] = -\varepsilon_{ij3} F_j, \\
&[E_i, P_j] = \delta_{ij}\,\frac{\sinh(zP_+)}{z}, \qquad [F_i, P_j] = \delta_{ij}\, P_- \cosh(zP_+), \qquad [E_i, F_j] = \delta_{ij} K_3 + \varepsilon_{ij3} \cosh(zP_+), \\
&[P_+, F_i] = -P_i, \qquad [F_1, F_2] = z^2 P_- W^q_+ + zP_- J_3 \sinh(zP_+), \qquad [P_-, E_i] = -P_i.
\end{aligned}
\tag{3.1}
$$
The coproducts, counit and antipode:
$$
\begin{aligned}
&\Delta X = I \otimes X + X \otimes I \quad \text{for } X = P_+, E_i, J_3, \qquad
\Delta Y = e^{-zP_+} \otimes Y + Y \otimes e^{zP_+} \quad \text{for } Y = P_-, P_i, \\
&\Delta F_1 = e^{-zP_+} \otimes F_1 + F_1 \otimes e^{zP_+} + z e^{-zP_+} E_1 \otimes P_- - zP_- \otimes E_1 e^{zP_+} + z e^{-zP_+} J_3 \otimes P_2 - zP_2 \otimes J_3 e^{zP_+}, \\
&\Delta F_2 = e^{-zP_+} \otimes F_2 + F_2 \otimes e^{zP_+} + z e^{-zP_+} E_2 \otimes P_- - zP_- \otimes E_2 e^{zP_+} - z e^{-zP_+} J_3 \otimes P_1 + zP_1 \otimes J_3 e^{zP_+}, \\
&\Delta K_3 = e^{-zP_+} \otimes K_3 + K_3 \otimes e^{zP_+} + z e^{-zP_+} E_1 \otimes P_1 - zP_1 \otimes E_1 e^{zP_+} + z e^{-zP_+} E_2 \otimes P_2 - zP_2 \otimes E_2 e^{zP_+}, \\
&\varepsilon(X) = 0, \qquad S(X) = -e^{3zP_+}\, X\, e^{-3zP_+}, \quad \text{for } X = P_\pm, P_i, F_i, E_i, J_3, K_3,
\end{aligned}
\tag{3.2}
$$
where W q + = E 1 P 2 − E 2 P 1 + J 3 sinh(zP + )/z and i, j = 1, 2. If we substitute the following expressions into the relations (2.1), (2.2) describing the κ−Poincaré algebra:
$$
g_{\mu\nu} = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}
\tag{3.3}
$$
and:
$$P_+ = \mathcal{P}_0, \quad P_- = \mathcal{P}_3, \quad P_i = \mathcal{P}_i, \quad K_3 = -i\mathcal{N}_3, \quad F_i = i\mathcal{N}_i, \quad E_1 = i\mathcal{M}_2, \quad E_2 = -i\mathcal{M}_1, \quad J_3 = -i\mathcal{M}_3, \quad z = \frac{1}{2\kappa},$$
we get the relations (3.1),(3.2) describing the "null plane" quantum Poincaré algebra.
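As a quick consistency check (our own computation, not in the paper), the first relation of (3.1) follows directly from (2.1) with the metric (3.3): since g 30 = 1 and g 3k = 0,

```latex
[K_3, P_+] = -i\,[\mathcal{N}_3, \mathcal{P}_0]
  = -i\Big( 2i\kappa\, g_{30} \sinh\tfrac{\mathcal{P}_0}{2\kappa} + i\, g_{3k}\mathcal{P}_k \Big)
  = 2\kappa \sinh\tfrac{\mathcal{P}_0}{2\kappa}
  = \frac{\sinh(zP_+)}{z}, \qquad z = \frac{1}{2\kappa}.
```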
Summary
We define the Hopf algebra isomorphism from the κ−Poincaré algebra (1.1), (1.2) with the metric tensor (3.3) to the "null plane" quantum Poincaré algebra by putting:
$$P_+ = -P_0, \quad P_- = -P_3\, e^{P_0/2\kappa}, \quad P_i = -P_i\, e^{P_0/2\kappa}, \quad K_3 = -i\Big(M_{30} + \tfrac{1}{2\kappa}\, M_{3k} P_k\Big)\, e^{P_0/2\kappa}, \quad F_i = i\Big(M_{i0} + \tfrac{1}{2\kappa}\, M_{ik} P_k\Big)\, e^{P_0/2\kappa}$$
A. Ballesteros, F. J. Herranz, M. A. del Olmo, A new "null-plane" quantum Poincaré algebra, Phys. Lett. B 351 (1995), pp. 137-145.
P. Kosiński, P. Maślanka, The κ−Weyl group and its algebra, in "From Field Theory to Quantum Groups", vol. on the 60th anniversary of J. Lukierski, World Scientific, Singapore, 1996; or q-alg/9512018.
S. Majid, H. Ruegg, Phys. Lett. B 334 (1994), 348.
V. G. Drinfeld, Proc. Int. Congr. of Math., Berkeley (1986), p. 798.
J. Lukierski, A. Nowicki, H. Ruegg, V. Tolstoy, Phys. Lett. B 264 (1991), 331.
J. Lukierski, A. Nowicki, H. Ruegg, Phys. Lett. B 293 (1993), 344.
S. Giller, P. Kosiński, J. Kunz, M. Majewski, P. Maślanka, Phys. Lett. B 286 (1992), 57; Ph. Zaugg, preprint MIT-CTP-2353 (1994).
P. Maślanka, Deformation map and hermitean representations of κ−Poincaré algebra, J. Math. Phys. 34 (12) (1993), 6025.
S. Zakrzewski, J. Phys. A 27 (1994).
W. B. Schmidke, J. Weiss, B. Zumino, Zeitschr. f. Physik 52 (1991), 472.
U. Carow-Watamura, M. Schliecker, M. Scholl, S. Watamura, Int. J. Mod. Phys. A 6 (1991), 3081.
O. Ogievetsky, W. B. Schmidke, J. Weiss, B. Zumino, Comm. Math. Phys. 150 (1992), 495.
M. Chaichian, A. P. Demichev, Proceedings of the Workshop "Generalized symmetries in Physics", Clausthal (1993).
V. Dobrev, J. Phys. A 26 (1993), 1317.
L. Castellani, in "Quantum Groups", Proceedings of the XXX Karpacz Winter School of Theoretical Physics, Karpacz 1994, PWN 1995, p. 13.
|
[] |
[
"Column2Vec: Structural Understanding via Distributed Representations of Database Schemas",
"Column2Vec: Structural Understanding via Distributed Representations of Database Schemas"
] |
[
"Michael J Mior [email protected] \nRochester Institute of Technology Rochester\nNew York\n",
"Alexander G Ororbia \nRochester Institute of Technology Rochester\nNew York\n"
] |
[
"Rochester Institute of Technology Rochester\nNew York",
"Rochester Institute of Technology Rochester\nNew York"
] |
[] |
We present Column2Vec, a distributed representation of database columns based on column metadata. Our distributed representation has several applications. Using known names for groups of columns (i.e., a table name), we train a model to generate an appropriate name for columns in an unnamed table. We demonstrate the viability of our approach using schema information collected from open source applications on GitHub.
| null |
[
"https://arxiv.org/pdf/1903.08621v1.pdf"
] | 84,187,190 |
1903.08621
|
8d783d9b0ef33c41234a7396f59111faabb5b057
|
Column2Vec: Structural Understanding via Distributed Representations of Database Schemas
Michael J Mior [email protected]
Rochester Institute of Technology Rochester
New York
Alexander G Ororbia
Rochester Institute of Technology Rochester
New York
We present Column2Vec, a distributed representation of database columns based on column metadata. Our distributed representation has several applications. Using known names for groups of columns (i.e., a table name), we train a model to generate an appropriate name for columns in an unnamed table. We demonstrate the viability of our approach using schema information collected from open source applications on GitHub.
INTRODUCTION
It has become increasingly common for enterprise data management platforms to soak up as much data as possible from a variety of sources. These "data lakes" are often lacking in metadata which makes the structure of stored data challenging to understand. This limits the usability of this additional data since significant time must be invested by data scientists when adding useful metadata before the data can be analyzed or integrated with other sources.
One common operation is database normalization. When performing normalization, it is common to decompose a single database table into multiple tables based on dependencies which are inferred from the data. The final schema typically more closely represents logical entities in the underlying data. Consider the single table below: authorID, firstName, lastName, ISBN, title This table contains information on both authors and books. A standard normalization algorithm to convert this table into Boyce-Codd Normal Form (BCNF) [5] would produce three tables:
authorID, firstName, lastName ISBN, title authorID, ISBN These tables represent information on authors, books, and the relationship between them. This normal form is useful for data integration tasks but there exists no obvious approach for producing meaningful names for each of these tables. Currently these names are manually assigned by the database designer.
Our work makes the following contributions:
(1) A semantic embedding of database column names (2) A method for using these embeddings to assign meaningful names to tables containing a given set of column names (3) A metric for evaluating generated table names which shows the usefulness of our prediction.
RELATED WORK
One of the central motivations behind this work comes from recent progress that has been made in the area of natural language processing (NLP) due to the use of distributed representations. Distributed representations, or embeddings, refer to the mapping of input examples to vectors of values, of possibly lower dimensionality than the input itself. These embeddings are usually produced through the use of an artificial neural network (ANN). Each element in one such vector is not necessarily associated with one particular concept, feature, or object, but rather works in tandem with the other elements in the same vector to represent a set of features or concepts that describe the input itself. The Word2vec family of models, i.e., skip-gram and continuous bag of words, have yielded some of the most widely-used embeddings in NLP, where a simple feed-forward ANN language model is trained on a large collection of documents. The internal weight vectors, which each map to a particular token, are then used in some subsequent predictive language task, e.g., text classification or chunking. Other variants have been proposed since Word2vec's initial public release, such as GloVe [17], ELMo [18], and BERT [6]. Interestingly enough, these representations can be composed into representations of phrases and sentences, by averaging, summing, or concatenating the embeddings for each of the constituent words, yielding a possible distributed representation of the phrase/sentence itself. Embeddings have found use in domains even outside of NLP, such as in graph/network representation [8,14], including entity resolution [7], concept modeling [11,13,19] and data curation [20]. In short, distributed representations have facilitated the construction of a useful, alternative means of comparing, aggregating, and manipulating the fundamental elements of a data type.
In this work, for tables themselves, we hypothesize that the same potential for discovering aggregated embeddings exists as well: for each column name, we could find its particular distributed representation, and, after examining the entire set of column names, we could compose a plausible distributed representation of the entire table itself by applying some aggregation operation to the constituent embeddings. Furthermore, these aggregate table embeddings could be used in the generation of plausible names for the tables themselves. In this work, we propose two possible ways in which this might be implemented and empirically explore the effects of one of them.
MODELS FOR TABLE NAME GENERATION
In this section, we will describe two architectures, one simple and one complex, for generating table names. We will then provide some experimental results for the first model in this study.
Model # 1
There are two main steps in this approach to generating database table names. First, we produce a word embedding for each table and column name as discussed in Section 3.1.1. Then, as discussed in Section 3.1.2 we use the vectors generated for each column name in a table to predict a meaningful name for the table.
Embeddings for Database Columns.
To generate word embeddings for Column2Vec, we make use of fastText [3]. This is primarily because fastText allows embeddings to be generated for terms which are outside of the original vocabulary the model was trained on. This is important in our setting since table and column names contain significant variation and may not appear as an exact match in any existing data. For example, approximately half of the table names we see in the data set used for our preliminary results in Section 4 appear only once.
To train the fastText model, we first construct documents based on the names of tables and columns of known database schemas. For example, one document might consist of the string "authors authorID firstName lastName". Training on a collection of such documents allows us to generate embeddings, or word vectors for each of the table and column names in our training set. In addition, fastText also enables us to generate word vectors for terms outside of this vocabulary by looking at subword information.
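To illustrate the subword mechanism that makes out-of-vocabulary column names embeddable, here is a toy, self-contained sketch (not the paper's implementation; the dimensionality, bucket count, and n-gram range are arbitrary illustrative choices):

```python
# Toy illustration of fastText-style subword embeddings: a word's
# vector is the average of vectors for its character n-grams, each
# hashed into a fixed table. Similar names share n-grams and therefore
# receive nearby vectors, even if never seen during training.
import random

DIM, BUCKETS = 8, 1000
random.seed(0)
ngram_table = [[random.uniform(-1, 1) for _ in range(DIM)]
               for _ in range(BUCKETS)]

def ngrams(word, nmin=3, nmax=6):
    w = f"<{word}>"                      # boundary markers, as in fastText
    return [w[i:i + n] for n in range(nmin, nmax + 1)
            for i in range(len(w) - n + 1)]

def embed(word):
    grams = ngrams(word)
    vec = [0.0] * DIM
    for g in grams:
        row = ngram_table[hash(g) % BUCKETS]
        vec = [a + b for a, b in zip(vec, row)]
    return [v / len(grams) for v in vec]

# "firstname" and "first_name" overlap heavily in n-grams, so their
# vectors are close even though neither needs to be in the vocabulary.
v1, v2 = embed("firstname"), embed("first_name")
```

This is only the lookup side of the model; fastText additionally learns the n-gram vectors with the skip-gram objective rather than fixing them randomly.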
Table Name Generation.
For our generation task, we aim to take as input a list of column names and assign the most meaningful table name from tables observed in our training set. Our prediction is based on comparing the similarity of the word vectors for each column name. Since we want to incorporate semantic context from all column names, we need to combine the individual word vectors generated for each column. Mikolov et al. [12] showed that the summation of word vectors is most likely to produce a vector for a term which is semantically similar to the composition of each term. In our case, we expect the sum of the word vectors associated with a set of columns to produce a vector which is close (in vector space) to a word vector representation of a viable table name.
To produce a table name that might be suitable for the given set of columns, we use the nearest neighbor based on cosine similarity. That is, after summing the individual column vectors, we select the table name which corresponds to the word vector most similar to this summed vector. Currently our model examines only the single closest vector. However, in future work, we intend to investigate the case where the k nearest neighbors are considered for further evaluation.
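The generation step above can be sketched as follows; the 2-d embeddings here are placeholders, whereas in the paper the vectors come from the trained fastText model:

```python
# Sum the column-name vectors, then pick the known table name whose
# vector is nearest by cosine similarity.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def predict_table_name(column_vecs, table_name_vecs):
    """column_vecs: list of vectors; table_name_vecs: {name: vector}."""
    query = [sum(comp) for comp in zip(*column_vecs)]   # vector sum
    return max(table_name_vecs,
               key=lambda n: cosine(query, table_name_vecs[n]))

# Hypothetical embeddings for illustration:
tables = {"authors": [1.0, 0.1], "books": [0.1, 1.0]}
cols = [[0.9, 0.0], [0.8, 0.2]]          # e.g. firstname, lastname
print(predict_table_name(cols, tables))  # -> authors
```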
Model # 2
Another approach we propose is based on a recurrent neural architecture that learns a generative model of table names, conditioned on a distributed representation of table metadata. A graphical depiction of a basic version of this model can be found in Figure 1.
The generative model can be decomposed into three general modules: 1) a column name encoder/processor, 2) a table representation aggregation function, and 3) a conditional name generator. All three functions could be as simple as feed-forward network models or as complex and powerful as a gated recurrent neural network (RNN), such as Long Short Term Memory [9] or ∆-RNN [15]. We envision the column name encoder and table name generator to operate at the character symbolic level (or the subword level) to circumvent the problem of an incredibly high-dimensional input/output space that word-level models have to face.
The column name encoder can be a recurrent network that builds a stateful representation of a single column by iteratively processing the underlying characters that compose the word or phrase used to label the column name itself. This column name encoder would be shared across column names to drastically reduce the number of parameters required in our model and allow it to easily handle a variable number of columns. A more difficult, but perhaps quite fruitful, extension of this encoder would be to have it process the data values associated with a particular column name as well. This encoder would be applied to each column name and output a set of K column name representations.
The K embeddings produced by the column name encoder would then be run through an aggregation function, which could itself be a nonlinear multilayer perceptron (MLP) or a simple function such as averaging or (weighted) summation. This function's primary role is to create a single, fixed-length representation of a set of column names, or rather, of the table of interest itself. This table representation would then be used to guide a generative model of text, which could be a simple RNN as depicted in Figure 1. By focusing on a character or subword-level RNN generative model, we can naturally handle names of variable length and even potentially generate non-standard symbols (such as alphanumeric strings). The types of names that would be generated by this model would largely depend on the table dataset used to construct the overall model; however, if large enough, the model might be able to produce some interesting, creative table names or even a set of candidates that the human user would finally select from and/or error-correct (providing additional feedback to the model).
Parameters of the entire end-to-end model could be learned using reverse-mode differentiation to optimize an objective such as the negative log likelihood of the name text in the training set. Since the entire system is soft, or rather, makes use of differentiable nonlinear transformations, calculating gradients would not be difficult, though the path of credit assignment could potentially be long depending on how long some table names and column names are (since in RNN structures, the model parameters are shared over time-steps, so in the case of characters, we apply the RNN parameters once per character).
PRELIMINARY RESULTS
To examine the effectiveness of our first model, we used a set of table schemas (column and table names) collected from a crawl of open source repositories on GitHub [10]. Files in all repositories were checked for syntactically valid CREATE TABLE statements and the table and column names were extracted from these statements. This resulted in a total of 436,545 tables combined with the names of each column in the table. We leave implementation and evaluation of our second model as future work.
Data cleaning
The dataset pulled from GitHub contains a significant amount of test and dummy data which is not useful for our problem. For example, a table with the name bb and columns col1 and col2 would be discarded. We implemented a simple set of rules to filter out the extracted names: (1) names containing trigrams that appear only once across the corpus, eliminating random entries, (2) names with special characters, (3) names with a large number of digits, and (4) names which consist of only two repeated characters (e.g., bb).
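A possible implementation of these four filters is sketched below; the exact digit threshold and the lowercase-identifier assumption are ours, since the text does not specify them:

```python
import re
from collections import Counter

def trigrams(name):
    return {name[i:i + 3] for i in range(len(name) - 2)}

def clean_names(names):
    # Count every trigram across the whole corpus so rule (1) can spot
    # one-off, likely random, names.
    counts = Counter(t for n in names for t in trigrams(n))
    kept = []
    for n in names:
        if any(counts[t] == 1 for t in trigrams(n)):   # (1) rare trigrams
            continue
        if re.search(r"[^a-z0-9_]", n):                # (2) special characters
            continue
        if sum(c.isdigit() for c in n) > len(n) // 2:  # (3) digit-heavy names
            continue
        if len(n) == 2 and len(set(n)) == 1:           # (4) e.g. 'bb'
            continue
        kept.append(n)
    return kept
```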
Model training
The data was split into 90% training data with 10% reserved for testing. Word vectors were trained using the fastText skip-gram model, with each document consisting of the table name and associated column names. We performed hyper-parameter optimization using the tree-structured Parzen estimator approach described by Bergstra et al. [2] on a subset of the data. A k-nearest neighbors model was then trained using the table name word vectors (in the training set) to generate table names as described in Section 3. We want to be able to identify how well the components of our predicted names match the original. To identify the components of a name, we first attempt to split names into words via a dynamic programming approach that aims to infer the position of spaces by maximizing the probability of individual word frequencies [1]. This is based on a sample of words from the English Wikipedia corpus.
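The retrieval step described above can be sketched as follows, with a toy embedding dictionary standing in for the trained fastText vectors (all names and vectors here are made up for illustration):

```python
import numpy as np

def predict_table_name(column_names, vectors, candidate_names):
    # Sum the embeddings of the observed column names ...
    q = sum(vectors[c] for c in column_names if c in vectors)
    def cosine(a, b):
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # ... and return the candidate table name whose embedding is the
    # nearest neighbor of the summed query vector.
    return max(candidate_names, key=lambda n: cosine(q, vectors[n]))

vectors = {
    "id": np.array([1.0, 0.0]),
    "price": np.array([0.0, 1.0]),
    "quantity": np.array([0.1, 0.9]),
    "products": np.array([0.2, 0.9]),
    "orders": np.array([0.9, 0.2]),
}
```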
Table name quality
To evaluate the quality of the generated table names, we define a metric based on the F1 score [4], which combines precision and recall. We use the words split from the table names as mentioned above when calculating the F1 score. However, we also want words which are semantically similar to rank high. For example, the name books may be a suitable alternative to the name library. To capture these relationships, we use the path similarity from WordNet::Similarity [16]. When computing precision and recall, instead of using the intersection between the predicted and original names, we calculate fuzzy precision and recall using this WordNet similarity metric. Figure 2 shows the cumulative distribution of the evaluation metric. Approximately one third of the predicted table names evaluate to zero. Of these, roughly 60% have one of the same two predicted names, vtp and defaultrequiredlengthncharcolumns. Neither of these is semantically meaningful. Further preprocessing to remove names with limited semantic value will likely improve this result. We also see that approximately 1% of the values have the highest possible score of 1.0. In this case, names differed just in pluralization and punctuation, such as recipe_ingredients and recipeingredient. We provide examples of results which fall between these two extremes in the following section. Table 1 gives examples of the original names assigned to tables, i.e., the ground truth, as well as the predicted name and the value for our evaluation metric. Although the examples above suggest that our metric is useful, in the future we intend to perform a more thorough evaluation to determine whether this metric correlates with human judgment.
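A sketch of this fuzzy F1 computation is given below; scoring each word by its best similarity to any word on the other side is our reading of how the WordNet path similarity replaces the exact intersection:

```python
def fuzzy_f1(predicted, original, sim):
    # Credit each word with its best 0-1 similarity to any word on the
    # other side, then form precision/recall from the soft overlaps.
    def soft_overlap(a, b):
        return sum(max(sim(w, v) for v in b) for w in a)
    p = soft_overlap(predicted, original) / len(predicted)  # fuzzy precision
    r = soft_overlap(original, predicted) / len(original)   # fuzzy recall
    return 2 * p * r / (p + r) if p + r else 0.0
```

With sim(holiday, event) = 0.14 and an exact match on dates, this reproduces the holidaydates/eventdates value of 0.57 worked out later in the paper.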
Table Name Samples
To better demonstrate our generation/evaluation process, consider predicting a name for a table with columns id, calendarid, name, eventdate, and locutionid. We generate word vectors for each of these columns and sum them. When searching for the table name with the closest word vector, we find eventdates.
The original name given to this table was holidaydates. To evaluate this according to our metric, we first split each name into words, giving [holiday, dates] and [event, dates]. Based on the WordNet path similarity, holiday and event have a similarity of 0.14. Since dates matches exactly, we end up with precision and recall values of P = R = 1.14/2 = 0.57. Our final metric value is then 2PR/(P + R) = 0.57. Note that in this case, we have no indication based on column names that the table specifically refers to holidays. In the future, we will explore other sources of contextual information which may help produce more accurate results.
CONCLUSIONS AND FUTURE WORK
Generating meaningful names for tables given only constituent column names is a challenging problem. Our approach, based on distributed representations, is able to generate meaningful table names given the names of the columns contained in the table. While our metric does prove useful, additional work is needed to compare it properly against human judgment.
We believe that incorporating information on the data stored in these columns (e.g. data type and value distribution) will make this representation even more useful. In addition to generating tables names, such a representation would likely benefit other data integration tasks, e.g., deciding which tables in a large set may be meaningfully joined together.
Figure 1: Recurrent generative model of table titles.

Figure 2: Cumulative distribution of evaluation metric.
Table names may consist of multiple words concatenated together. A table storing blog posts may be called blogposts, but we might consider a suggestion of the name posts to be a reasonable substitute.
[1] Derek Anderson. 2018. Word Ninja. https://pypi.org/project/wordninja/0.1.5/
[2] James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. 2011. Algorithms for Hyper-parameter Optimization. In Proceedings of the 24th International Conference on Neural Information Processing Systems (NIPS'11). Curran Associates Inc., USA, 2546-2554.
[3] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. Transactions of the Association for Computational Linguistics 5 (2017), 135-146.
[4] Nancy Chinchor. 1992. MUC-4 evaluation metrics. In MUC 1992. Association for Computational Linguistics, McLean, Virginia, 22-29.
[5] Edgar F. Codd. 1974. Recent Investigations into Relational Data Base Systems. Technical Report RJ1385. IBM.
[6] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. CoRR abs/1810.04805 (2018).
[7] Muhammad Ebraheem, Saravanan Thirumuruganathan, Shafiq R. Joty, Mourad Ouzzani, and Nan Tang. 2017. DeepER - Deep Entity Resolution. CoRR abs/1710.00597 (2017).
[8] Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, San Francisco, CA, 855-864.
[9] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9, 8 (1997), 1735-1780.
[10] Felipe Hoffa. 2016. Retrieved Feb. 15, 2019 from https://medium.com/google-cloud/github-on-bigquery-analyze-all-the-code-b3576fd2b150.
[11] Shaohua Li, Tat-Seng Chua, Jun Zhu, and Chunyan Miao. 2016. Generative Topic Embedding: a Continuous Representation of Documents (Extended Version with Proofs). CoRR abs/1606.02979 (2016).
[12] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed Representations of Words and Phrases and Their Compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2 (NIPS'13). Curran Associates Inc., USA, 3111-3119.
[13] Christopher E. Moody. 2016. Mixing Dirichlet Topic Models and Word Embeddings to Make lda2vec. CoRR abs/1605.02019 (2016).
[14] Annamalai Narayanan, Mahinthan Chandramohan, Rajasekar Venkatesan, Lihui Chen, Yang Liu, and Shantanu Jaiswal. 2017. graph2vec: Learning distributed representations of graphs. CoRR abs/1707.05005 (2017).
[15] Alexander G. Ororbia II, Tomas Mikolov, and David Reitter. 2017. Learning simpler language models with the differential state framework. Neural Computation 29, 12 (2017), 3327-3352.
[16] Ted Pedersen, Siddharth Patwardhan, and Jason Michelizzi. 2004. WordNet::Similarity: Measuring the Relatedness of Concepts. In Demonstration Papers at HLT-NAACL 2004. Association for Computational Linguistics, Stroudsburg, PA, USA, 38-41.
[17] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In EMNLP. Association for Computational Linguistics, Doha, Qatar, 1532-1543.
[18] Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the NAACL: Human Language Technologies. NAACL, New Orleans, Louisiana, 2227-2237.
[19] Ehsan Sherkat and Evangelos E. Milios. 2017. Vector Embedding of Wikipedia Concepts and Entities. CoRR abs/1702.03470 (2017).
[20] Saravanan Thirumuruganathan, Nan Tang, and Mourad Ouzzani. 2018. Data Curation with Deep Learning [Vision]: Towards Self Driving Data Curation. CoRR abs/1803.01384 (2018).
On the Mean-Square Performance of the Constrained LMS Algorithm

Reza Arablouei, Kutluyıl Doğançay, and Stefan Werner

Abstract—The so-called constrained least mean-square algorithm is one of the most commonly used linear-equality-constrained adaptive filtering algorithms. Its main advantages are adaptability and relative simplicity. In order to gain analytical insights into the performance of this algorithm, we examine its mean-square performance and derive theoretical expressions for its transient and steady-state mean-square deviation. Our methodology is inspired by the principle of energy conservation in adaptive filters. Simulation results corroborate the accuracy of the derived formulas.

Index Terms—Constrained least mean-square; linearly-constrained adaptive filtering; mean-square deviation; mean-square stability; performance analysis.
I. INTRODUCTION
Constrained adaptive filtering algorithms are powerful tools tailored for applications where a parameter vector needs to be estimated subject to a set of linear equality constraints. Examples of such applications are antenna array processing, spectral analysis, linear-phase system identification, and blind multiuser detection. The deterministic constraints are usually construed from some prior knowledge about the considered problem, such as directions of arrival in antenna array processing, linear phase in system identification, and spreading codes in multiuser detection. In some other applications, specific linear equality constraints can help improve the robustness of the estimates or obviate a training phase [1]-[3].
The constrained least mean-square (CLMS) algorithm proposed in [4], [5] is a popular linear-equality-constrained adaptive filtering algorithm. It was originally developed for array processing as an online linearly-constrained minimum-variance (LCMV) filter [2]. The CLMS algorithm implements stochastic gradient-descent optimization. Hence, it is relatively simple in structure and computational complexity. It is also capable of adapting to slow changes in the system parameters or the statistical properties of the input data. It has been widely utilized in applications pertaining to adaptive LCMV filtering, particularly adaptive beamforming [6]-[12]. Several other linearly-constrained adaptive filtering algorithms have been proposed, which are computationally more demanding compared with the CLMS algorithm but offer improved convergence speed or steady-state performance [13]-[22].
Performance analysis of the constrained adaptive filtering algorithms is often challenging since the incorporation of the constraints makes their update equations more complex than those of the unconstrained algorithms. In [4], the mean performance of the CLMS algorithm is analyzed. It is shown that, for an appropriately selected step-size, the CLMS algorithm converges to the optimal solution in the mean sense, i.e., the CLMS algorithm is asymptotically unbiased. Moreover, using the analysis results from [23]-[25], a stable operating range for the step-size as well as lower and upper bounds for the steady-state misadjustment of the CLMS algorithm are specified. These bounds are derived under the assumption that the input vectors are temporally independent and have a multivariate Gaussian distribution. In [6] and [7], the mean-square performance of the CLMS algorithm is analyzed and its theoretical steady-state mean output-power and misadjustment are computed. The former studies the behavior of the weight covariance matrix and the latter considers the weight-error covariance matrix. However, the analyses in [6] and [7] are carried out for the particular application of adaptive beamforming where the objective is to minimize the filter output energy and there is no observed reference or training signal. Moreover, the analytical methods employed in these works are not suitable for studying the dynamics of the algorithm's mean-square deviation (MSD). The MSD is the expectation of the squared norm of the difference between the estimate vector and the optimal solution vector. It is a particularly important performance measure when the objective is primarily to identify the unobserved parameters of an underlying system that governs a linear relation between the input and output of the system while the parameter estimates are required to satisfy certain linear equality constraints. Examples of such applications abound in estimation and control theories [1], [26]-[28].
In this letter, we take a fresh look into the mean-square performance of the general-form CLMS algorithm from the perspective of a technique based on the energy conservation arguments [29]. We study the mean-square convergence of the CLMS algorithm and find the stable operating range for its step-size parameter. Then, we derive theoretical expressions for the transient as well as steady-state values of the MSD of the CLMS algorithm. Following the same line of analysis, we also derive a theoretical expression for the steady-state misadjustment of the CLMS algorithm and show that it is in agreement with the one given in [6] and [7]. Our simulation results exhibit a good agreement between the theoretically predicted and experimentally found values of the MSD. Therefore, the presented analysis sheds valuable light on the mean-square performance of the CLMS algorithm.
II. ALGORITHM

Consider a linear system where, at each time instant n ∈ ℕ, an input vector x_n ∈ ℝ^{L×1} and an output scalar y_n ∈ ℝ are related via

y_n = w^⊤ x_n + v_n.  (1)

Here, w ∈ ℝ^{L×1} is the system parameter vector, v_n ∈ ℝ is the background noise, and L ∈ ℕ is the order of the system. An adaptive filter of order L, with tap-coefficients vector w_n ∈ ℝ^{L×1}, is employed to find an estimate of w from the observed input-output data. In addition, at every iteration, 1 ≤ K < L linear equality constraints are imposed upon w_n such that we have

C^⊤ w_n = f  (2)

where C ∈ ℝ^{L×K} and f ∈ ℝ^{K×1} are the constraint parameters. The CLMS algorithm updates the filter coefficients via [4]

w_n = P[w_{n−1} + μ(y_n − w_{n−1}^⊤ x_n) x_n] + q  (3)

where P = I − C(C^⊤C)^{−1}C^⊤, q = C(C^⊤C)^{−1}f, μ is the step-size parameter, and I is the L×L identity matrix.
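A minimal NumPy sketch of the recursion (3) is given below (our own helper, not code from the letter; starting from the feasible point q is our choice of initialization):

```python
import numpy as np

def clms(x, y, C, f, mu):
    # x: (N, L) input vectors, y: (N,) outputs, C: (L, K), f: (K,), mu: step-size.
    L = x.shape[1]
    P = np.eye(L) - C @ np.linalg.solve(C.T @ C, C.T)  # projection onto null(C^T)
    q = C @ np.linalg.solve(C.T @ C, f)                # minimum-norm feasible vector
    w = q.copy()                                       # feasible initialization
    for xn, yn in zip(x, y):
        e = yn - w @ xn                                # a priori output error
        w = P @ (w + mu * e * xn) + q                  # projected LMS update (3)
    return w
```

Every iterate produced this way satisfies the constraints exactly, since P maps into the null space of C^⊤ and C^⊤q = f.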
III. ANALYSIS
To make the analysis more tractable, let us use the following common assumptions [29], [30]:

A1: The input vectors x_n of different time instants are independent zero-mean multivariate Gaussian and have a positive-definite covariance matrix R = E[x_n x_n^⊤] ∈ ℝ^{L×L}.

A2: The background noise v_n is temporally-independent zero-mean Gaussian with variance σ_v² ∈ ℝ_{≥0}. It is also independent of the input data.

Under A1 and A2, the optimal filter coefficient vector is given by [1]

w_o = w + R^{−1} C (C^⊤ R^{−1} C)^{−1} (f − C^⊤ w).

Substituting (1) into (3), subtracting w_o from both sides of (3), and using P w_o + q − w_o = 0 gives

w̃_n = P(I − μ x_n x_n^⊤) w̃_{n−1} + μ P x_n x_n^⊤ u + μ v_n P x_n.  (4)

Here, 0 is the L×1 zero vector and we define w̃_n = w_n − w_o and u = w − w_o. The matrix P is idempotent, i.e., we have P² = P, which can be easily verified. Therefore, pre-multiplying both sides of (4) by P reveals that

P w̃_n = w̃_n ∀ n.

Consequently, we can rewrite (4) as

w̃_n = (I − μ P x_n x_n^⊤ P) w̃_{n−1} + μ P x_n x_n^⊤ u + μ v_n P x_n.  (5)
A. Mean-square stability

Denote the Euclidean norm of a vector a ∈ ℝ^{L×1} by ‖a‖ and define its weighted Euclidean norm with a weighting matrix Σ ∈ ℝ^{L×L} as

‖a‖_Σ = ‖a‖_{vec{Σ}} = √(a^⊤ Σ a)

where vec{⋅} is the vectorization operator that stacks the columns of its matrix argument on top of each other.
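The vector-subscript convention ‖a‖_{vec{Σ}} rests on the identity a^⊤ Σ a = vec^⊤{Σ}(a ⊗ a), which the following quick NumPy check confirms:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(4)
S = rng.standard_normal((4, 4))
S = S @ S.T                          # symmetric nonnegative-definite weighting

quad = a @ S @ a                     # the weighted squared norm a' S a
vec_S = S.flatten(order="F")         # vec stacks the columns of S
via_vec = vec_S @ np.kron(a, a)      # vec'{S} (a kron a) gives the same scalar
```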
Bearing in mind A1 and A2, calculating the expected value of the squared Euclidean norm of both sides of (5) yields the following variance relation:

E[‖w̃_n‖²] = E[‖w̃_{n−1}‖²_F] + μ² u^⊤ E[x_n x_n^⊤ P x_n x_n^⊤] u + μ² E[v_n² x_n^⊤ P x_n]  (6)

where

F = E[(I − μ P x_n x_n^⊤ P)(I − μ P x_n x_n^⊤ P)] = I − 2μ Q + μ² E[P x_n x_n^⊤ P x_n x_n^⊤ P]  (7)

and Q = P R P.

Using the Isserlis' theorem [31] and A1, we get

E[x_n x_n^⊤ P x_n x_n^⊤] = E[x_n x_n^⊤] tr{P E[x_n x_n^⊤]} + 2 E[x_n x_n^⊤] P E[x_n x_n^⊤] = R tr{Q} + 2 R P R.  (8)

Moreover, due to A1 and A2, we have

E[v_n² x_n^⊤ P x_n] = σ_v² tr{Q}  (9)

and

P R u = 0.  (10)

Substituting (8)-(10) into (6) and (7) gives

E[‖w̃_n‖²] = E[‖w̃_{n−1}‖²_F] + μ² tr{Q} (u^⊤ R u + σ_v²)  (11)

and

F = I − 2μ Q + μ² Q tr{Q} + 2μ² Q².

The matrix Q has K zero and L − K nonzero eigenvalues, λ_i, i = 1, …, L − K [4]. Subsequently, F has K unit and L − K non-unit eigenvalues, ζ_i = 1 − 2μλ_i + μ² λ_i tr{Q} + 2μ² λ_i², i = 1, …, L − K. The recursion of (11) is stable and convergent if ζ_i < 1, i = 1, …, L − K, or equivalently

1 − 2μλ_i + μ² λ_i tr{Q} + 2μ² λ_i² < 1, i = 1, …, L − K.  (12)

To satisfy (12), it is enough to choose the step-size such that

0 < μ < 2 / (2 λ_max + tr{Q})  (13)

where λ_max is the largest eigenvalue of Q. Note that the mean-square stability upper-bound for μ in (13) is the same as the one given in [4] and [7] although our analytical approach is different from those of [4] and [7].
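A small helper (ours, not from the letter) evaluates the stability bound (13) for a given input covariance and constraint matrix:

```python
import numpy as np

def clms_stepsize_bound(R, C):
    # Build P and Q = P R P, then return 2 / (2*lambda_max + tr(Q)) as in (13).
    L = R.shape[0]
    P = np.eye(L) - C @ np.linalg.solve(C.T @ C, C.T)
    Q = P @ R @ P
    lam_max = np.linalg.eigvalsh(Q).max()   # largest eigenvalue of Q
    return 2.0 / (2.0 * lam_max + np.trace(Q))
```

For a white input (R = I) with a single coordinate pinned by the constraint, the bound evaluates to 2/(2 + (L − 1)).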
B. Instantaneous MSD

Take Σ ∈ ℝ^{L×L} as an arbitrary symmetric nonnegative-definite matrix. Applying the expectation operator to the squared weighted Euclidean norm of both sides in (5) while considering A1 and A2 leads to the following weighted variance relation:

E[‖w̃_n‖²_Σ] = E[‖w̃_{n−1}‖²_{Σ′}] + μ² u^⊤ E[x_n x_n^⊤ P Σ P x_n x_n^⊤] u + μ² E[v_n² x_n^⊤ P Σ P x_n]  (14)

where

Σ′ = E[(I − μ P x_n x_n^⊤ P) Σ (I − μ P x_n x_n^⊤ P)] = Σ − μ Q Σ − μ Σ Q + μ² E[P x_n x_n^⊤ P Σ P x_n x_n^⊤ P].  (15)

In the same vein as (8) and (9), we have

E[P x_n x_n^⊤ P Σ P x_n x_n^⊤ P] = Q tr{Σ Q} + 2 Q Σ Q  (16)

and

E[v_n² x_n^⊤ P Σ P x_n] = σ_v² tr{Σ Q}.  (17)

Using (16), (15) can be written as

Σ′ = (I − μ Q) Σ (I − μ Q) + μ² Q tr{Σ Q} + μ² Q Σ Q.  (18)

Applying the vectorization operator to (18) gives

vec{Σ′} = G vec{Σ}  (19)

where

G = (I − μ Q) ⊗ (I − μ Q) + μ² vec{Q} vec^⊤{Q} + μ² Q ⊗ Q  (20)

and ⊗ denotes the Kronecker product. Substituting (16), (17), and (20) into (14) together with using (10) and (19) gives

E[‖w̃_n‖²_Σ] = E[‖w̃_{n−1}‖²_{Σ′}] + μ² (u^⊤ R u + σ_v²) vec^⊤{Q} vec{Σ}.  (21)

By making appropriate choices of Σ in (21), for any time instant n, we can write

E[‖w̃_i‖²_{G^{n−i} σ}] = E[‖w̃_{i−1}‖²_{G^{n−i+1} σ}] + μ² (u^⊤ R u + σ_v²) vec^⊤{Q} G^{n−i} σ,  1 ≤ i ≤ n  (22)

where we define σ = vec{Σ}.

Summation of both sides in (22) for i = 1, …, n gives

E[‖w̃_n‖²_σ] = ‖w̃_0‖²_{G^n σ} + μ² (u^⊤ R u + σ_v²) vec^⊤{Q} ∑_{i=0}^{n−1} G^i σ.  (23)

Similarly, we can show that

E[‖w̃_{n−1}‖²_σ] = ‖w̃_0‖²_{G^{n−1} σ} + μ² (u^⊤ R u + σ_v²) vec^⊤{Q} ∑_{i=0}^{n−2} G^i σ.  (24)

Subtraction of (24) from (23) results in the time-evolution recursion of the instantaneous MSD as

E[‖w̃_n‖²_σ] = E[‖w̃_{n−1}‖²_σ] − ‖w̃_0‖²_{G^{n−1}(I_{L²} − G) σ} + μ² (u^⊤ R u + σ_v²) vec^⊤{Q} G^{n−1} σ  (25)

where choosing σ = vec{I} yields the MSD, E[‖w̃_n‖²].
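The learning curve (23) can be evaluated without Monte Carlo runs; the sketch below (our own helper) builds Q and G from R and C and accumulates the weighted-norm terms with σ = vec{I}, so its output is the theoretical MSD sequence:

```python
import numpy as np

def theoretical_msd(R, C, u, mu, sv2, n_steps, w0_err):
    # MSD_n = ||w0_err||^2_{G^n vec(I)} + mu^2 (u'Ru + sv2) vec'(Q) sum_{i<n} G^i vec(I)
    L = R.shape[0]
    P = np.eye(L) - C @ np.linalg.solve(C.T @ C, C.T)
    Q = P @ R @ P
    A = np.eye(L) - mu * Q
    vq = Q.flatten(order="F")
    G = np.kron(A, A) + mu**2 * np.outer(vq, vq) + mu**2 * np.kron(Q, Q)
    sigma = np.eye(L).flatten(order="F")       # sigma = vec(I) selects the MSD
    drive = mu**2 * (u @ R @ u + sv2)
    msd, acc = [], 0.0
    for _ in range(n_steps):
        Sn = sigma.reshape(L, L, order="F")    # weighting matrix with vec = G^n sigma
        msd.append(w0_err @ Sn @ w0_err + drive * acc)
        acc += vq @ sigma                      # accumulate vec'(Q) G^i vec(I)
        sigma = G @ sigma                      # advance the weighting by one step
    return np.array(msd)
```

The initial error w0_err should satisfy P w0_err = w0_err, i.e., both the initial guess and w_o respect the constraints.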
C. Steady-state MSD

Provided that (13) is fulfilled, the CLMS algorithm converges in the mean-square sense. Thus, at the steady state, i.e., when n → ∞, (23) yields

lim_{n→∞} E[‖w̃_n‖²_σ] = μ² (u^⊤ R u + σ_v²) vec^⊤{Q} (I_{L²} − G)^{−1} σ  (26)

and the steady-state MSD follows by setting σ = vec{I}.
D. Steady-state misadjustment

The steady-state misadjustment of the CLMS algorithm is defined as the ratio of the steady-state excess mean-square error to the minimum mean-square error, u^⊤ R u + σ_v², i.e., [33]

M = lim_{n→∞} E[(x_n^⊤ w̃_{n−1})²] / (u^⊤ R u + σ_v²).  (27)

Using (27) in (26) while noting that E[(x_n^⊤ w̃_{n−1})²] = E[‖w̃_{n−1}‖²_R] = E[‖w̃_{n−1}‖²_Q], since P w̃_n = w̃_n (i.e., setting σ = vec{Q}), we get

M = μ² vec^⊤{Q} (I_{L²} − G)^{−1} vec{Q}.  (28)

Note that although (28) is seemingly different from the expression derived in [6] and [7] for the steady-state misadjustment, i.e.,

M = [μ ∑_{i=1}^{L−K} λ_i/(1 − μλ_i)] / [2 − μ ∑_{i=1}^{L−K} λ_i/(1 − μλ_i)],  (29)

it can be verified that (28) and (29) are in fact identical.
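The stated equivalence of (28) and (29) can be spot-checked numerically; the sketch below (our construction) draws a random covariance and constraint matrix, evaluates (28) through a least-squares solve (I_{L²} − G is singular, but vec{Q} lies in its range), and compares with the eigenvalue form (29):

```python
import numpy as np

rng = np.random.default_rng(1)
L, K, mu = 6, 2, 0.01
C = rng.standard_normal((L, K))
B = rng.standard_normal((L, L))
R = B @ B.T / L                                   # a random input covariance
P = np.eye(L) - C @ np.linalg.solve(C.T @ C, C.T)
Q = P @ R @ P

# (28): mu^2 vec'(Q) (I - G)^{-1} vec(Q), with the singular directions of
# I - G handled by a least-squares solve.
A = np.eye(L) - mu * Q
vq = Q.flatten(order="F")
G = np.kron(A, A) + mu**2 * np.outer(vq, vq) + mu**2 * np.kron(Q, Q)
z = np.linalg.lstsq(np.eye(L * L) - G, vq, rcond=None)[0]
m28 = mu**2 * vq @ z

# (29): the eigenvalue form, using the L - K nonzero eigenvalues of Q.
lam = np.linalg.eigvalsh(Q)[K:]                   # ascending order; drop the K zeros
t = mu * np.sum(lam / (1 - mu * lam))
m29 = t / (2 - t)
```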
IV. SIMULATIONS

Consider a constrained system identification problem where the underlying linear system is of order L = 7 and there exist K = (L − 1)/2 linear equality constraints. We set the system parameter vector, w, the constraint parameters, C and f, and the input covariance matrix, R, arbitrarily. However, we ensure that w has unit energy, C is full-rank, and R is symmetric positive-definite with tr{R} = L. The input vectors are zero-mean multivariate Gaussian. The noise is also zero-mean Gaussian. We attain the experimental results by averaging over 10⁴ independent runs and, when applicable, over 10³ steady-state values.
In Fig. 1, we depict the theoretical and experimental MSD-versus-time curves of the CLMS algorithm for different values of the step-size when the noise variance is σ_v² = 10⁻².
In Fig. 2, we plot the theoretical and experimental steadystate MSDs of the CLMS algorithm as a function of the noise variance for different values of the step-size.
In Fig. 3, we compare the theoretical and experimental values of the steady-state misadjustment for different step-sizes. We include both (28) and (29) as well as the lower and upper bounds given in [4], which involve λ_min, the smallest nonzero eigenvalue of Q. Fig. 3 shows that (28) and (29) are equivalent.
Figs. 1-3 illustrate an excellent match between theory and experiment, verifying the analytical performance results developed in this paper.
V. CONCLUSION
We studied the mean-square performance of the constrained least mean-square algorithm and derived theoretical expressions for its transient and steady-state mean-square deviation. Through simulation examples, we substantiated that the resultant expressions are accurate for a wide range of values of the noise variance and step-size parameter. The presented theoretical formulas can help designers predict the steady-state performance of the CLMS algorithm and tune its step-size to attain a desired performance in any given scenario without resorting to Monte Carlo simulations.
Fig. 1. MSD of the CLMS algorithm versus the iteration number for different values of the step-size when σ_v² = 10⁻².

Fig. 2. Steady-state MSD of the CLMS algorithm versus the noise variance for different values of the step-size.

Fig. 3. Steady-state misadjustment of the CLMS algorithm versus the step-size.
R. Arablouei and K. Doğançay are with the School of Engineering and the Institute for Telecommunications Research, University of South Australia, Mawson Lakes SA 5095, Australia (email: [email protected]; [email protected]). S. Werner is with the Department of Signal Processing and Acoustics, School of Electrical Engineering, Aalto University, Espoo, Finland (email: [email protected]).
[1] P. S. R. Diniz, Adaptive Filtering: Algorithms and Practical Implementations, 4th ed. Boston: Springer, 2013.
[2] H. L. Van Trees, Detection, Estimation, and Modulation Theory, Part IV: Optimum Array Processing. New York: Wiley, 2002.
[3] M. L. R. de Campos, S. Werner, and J. A. Apolinário, Jr., "Constrained adaptive filters," in Adaptive Antenna Arrays: Trends and Applications, S. Chandran, Ed. New York: Springer-Verlag, 2004.
[4] O. L. Frost, III, "An algorithm for linearly constrained adaptive array processing," Proc. IEEE, vol. 60, no. 8, pp. 926-935, Aug. 1972.
[5] O. L. Frost, III, "Adaptive least squares optimization subject to linear equality constraints," Stanford Electron. Lab., Stanford, CA, Doc. SEL-70-055, Tech. Rep. TR 6796-2, Aug. 1970.
[6] L. C. Godara and A. Cantoni, "Analysis of constrained LMS algorithm with application to adaptive beamforming using perturbation sequences," IEEE Trans. Antennas Propag., vol. AP-34, no. 3, pp. 368-379, Mar. 1986.
[7] M. H. Maruo, J. C. M. Bermudez, and L. S. Resende, "Statistical analysis of a jointly optimized beamformer-assisted acoustic echo canceler," IEEE Trans. Signal Process., vol. 62, no. 1, pp. 252-265, Jan. 2014.
[8] M. Rasekh and S. R. Seydnejad, "Design of an adaptive wideband beamforming algorithm for conformal arrays," IEEE Commun. Lett., vol. 18, no. 11, pp. 1955-1958, Nov. 2014.
[9] S. A. Vorobyov, "Principles of minimum variance robust adaptive beamforming design," Signal Process., vol. 93, pp. 3264-3277, Dec. 2013.
[10] M. Yukawa, Y. Sung, and G. Lee, "Dual-domain adaptive beamformer under linearly and quadratically constrained minimum variance," IEEE Trans. Signal Process., vol. 61, no. 11, pp. 2874-2886, Jun. 2013.
[11] J. Benesty, J. Chen, Y. Huang, and J. Dmochowski, "On microphone-array beamforming from a MIMO acoustic signal processing perspective," IEEE Trans. Audio, Speech, Language Process., vol. 15, no. 3, pp. 1053-1064, Mar. 2007.
[12] L. Zhang, W. Liu, and R. J. Langley, "A class of constrained adaptive beamforming algorithms based on uniform linear arrays," IEEE Trans. Signal Process., vol. 58, no. 7, pp. 3916-3922, Jul. 2010.
[13] L. S. Resende, J. M. T. Romano, and M. G. Bellanger, "A fast least-squares algorithm for linearly constrained adaptive filtering," IEEE Trans. Signal Process., vol. 44, no. 5, pp. 1168-1174, May 1996.
[14] M. L. R. de Campos and J. A. Apolinário, Jr., "The constrained affine projection algorithm - Development and convergence issues," in Proc. 1st Balkan Conf. Signal Process., Commun., Circuits, and Syst., Istanbul, Turkey, May 2000.
[15] J. A. Apolinário, Jr., M. L. R. de Campos, and C. P. Bernal O., "The constrained conjugate gradient algorithm," IEEE Signal Process. Lett., vol. 7, no. 12, pp. 351-354, Dec. 2000.
[16] M. L. R. de Campos, S. Werner, and J. A. Apolinário, Jr., "Constrained adaptation algorithms employing Householder transformation," IEEE Trans. Signal Process., vol. 50, no. 9, pp. 2187-2195, Sep. 2002.
[17] S. Werner, J. A. Apolinário, Jr., M. L. R. de Campos, and P. S. R. Diniz, "Low-complexity constrained affine-projection algorithms," IEEE Trans. Signal Process., vol. 53, no. 12, pp. 4545-4555, Dec. 2005.
[18] R. Arablouei and K. Doğançay, "Low-complexity implementation of the constrained recursive least-squares adaptive filtering algorithm," in Proc. Asia-Pacific Signal Inform. Process. Assoc. Annu. Summit Conf., Hollywood, USA, Dec. 2012, paper id: 10.
[19] R. Arablouei and K. Doğançay, "Reduced-complexity constrained recursive least-squares adaptive filtering algorithm," IEEE Trans. Signal Process., vol. 60, no. 12, pp. 6687-6692, Dec. 2012.
[20] R. Arablouei and K. Doğançay, "Linearly-constrained recursive total least-squares algorithm," IEEE Signal Process. Lett., vol. 19, no. 12, pp. 821-824, Dec. 2012.
[21] R. Arablouei and K. Doğançay, "Linearly-constrained line-search algorithm for adaptive filtering," Electron. Lett., vol. 48, no. 19, pp. 1208-1209, 2012.
[22] R. Arablouei and K. Doğançay, "Performance analysis of linear-equality-constrained least-squares estimation," arXiv:1408.6721.
[23] K. D. Senne, "Adaptive linear discrete-time estimation," Stanford Electron. Lab., Stanford, CA, Doc. SEL-68-090, Tech. Rep. TR 6778-5, Jun. 1968.
[24] K. D. Senne, "New results in adaptive estimation theory," Frank J. Seiler Res. Lab., USAF Academy, CO, Tech. Rep. SRL-TR-70-0013, Apr. 1970.
[25] J. L. Moschner, "Adaptive filtering with clipped input data," Stanford Electron. Lab., Stanford, CA, Doc. SEL-70-053, Tech. Rep. TR 6796-1, Jun. 1970.
[26] K. J. Åström, "Theory and applications of adaptive control-A survey," Automatica, vol. 19, no. 5, pp. 471-486, Sep. 1983.
[27] P. Ioannou and B. Fidan, Adaptive Control Tutorial. SIAM, 2006.
[28] H. L. Van Trees, Detection, Estimation, and Modulation Theory, Part 1. New York: Wiley, 2001.
Adaptive Filters. A H Sayed, WileyHoboken, NJA. H. Sayed, Adaptive Filters, Hoboken, NJ: Wiley, 2008.
Adaptive Filter Theory. S Haykin, Prentice-Hall4Upper Saddle River, NJth ed.S. Haykin, Adaptive Filter Theory, 4 th ed., Upper Saddle River, NJ: Prentice-Hall, 2002.
On a formula for the product-moment coefficient of any order of a normal frequency distribution in any number of variables. L Isserlis, Biometrika. 121L. Isserlis, "On a formula for the product-moment coefficient of any order of a normal frequency distribution in any number of variables," Biometrika, vol. 12, no. 1/2, pp. 134-139, Nov. 1918.
K M Abadir, J R Magnus, Matrix Algebra. NYCambridge Univ. PressK. M. Abadir and J. R. Magnus, Matrix Algebra, NY: Cambridge Univ. Press, 2005.
Stationary and nonstationary learning characteristics of the LMS adaptive filter. B Widrow, J M Mccool, M G Larimore, C R Johnson, Jr , Proc. IEEE. IEEE64B. Widrow, J. M. McCool, M. G. Larimore, and C. R. Johnson, Jr., "Stationary and nonstationary learning characteristics of the LMS adaptive filter," Proc. IEEE, vol. 64, no. 8, pp. 1151-1162, Aug. 1976.
|
[] |
[
"On the analytic structure of QCD propagators",
"On the analytic structure of QCD propagators"
] |
[
"Peter Lowdon [email protected] \nSLAC National Accelerator Laboratory\nStanford University\nUSA\n",
"Peter Lowdon \nSLAC National Accelerator Laboratory\nStanford University\nUSA\n"
] |
[
"SLAC National Accelerator Laboratory\nStanford University\nUSA",
"SLAC National Accelerator Laboratory\nStanford University\nUSA"
] |
[] |
Local formulations of quantum field theory provide a powerful framework in which nonperturbative aspects of QCD can be analysed. Here we report on how this approach can be used to elucidate the general analytic features of QCD propagators, and why this is relevant for understanding confinement.
|
10.22323/1.336.0050
|
[
"https://arxiv.org/pdf/1811.03037v1.pdf"
] | 118,900,029 |
1811.03037
|
31745443c1d13919fa5f6b14bd3e26ebce393ebd
|
On the analytic structure of QCD propagators
7 Nov 2018; 31 July - 6 August 2018
Peter Lowdon [email protected]
SLAC National Accelerator Laboratory
Stanford University
USA
On the analytic structure of QCD propagators
7 Nov 2018. XIII Quark Confinement and the Hadron Spectrum (Confinement2018), 31 July - 6 August 2018, Maynooth University, Ireland. * Speaker. © Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0). https://pos.sissa.it/
Introduction
The non-perturbative behaviour of propagators involving coloured fields plays an important role in many areas of quantum chromodynamics (QCD), from the dynamics of quark-gluon plasma to the nature of confinement itself [1,2,3,4,5,6]. Nevertheless, the overall analytic structure of these objects remains largely unknown. To gain a better understanding of their general characteristics one necessarily requires a non-perturbative approach. An important example is provided by local formulations of quantum field theory (QFT), the construction of which is based on a series of physically motivated axioms [7]. A significant advantage of this framework is that the axioms are assumed to hold independently of the coupling regime, allowing non-perturbative features to be derived in a purely analytic manner. Numerical non-perturbative techniques such as lattice Monte-Carlo simulations and functional methods have also played an essential role in helping to unravel the structure of QCD propagators, especially in recent years [3,8,9,10,11,12,14]. However, significant progress has been achieved when local QFT has been used both as a guide for the analytic input required for these numerical techniques, and also to help interpret the corresponding results. Here we report on recent progress [6,15,16] in establishing the structural form of QCD propagators using a local QFT approach.
The general structure of correlators in local QFT
In local formulations of QFT a characteristic of central importance is that correlators are distributions [7]. Distributions are a generalisation of functions, and among other things this implies that they can possess a broader range of analytic properties compared to regular functions. Due to the Lorentz transformation properties of the fields $\phi_{1}$ and $\phi_{2}$ it follows that the Fourier transform of any correlator $T^{(1,2)}(p) = \mathcal{F}\big[\langle 0|\phi_{1}(x)\phi_{2}(y)|0\rangle\big]$ has the decomposition

$$T^{(1,2)}(p) = \sum_{\alpha=1}^{N} Q_{\alpha}(p)\, T_{\alpha}^{(1,2)}(p) \tag{2.1}$$
where $Q_{\alpha}(p)$ are Lorentz covariant polynomial functions$^{1}$ of $p$ carrying the same Lorentz index structure as $\phi_{1}$ and $\phi_{2}$, and $T_{\alpha}^{(1,2)}(p)$ are Lorentz invariant distributions$^{2}$ [17]. Due to the decomposition in Eq. (2.1), the key to defining any correlator is to understand the properties of the Lorentz invariant distributional components. In principle these objects could have a wide variety of different properties, but it turns out that the physical requirement for states in the theory to have positive energy implies that the components $T_{\alpha}^{(1,2)}(p)$ must vanish outside the closed forward light cone $V^{+} = \{p^{\mu} \,|\, p^{2} \geq 0,\ p^{0} \geq 0\}$, and can therefore be written in the following general manner [17]:

$$T_{\alpha}^{(1,2)}(p) = \int_{0}^{\infty} ds\, \theta(p^{0})\,\delta(p^{2} - s)\,\rho_{\alpha}(s) + P_{\alpha}(\partial^{2})\,\delta(p) \tag{2.2}$$
where $P_{\alpha}(\partial^{2})$ is a polynomial of finite order in the d'Alembert operator $\partial^{2} = g^{\mu\nu}\frac{\partial}{\partial p^{\mu}}\frac{\partial}{\partial p^{\nu}}$, and $\rho_{\alpha}(s)$ are distributions with support in $\mathbb{R}_{+}$. Eq. (2.2) is the so-called spectral representation of $T_{\alpha}^{(1,2)}(p)$, and $\rho_{\alpha}(s)$ the spectral density. If one instead considers the time-ordered correlator (propagator)

$$D^{(1,2)}(p) = \mathcal{F}\big[\theta(x^{0} - y^{0})\,\langle 0|\phi_{1}(x)\phi_{2}(y)|0\rangle + \sigma^{(1,2)}\,\theta(y^{0} - x^{0})\,\langle 0|\phi_{2}(y)\phi_{1}(x)|0\rangle\big],$$

where $\sigma^{(1,2)} = \pm 1$ depending on the spin statistics of the fields, it follows from Eq. (2.2) that the corresponding Lorentz invariant components $D_{\alpha}^{(1,2)}(p)$ have the structure

$$D_{\alpha}^{(1,2)}(p) = i\int ds\, \frac{\rho_{\alpha}(s)}{p^{2} - s + i\varepsilon} + P_{\alpha}(\partial^{2})\,\delta(p) \tag{2.3}$$
The first term has the familiar-looking Källén-Lehmann spectral form, whereas the second term is purely singular. In theories for which the space of states has a positive-definite inner product it turns out that $P_{\alpha}(\partial^{2})\delta(p)$ can only contain terms proportional to $\delta(p)$ [17]. An important feature of gauge theories such as QCD is that the gauge symmetry provides an obstacle to the locality of the theory [18]. In order to consistently quantise the theory one is therefore left with two options: explicitly preserve locality, or allow non-local fields. A general feature of local quantisations is that additional degrees of freedom are introduced into the theory, resulting in a space of states with an indefinite inner product. The prototypical example is the Becchi-Rouet-Stora-Tyutin (BRST) quantisation of QCD, where the space of states contains negative-norm ghost states [2]. Although many features of positive-definite inner product QFTs are preserved in BRST quantised QCD, the existence of an indefinite inner product can lead to significant changes to the structure of propagators. In particular, $P_{\alpha}(\partial^{2})\delta(p)$ can potentially contain terms involving derivatives of $\delta(p)$ [6]. The relevance of these types of contribution was first recognised in [19,20], where it was proven that their presence can fundamentally alter the asymptotic behaviour of correlators, and in fact cause the correlation strength between clusters of states to grow with distance. For clusters of coloured states this provides a mechanism which can guarantee their absence from the asymptotic spectrum, since a growth in correlation strength between coloured states prevents the independent measurement of either of these states at large distances. In other words, the presence of this type of contribution is indicative of confinement [5].
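To make the Källén-Lehmann term in Eq. (2.3) concrete, the following sketch (an illustrative numerical check added here, not part of the original analysis; the mass, grid and width values are arbitrary assumptions) approximates a purely discrete spectral density $\rho(s) = \delta(s - m^{2})$, corresponding to a stable single-particle state, by a narrow Gaussian, and verifies that the integral term of Eq. (2.3) reproduces the free pole form $i/(p^{2} - m^{2})$ at spacelike momenta $p^{2} = -Q^{2}$, where the integrand is regular and the $i\varepsilon$ prescription can be dropped:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal quadrature (kept explicit for portability)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Illustrative parameters (assumptions, not values from the paper).
m2 = 1.0        # squared mass of the single-particle state
sigma = 0.01    # width of the Gaussian standing in for delta(s - m^2)

# Spectral variable grid; the Gaussian is negligible at both endpoints.
s = np.linspace(0.0, 10.0, 200001)
rho = np.exp(-(s - m2) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

# Integral term of Eq. (2.3) at spacelike points p^2 = -Q^2 < 0:
# D(p) = i * \int ds rho(s) / (p^2 - s).
Q2 = np.array([0.5, 1.0, 5.0])
D = np.array([1j * trapezoid(rho / (-q2 - s), s) for q2 in Q2])

# Exact result for the distributional limit rho(s) = delta(s - m^2).
D_exact = -1j / (Q2 + m2)
```

As the Gaussian width tends to zero the quadrature converges to the exact pole form; for the values above the agreement is already well below the per-mille level.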
Dynamical constraints on the QCD propagators
In light of the general structural features of propagators in locally quantised QCD, determining the behaviour of propagators associated with coloured fields is important for understanding the non-perturbative dynamics of the theory, and in particular confinement. Since the quark, gluon and ghost fields parametrise the coloured degrees of freedom in this theory, the propagators associated with these fields play a crucial role. With this motivation in mind, in Refs. [6,15,16] a local QFT approach was adopted in order to derive the most general structural form of these QCD propagators, and the constraints imposed on them by the dynamical properties of the theory$^{3}$.
In the case of the quark propagator $S_{F}^{ij}(p) = \mathcal{F}\big[\langle 0|T\{\psi^{i}(x)\overline{\psi}^{j}(y)\}|0\rangle\big]$ it follows from Eqs. (2.2) and (2.3) that the propagator can be written [15]

$$S_{F}^{ij}(p) = i\int_{0}^{\infty} \frac{ds}{2\pi}\, \frac{\rho_{1}^{ij}(s) + \slashed{p}\,\rho_{2}^{ij}(s)}{p^{2} - s + i\varepsilon} + \big[P_{1}^{ij}(\partial^{2}) + \slashed{p}\,P_{2}^{ij}(\partial^{2})\big]\,\delta(p) \tag{3.1}$$
It turns out that the equations of motion impose constraints both on the form of the spectral densities and on the coefficients of the potential singular terms. In Ref. [15] it was demonstrated that these constraints can be derived by considering the Dyson-Schwinger equation, which in momentum space has the form

$$(\slashed{p} - m)\, S_{F}^{ij}(p) = i\delta^{ij} Z_{2}^{-1} + K^{ij}(p) \tag{3.2}$$
where $Z_{2}$ is the quark field renormalisation constant, and $K^{ij}(p)$ is the current-quark propagator. By inserting the spectral representation of $S_{F}^{ij}(p)$ and the corresponding representation of $K^{ij}(p)$ one can match the different Lorentz components on both sides, and this gives rise to a series of constraints. After applying this procedure one finds that the coefficients of the singular terms in the propagators are linearly related to one another, and that the quark spectral densities have the following representation [15]:

$$\rho_{1}^{ij}(s) = \Big[2\pi m\, \delta^{ij} Z_{2}^{-1} - \int ds'\, \kappa_{1}^{ij}(s')\Big]\,\delta(s - m^{2}) + \kappa_{1}^{ij}(s) \tag{3.3}$$

$$\rho_{2}^{ij}(s) = \Big[2\pi\, \delta^{ij} Z_{2}^{-1} - \int ds'\, \kappa_{2}^{ij}(s')\Big]\,\delta(s - m^{2}) + \kappa_{2}^{ij}(s) \tag{3.4}$$
These equalities explicitly demonstrate that both spectral densities contain a discrete mass component, but that the coefficients in front of these components depend explicitly on the behaviour of $\kappa_{1}^{ij}(s)$ and $\kappa_{2}^{ij}(s)$, both of which are related to the corresponding spectral densities of the current-quark propagator $K^{ij}(p)$.
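One immediate algebraic consequence of Eq. (3.4) is that, although the continuum function shifts the coefficient of the discrete mass component, the total integral of $\rho_{2}$ is fixed: the $-\int \kappa_{2}$ piece inside the bracket cancels against the continuum contribution, leaving $\int ds\, \rho_{2}^{ij}(s) = 2\pi\,\delta^{ij} Z_{2}^{-1}$ (and similarly $2\pi m\, Z_{2}^{-1}$ for $\rho_{1}$ via Eq. (3.3)). The sketch below checks this cancellation numerically for a diagonal component; the values of $Z_{2}^{-1}$ and the toy continuum $\kappa_{2}$ are arbitrary illustrative assumptions, not quantities taken from the paper:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal quadrature."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Assumed illustrative inputs.
Z2_inv = 0.7                         # quark field renormalisation constant Z_2^{-1}
s = np.linspace(0.0, 50.0, 500001)   # spectral variable grid
kappa2 = s * np.exp(-0.5 * s)        # arbitrary toy continuum contribution kappa_2(s)

# Eq. (3.4), diagonal components: the delta(s - m^2) term carries the
# coefficient [2*pi*Z2_inv - \int ds' kappa2(s')].
delta_coeff = 2 * np.pi * Z2_inv - trapezoid(kappa2, s)

# Total integral of rho_2 = (delta coefficient) + \int ds kappa2(s):
# the kappa2 contributions cancel, leaving 2*pi*Z2_inv exactly.
total = delta_coeff + trapezoid(kappa2, s)
```

Only the split between the pole and the continuum depends on $\kappa_{2}$; the integrated strength is controlled entirely by the renormalisation constant.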
An analogous approach to that applied to the quark propagator can also be used to constrain the ghost propagator $G_{F}^{ab}(p) = \mathcal{F}\big[\langle 0|T\{C^{a}(x)\overline{C}^{b}(y)\}|0\rangle\big]$ [15]. In this case the propagator has the general form

$$G_{F}^{ab}(p) = i\int_{0}^{\infty} \frac{ds}{2\pi}\, \frac{\rho_{C}^{ab}(s)}{p^{2} - s + i\varepsilon} + P_{C}^{ab}(\partial^{2})\,\delta(p) \tag{3.5}$$
and the momentum space Dyson-Schwinger equation is given by

$$-p^{2}\, G_{F}^{ab}(p) = \delta^{ab} Z_{3}^{-1} + L^{ab}(p) \tag{3.6}$$
where now $Z_{3}$ is the ghost field renormalisation constant, and $L^{ab}(p)$ is the current-ghost propagator. Inserting the spectral representations of these propagators into this equation, one again finds that the coefficients of the singular terms in both propagators are linearly related to one another, and that the ghost spectral density is constrained to satisfy [15]:

$$\rho_{C}^{ab}(s) = \Big[2\pi i\, \delta^{ab} Z_{3}^{-1} - \int_{0}^{\infty} ds'\, \kappa_{C}^{ab}(s')\Big]\,\delta(s) + \kappa_{C}^{ab}(s) \tag{3.7}$$
Eq. (3.7) demonstrates that the ghost spectral density contains a discrete massless component. However, similarly to the quark spectral densities, the coefficient in front of this component is not completely constrained since it depends on the integral of $\kappa_{C}^{ab}(s)$, which itself is determined by the spectral function of $L^{ab}(p)$. Therefore, the presence or absence of a non-perturbative massless ghost pole is not so clear-cut.
The final QCD propagator of interest involves the gluon field. In this case the propagator has the general form [6]

$$D_{F\,\mu\nu}^{ab}(p) = i\int_{0}^{\infty} \frac{ds}{2\pi}\, \frac{g_{\mu\nu}\,\rho_{1}^{ab}(s) + p_{\mu}p_{\nu}\,\rho_{2}^{ab}(s)}{p^{2} - s + i\varepsilon} + \sum_{n=0}^{N}\Big[c_{n}^{ab}\, g_{\mu\nu}\,(\partial^{2})^{n} + d_{n}^{ab}\, \partial_{\mu}\partial_{\nu}\,(\partial^{2})^{n-1}\Big]\,\delta(p) \tag{3.8}$$
The Dyson-Schwinger equation for this propagator is given by

$$-\Big[p^{2} g^{\alpha}{}_{\mu} - \Big(1 - \frac{1}{\xi_{0}}\Big) p_{\mu} p^{\alpha}\Big]\, D_{F\,\alpha\nu}^{ab}(p) = i\delta^{ab} g_{\mu\nu} Z_{3}^{-1} + J_{\mu\nu}^{ab}(p) \tag{3.9}$$
where $Z_{3}$ is the gluon field renormalisation constant, $J_{\mu\nu}^{ab}(p)$ the current-gluon propagator, and $\xi_{0}$ is the bare gauge-fixing parameter. Again, by inserting the spectral representations of the gluon and current-gluon propagators one obtains constraints. Similarly to the quark and ghost propagators, Eq. (3.9) implies that the coefficients of the potential singular terms in the gluon propagator are linearly related to those in $J_{\mu\nu}^{ab}(p)$. Moreover, the gluon spectral densities are constrained to satisfy the relations

$$\rho_{1}^{ab}(s) + s\,\rho_{2}^{ab}(s) = -2\pi\xi\,\delta^{ab}\,\delta(s) \tag{3.10}$$

$$\rho_{1}^{ab}(s) = -2\pi\,\delta^{ab} Z_{3}^{-1}\,\delta(s) + \rho_{2}^{ab}(s) \tag{3.11}$$

$$\int_{0}^{\infty} ds\, \rho_{2}^{ab}(s) = 0 \tag{3.12}$$
In contrast to both the quark and ghost spectral densities, Eq. (3.11) implies that $\rho_{1}^{ab}(s)$ contains an explicit massless contribution, and that the coefficient of this component is completely specified by the value of the corresponding renormalisation constant. Since $Z_{3}^{-1}$ is expected to vanish in Landau gauge [3], this implies that massless gluons must necessarily be absent from the spectrum in this gauge. In the literature [8,9,12,14,24] it is often argued that the violation of non-negativity of $\rho_{1}^{ab}(s)$ in Landau gauge, as a result of the sum rule$^{4}$ $\int ds\, \rho_{1}^{ab}(s) = 0$, is the reason why gluons do not appear in the spectrum. However, from the structure of Eq. (3.11) it is apparent that (continuous) non-negativity violations can only arise from the component $\rho_{2}^{ab}(s)$, which has vanishing integral [Eq. (3.12)]. Performing an identical analysis for the photon propagator it turns out that this propagator satisfies identical constraints, and in particular the analogous component $\rho_{2}(s)$ of the photon spectral density $\rho_{1}(s)$ has vanishing integral. This implies that potential non-negativity violations are not QCD specific, and casts doubt on the hypothesis that these violations in Landau gauge are the reason why gluons are absent from the spectrum.
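The vanishing-integral constraint of Eq. (3.12) can be illustrated with a toy example (purely illustrative; the chosen density is an assumption, not a derived quantity). Take $\rho_{2}(s) = (1 - s)e^{-s}$, which integrates to zero on $[0,\infty)$; any such non-trivial $\rho_{2}$ necessarily takes both signs, and via Eq. (3.11) with $Z_{3}^{-1} = 0$ (as expected in Landau gauge) the density $\rho_{1}$ then also integrates to zero, in the manner of the Oehme-Zimmermann superconvergence relation:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal quadrature."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Toy continuous component (an assumed example): rho2(s) = (1 - s) * exp(-s)
# obeys \int_0^infty rho2 ds = 0, i.e. the constraint of Eq. (3.12).
s = np.linspace(0.0, 60.0, 600001)
rho2 = (1.0 - s) * np.exp(-s)
int_rho2 = trapezoid(rho2, s)

# Eq. (3.11) with Z_3^{-1} = 0 (Landau gauge expectation): rho1(s) = rho2(s),
# up to the delta(s) term -2*pi*Z3_inv*delta(s), handled analytically here.
Z3_inv = 0.0
int_rho1 = -2.0 * np.pi * Z3_inv + int_rho2   # superconvergence: zero
```

Note that the sign change lives entirely in the $\rho_{2}$ component, consistent with the observation above that continuous non-negativity violations can only originate there.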
Conclusions
Although the propagators in QCD play an important role in determining the non-perturbative characteristics of the theory, the analytic behaviour of these objects remains largely unknown. It turns out that the dynamical properties of the quark, ghost and gluon fields, and in particular their corresponding Dyson-Schwinger equations, impose considerable constraints on the structure of these propagators. In all of these cases singular terms involving derivatives of $\delta(p)$ are permitted, which is particularly interesting in the context of confinement, and the general form of the corresponding spectral densities is constrained. Besides their purely theoretical relevance, these constraints could also provide important input for improving existing parametrisations of the propagators.
$^{1}$ For example, when $\phi_{1} = \psi$ and $\phi_{2} = \overline{\psi}$ there are two such functions: $Q_{1}(p) = I$ and $Q_{2}(p) = \slashed{p}$.
$^{2}$ Lorentz invariant distributions satisfy the property $T_{\alpha}^{(1,2)}(\Lambda p) = T_{\alpha}^{(1,2)}(p)$ for any Lorentz transformation $\Lambda$.
$^{3}$ For an alternative approach see e.g. [21].
$^{4}$ This sum rule is often referred to as the Oehme-Zimmermann superconvergence relation [22,23].
Acknowledgements
This work was supported by the Swiss National Science Foundation under contract P2ZHP2_168622, and by the DOE under contract DE-AC02-76SF00515.
T. Kugo and I. Ojima, "Local Covariant Operator Formalism of Non-Abelian Gauge Theories and Quark Confinement Problem," Prog. Theor. Phys. Suppl. 66, 1 (1979).
N. Nakanishi and I. Ojima, Covariant Operator Formalism of Gauge Theories and Quantum Gravity, World Scientific Publishing Co. Pte. Ltd (1990).
R. Alkofer and L. von Smekal, "The Infrared behavior of QCD Green's functions: Confinement, dynamical symmetry breaking, and hadrons as relativistic bound states," Phys. Rept. 353, 281 (2001) [arXiv:hep-ph/0007355].
R. Alkofer and J. Greensite, "Quark confinement: the hard problem of hadron physics," J. Phys. G: Nucl. Part. Phys. 34, S3 (2007) [arXiv:hep-ph/0610365].
P. Lowdon, "Conditions on the violation of the cluster decomposition property in QCD," J. Math. Phys. 57, 102302 (2016) [arXiv:1511.02780].
P. Lowdon, "The non-perturbative structure of the photon and gluon propagators," Phys. Rev. D 96, 065013 (2017) [arXiv:1702.02954].
R. F. Streater and A. S. Wightman, PCT, Spin and Statistics, and All That, W. A. Benjamin, Inc. (1964);
R. Haag, Local Quantum Physics, Springer-Verlag (1996);
N. N. Bogolubov, A. A. Logunov and A. I. Oksak, General Principles of Quantum Field Theory, Kluwer Academic Publishers (1990);
F. Strocchi, An Introduction to Non-Perturbative Foundations of Quantum Field Theory, Oxford University Press (2013).
R. Alkofer, W. Detmold, C. S. Fischer and P. Maris, "Analytic properties of the Landau gauge gluon and quark propagators," Phys. Rev. D 70, 014014 (2004) [arXiv:hep-ph/0309077].
A. Cucchieri, T. Mendes and A. R. Taurines, "Positivity violation for the lattice Landau gluon propagator," Phys. Rev. D 71, 051902(R) (2005) [arXiv:hep-lat/0406020].
A. Cucchieri and T. Mendes, "Constraints on the IR behavior of the gluon propagator in Yang-Mills theories," Phys. Rev. Lett. 100, 241601 (2008) [arXiv:0712.3517].
O. Oliveira and P. J. Silva, "Infrared Gluon and Ghost Propagator Exponents From Lattice QCD," Eur. Phys. J. C 62, 525 (2009) [arXiv:0705.0964].
S. Strauss, C. S. Fischer and C. Kellermann, "Analytic Structure of the Landau-Gauge Gluon Propagator," Phys. Rev. Lett. 109, 252001 (2012) [arXiv:1208.6239].
O. Oliveira and P. J. Silva, "The lattice Landau gauge gluon propagator: lattice spacing and volume dependence," Phys. Rev. D 86, 114513 (2012) [arXiv:1207.3029].
D. Dudal, O. Oliveira and P. J. Silva, "Källén-Lehmann spectroscopy for (un)physical degrees of freedom," Phys. Rev. D 89, 014010 (2014) [arXiv:1310.4069].
P. Lowdon, "Non-perturbative constraints on the quark and ghost propagators," Nucl. Phys. B 935, 242 (2018) [arXiv:1711.07569].
P. Lowdon, "Dyson-Schwinger equation constraints on the gluon propagator in BRST quantised QCD," Phys. Lett. B 786, 399 (2018) [arXiv:1801.09337].
N. N. Bogolubov, A. A. Logunov and A. I. Oksak, General Principles of Quantum Field Theory, Kluwer Academic Publishers (1990).
F. Strocchi, An Introduction to Non-Perturbative Foundations of Quantum Field Theory, Oxford University Press (2013).
F. Strocchi, "Locality, charges and quark confinement," Phys. Lett. B 62, 60 (1976).
F. Strocchi, "Local and covariant gauge quantum theories. Cluster property, superselection rules, and the infrared problem," Phys. Rev. D 17, 2010 (1978).
P. Lowdon, "Spectral density constraints in quantum field theory," Phys. Rev. D 92, 045023 (2015) [arXiv:1504.00486].
R. Oehme and W. Zimmermann, "Quark and gluon propagators in quantum chromodynamics," Phys. Rev. D 21, 471 (1980).
R. Oehme and W. Zimmermann, "Gauge field propagator and the number of fermion fields," Phys. Rev. D 21, 1661 (1980).
J. M. Cornwall, "Positivity violations in QCD," Mod. Phys. Lett. A 28, 1330035 (2013) [arXiv:1310.7897].
|
[] |
[
"Multi-Modal Dictionary Learning for Image Separation With Application In Art Investigation",
"Multi-Modal Dictionary Learning for Image Separation With Application In Art Investigation"
] |
[
"Nikos Deligiannis, Member, IEEE",
"João F C Mota, Member, IEEE",
"Bruno Cornelis, Member, IEEE",
"Miguel R D Rodrigues, Senior Member, IEEE",
"Ingrid Daubechies, Fellow, IEEE"
] |
[] |
[] |
In support of art investigation, we propose a new source separation method that unmixes a single X-ray scan acquired from double-sided paintings. In this problem, the X-ray signals to be separated have similar morphological characteristics, which brings previous source separation methods to their limits. Our solution is to use photographs taken from the front- and back-side of the panel to drive the separation process. The crux of our approach relies on the coupling of the two imaging modalities (photographs and X-rays) using a novel coupled dictionary learning framework able to capture both common and disparate features across the modalities using parsimonious representations; the common component models features shared by the multi-modal images, whereas the innovation component captures modality-specific information. As such, our model enables the formulation of appropriately regularized convex optimization procedures that lead to the accurate separation of the X-rays. Our dictionary learning framework can be tailored both to a single- and a multi-scale framework, with the latter leading to a significant performance improvement. Moreover, to improve further on the visual quality of the separated images, we propose to train coupled dictionaries that ignore certain parts of the painting corresponding to craquelure. Experimentation on synthetic and real data, taken from digital acquisitions of the Ghent Altarpiece (1432), confirms the superiority of our method against the state-of-the-art morphological component analysis technique that uses either fixed or trained dictionaries to perform image separation.
|
10.1109/tip.2016.2623484
|
[
"https://arxiv.org/pdf/1607.04147v1.pdf"
] | 8,171,175 |
1607.04147
|
b4c0c76fdf3ab7a3d8ca3a6bde2ea8e40757b80b
|
Multi-Modal Dictionary Learning for Image Separation With Application In Art Investigation
Nikos Deligiannis, Member, IEEE
João F C Mota, Member, IEEE
Bruno Cornelis, Member, IEEE
Miguel R D Rodrigues, Senior Member, IEEE
Ingrid Daubechies, Fellow, IEEE
Multi-Modal Dictionary Learning for Image Separation With Application In Art Investigation
Index Terms-Source separation, coupled dictionary learning, multi-scale image decomposition, multi-modal data analysis
I. INTRODUCTION
BIG DATA sets, produced by scientific experiments or projects, often contain heterogeneous data obtained by capturing a physical process or object using diverse sensing modalities [2]. The result is a rich set of signals, heterogeneous in nature but strongly correlated due to their being generated by a common underlying phenomenon. Multi-modal signal processing and analysis is thus gaining momentum in various research disciplines, ranging from medical diagnosis [3] to remote sensing and computer vision [4]. In particular, the analysis of high-resolution multi-modal digital acquisitions of paintings in support of art scholarship has proved a challenging field of research. Examples include the numerical characterization of brushstrokes [5], [6] for the authentication or dating of paintings, canvas thread counting [7]-[9] with applications in art forensics, and the (semi-)automatic detection and digital inpainting of cracks [10]-[12].

The work is supported by the VUB research programme M3D2, the EPSRC grant EP/K033166/1, and the VUB-UGent-UCL-Duke International Joint Research Group (grant VUB: DEFIS41010). A preliminary version of this work is accepted for presentation at the IEEE Int. Conf. Image Process. (ICIP) 2016 [1].

Fig. 1 caption (partial): ... Eve, (centre) the respective paintings on the back, (right) corresponding X-ray images containing a mixture of components. © KIK-IRPA
The Lasting Support project has focused on the investigation of the Ghent Altarpiece (1432), also known as The Adoration of the Mystic Lamb, a polyptych on wood panel painted by Jan and Hubert van Eyck. One of the most admired and influential masterpieces in the history of art, it has given rise to many puzzling questions for art historians. Currently, the Ghent Altarpiece is undergoing a major conservation and restoration campaign that is planned to end in 2017. The panels of the masterpiece were documented with various imaging modalities, amongst which visual macrophotography, infrared macrophotography, infrared reflectography and X-radiography [12]. A massive visual data set (comprising over 2TB of data) has been compiled by capturing small areas of the polyptych separately and stitching the resulting image blocks into one image per panel [13].
X-ray images are common tools in painting investigation, since they reveal information about the composition of the materials, the variations in paint thickness, the support, as well as the cracks and losses in the ground and paint layers. The problem we address in this paper relates to the outer side panels, namely, the panels showing near life-sized depictions of Adam and Eve, shown in Fig. 1. Due to X-ray penetration, the scans of these panels are a mixture of the paintings from each side of the panel as well as the wood panel itself. The presence of all these components makes the reading of the X-ray image difficult for art experts, who would welcome an effective approach to separate the components.

The task of separating a mixture of signals into its constituent components is a popular field of research. Most work addresses the blind source separation (BSS) problem, where the goal is to retrieve the different sources from a given linear mixture. Several methods attempt to solve the BSS problem by imposing constraints on the sources' structure. Independent component analysis (ICA) [14] commonly assumes that the components are independent and non-Gaussian, and attempts to separate them by minimizing the mutual information [15]. Nonnegative matrix factorization is another approach to solving the problem, where it is assumed that the sources are nonnegative (or they are transformed to a nonnegative representation) [16]. In an alternative path, the problem has been cast into a Bayesian framework, where either the sources are viewed as latent variables [17], or the problem is solved by maximizing the joint posterior density of the sources [18]. Under the Bayesian methodology, spatial smoothing priors (via, for example, Markov random fields) have been used to regularize blind image separation problems [19]. These assumptions do not fit our particular problem, as both components have similar statistical properties and they are certainly not statistically independent.
Sparsity is another source prior heavily exploited in BSS [20], [21], as well as in various other inverse problems, such as, compressed sensing [22], [23], image inpainting [24], [25], denoising [26], and deconvolution [27]. Morphological component analysis (MCA), in particular, is a state-of-the-art sparsity-based regularization method, initially designed for the single-mixture problem [20], [28] and then extended to the multi-mixture case [29]. The crux of the method is the basic assumption that each component has its own characteristic morphology; namely, each has a highly sparse representation over a set of bases (or, dictionaries), while being highly non-sparse for the dictionaries of the other components. Prior work in digital painting analysis has employed MCA to remove cradling artifacts within X-ray images of paintings on a panel [30]. The cradling and painting components have very different morphologies, captured by different predefined dictionaries. Namely, complex wavelets [31] provide a sparse representation for the smooth X-ray image and shearlets [32] were used to represent the texture of the wood grain. Alternatively, dictionaries can be learned from a set of training signals; several algorithms have been proposed to construct dictionaries including the method of optimal directions (MOD) [33] and the K-SVD algorithm [34]. Both utilize the orthogonal matching pursuit (OMP) [35] method for sparse decomposition but they differ in the way they update the dictionary elements while learning. Recently, multi-mixture MCA has been combined with K-SVD, resulting in a method where dictionaries are learned adaptively while separating [36].
However, in our particular separation problem we have a simple mixture of two X-ray components that are morphologically very similar [see Fig. 1]. Hence, as we will show in the experimental section, simply using fixed or learned dictionaries is insufficient to discriminate one component from the other. Unlike prior work, in our setup we have access to high-quality photographic material from each side of the panel that can be used to assist the X-ray image separation process.
In this work, we elaborate on a novel method to perform separation of X-ray images from a single mixture by using images of another modality as side information. Our contributions are as follows:
• We present a new model based on parsimonious representations, which captures both the inherent similarities and the discrepancies among heterogeneous correlated data. The model decomposes the data into a sparse component that is common to the different modalities and a sparse component that uniquely describes each data type. Our model enables the formulation of appropriately regularized convex optimization procedures that address the separation problem at hand.
• We propose a novel dictionary learning approach that trains dictionaries coupling the images from the different modalities. Our approach introduces a new modified OMP algorithm that is tailored to our data model.
• We devise a novel method that ignores craquelure pixels (namely, pixels that visualize cracks in the surface of paintings) when learning coupled dictionaries. Paying no heed to these pixels avoids contaminating the dictionaries with high frequency noise, thereby leading to higher separation performance. Our approach bears similarities with inpainting approaches, e.g., [25]; it is, however, different in the way the dictionary learning problem is posed and solved.
• We devise a novel multi-scale image separation strategy that is based on a recursive decomposition of the mixed X-ray and visual images into low- and high-pass bands. As such, the method enables the accurate separation of high-resolution images even when a local sparsity prior is assumed. Our approach differs from existing multi-scale dictionary learning methods [25], [37], [38] not only by considering imaging data gleaned from diverse modalities but also in the way the multi-scale decomposition is constructed.
• We conduct experiments using synthetic and real data proving that the use of side information is crucial in the separation of X-ray images from double-sided paintings.
In the remainder of the paper: Section II reviews related work and Section III poses our source separation with side information problem. Section IV describes the proposed coupled dictionary learning algorithm. Section V presents our method that ignores cracks when learning dictionaries and Section VI elaborates on our single- and multi-scale approaches to X-ray image separation. Section VII presents the evaluation of our algorithms while Section VIII draws our conclusions.
II. RELATED WORK

A. Source Separation
Adhering to a formal definition, MCA [20], [28] decomposes a source or image mixture $x = \sum_{i=1}^{\kappa} x_i$, with $x, x_i \in \mathbb{R}^{n\times 1}$, into its constituents, with the assumption that each $x_i$ admits a sparse decomposition in a different overcomplete dictionary $\Phi_i \in \mathbb{R}^{n\times d_i}$ ($n \ll d_i$). Namely, each component can be expressed as $x_i = \Phi_i z_i$, where $z_i \in \mathbb{R}^{d_i\times 1}$ is a sparse vector comprising a few non-zero coefficients:
$$\|z_i\|_0 = \#\{\xi : z_{i\xi} \neq 0,\ \xi = 1, \dots, d_i\} = s_i \ll d_i,$$
with $\|\cdot\|_0$ denoting the $\ell_0$ pseudo-norm. The BSS problem is thus addressed as [20], [28]
$$(\hat{z}_1, \dots, \hat{z}_\kappa) = \arg\min_{z_1, \dots, z_\kappa} \sum_{i=1}^{\kappa} \|z_i\|_0 \quad \text{s.t.} \quad x = \sum_{i=1}^{\kappa} \Phi_i z_i. \tag{1}$$
Unlike the BSS problem, informed source separation (ISS) methods utilise some form of prior information to aid the task at hand. ISS methods are tailored to the application they address (to the best of our knowledge they are applied only for audio mixtures [39], [40]). For instance, an encoding/decoding framework is proposed in [39], where the sources are mixed at the encoder and the mixtures are sent to the decoder together with side information that is embedded by means of quantization index modulation (QIM) [41]. Unlike these methods, we propose a generic source separation framework that incorporates side information gleaned from a correlated heterogeneous source by means of a new dictionary learning method that couples the heterogenous sources.
B. Dictionary Learning
Dictionary learning factorizes a matrix composed of training signals $X = [x_1, \dots, x_k] \in \mathbb{R}^{n\times k}$ into the product $\Phi Z$ as
$$\hat{\Phi}, \hat{Z} = \arg\min_{\Phi, Z} \|X - \Phi Z\|_F^2 \quad \text{s.t.} \quad \|z_i\|_0 \le s,\ i = 1, \dots, k, \tag{2}$$
where $Z = [z_1, \dots, z_k] \in \mathbb{R}^{d\times k}$ contains the sparse vectors corresponding to the signals $X = [x_1, \dots, x_k]$ and $\|\cdot\|_F$ is the Frobenius norm of a matrix. The columns of the dictionary $\Phi$ are typically constrained to have unit norm so as to improve the identifiability of the dictionary. To solve Problem (2), which is non-convex, Olshausen and Field [42] proposed to iterate through a step that learns the sparse codes and a step that updates the dictionary elements. The same strategy is followed in subsequent studies [33], [34], [43]-[45]. Alternatively, polynomial-time algorithms that are guaranteed to reach a globally optimal solution appear in [46], [47].
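The alternating structure shared by MOD and K-SVD can be sketched in a few lines of numpy. The snippet below is an illustrative toy, not the implementation of [33], [34]: it uses an exact 1-sparse coding step and the MOD least-squares dictionary update, and records the relative fit before and after the alternation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 8, 12, 200        # signal dimension, number of atoms, training signals

def sparse_code(X, Phi):
    """Exact 1-sparse coding: project each signal on its best-matching atom."""
    Z = np.zeros((Phi.shape[1], X.shape[1]))
    idx = np.argmax(np.abs(Phi.T @ X), axis=0)
    for i, j in enumerate(idx):
        Z[j, i] = Phi[:, j] @ X[:, i]       # LS coefficient for a unit-norm atom
    return Z

def mod_update(X, Z):
    """MOD dictionary update Phi = X Z^T (Z Z^T)^+, columns renormalized."""
    Phi = X @ Z.T @ np.linalg.pinv(Z @ Z.T)
    norms = np.linalg.norm(Phi, axis=0)
    norms[norms == 0] = 1.0                 # keep unused (zero) atoms harmless
    return Phi / norms

# Synthetic training set: each signal is one scaled atom of a hidden dictionary.
Phi_true = rng.standard_normal((n, d)); Phi_true /= np.linalg.norm(Phi_true, axis=0)
Z_true = np.zeros((d, k))
Z_true[rng.integers(0, d, k), np.arange(k)] = rng.standard_normal(k)
X = Phi_true @ Z_true

Phi = rng.standard_normal((n, d)); Phi /= np.linalg.norm(Phi, axis=0)
err0 = np.linalg.norm(X - Phi @ sparse_code(X, Phi)) / np.linalg.norm(X)
for _ in range(30):                         # alternate coding and dictionary update
    Z = sparse_code(X, Phi)
    Phi = mod_update(X, Z)
err = np.linalg.norm(X - Phi @ sparse_code(X, Phi)) / np.linalg.norm(X)
```

Because the 1-sparse coding step is exactly optimal given the dictionary (and invariant to column rescaling), both sub-steps are non-increasing in the objective, so `err` never exceeds `err0`.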
In order to capture multi-scale traits in natural signals, a method to construct multi-scale dictionaries was presented in [25]. The multi-scale representation was obtained by using a quadtree decomposition of the learned dictionary. Alternatively, the work in [37], [38] applied dictionary learning in the domain of a fixed multi-scale operator (wavelets). In our approach we follow a different multi-scale strategy, based on a pyramid decomposition, similar to the Laplacian pyramid [48].
There exist dictionary learning approaches designed to couple multi-modal data. Monaci et al. [49] proposed an approach to learn basis functions representing audio-visual structures. The approach, however, enforces synchrony between the different modalities. Alternatively, Yang et al. [50], [51] considered the problem of learning two dictionaries $D_x$ and $D_y$ for two families of signals $x$, $y$, coupled by a mapping function $F$ [with $y = F(x)$]. The constraint was that the sparse representation of $x$ in $D_x$ is the same as that of $y$ in $D_y$. The application targeted was image super-resolution, where $x$ (resp. $y$) is the low (resp. high) resolution image. The study in [4] followed a similar approach with the difference that the mapping function was applied to the sparse codes, i.e., $z_y = F(z_x)$, rather than the signals. Jia et al. [52] proposed dictionary learning via the concept of group sparsity so as to couple the different views in human pose estimation. Our coupled dictionary learning method is designed to address the challenges of the targeted source separation application and, as such, the model we consider to represent the correlated sources is fundamentally different from previous work. Moreover, we extend coupled dictionary learning to the multi-scale case and we provide a way to ignore certain noisy parts of the training signals (corresponding to cracks in our case).
III. IMAGE SEPARATION WITH SIDE INFORMATION
We denote by $x^{\mathrm{ray}}_1$ and $x^{\mathrm{ray}}_2$ two vectorized X-ray image patches that we wish to separate from each other given a mixture patch $m$, where $m = x^{\mathrm{ray}}_1 + x^{\mathrm{ray}}_2$. Let $y_1$ and $y_2$ be the co-located (visual) image patches of the front and back of the painting. These patches play the role of side information that aids the separation. The use of side information has proven beneficial in various inverse problems [53]-[59]. In this work, we consider the signals $x^{\mathrm{ray}}_1, x^{\mathrm{ray}}_2, y_1, y_2 \in \mathbb{R}^n$ to obey (superpositions of) sparse representations in some dictionaries:
$$y_1 = \Psi^c z_{1c}, \qquad y_2 = \Psi^c z_{2c}, \tag{3}$$
and
$$x^{\mathrm{ray}}_1 = \Phi^c z_{1c} + \Phi v_1, \qquad x^{\mathrm{ray}}_2 = \Phi^c z_{2c} + \Phi v_2, \tag{4}$$
where $z_{ic} \in \mathbb{R}^{\gamma\times 1}$, with $\|z_{ic}\|_0 = s_z \ll \gamma$ and $i = 1, 2$, denotes the sparse component that is common to the images in the visible and the X-ray domain with respect to dictionaries $\Psi^c, \Phi^c \in \mathbb{R}^{n\times\gamma}$, respectively. The parameter $s_z$ denotes the sparsity of the vector $z_{ic}$. Moreover, $v_i \in \mathbb{R}^{d\times 1}$, with $\|v_i\|_0 = s_v \ll d$, denotes the sparse innovation component of the X-ray image, obtained with respect to the dictionary $\Phi \in \mathbb{R}^{n\times d}$. The common components express global features and structural characteristics that underlie both modalities. The innovation components capture parts of the signal specific to the X-ray modality, that is, traces of the wooden panel or even footprints of the vertical and horizontal wooden slats attached to the back of the painting. We acknowledge the relation of our model with the sparse common component and innovations model that captures intra- and inter-signal correlation of physical signals in wireless sensor networks [56], [60]. Our approach is however more generic, since we decompose the signals in learnt dictionaries rather than fixed canonical bases, as in [60]. Given the proposed model and provided that the dictionaries $\Psi^c$, $\Phi$, and $\Phi^c$ are known, the corresponding X-ray separation problem can be formulated as
$$\begin{aligned}
\underset{z_{1c}, z_{2c}, v_1, v_2}{\text{minimize}} \quad & \|z_{1c}\|_1 + \|z_{2c}\|_1 + \|v_1\|_1 + \|v_2\|_1 \\
\text{s.t.} \quad & m = \Phi^c z_{1c} + \Phi^c z_{2c} + \Phi v_1 + \Phi v_2, \\
& y_1 = \Psi^c z_{1c}, \qquad y_2 = \Psi^c z_{2c},
\end{aligned} \tag{5}$$
where we applied convex relaxation by replacing the $\ell_0$ pseudo-norm with the $\ell_1$-norm, denoted as $\|\cdot\|_1$. Problem (5) is underdetermined, namely, $v_1$ and $v_2$ cannot be distinguished due to the symmetry in the constraints. A solution to the unmixing problem can be obtained when $v_1 = v_2 = v$, which is formally written as
$$\begin{aligned}
\underset{z_{1c}, z_{2c}, v}{\text{minimize}} \quad & \|z_{1c}\|_1 + \|z_{2c}\|_1 + 2\|v\|_1 \\
\text{s.t.} \quad & m = \Phi^c z_{1c} + \Phi^c z_{2c} + 2\Phi v, \\
& y_1 = \Psi^c z_{1c}, \qquad y_2 = \Psi^c z_{2c}.
\end{aligned} \tag{6}$$
Problem (6) boils down to basis pursuit, which is solved by convex optimization tools, e.g., [61]. The assumption v 1 = v 2 is not only practical but is also motivated by the actual problem. Since the paintings are mounted on the same wooden panel, the sparse components that decompose the X-ray images via the dictionary Φ are expected to be the same.
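Problem (6) can be handed to any generic $\ell_1$ solver. As a sketch, with toy random dictionaries standing in for the learned $\Psi^c$, $\Phi^c$, $\Phi$, the weighted equality-constrained $\ell_1$ program can be recast as a linear program via the standard split $w = u - s$ with $u, s \ge 0$ and solved with SciPy:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, gamma, d = 6, 10, 10                      # toy patch / dictionary sizes

def unit_dict(rows, cols):
    D = rng.standard_normal((rows, cols))
    return D / np.linalg.norm(D, axis=0)

Psi_c, Phi_c, Phi = unit_dict(n, gamma), unit_dict(n, gamma), unit_dict(n, d)

# Ground-truth sparse codes: common parts z1, z2 and the shared innovation v.
z1, z2, v = np.zeros(gamma), np.zeros(gamma), np.zeros(d)
z1[2], z2[7], v[4] = 1.3, -0.8, 0.5

m = Phi_c @ z1 + Phi_c @ z2 + 2 * Phi @ v    # X-ray mixture
y1, y2 = Psi_c @ z1, Psi_c @ z2              # visual side information

# Equality constraints of Problem (6), stacked as A w = b with w = [z1c; z2c; v].
A = np.block([
    [Phi_c, Phi_c, 2 * Phi],
    [Psi_c, np.zeros((n, gamma)), np.zeros((n, d))],
    [np.zeros((n, gamma)), Psi_c, np.zeros((n, d))],
])
b = np.concatenate([m, y1, y2])

# Weighted l1 objective ||z1c||_1 + ||z2c||_1 + 2||v||_1 as an LP (w = u - s).
c = np.concatenate([np.ones(gamma), np.ones(gamma), 2 * np.ones(d)])
res = linprog(np.concatenate([c, c]), A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * len(c)))
w_hat = res.x[:len(c)] - res.x[len(c):]
```

Since the true codes are feasible, the LP optimum is feasible and its weighted $\ell_1$ cost is at most that of the ground truth.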
IV. COUPLED DICTIONARY LEARNING ALGORITHM
In order to address the source separation with side information problem, we learn coupled dictionaries, $\Psi^c, \Phi^c, \Phi$, by using image patches sampled from visual and X-ray images of single-sided panels, which do not suffer from superposition phenomena. The images were registered using the algorithm described in [62]. Let $Y = [y_1, \dots, y_t]$, $X = [x_1, \dots, x_t] \in \mathbb{R}^{n\times t}$ represent a set of $t$ co-located vectorized visual and X-ray patches, each containing $\sqrt{n}\times\sqrt{n}$ pixels. As per our model in (3) and (4), the columns of $X$ and $Y$ are decomposed as
$$Y = \Psi^c Z, \tag{7a}$$
$$X = \Phi^c Z + \Phi V, \tag{7b}$$
where we collect their common components into the columns of the matrix $Z = [z_1, \dots, z_t] \in \mathbb{R}^{\gamma\times t}$ and their innovation components into the columns of $V = [v_1, \dots, v_t] \in \mathbb{R}^{d\times t}$. We formulate the coupled dictionary learning problem as
$$\begin{aligned}
\underset{\Psi^c, Z, \Phi^c, \Phi, V}{\text{minimize}} \quad & \tfrac{1}{2}\|Y - \Psi^c Z\|_F^2 + \tfrac{1}{2}\|X - \Phi^c Z - \Phi V\|_F^2, \\
\text{s.t.} \quad & \|z_\tau\|_0 \le s_z,\ \|v_\tau\|_0 \le s_v, \quad \forall \tau = 1, 2, \dots, t.
\end{aligned} \tag{8}$$
Similar to related work [25], [34], [38], we solve Problem (8) by alternating between a sparse-coding step and a dictionary update step. Particularly, given initial estimates for dictionaries $\Psi^c$, $\Phi$, and $\Phi^c$ (in line with prior work [34], we use the overcomplete discrete cosine transform (DCT) for initialization), we iterate on $k$ between a sparse-coding step:
$$\begin{aligned}
(Z^{k+1}, V^{k+1}) = \arg\min_{Z, V} \quad & \frac{1}{2}\left\| \begin{bmatrix} Y \\ X \end{bmatrix} - \begin{bmatrix} \Psi^{c^k} & 0 \\ \Phi^{c^k} & \Phi^k \end{bmatrix} \begin{bmatrix} Z \\ V \end{bmatrix} \right\|_F^2, \\
\text{s.t.} \quad & \|z_\tau\|_0 \le s_z,\ \|v_\tau\|_0 \le s_v, \quad \forall \tau = 1, 2, \dots, t,
\end{aligned} \tag{9}$$
which is performed to learn the sparse codes $Z$, $V$ having the dictionaries fixed, and a dictionary update step
$$(\Psi^{c^{k+1}}, \Phi^{c^{k+1}}, \Phi^{k+1}) = \arg\min_{\Psi^c, \Phi^c, \Phi} \frac{1}{2}\left\| \begin{bmatrix} Y \\ X \end{bmatrix} - \begin{bmatrix} \Psi^c & 0 \\ \Phi^c & \Phi \end{bmatrix} \begin{bmatrix} Z^{k+1} \\ V^{k+1} \end{bmatrix} \right\|_F^2, \tag{10}$$
which updates the dictionaries given the calculated sparse codes. The algorithm iterates between these steps until the decrease of the cost function between consecutive iterations falls below a chosen threshold, or until a predetermined number of iterations is reached. In what follows, we provide details regarding the solution of the problem at each stage. Sparse-coding step. Problem (9) decomposes into $t$ problems, each of which can be solved in parallel:
$$\begin{aligned}
(z_\tau^{k+1}, v_\tau^{k+1}) = \arg\min_{z_\tau, v_\tau} \quad & \frac{1}{2}\left\| \begin{bmatrix} y_\tau \\ x_\tau \end{bmatrix} - \begin{bmatrix} \Psi^{c^k} & 0 \\ \Phi^{c^k} & \Phi^k \end{bmatrix} \begin{bmatrix} z_\tau \\ v_\tau \end{bmatrix} \right\|_F^2, \\
\text{s.t.} \quad & \|z_\tau\|_0 \le s_z,\ \|v_\tau\|_0 \le s_v.
\end{aligned} \tag{11}$$
To address each of the t problems in (11), we propose a greedy algorithm that constitutes a modification of the OMP method [see Algorithm 1]. Our method adapts OMP [35] to solve:
$$\underset{w}{\text{minimize}} \ \|b - \Theta w\|_2^2 \quad \text{s.t.} \quad \|w(\mathcal{I})\|_0 \le s_z,\ \|w(\mathcal{J})\|_0 \le s_v, \tag{12}$$
where $w(\mathcal{I})$ [resp., $w(\mathcal{J})$] denotes the components of vector $w \in \mathbb{R}^{(\gamma+d)\times 1}$ indexed by the index set $\mathcal{I}$ (resp., $\mathcal{J}$), with $\mathcal{I} \cup \mathcal{J} = \{1, 2, \dots, \gamma+d\}$, $\mathcal{I} \cap \mathcal{J} = \emptyset$. Each sub-problem in (11) translates to (12) by replacing:
$$b = \begin{bmatrix} y_\tau \\ x_\tau \end{bmatrix}, \quad \Theta = \begin{bmatrix} \Psi^{c^k} & 0 \\ \Phi^{c^k} & \Phi^k \end{bmatrix}, \quad \text{and} \quad w = \begin{bmatrix} z_\tau \\ v_\tau \end{bmatrix}.$$
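Algorithm 1 can be re-implemented compactly. The sketch below follows the description in the text (greedy OMP-style selection ranked by correlation with the residual, with separate budgets $s_z$ on $\mathcal{I}$ and $s_v$ on $\mathcal{J}$, and a least-squares refit of the active atoms per iteration); it is illustrative rather than the authors' exact listing.

```python
import numpy as np

def momp(b, Theta, I, J, sz, sv, tol=1e-10):
    """Sketch of mOMP (Algorithm 1): greedy atom selection from Theta with
    separate sparsity budgets sz on index set I and sv on index set J."""
    I, J = set(I), set(J)
    support, nz, nv = [], 0, 0
    r = b.copy()
    coef = np.zeros(0)
    while (nz < sz or nv < sv) and np.linalg.norm(r) > tol:
        # rank atoms by correlation with the residual (the sort step of Alg. 1)
        order = np.argsort(-np.abs(Theta.T @ r))
        pick = None
        for kappa in order:
            if kappa in support:
                continue
            if kappa in I and nz < sz:
                pick, nz = kappa, nz + 1
                break
            if kappa in J and nv < sv:
                pick, nv = kappa, nv + 1
                break
        if pick is None:            # no admissible atom left within the budgets
            break
        support.append(pick)
        # least-squares refit on the current support, then update the residual
        coef, *_ = np.linalg.lstsq(Theta[:, support], b, rcond=None)
        r = b - Theta[:, support] @ coef
    w = np.zeros(Theta.shape[1])
    w[support] = coef
    return w

# Toy check: with orthonormal atoms, a (2 + 1)-sparse vector is recovered exactly.
rng = np.random.default_rng(2)
Theta = np.linalg.qr(rng.standard_normal((12, 9)))[0]   # 9 orthonormal columns
w_true = np.zeros(9)
w_true[1], w_true[3], w_true[6] = 2.0, -1.5, 0.7
w_hat = momp(Theta @ w_true, Theta, I=range(5), J=range(5, 9), sz=2, sv=1)
```

With orthonormal columns the correlations equal the true coefficients, so the greedy selection picks the true support and the refit is exact.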
Dictionary update step. Problem (10) can be written as
$$\underset{\Psi^c, \Phi'}{\text{minimize}} \ \frac{1}{2}\|Y - \Psi^c Z^{k+1}\|_F^2 + \frac{1}{2}\|X - \Phi' V'^{k+1}\|_F^2, \tag{13}$$
where $\Phi' = [\Phi^c \ \ \Phi]$ and $V'^{k+1} = \begin{bmatrix} Z^{k+1} \\ V^{k+1} \end{bmatrix}$. Problem (13) decouples into two (independent) problems, that is,
$$\underset{\Psi^c}{\text{minimize}} \ \frac{1}{2}\|Y - \Psi^c Z^{k+1}\|_F^2 \tag{14}$$
and
$$\underset{\Phi'}{\text{minimize}} \ \frac{1}{2}\|X - \Phi' V'^{k+1}\|_F^2. \tag{15}$$
[Algorithm 1 (mOMP) listing, summarized from the displaced fragments: the output is $w$, an (approximate) solution of (12). The support set $G$ and the counters $z$, $v$ are initialized to zero. At iteration $i$, sort the indices $\zeta = \{1, 2, \dots, \gamma+d\}$, corresponding to the columns $\theta_\zeta$ of $\Theta$, such that the $|\langle r_{i-1}, \theta_\zeta \rangle|$ are in descending order (where $\langle \alpha, \beta \rangle$ denotes the inner product of the vectors $\alpha$, $\beta$), and put the ordered indices in the vector $q_i$. Then scan $q_i$ for the first admissible index $\kappa$: if $\kappa \in \mathcal{I}$ and $z < s_z$, add $\kappa$ to $G$ and increase $z = z + 1$; else, if $\kappa \in \mathcal{J}$ and $v < s_v$, add $\kappa$ to $G$ and increase $v = v + 1$.]
Provided that $Z^{k+1}$ and $V'^{k+1}$ are full row-rank, each of these problems has a closed-form solution, namely,
$$\Psi^{c^{k+1}} = Y Z^{k+1^T} \left(Z^{k+1} Z^{k+1^T}\right)^{-1} \quad \text{and} \quad \Phi'^{k+1} = X V'^{k+1^T} \left(V'^{k+1} V'^{k+1^T}\right)^{-1}.$$
When $Z^{k+1}$ and $V'^{k+1}$ are rank-deficient, (14) and (15) have multiple solutions, from which we select the one with minimal Frobenius norm. This is done by taking a thin singular value decomposition of $Z^{k+1} = G_{z^{k+1}} \Sigma_{z^{k+1}} U^T_{z^{k+1}}$ and $V'^{k+1} = G_{v^{k+1}} \Sigma_{v^{k+1}} U^T_{v^{k+1}}$, and calculating $\Psi^{c^{k+1}} = Y U_{z^{k+1}} \Sigma^{-1}_{z^{k+1}} G^T_{z^{k+1}}$ and $\Phi'^{k+1} = X U_{v^{k+1}} \Sigma^{-1}_{v^{k+1}} G^T_{v^{k+1}}$.
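The minimal-Frobenius-norm update via the thin SVD can be checked numerically. The sketch below (toy sizes, illustrative only) builds a deliberately rank-deficient code matrix $Z$ and verifies that the SVD formula coincides with the Moore-Penrose pseudoinverse and fits the data exactly.

```python
import numpy as np

rng = np.random.default_rng(3)
n, gamma, t = 5, 8, 20

# Rank-deficient sparse-code matrix Z (rank 3 < gamma), as may occur in practice.
Z = rng.standard_normal((gamma, 3)) @ rng.standard_normal((3, t))
Psi_true = rng.standard_normal((n, gamma))
Y = Psi_true @ Z

# Thin SVD Z = G S U^T; the minimal-norm minimizer of ||Y - Psi Z||_F is
# Psi = Y U S^{-1} G^T, restricted to the non-zero singular values.
G, S, Ut = np.linalg.svd(Z, full_matrices=False)
keep = S > 1e-10 * S[0]                      # drop numerically-zero singular values
Psi = Y @ Ut[keep].T @ np.diag(1.0 / S[keep]) @ G[:, keep].T
```

The truncated-SVD expression is exactly `Y @ np.linalg.pinv(Z)`, and since `Y` lies in the row space of `Z`, the fit `Psi @ Z` reproduces `Y` to machine precision.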
V. WEIGHTED COUPLED DICTIONARY LEARNING
Visual and X-ray images of paintings contain a high number of pixels that depict cracks. These are fine patterns of dense cracking formed within the materials. When taking into account these pixels, the learned dictionaries comprise atoms that correspond to high frequency components. As a consequence, the reconstructed images are contaminated by high frequency noise. In order to improve the separation performance, our objective is to obtain dictionaries that ignore pixels representing cracks. To identify such pixels, we generate binary masks identifying the location of cracks by applying our method in [10]. Each sampled image patch may contain a variable number of crack pixels, meaning that each column of the data matrix contains a different number of meaningful entries. To address this issue, we introduce a weighting scheme that adds a weight of 0 or 1 to the pixels that do or do not correspond to cracks, respectively. These crack-induced weights are included using a Hadamard product, namely, our model in (7) is modified to
$$Y \odot \Lambda = (\Psi^c Z) \odot \Lambda, \tag{16a}$$
$$X \odot \Lambda = (\Phi^c Z + \Phi V) \odot \Lambda, \tag{16b}$$
where the matrix Λ has exactly the same dimensions as X and Y and its entries are 0 or 1 depending on whether a pixel is part of a crack or not, respectively. We now formulate the weighted coupled dictionary learning problem as
$$\begin{aligned}
\underset{\Psi^c, Z, \Phi^c, \Phi, V}{\text{minimize}} \quad & \tfrac{1}{2}\|(Y - \Psi^c Z) \odot \Lambda\|_F^2 + \tfrac{1}{2}\|(X - \Phi^c Z - \Phi V) \odot \Lambda\|_F^2, \\
\text{s.t.} \quad & \|z_\tau\|_0 \le s_z,\ \|v_\tau\|_0 \le s_v, \quad \forall \tau = 1, 2, \dots, t.
\end{aligned} \tag{17}$$
Similar to (8), the solution for (17) is obtained by alternating between a sparse-coding and a dictionary update step.
Sparse-coding step. Similar to (11), the sparse-coding problem decomposes into t problems that can be solved in parallel:
$$\begin{aligned}
(z_\tau^{k+1}, v_\tau^{k+1}) = \arg\min_{z_\tau, v_\tau} \quad & \frac{1}{2}\left\| \begin{bmatrix} y_\tau \\ x_\tau \end{bmatrix} \odot \begin{bmatrix} \lambda_\tau \\ \lambda_\tau \end{bmatrix} - \left( \begin{bmatrix} \Psi^{c^k} & 0 \\ \Phi^{c^k} & \Phi^k \end{bmatrix} \odot \begin{bmatrix} \lambda_\tau \\ \lambda_\tau \end{bmatrix} \mathbf{1}^T \right) \begin{bmatrix} z_\tau \\ v_\tau \end{bmatrix} \right\|_2^2 \\
\text{s.t.} \quad & \|z_\tau\|_0 \le s_z,\ \|v_\tau\|_0 \le s_v, \quad \forall \tau = 1, 2, \dots, t,
\end{aligned} \tag{18}$$
where we used $\lambda_\tau$ to represent column $\tau$ of $\Lambda$ and $\mathbf{1}^T$ to denote a row-vector of ones with dimension equal to $\gamma+d$. To address each of the $t$ sub-problems in (18), we use the mOMP algorithm described in Algorithm 1, as each sub-problem in (18) reduces to (12) by replacing:
$$b = \begin{bmatrix} y_\tau \\ x_\tau \end{bmatrix} \odot \begin{bmatrix} \lambda_\tau \\ \lambda_\tau \end{bmatrix}, \quad \Theta = \begin{bmatrix} \Psi^{c^k} & 0 \\ \Phi^{c^k} & \Phi^k \end{bmatrix} \odot \begin{bmatrix} \lambda_\tau \\ \lambda_\tau \end{bmatrix} \mathbf{1}^T, \quad \text{and} \quad w = \begin{bmatrix} z_\tau \\ v_\tau \end{bmatrix}.$$
Dictionary update step. The dictionary update problem is now written as
$$\underset{\Psi^c, \Phi'}{\text{minimize}} \ \frac{1}{2}\|Y \odot \Lambda - (\Psi^c Z^{k+1}) \odot \Lambda\|_F^2 + \frac{1}{2}\|X \odot \Lambda - (\Phi' V'^{k+1}) \odot \Lambda\|_F^2, \tag{19}$$
and it decouples into:
$$\underset{\Psi^c}{\text{minimize}} \ \frac{1}{2}\|Y \odot \Lambda - (\Psi^c Z^{k+1}) \odot \Lambda\|_F^2, \tag{20a}$$
$$\underset{\Phi'}{\text{minimize}} \ \frac{1}{2}\|X \odot \Lambda - (\Phi' V'^{k+1}) \odot \Lambda\|_F^2. \tag{20b}$$
We present only the solution of the first problem since the solution of the other follows the same logic. Specifically, we express the Frobenius norm in (20a) as the sum of $t$ squared $\ell_2$-norm terms, each corresponding to a vectorized training patch:
$$\sum_{\tau=1}^{t} \|y_\tau \odot \lambda_\tau - (\Psi^c z_\tau) \odot \lambda_\tau\|_2^2. \tag{21}$$
By replacing the Hadamard product with multiplication by a diagonal matrix $\Delta_\tau = \mathrm{diag}(\lambda_\tau)$, (21) can be written as
$$\sum_{\tau=1}^{t} \|\Delta_\tau y_\tau - \Delta_\tau \Psi^c z_\tau\|_2^2. \tag{22}$$
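The step from (21) to (22) rests on the identity $a \odot \lambda = \mathrm{diag}(\lambda)\, a$; a quick numerical check on one toy term:

```python
import numpy as np

rng = np.random.default_rng(4)
n, gamma = 6, 9
Psi = rng.standard_normal((n, gamma))
z, y = rng.standard_normal(gamma), rng.standard_normal(n)
lam = rng.integers(0, 2, n).astype(float)    # binary crack mask of one patch

Delta = np.diag(lam)                         # Delta_tau = diag(lambda_tau)
term21 = np.linalg.norm(y * lam - (Psi @ z) * lam) ** 2    # one term of (21)
term22 = np.linalg.norm(Delta @ y - Delta @ Psi @ z) ** 2  # one term of (22)
```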
To minimize the expression in (22), we take the derivative with respect to the dictionary $\Psi^c$ and set it to zero:
$$\begin{aligned}
& \frac{\partial}{\partial \Psi^c} \sum_{\tau=1}^{t} \|\Delta_\tau y_\tau - \Delta_\tau \Psi^c z_\tau\|_2^2 = 0 \\
\Longrightarrow\ & \sum_{\tau=1}^{t} \frac{\partial}{\partial \Psi^c} (\Delta_\tau y_\tau - \Delta_\tau \Psi^c z_\tau)^T (\Delta_\tau y_\tau - \Delta_\tau \Psi^c z_\tau) = 0 \\
\Longrightarrow\ & 2\sum_{\tau=1}^{t} \frac{\partial}{\partial \Psi^c}\, y_\tau^T \Delta_\tau^T \Delta_\tau \Psi^c z_\tau = \sum_{\tau=1}^{t} \frac{\partial}{\partial \Psi^c}\, z_\tau^T \Psi^{c^T} \Delta_\tau^T \Delta_\tau \Psi^c z_\tau \\
\Longrightarrow\ & \sum_{\tau=1}^{t} \left(\Delta_\tau^T \Delta_\tau y_\tau \mathbf{1}^T\right) \odot \left(\mathbf{1} z_\tau^T\right) = \sum_{\tau=1}^{t} \Psi^c \left(z_\tau z_\tau^T\right) \odot \left((\lambda_\tau \odot \lambda_\tau) \mathbf{1}^T\right).
\end{aligned} \tag{23}$$
Before proceeding with the method to solve (23), we recall that the entries of $\lambda_\tau$ are either 0 or 1. To avoid dividing by zero when solving (23), we have to update the rows of the dictionary matrix one-by-one. Specifically, for each row $i$ of $\Psi^c$, we consider the matrix $A_i = \sum_{\tau \in S_i} z_\tau z_\tau^T$, where $S_i$ is the support of the $i$-th row of $\Lambda$, and $z_\tau$ is the $\tau$-th column of $Z$. We also create a vector $c_i = \sum_{\tau \in S_i} Y(i, \tau)\, z_\tau$, where $Y(i, \tau)$ is the $(i, \tau)$-th entry of $Y$. Provided that $A_i$ is invertible, the $i$-th row of $\Psi^c$ (which we denote by the row-vector $\psi^c_i$) will be given by
$$\psi^c_i = c_i A_i^{-1}. \tag{24}$$
If each $z_\tau$ is drawn randomly, $A_i$ is invertible with probability 1 whenever the cardinality of $S_i$ is at least equal to the number of columns of $\Psi^c$. (The support $S_i$ is defined by the indices where the $i$-th row of $\Lambda$ is equal to 1.) Although in practice each $z_\tau$ is not randomly drawn, we still obtain an invertible $A_i$ by guaranteeing that the number of training samples is large enough.
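The row-by-row update of (24) can be sketched as follows (a toy with a dense code matrix for brevity): with crack pixels masked out of the fit, the rows of a hidden dictionary are recovered exactly from the clean pixels alone.

```python
import numpy as np

rng = np.random.default_rng(5)
n, gamma, t = 6, 4, 50                 # need |S_i| >= gamma for invertible A_i

Z = rng.standard_normal((gamma, t))    # fixed codes (dense here for brevity)
Psi_true = rng.standard_normal((n, gamma))
Lam = (rng.random((n, t)) > 0.2).astype(float)   # 1 = clean pixel, 0 = crack
Y = (Psi_true @ Z) * Lam               # observations with crack pixels zeroed

Psi = np.zeros((n, gamma))
for i in range(n):
    S_i = np.flatnonzero(Lam[i])       # support of the i-th row of Lam
    A_i = Z[:, S_i] @ Z[:, S_i].T      # A_i = sum_{tau in S_i} z_tau z_tau^T
    c_i = Y[i, S_i] @ Z[:, S_i].T      # c_i = sum_{tau in S_i} Y(i,tau) z_tau
    Psi[i] = c_i @ np.linalg.inv(A_i)  # row update, Eq. (24)
```

Since the masked-out (crack) entries never enter $A_i$ or $c_i$, the least-squares fit per row is exact and `Psi` matches `Psi_true`.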
VI. SINGLE-AND MULTI-SCALE SEPARATION METHODS
A. Single-Scale Approach
Given the trained coupled dictionaries, the source separation method described in Section III is applied locally, per overlapping patch of the X-ray image. Let the corresponding patches from the mixed X-ray and the two corresponding visual images be denoted as $m^u$, $y_1^u$, and $y_2^u$, respectively. Each patch contains $\sqrt{n}\times\sqrt{n}$ pixels and has top-left coordinates
$$u = (\ell \cdot u_1,\ \ell \cdot u_2), \quad 0 \le u_1 < \left\lfloor \tfrac{H}{\ell} \right\rfloor, \quad 0 \le u_2 < \left\lfloor \tfrac{W}{\ell} \right\rfloor,$$
where $\ell \in \mathbb{Z}^+$, $1 \le \ell < \sqrt{n}$, is the overlap step-size, $\lfloor\cdot\rfloor$ is the floor function, and $H$, $W$ are the image height and width, respectively. Prior to separation, the DC value is removed from the pixels in each overlapping patch and the residual values are vectorized. The solution of Problem (6) yields the sparse components $z_{1c}^u$, $z_{2c}^u$, and $v^u$ corresponding to the patch with coordinates $u$. The texture of each separated patch is then reconstructed following the model in (4), that is, $x_1^u = \Phi^c z_{1c}^u + \Phi v^u$ and $x_2^u = \Phi^c z_{2c}^u + \Phi v^u$. In certain cases, the $v$ component may capture parts of the actual content; for example, vertical brush strokes can be misinterpreted as the wood texture of the panel. In this case, we can choose to skip the $v$ component; namely, we can reconstruct the texture of the X-ray patches as $x_1^u = \Phi^c z_{1c}^u$ and $x_2^u = \Phi^c z_{2c}^u$. The DC values are weighted according to the DC values of the co-located patches in the visual images and then added back to the corresponding separated X-ray patches. Finally, the pixels in each separated X-ray are recovered as the average of the co-located pixels in each overlapping patch.
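The per-patch pipeline of the single-scale approach (overlapping patches with step $\ell$, DC removal, and averaging of co-located pixels) can be sketched as below; the separation itself is replaced by an identity stand-in so that the round trip can be checked.

```python
import numpy as np

def extract_patches(img, p, step):
    """All p-by-p patches with top-left coordinates (step*u1, step*u2)."""
    H, W = img.shape
    coords = [(r, c) for r in range(0, H - p + 1, step)
                     for c in range(0, W - p + 1, step)]
    return coords, [img[r:r+p, c:c+p].copy() for r, c in coords]

def average_patches(patches, coords, p, shape):
    """Recover each pixel as the average of the co-located overlapping patches."""
    acc, cnt = np.zeros(shape), np.zeros(shape)
    for (r, c), pat in zip(coords, patches):
        acc[r:r+p, c:c+p] += pat
        cnt[r:r+p, c:c+p] += 1
    return acc / np.maximum(cnt, 1)

rng = np.random.default_rng(6)
img = rng.random((16, 16))
coords, patches = extract_patches(img, p=8, step=4)

processed = []
for pat in patches:
    dc = pat.mean()                     # remove the DC value before separation...
    residual = pat - dc                 # ...the residual is what gets separated
    processed.append(residual + dc)     # identity stand-in for the separation
rec = average_patches(processed, coords, 8, img.shape)
```

With an identity stand-in, overlap averaging reproduces the input image exactly, confirming that the patch bookkeeping is lossless.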
B. Multi-Scale Approach
Due to the restricted patch size in comparison to the high resolution of the X-ray image, the DC values of all patches carry a considerable amount of the total image energy. In the single-scale approach, these DC values are common to the two separated X-rays, thereby leading to poor separation. To address this issue, we devise a multi-scale image separation approach. In contrast with the techniques in [25], [37], [38], the proposed multi-scale approach performs a pyramid decomposition of the mixed X-ray and visual images, that is, the images are recursively decomposed into low-pass and high-pass bands. The decompositions at scale $l = \{1, 2, \dots, L\}$ are constructed as follows. The images at scale $l$ (where we use the notation $M_l$, $Y_{1,l}$, $Y_{2,l}$ to refer to the mixed X-ray and the two visuals, respectively) are divided into overlapping patches $m_l^{u_l}$, $y_{1,l}^{u_l}$, and $y_{2,l}^{u_l}$, each of size $\sqrt{n_l}\times\sqrt{n_l}$ pixels. Each patch has top-left coordinates
$$u_l = (\ell_l \cdot u_{1,l},\ \ell_l \cdot u_{2,l}), \quad 0 \le u_{1,l} < \left\lfloor \tfrac{H_l}{\ell_l} \right\rfloor, \quad 0 \le u_{2,l} < \left\lfloor \tfrac{W_l}{\ell_l} \right\rfloor,$$
where $\ell_l \in \mathbb{Z}^+$, $1 \le \ell_l < \sqrt{n_l}$, is the overlap step-size; the decomposition is illustrated in Fig. 2 and exemplified in Fig. 3. The texture of the mixed X-ray image at scale $l$ is separated patch-by-patch by solving Problem (6). The texture of each separated patch is then reconstructed as $x_{1,l}^{u_l} = \Phi^c_l z_{1c,l}^{u_l} + \Phi_l v_l^{u_l}$ and $x_{2,l}^{u_l} = \Phi^c_l z_{2c,l}^{u_l} + \Phi_l v_l^{u_l}$; or, alternatively, as $x_{1,l}^{u_l} = \Phi^c_l z_{1c,l}^{u_l}$ and $x_{2,l}^{u_l} = \Phi^c_l z_{2c,l}^{u_l}$. Note that the dictionary learning process can be applied per scale, yielding a triple of coupled dictionaries $(\Psi^c_l, \Phi^c_l, \Phi_l)$ per scale $l$. In practice, due to lack of training data in the higher scales, dictionaries are learned only from the low-scale decompositions and then copied to the higher scales.
The separated X-ray images are finally reconstructed by following the reverse operation: descending the pyramid, the separated low-pass band at each scale is upsampled and combined with the separated high-pass band of the next finer scale, until the full-resolution separated X-rays are obtained.
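The recursive low-/high-pass decomposition and its reverse can be sketched with a simple average-pool pyramid. This is an illustrative stand-in for the actual filters; perfect reconstruction holds by construction, as in a Laplacian pyramid [48].

```python
import numpy as np

def down(img):
    """2x2 average pooling (a simple low-pass filter plus decimation)."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up(img):
    """Nearest-neighbour upsampling back to twice the size."""
    return np.kron(img, np.ones((2, 2)))

def decompose(img, L):
    """Recursive low-/high-pass split, as in a Laplacian pyramid."""
    highs = []
    for _ in range(L):
        low = down(img)
        highs.append(img - up(low))    # high-pass band at this scale
        img = low
    return highs, img                  # (high-pass bands, coarsest low-pass)

def reconstruct(highs, low):
    """Reverse operation: ascend from the coarsest scale, adding each band."""
    for h in reversed(highs):
        low = up(low) + h
    return low

rng = np.random.default_rng(7)
img = rng.random((32, 32))
highs, low = decompose(img, L=3)
rec = reconstruct(highs, low)
```

Storing the high-pass band as the residual `img - up(down(img))` guarantees that the reverse operation recovers the input exactly, regardless of the filters used.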
VII. EXPERIMENTS
A. Experiments with Synthetic Data
As in [33], [34], we first evaluate the performance of our coupled dictionary learning algorithm (described in Section IV) and our source separation with side information method (see Section III) using synthetic data. Firstly, we generate synthetic signals $x$, $y$ according to the model in (3), (4) using random dictionaries and then, given the data, we assess whether the algorithm recovers the original dictionaries. The random dictionaries $\Psi^c$, $\Phi$, and $\Phi^c$, of size $40\times 60$, contain entries drawn from the standard normal distribution and their columns are normalized to have unit $\ell_2$-norm. Given the dictionaries, $t = 1500$ sparse vectors $Z$ and $V$ were produced, each with dimension $\gamma = d = 60$. The column-vectors $z_\tau$ and $v_\tau$, $\tau = 1, 2, \dots, t$, comprised $s_z = 2$ and $s_v = 3$ non-zero coefficients, respectively, drawn from a uniform distribution and placed in random and independent locations. Combining the dictionaries and the sparse vectors according to the model in (7) yields the correlated data signals $X$ and $Y$, to which white Gaussian noise with a varying signal-to-noise ratio (SNR) has been added.
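The synthetic-data protocol above can be reproduced as follows. This is a sketch with the stated sizes; the uniform law for the non-zero coefficients and the SNR-controlled noise are as described in the text.

```python
import numpy as np

rng = np.random.default_rng(8)
n, gamma, d, t = 40, 60, 60, 1500
sz, sv = 2, 3

def unit_dict(rows, cols):
    D = rng.standard_normal((rows, cols))          # standard-normal entries...
    return D / np.linalg.norm(D, axis=0)           # ...unit l2-norm columns

Psi_c, Phi_c, Phi = unit_dict(n, gamma), unit_dict(n, gamma), unit_dict(n, d)

def sparse_codes(dim, t, s):
    """t sparse columns, each with s uniform values at random, independent locations."""
    M = np.zeros((dim, t))
    for tau in range(t):
        M[rng.choice(dim, size=s, replace=False), tau] = rng.uniform(-1, 1, s)
    return M

Z, V = sparse_codes(gamma, t, sz), sparse_codes(d, t, sv)
Y = Psi_c @ Z                                      # visual signals, model (7a)
X = Phi_c @ Z + Phi @ V                            # X-ray signals, model (7b)

def add_noise(S, snr_db):
    """Add white Gaussian noise at the requested SNR (in dB)."""
    sigma = np.sqrt(np.mean(S**2) / 10**(snr_db / 10))
    return S + sigma * rng.standard_normal(S.shape)

Y_noisy, X_noisy = add_noise(Y, 30.0), add_noise(X, 30.0)
snr_emp = 10 * np.log10(np.mean(Y**2) / np.mean((Y_noisy - Y)**2))
```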
To retrieve the initial dictionaries, we apply the proposed method in Section IV with the dictionaries initialised randomly and the maximum number of iterations set to 100; experimental evidence has shown that this value strikes a good balance between complexity and dictionary identifiability. To compare the retrieved dictionaries with the original ones, we adhere to the approach in [34]: per generating dictionary, we sweep through its columns and identify the closest column in the retrieved dictionary. The distance between the two columns is measured as $1 - |\delta_i \hat{\delta}_j^T|$, where $\delta_i$ is the $i$-th column in the original dictionary $\Psi^c$, $\Phi^c$, or $\Phi$, and $\hat{\delta}_j$ is the $j$-th column in the corresponding recovered dictionary. Similar to [34], a distance of less than 0.01 signifies a success. The percentage of successes per dictionary and for various SNR values is reported in Table I. The results, which are averaged over 100 trials, show that for very noisy data (that is, SNR ≤ 15 dB) the dictionary identifiability performance is low. However, for SNR values higher than 20 dB, the percentage of recovered dictionary atoms is up to 96.78%. The obtained performance is systematic for different dictionary and signal sizes as well as for different sparsity levels.
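The atom-matching criterion can be written directly. The sketch below scores the fraction of atoms of a ground-truth dictionary matched, up to sign and permutation, within the 0.01 distance threshold; the helper name is ours, not the authors'.

```python
import numpy as np

def matched_fraction(D_true, D_hat, thresh=0.01):
    """Fraction of columns of D_true whose closest column of D_hat lies within
    distance 1 - |delta_i . delta_hat_j| < thresh (unit-norm columns assumed)."""
    hits = 0
    for i in range(D_true.shape[1]):
        dist = 1.0 - np.abs(D_true[:, i] @ D_hat)   # distances to every column
        hits += dist.min() < thresh
    return hits / D_true.shape[1]

rng = np.random.default_rng(9)
D = rng.standard_normal((40, 60)); D /= np.linalg.norm(D, axis=0)
# A sign-flipped, permuted copy counts as a perfect recovery.
D_copy = D[:, rng.permutation(60)] * rng.choice([-1.0, 1.0], 60)
# An unrelated random dictionary should not match every atom.
D_other = rng.standard_normal((40, 60)); D_other /= np.linalg.norm(D_other, axis=0)
```

The absolute value makes the distance invariant to the sign ambiguity of dictionary learning, and the minimum over columns handles the permutation ambiguity.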
In a second stage, given the learned dictionaries, we separate signal pairs $(x_1, x_2)$ from mixtures $m = x_1 + x_2$ by solving Problem (6) using the corresponding pair $(y_1, y_2)$ as side information. The pairs are taken from the correlated data signals $X$ and $Y$, to which white Gaussian noise with a varying SNR has been added. Table II reports the normalized mean-squared error between the reconstructed signals, denoted by $\hat{x}_i$, and the original signals, that is, $\|x_i - \hat{x}_i\|_2^2 / \|x_i\|_2^2$, $i = \{1, 2\}$.
The results show that at low and moderate noise SNRs the reconstruction error is very low. When the noise increases, the recovery performance drops; this is to be expected, as the noise affects both the dictionary learning and the generation of the mixtures.
B. Experiments with Real Data
We consider eight image pairs-each consisting of an X-ray scan and the corresponding photograph-taken from digital acquisitions [12] of single-sided panels of the Ghent Altarpiece (1432). Furthermore, we are given access to eight crack masks (one per visual/X-ray image pair) that indicate the pixel positions referring to cracks (these masks were obtained using our method in [10]). Fig. 4 depicts two such pairs with the crack masks, one visualizing a face and the other a piece of fabric. An example X-ray mixture (of size 1024×1024 pixels) together with its two registered visual images corresponding to the two sides of the painting are depicted in Fig. 5.
Firstly, adhering to the single-scale approach described in Section VI-A, we train a dictionary triplet $(\Psi^c, \Phi^c, \Phi)$ using our method in Section IV. We use $t = 46400$ patches, each containing $8\times 8$ pixels; the dictionaries $\Psi^c$, $\Phi^c$, $\Phi$ have a dimension of $64\times 256$, and we set $s_z = 10$ and $s_v = 8$. The separated X-rays that correspond to the mixture in Fig. 5 are depicted in the first column of Fig. 6. We observe that our single-scale approach separates the texture of the X-rays; this is demonstrated by the accurate separation of the cracks. Still, however, the low-pass band content is not properly split over the images; namely, part of the cloth and the face are present in both separated images. Next, we apply the multi-scale framework, where we use $L = 4$ scales with parameters $\sqrt{n_l} = 8$, $l = \{1, 2, 3, 4\}$, $\ell_1 = 4$, $\ell_2 = 4$, $\ell_3 = 7$, and $\ell_4 = 8$. Dictionary triplets $(\Psi^c_l, \Phi^c_l, \Phi_l)$, each with dimension of $64\times 256$, are trained for the first three scales and the dictionaries of the third scale are used for the fourth. We use $t_1 = 46400$, $t_2 = 46400$ and $t_3 = 35500$ patches for scales 1, 2 and 3, respectively. The visualizations in the second column of Fig. 6 show that, compared to the single-scale approach, the multi-scale method properly discriminates the low-pass frequency content of the two images (most of the cloth is allocated to "Separated Side 1" while the face is only visible in "Separated Side 2"), thereby leading to a higher separation performance. Finally, we also construct dictionary triplets according to our weighted dictionary learning method in Section V. The remaining dictionary learning parameters are as before. It is worth mentioning that, in order to obtain a solution in (24), the number of training samples $t$ needs to be higher than the total dimension of the dictionary. Namely, to update the columns of dictionary $\Psi^c$ we need at least 16384 samples. Correspondingly, to update the rows of dictionary $\Phi'$ we need more than 32768 samples.
The visual results in the third column of Fig. 6 corroborate that the quality of the separation is improved when the dictionaries are learned from only non-crack pixels. Indeed, with this configuration, the separated images are not only smoother but also the separation is more prominent.
It is worth mentioning that the results of our method, depicted in Fig. 6, are obtained without including the $v$ component during the reconstruction; namely, we reconstructed each X-ray patch as $x_1 = \Phi^c z_{1c}$ and $x_2 = \Phi^c z_{2c}$. The visual results of our method when including the $v$ component during the reconstruction are depicted in Fig. 7. These results are obtained with the same dictionaries that yield the result in the third column of Fig. 6. By comparing the two reconstructions, we can make the following observations. First, the $v$ component successfully expresses the X-ray specific features, such as the wood grain, visualized by the periodic vertical stripes in the X-ray scan. The reconstruction of these stripes is much more evident in Fig. 7. Secondly, in this case, the $v$ component also captures parts of the actual content that we wish to separate. For example, we can discern a faint outline of the eye in Fig. 7(a) as well as a fold of fabric appearing in Fig. 7(b).
We compare our best performing multi-scale approach (namely, the one that omits cracks when learning dictionaries) with the state-of-the-art MCA method [20], [28]. Two configurations of the latter are considered. Based on prior work [30], in one configuration we use fixed dictionaries, namely, the discrete wavelet and curvelet transforms are applied on blocks of 512 × 512 pixels. Inherently, the low-frequency content cannot be split by MCA and it is equally divided between both retrieved components. In the other configuration, we learn dictionaries with K-SVD using the same training X-ray images as in the previous experiment. One dictionary is trained on the X-ray images depicting fabric and the other on the images of faces. The K-SVD parameters are the same as the ones used in our method. Furthermore, the same multi-scale strategy is applied to the configuration of MCA with K-SVD trained dictionaries. The results are depicted in Fig. 8 and Fig. 9. Note that the third column in Fig. 8 and Fig. 9 are without and with taking the v component into account, respectively. It is clear that MCA with fixed dictionaries can only separate based on morphological properties; for example, the wood grain of the panel is captured entirely by curvelets and not by the wavelets. It is, however, unsuitable to separate painted content-it is evident that part of the cloth and face appear in both separated components. Furthermore, MCA with K-SVD dictionaries is also unable to separate the X-ray content. Nevertheless, we do observe that most cracks are captured by the face dictionary, as more cracks are present in that type of content. Unlike both state-of-the-art configurations of MCA, the proposed method separates the X-ray content accurately (the cloth is always depicted on "Separated Side 1" while the face is only visible in "Separated Side 2"), leading to better visual performance. 
These results corroborate the benefit of using side information by means of photographs to separate mixtures of X-ray images.
C. Experiments on Simulated Mixtures
Due to the lack of proper ground truth data, we generate simulated X-ray image mixtures in an attempt to assess our method in an objective manner. To this end, we utilised the X-ray images from single-sided panels, depicting content similar to the mixture in Fig. 5(c) and (f). We generated mixtures by summing these independent X-ray images² and then we assessed the separation performance of the proposed method vis-à-vis MCA, either with fixed or with K-SVD trained dictionaries. For this set of experiments, patches of size 256 × 256 pixels were considered and the parameters of the different methods were kept the same as in the previous section. Table III reports the quality of the reconstructed X-ray components by means of the peak-signal-to-noise-ratio (PSNR) and the structural similarity index metric (SSIM) [63]. It is clear that the proposed method outperforms the alternative state-of-the-art methods both in terms of PSNR and SSIM performance. Compared to MCA with fixed dictionaries, the proposed method brings an improvement in the quality of the separation of up to 1.26 dB in PSNR and 0.0741 in SSIM for "Mixture 3". The maximum gains against MCA with K-SVD trained dictionaries are 1.41 dB and 0.0953, again for "Mixture 3". While we realize that PSNR and SSIM are not necessarily the right image quality metrics in this scenario, they do demonstrate objectively the improvements that our method brings over the state of the art.
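For reference, the PSNR used in this comparison follows the standard definition 10·log10(peak²/MSE). A minimal sketch (assuming 8-bit image data; this is the textbook formula, not the authors' evaluation code):

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB (standard definition).

    peak=255 assumes 8-bit images; pass peak=1.0 for data in [0, 1].
    """
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```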
VIII. CONCLUSION
We have proposed a novel sparsity-based regularization method for source separation guided by side information. Our method learns dictionaries coupling registered acquisitions from diverse modalities, and comes in both a single-scale and a multi-scale framework. The proposed method is applied to the separation of X-ray images of paintings on wooden panels that are painted on both sides, using the photographs of each side as side information. Experiments on real data, consisting of digital acquisitions of the Ghent Altarpiece (1432), verify that the use of side information can be highly beneficial in this application. Furthermore, due to the high resolution of the data relative to the restricted patch size, the multi-scale version of the proposed algorithm improves the quality of the results significantly. We also observed experimentally that omitting the high-frequency crack pixels in the dictionary learning process results in smoother and visually more pleasant separation results. Finally, the superiority of our method compared to the state-of-the-art MCA technique [20], [21], [36] was validated visually using real data and objectively using simulated X-ray image mixtures.
Figure 1. Panels from the Ghent Altarpiece: (left) panels of Adam and
Input:
1: A vector b ∈ R^((2n)×1).
2: A matrix Θ ∈ R^((2n)×(γ+d)).
3: The indices I, J, with I ∪ J = {1, 2, . . . , γ + d}, I ∩ J = ∅.
4: The sparsity levels s_z, s_v of the vectors z and v, respectively.
Initialization:
5: Initialize the residual r_0 = b.
6: Set the total sparsity of the vector w as s_w = s_z + s_v.
7: Set the counters for the sparsity of z and v to z = 0, v = 0.
8: Initialize the set of non-zero elements of w to Ω = ∅.
Algorithm:
9: for i = 1, 2, . . . , s_w do
10:   Select the index of the Θ matrix that corresponds to the value of iter: κ = q_i[iter].
11:   Update the set of non-zero elements of w, i.e., Ω_i = Ω_{i−1} ∪ {κ}, and the matrix of chosen atoms: Θ_i = [Θ_{i−1} θ_κ].
12:   Solve w_i = arg min_w ‖b − Θ_i w‖_2.
13:   Compute the new residual: r_i = b − Θ_i w_i.
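The listing above corresponds to an orthogonal-matching-pursuit loop in which the number of atoms drawn from the z-part (columns indexed by I) and the v-part (columns indexed by J) is capped separately. The Python sketch below is an illustrative re-implementation under that reading; the exact atom-selection rule q_i is not fully recoverable from the listing, so a plain correlation pick is assumed here:

```python
import numpy as np

def omp_with_caps(b, Theta, I, J, s_z, s_v):
    """Greedy pursuit with separate sparsity budgets for the z-part
    (columns in I) and the v-part (columns in J). Illustrative sketch,
    not the authors' implementation."""
    r = b.copy()
    omega = []                 # indices of chosen atoms (the set Omega)
    n_z = n_v = 0              # sparsity counters for z and v
    for _ in range(s_z + s_v):
        corr = np.abs(Theta.T @ r)
        corr[omega] = -np.inf  # never re-pick a chosen atom
        if n_z >= s_z:
            corr[I] = -np.inf  # z budget exhausted
        if n_v >= s_v:
            corr[J] = -np.inf  # v budget exhausted
        kappa = int(np.argmax(corr))
        omega.append(kappa)
        if kappa in set(I):
            n_z += 1
        else:
            n_v += 1
        # least-squares fit on the chosen atoms, then update the residual
        w_omega, *_ = np.linalg.lstsq(Theta[:, omega], b, rcond=None)
        r = b - Theta[:, omega] @ w_omega
    w = np.zeros(Theta.shape[1])
    w[omega] = w_omega
    return w
```

By construction, the returned w has at most s_z nonzeros on the I indices and at most s_v on the J indices, mirroring steps 6-13 of the listing.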
Figure 2. Schema of a 3-scale pyramid decomposition in the proposed multi-scale dictionary learning and source separation approaches.

Figure 3. Example of a 4-scale pyramid decomposition of a mixed X-ray image. The original image resolution is 1024 × 1024 pixels. At scale 1, the image is split into non-overlapping patches of 8 × 8 pixels and the DC value of every patch is extracted, thereby generating the high-pass component. The aggregated DC values compose the low-pass component at scale 2, the resolution of which is 128 × 128 pixels. Dividing this component into non-overlapping patches of 4 × 4 pixels and extracting the DC value from every patch yields the high-pass band in scale 2. The procedure is repeated until finally the low-pass band at scale 4 has a resolution of 8 × 8 pixels.
H_l and W_l are the height and width of the image decomposition at scale l. The DC value is extracted from each patch, thereby constructing the high-frequency band of the image at scale l. The aggregated DC values comprise the low-pass component of the image, the resolution of which is (H_l/n_l) × (W_l/n_l) pixels, with n_l denoting the patch size at scale l. The low-pass component is then decomposed further at the subsequent scale (l + 1). The pyramid decomposition is schematically sketched in Fig. 2. During reconstruction, the separated component at the coarser scale is up-sampled and added to the separated component of the finer scales.
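The DC-extraction pyramid just described is easy to state in code. The sketch below (an illustrative re-implementation, not the authors' code) removes the per-patch mean at each scale to form the high-pass band, keeps the aggregated means as the next low-pass band, and reconstructs by up-sampling the coarse band and adding the high-pass bands back; the decomposition is exactly invertible:

```python
import numpy as np

def pyramid_decompose(img, patch_sizes):
    """DC-extraction pyramid in the spirit of Figs. 2-3. At each scale the
    low-pass band is split into non-overlapping p x p patches; the per-patch
    mean (DC value) is removed to form the high-pass band, and the aggregated
    DC values become the next, smaller low-pass band."""
    high_bands, low = [], img.astype(float)
    for p in patch_sizes:
        h, w = low.shape
        blocks = low.reshape(h // p, p, w // p, p)
        dc = blocks.mean(axis=(1, 3))              # (h/p, w/p) map of DC values
        high = low - np.kron(dc, np.ones((p, p)))  # zero-mean patches
        high_bands.append(high)
        low = dc
    return high_bands, low

def pyramid_reconstruct(high_bands, low, patch_sizes):
    """Inverse: up-sample the coarser band over each patch, add the high-pass."""
    for p, high in zip(reversed(patch_sizes), reversed(high_bands)):
        low = np.kron(low, np.ones((p, p))) + high
    return low
```

In the multi-scale separation, each high-pass band would be processed by the patch-based separation and the coarse band carried to the next scale, with the reconstruction loop merging the separated components back across scales.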
Figure 4. Examples of images from single-sided panels of the Ghent Altarpiece and the corresponding crack masks.

Figure 5. Image set cropped from a double-sided panel of the altarpiece, on which we assess the proposed method; (a) and (d) photograph of side 1, (b) and (e) photograph of side 2; (c) and (f) corresponding X-ray image. The resolution is 1024 × 1024 pixels.
Figure 6. Visual evaluation of the different configurations of the proposed method in the separation of the X-ray image in Fig. 5(c); (first row) separated side 1, (second row) separated side 2. The configurations are: (first column) single-scale method (Section VI-A) with the coupled dictionary learning algorithm described in Section IV, (second column) multi-scale method (Section VI-B) with the coupled dictionary learning method from Section IV, (third column) multi-scale method (Section VI-B) with the weighted coupled dictionary learning method from Section V and without including the v component.
Figure 7. Visual evaluation of the proposed multi-scale method in the separation of the X-ray image in Fig. 5(c); (a) separated side 1, (b) separated side 2. The reconstructions include the X-ray specific v component.
N. Deligiannis and B. Cornelis are with the Department of Electronics and Informatics, Vrije Universiteit Brussel, Brussels 1050, Belgium, and with iMinds, 9050 Ghent, Belgium (e-mail: {ndeligia, bcorneli}@etro.vub.ac.be). B. Cornelis was with the Department of Mathematics, Duke University, Durham, NC 27708 USA. J. Mota and M. Rodrigues are with the Department of Electronic and Electrical Engineering, University College London, UK (e-mail: {j.mota, m.rodrigues}@ucl.ac.uk). I. Daubechies is with the Departments of Mathematics and Electronic and Computer Engineering, Duke University, Durham, NC 27708 USA (e-mail: [email protected]).
Table I. Dictionary identifiability of the proposed algorithm based on synthetic data, expressed in terms of the percentage of retrieved atoms for the dictionaries in model (7).

SNR [dB]    ∞        40       35       30       25       20       15
Ψc          96%      95.18%   95.38%   95.65%   95.20%   90.42%   12.53%
Φc          96.78%   95.97%   96.53%   96.50%   95.48%   74.35%   0.27%
Φ           92.95%   91.90%   91.73%   91.27%   91.50%   88.25%   3.07%
Table II. Reconstruction error of the proposed source separation with side information method based on synthetic data.

SNR [dB]    ∞    40    35    30    25    20    15
x
Table III. Objective quality assessment of the X-ray separation performance of different methods on simulated mixtures (PSNR in dB).

Method        Image     Mixture 1        Mixture 2        Mixture 3        Mixture 4        Mixture 5
                        PSNR    SSIM     PSNR    SSIM     PSNR    SSIM     PSNR    SSIM     PSNR    SSIM
MCA fixed     X-ray 1   25.69   0.7941   30.87   0.9003   27.28   0.7915   27.99   0.7972   26.96   0.8473
MCA fixed     X-ray 2   25.50   0.8134   30.73   0.8818   27.15   0.8198   27.86   0.8628   26.78   0.8068
MCA trained   X-ray 1   26.04   0.8245   31.07   0.8381   28.13   0.7703   27.56   0.7783   27.24   0.8258
MCA trained   X-ray 2   25.83   0.8485   31.15   0.8189   27.23   0.6966   27.41   0.8464   27.05   0.7927
Proposed      X-ray 1   26.21   0.8583   31.91   0.9072   28.54   0.8656   28.31   0.8266   27.34   0.8592
Proposed      X-ray 2   26.00   0.8759   31.75   0.8892   28.36   0.8859   28.16   0.8921   27.14   0.8329
² We divided the sum by two to bring the mixture to the same range as the independent components.
ACKNOWLEDGMENT

Miguel Rodrigues acknowledges valuable feedback from Jonathon Chambers.
REFERENCES

[1] N. Deligiannis, J. F. C. Mota, B. Cornelis, M. R. D. Rodrigues, and I. Daubechies, "X-ray image separation via coupled dictionary learning," in IEEE Int. Conf. Image Process. (ICIP) [Available: arXiv:1605.06474], 2016, pp. 1-5.
[2] M. Chen, S. Mao, and Y. Liu, "Big data: A survey," Mobile Networks and Applications, vol. 19, no. 2, pp. 171-209, 2014.
[3] D. Zhang and D. Shen, "Multi-modal multi-task learning for joint prediction of multiple regression and classification variables in Alzheimer's disease," NeuroImage, vol. 59, no. 2, pp. 895-907, 2012.
[4] S. Wang, L. Zhang, Y. Liang, and Q. Pan, "Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch synthesis," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 2216-2223.
[5] C. R. Johnson, E. Hendriks, I. J. Berezhnoy, E. Brevdo, S. M. Hughes, I. Daubechies, J. Li, E. Postma, and J. Z. Wang, "Image processing for artist identification," IEEE Signal Process. Mag., vol. 25, no. 4, pp. 37-48, 2008.
[6] N. van Noord, E. Hendriks, and E. Postma, "Toward discovery of the artist's style: Learning to recognize artists by their artworks," IEEE Signal Process. Mag., vol. 32, no. 4, pp. 46-54, July 2015.
[7] D. Johnson, C. Johnson Jr., A. Klein, W. Sethares, H. Lee, and E. Hendriks, "A thread counting algorithm for art forensics," in IEEE Digital Signal Processing Workshop and IEEE Signal Processing Education Workshop (DSP/SPE), Jan 2009, pp. 679-684.
[8] H. Yang, J. Lu, W. Brown, I. Daubechies, and L. Ying, "Quantitative canvas weave analysis using 2-D synchrosqueezed transforms: Application of time-frequency analysis to art investigation," IEEE Signal Process. Mag., vol. 32, no. 4, pp. 55-63, July 2015.
[9] L. van der Maaten and R. Erdmann, "Automatic thread-level canvas analysis: A machine-learning approach to analyzing the canvas of paintings," IEEE Signal Process. Mag., vol. 32, no. 4, pp. 38-45, July 2015.
[10] B. Cornelis, T. Ružić, E. Gezels, A. Dooms, A. Pižurica, L. Platiša, J. Cornelis, M. Martens, M. De Mey, and I. Daubechies, "Crack detection and inpainting for virtual restoration of paintings: The case of the Ghent Altarpiece," Signal Process., 2012.
[11] B. Cornelis, Y. Yang, J. Vogelstein, A. Dooms, I. Daubechies, and D. Dunson, "Bayesian crack detection in ultra high resolution multimodal images of paintings," arXiv preprint arXiv:1304.5894, 2013.
[12] A. Pizurica, L. Platisa, T. Ruzic, B. Cornelis, A. Dooms, M. Martens, H. Dubois, B. Devolder, M. De Mey, and I. Daubechies, "Digital image processing of the Ghent Altarpiece: Supporting the painting's study and conservation treatment," IEEE Signal Process. Mag., vol. 32, no. 4, pp. 112-122, 2015.
[13] B. Cornelis, A. Dooms, J. Cornelis, F. Leen, and P. Schelkens, "Digital painting analysis, at the cross section of engineering, mathematics and culture," in Eur. Signal Process. Conf. (EUSIPCO), 2011, pp. 1254-1258.
[14] A. Hyvärinen, J. Karhunen, and E. Oja, Independent Component Analysis. John Wiley & Sons, 2004, vol. 46.
[15] A. Hyvärinen, "Gaussian moments for noisy independent component analysis," IEEE Signal Process. Lett., vol. 6, no. 6, pp. 145-147, 1999.
[16] P. Smaragdis, C. Févotte, G. Mysore, N. Mohammadiha, and M. Hoffman, "Static and dynamic source separation using nonnegative factorizations: A unified view," IEEE Signal Process. Mag., vol. 31, no. 3, pp. 66-75, 2014.
[17] J. W. Miskin, "Ensemble learning for independent component analysis," in Advances in Independent Component Analysis, 2000.
[18] D. B. Rowe, "A Bayesian approach to blind source separation," Journal of Interdisciplinary Mathematics, vol. 5, no. 1, pp. 49-76, 2002.
[19] K. Kayabol, E. Kuruoglu, and B. Sankur, "Bayesian separation of images modeled with MRFs using MCMC," IEEE Trans. Image Process., vol. 18, no. 5, pp. 982-994, 2009.
[20] J. Bobin, J.-L. Starck, J. Fadili, and Y. Moudden, "Sparsity and morphological diversity in blind source separation," IEEE Trans. Image Process., vol. 16, no. 11, pp. 2662-2674, 2007.
[21] M. Zibulevsky and B. Pearlmutter, "Blind source separation by sparse decomposition in a signal dictionary," Neural Computation, vol. 13, no. 4, pp. 863-882, 2001.
[22] E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489-509, 2006.
[23] D. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
[24] C. Guillemot and O. Le Meur, "Image inpainting: Overview and recent advances," IEEE Signal Process. Mag., vol. 31, no. 1, pp. 127-144, 2014.
[25] J. Mairal, G. Sapiro, and M. Elad, "Learning multiscale sparse representations for image and video restoration," Multiscale Modeling & Simulation, vol. 7, no. 1, pp. 214-241, 2008.
[26] M. Elad and M. Aharon, "Image denoising via sparse and redundant representations over learned dictionaries," IEEE Trans. Image Process., vol. 15, no. 12, pp. 3736-3745, 2006.
[27] I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Comm. Pure Appl. Math., vol. 57, pp. 1413-1541, 2004.
Figure 8. Visual evaluation of the proposed multi-scale method in the separation of the X-ray image in Fig. 5(c); (first row) separated side 1, (second row) separated side 2. The competing methods are: (first column) MCA with fixed dictionaries [30], (second column) multi-scale MCA with K-SVD, (third column) Proposed without including the v component.
[28] J.-L. Starck, M. Elad, and D. Donoho, "Redundant multiscale transforms and their application for morphological component separation," Advances in Imaging and Electron Physics, vol. 132, pp. 287-348, 2004.
[29] J. Bobin, Y. Moudden, J.-L. Starck, and M. Elad, "Morphological diversity and source separation," IEEE Signal Process. Lett., vol. 13, no. 7, pp. 409-412, 2006.
[30] R. Yin, D. Dunson, B. Cornelis, B. Brown, N. Ocon, and I. Daubechies, "Digital cradle removal in X-ray images of art paintings," in IEEE Int. Conf. Image Process. (ICIP), Oct 2014, pp. 4299-4303.
[31] N. G. Kingsbury, "The dual-tree complex wavelet transform: a new technique for shift invariance and directional filters," in Proc. 8th IEEE DSP Workshop, vol. 8, 1998, p. 86.
[32] K. Guo and D. Labate, "Optimally sparse multidimensional representation using shearlets," SIAM Journal on Mathematical Analysis, vol. 39, no. 1, pp. 298-318, 2007.
[33] K. Engan, S. O. Aase, and J. Hakon-Husoy, "Method of optimal directions for frame design," in IEEE Int. Conf. Acoustics, Speech, Signal Process. (ICASSP), 1999, pp. 2443-2446.
[34] M. Aharon, M. Elad, and A. Bruckstein, "K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation," IEEE Trans. Signal Process., vol. 54, no. 11, pp. 4311-4322, Nov. 2006.
[35] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655-4666, 2007.
[36] V. Abolghasemi, S. Ferdowsi, and S. Sanei, "Blind separation of image sources via adaptive dictionary learning," IEEE Trans. Image Process., vol. 21, no. 6, pp. 2921-2930, 2012.
[37] R. Yan, L. Shao, and Y. Liu, "Nonlocal hierarchical dictionary learning using wavelets for image denoising," IEEE Trans. Image Process., vol. 22, no. 12, pp. 4689-4698, 2013.
[38] B. Ophir, M. Lustig, and M. Elad, "Multi-scale dictionary learning using wavelets," IEEE J. Sel. Topics Signal Process., vol. 5, no. 5, pp. 1014-1024, 2011.
[39] A. Liutkus, J. Pinel, R. Badeau, L. Girin, and G. Richard, "Informed source separation through spectrogram coding and data embedding," Signal Process., vol. 92, no. 8, pp. 1937-1949, 2012.
[40] S. Gorlow and S. Marchand, "Informed separation of spatial images of stereo music recordings using second-order statistics," in IEEE International Workshop on Machine Learning for Signal Processing (MLSP), 2013, pp. 1-6.
[41] B. Chen and G. W. Wornell, "Quantization index modulation: A class of provably good methods for digital watermarking and information embedding," IEEE Trans. Inf. Theory, vol. 47, no. 4, pp. 1423-1443, 1999.
[42] B. A. Olshausen and D. J. Field, "Emergence of simple-cell receptive field properties by learning a sparse code for natural images," Nature, vol. 381, no. 6583, pp. 607-609, 1996.
[43] K. Kreutz-Delgado, J. F. Murray, B. D. Rao, K. Engan, T.-W. Lee, and T. J. Sejnowski, "Dictionary learning algorithms for sparse representation," Neural Computation, vol. 15, no. 2, pp. 349-396, 2003.
[44] W. Chen, I. Wassell, and M. R. Rodrigues, "Dictionary design for distributed compressive sensing," IEEE Signal Process. Lett., vol. 22, no. 1, pp. 95-99, 2015.
[45] H. Zayyani, M. Korki, and F. Marvasti, "Dictionary learning for blind one bit compressed sensing," IEEE Signal Process. Lett., vol. 23, no. 2, pp. 187-191, 2016.
[46] D. A. Spielman, H. Wang, and J. Wright, "Exact recovery of sparsely-used dictionaries," in Conference on Learning Theory (COLT), 2012.
[47] S. Arora, R. Ge, and A. Moitra, "New algorithms for learning incoherent and overcomplete dictionaries," arXiv preprint arXiv:1308.6273, 2013.
[48] P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Trans. Commun., vol. 31, no. 4, pp. 532-540, 1983.
[49] G. Monaci, P. Jost, P. Vandergheynst, B. Mailhe, S. Lesage, and R. Gribonval, "Learning multimodal dictionaries," IEEE Trans. Image Process., vol. 16, no. 9, pp. 2272-2283, 2007.
[50] J. Yang, J. Wright, T. S. Huang, and Y. Ma, "Image super-resolution via sparse representation," IEEE Trans. Image Process., vol. 19, no. 11, pp. 2861-2873, 2010.

Figure 9. Visual evaluation of the proposed multi-scale method in the separation of the X-ray image in Fig. 5(f); (first row) separated side 1, (second row) separated side 2. The competing methods are: (first column) MCA with fixed dictionaries [30], (second column) multi-scale MCA with K-SVD, (third column) Proposed including the v component.
[51] J. Yang, Z. Wang, Z. Lin, S. Cohen, and T. Huang, "Coupled dictionary training for image super-resolution," IEEE Trans. Image Process., vol. 21, no. 8, pp. 3467-3478, 2012.
[52] Y. Jia, M. Salzmann, and T. Darrell, "Factorized latent spaces with structured sparsity," in Advances in Neural Information Processing Systems, 2010, pp. 982-990.
[53] N. Vaswani and W. Lu, "Modified-CS: Modifying compressive sensing for problems with partially known support," IEEE Trans. Signal Process., vol. 58, no. 9, pp. 4595-4607, 2010.
[54] J. F. C. Mota, N. Deligiannis, and M. R. D. Rodrigues, "Compressed sensing with prior information: Optimal strategies, geometry, and bounds," arXiv preprint arXiv:1408.5250, 2014.
[55] J. F. C. Mota, N. Deligiannis, and M. R. D. Rodrigues, "Compressed sensing with side information: Geometrical interpretation and performance bounds," in IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2014, pp. 512-516.
[56] F. Renna, L. Wang, X. Yuan, J. Yang, G. Reeves, R. Calderbank, L. Carin, and M. R. Rodrigues, "Classification and reconstruction of high-dimensional signals from low-dimensional noisy features in the presence of side information," arXiv preprint arXiv:1412.0614, 2014.
[57] J. Scarlett, J. S. Evans, and S. Dey, "Compressed sensing with prior information: Information-theoretic limits and practical decoders," IEEE Trans. Signal Process., vol. 61, no. 2, pp. 427-439, 2013.
[58] E. Zimos, J. F. C. Mota, M. R. D. Rodrigues, and N. Deligiannis, "Bayesian compressed sensing with heterogeneous side information," in IEEE Data Compression Conf. (DCC), 2016.
[59] M. A. Khajehnejad, W. Xu, A. S. Avestimehr, and B. Hassibi, "Weighted ℓ1 minimization for sparse recovery with prior information," in IEEE Int. Symp. Inf. Theory (ISIT), 2009, pp. 483-487.
[60] M. B. Wakin, M. F. Duarte, S. Sarvotham, D. Baron, and R. G. Baraniuk, "Recovery of jointly sparse signals from few random projections," in Proc. Neural Inform. Process. Syst. (NIPS), 2005.
[61] E. van den Berg and M. P. Friedlander, "Probing the Pareto frontier for basis pursuit solutions," SIAM Journal on Scientific Computing, vol. 31, no. 2, pp. 890-912, 2008.
[62] I. Arganda-Carreras, C. O. S. Sorzano, R. Marabini, J. M. Carazo, C. O. de Solorzano, and J. Kybic, "Consistent and elastic registration of histological sections using vector-spline regularization," in Computer Vision Approaches to Medical Image Analysis, May 2006, pp. 85-95.
[63] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612, 2004.
A new mapped WENO scheme using order-preserving mapping

Ruo Li (CAPT, LMAM and School of Mathematical Sciences, Peking University, 100871 Beijing, China)
Wei Zhong (School of Mathematical Sciences, Peking University, 100871 Beijing, China; Northwest Institute of Nuclear Technology, Xi'an 710024, China)
Existing mapped WENO schemes can hardly prevent spurious oscillations while preserving high resolutions at long output times. We reveal in this paper the essential reason for such phenomena. It is actually caused by the fact that the mapping function in these schemes cannot preserve the order of the nonlinear weights of the stencils. The nonlinear weights may be increased for non-smooth stencils and be decreased for smooth stencils. It is then indicated to require the set of mapping functions to be order-preserving in mapped WENO schemes. Therefore, we propose a new mapped WENO scheme with a set of mapping functions that is order-preserving, which exhibits a remarkable advantage over the mapped WENO schemes in the references. For long output time simulations, the new scheme has the capacity to attain high resolutions and avoid spurious oscillations near discontinuities at the same time.

Henrick et al. [15] pointed out that the classic WENO-JS scheme was less than fifth-order for many cases, such as at or near critical points of order n_cp = 1 in the smooth regions. Here, we refer to n_cp as the order of the critical point; e.g., n_cp = 1 corresponds to f′ = 0, f″ ≠ 0 and n_cp = 2 corresponds to f′ = 0, f″ = 0, f‴ ≠ 0, etc. The necessary and sufficient conditions on the nonlinear weights for optimality of the convergence rate of the fifth-order WENO schemes were derived by Henrick et al. in [15]. These conditions were reduced to a simple sufficient condition [2] which could be easily extended to the (2r − 1)th-order WENO schemes [8]. Then, by designing a mapping function that satisfies the sufficient condition to achieve the optimal order of accuracy, the original mapped WENO scheme, named WENO-M, was devised by Henrick et al. [15]. Recently, Feng et al. [7] noted that, when the WENO-M scheme was used for solving problems with discontinuities for long output times, its mapping function may amplify the effect from the non-smooth stencils, leading to a potential loss of accuracy near discontinuities.

To address this issue, they proposed a piecewise polynomial mapping function with two additional requirements, namely g′(0) = 0 and g′(1) = 0 (g(x) denotes the mapping function), added to the original criteria in [15]. However, the resultant WENO-PMk scheme [7] may generate non-physical oscillations near the discontinuities, as shown in Fig. 8 of [8] and Figs. 3-8 of [34]. Later, Feng et al. [8] devised an improved mapping method, referred to as WENO-IM(k, A), where k is a positive even integer and A a positive real number. The broader class of improved mapping functions of the family of WENO-IM(k, A) schemes includes the mapping function of the WENO-M scheme as a special case by taking k = 2 and A = 1. Feng et al. indicated that by taking k = 2 and A = 0.1 in the fifth-order WENO-IM(k, A) scheme, far better numerical solutions with less dissipation and higher resolution could be obtained than with the fifth-order WENO-M scheme. However, the possible over-amplification of the contributions from non-smooth stencils exists for the WENO-IM(k, A) scheme, as the first derivative of its mapping function satisfies (g_s^IM)′(0; k, A) = 1 + 1/(A d_s^{k−1})
DOI: 10.4208/cicp.oa-2021-0150
arXiv: 2104.04467
https://arxiv.org/pdf/2104.04467v1.pdf
A new mapped WENO scheme using order-preserving mapping
Ruo Li
CAPT, LMAM and School of Mathematical Sciences
Peking University
100871BeijingChina
Wei Zhong
School of Mathematical Sciences
Peking University
100871BeijingChina
Northwest Institute of Nuclear Technology
Xi'an 710024China
Keywords: mapped WENO, order-preserving mapping, hyperbolic problems
Introduction
Many numerical methods have been studied for solving hyperbolic problems, whose solutions may develop discontinuities as time evolves even if the initial condition is smooth, especially in nonlinear cases. As the discontinuities often cause spurious oscillations in numerical calculations, it is very difficult to design high-order numerical schemes. Thus, the essentially non-oscillatory (ENO) schemes [12,14,13,11] and weighted ENO (WENO) schemes [30,31,22,17,29] have been developed quite successfully in recent decades, and they are very popular methods for solving hyperbolic conservation laws because they achieve the ENO property. The goal of this paper is to devise a new version of the fifth-order finite volume WENO scheme for solving the following hyperbolic conservation laws
$$\frac{\partial u}{\partial t} + \sum_{\alpha=1}^{d} \frac{\partial f_{\alpha}(u)}{\partial x_{\alpha}} = 0, \quad x_{\alpha} \in \mathbb{R},\ t > 0, \qquad u(x, 0) = u_{0}(x), \tag{1}$$
with proper boundary conditions. Here, $u = (u_1, \cdots, u_m) \in \mathbb{R}^m$ are the conserved variables and $f_\alpha: \mathbb{R}^m \to \mathbb{R}^m$, $\alpha = 1, 2, \cdots, d$ are the Cartesian components of the flux. The first WENO scheme was developed by Liu et al. [22] in the finite volume version. It converts an rth-order ENO scheme [12,14,13] into an (r + 1)th-order WENO scheme through a convex combination of all candidate substencils, instead of using only the optimal smooth candidate stencil as the original ENO scheme does. By introducing a different definition of the smoothness indicators used to measure the smoothness of the numerical solutions on substencils, the classic (2r − 1)th-order WENO-JS scheme was proposed by Jiang and Shu [17]. The WENO-JS scheme has been used successfully in a wide range of applications. The WENO methodology within the general framework of smoothness indicators and nonlinear weights proposed in the WENO-JS scheme [17] is still being developed to improve the convergence rate in smooth regions and to reduce the dissipation near discontinuities [15,8,7,2,3,16,5,1].
Moreover, the possible over-amplification of the contributions from non-smooth stencils persists for the WENO-IM(k, A) scheme, since the first derivative of its mapping function satisfies $g'^{IM}_s(0; k, A) = 1 + \frac{1}{A d_s^{k-1}}$, and this excessive amplification of the weights of non-smooth stencils causes spurious oscillations in the solution, which may even render the algorithm unstable [33], especially for higher-order cases. The numerical experiments in [34] showed that the seventh- and ninth-order WENO-IM(2, 0.1) schemes generated evident spurious oscillations near discontinuities when the output time is large. In addition, our calculations, as shown in Figs. 14-20 of this paper, indicate that even for the fifth-order WENO-IM(2, 0.1) scheme spurious oscillations are also produced when the grid number increases or a different initial condition is used.
Many other improved mapped WENO schemes, such as the WENO-PPMn [21], WENO-RM(mn0) [34], WENO-MAIMi [19] and WENO-ACM [20] schemes, have been developed successfully to improve the performance of the classic WENO-JS scheme in various ways, such as achieving optimal convergence order near critical points in smooth regions, lowering numerical dissipation and achieving higher resolution near discontinuities; we refer to the references for more details. However, as mentioned in previously published literature [8,34], most of the existing improved mapped WENO schemes cannot prevent the generation of spurious oscillations near discontinuities, especially in long output time simulations.
Taking a long output time simulation of the linear advection problem with discontinuities as an example, we make a further study of the nonlinear weights of the existing mapped WENO schemes developed in [7,8,19] and the MIP-WENO-ACMk scheme, which is a generalized form of the WENO-ACM scheme [20] (see details in subsection 3.1). We find that, in many points, the order of the nonlinear weights for the substencils of the same global stencil has been changed in the mapping process of all these considered mapped WENO schemes. The order change of the nonlinear weights is caused by weights increasing of non-smooth substencils and weights decreasing of smooth substencils. Through theoretical analysis and extensive numerical tests, we reveal that this is the essential cause of the potential loss of accuracy of the WENO-M scheme and the spurious oscillation generation of the existing improved mapped WENO schemes.
Indicated by the observation above, we give the definition of the order-preserving (see Definition 2 below) mapping and suggest it as an additional criterion in the design of the mapping function. Then we propose a new mapped WENO scheme, referred to as MOP-WENO-ACMk below, with its mapping function satisfying the additional criterion. This new version of the mapped WENO scheme achieves the optimal convergence order of accuracy even at critical points. It also has low numerical dissipation and high resolution but does not generate spurious oscillation near discontinuities even if the output time is large.
At first, a series of accuracy tests show the capacity of the proposed scheme to achieve the optimal convergence order in smooth regions with first-order critical points and its advantage in long output time simulations of the problems with very high-order critical points. Some linear advection examples with long output times are then presented to demonstrate that the proposed scheme can obtain high resolution and does not generate spurious oscillation near discontinuities. At last, some benchmark problems modeled via the two-dimensional Euler equations are computed by various considered WENO schemes to compare with the proposed scheme. It is clear that the proposed scheme exhibits significant advantages in preventing spurious oscillations.
The organization of this paper is as follows. Preliminaries to understand the finite volume method and the procedures of the WENO-JS [17], WENO-M [15], WENO-PM6 [7] and WENO-IM(k, A) [8] schemes are reviewed in Section 2. In Section 3, we present a detailed analysis on how the nonlinear weights are mapped in some existing mapped WENO schemes and the consequences of these mappings on the numerical solutions. In Section 4, we propose a set of mapping functions that is order-preserving, as well as its properties, and apply it to construct the new mapped WENO scheme. Numerical experiments are presented in Section 5 to illustrate the advantages of the proposed WENO scheme. Finally, some concluding remarks are made in Section 6 to close this paper.
Brief review of the fifth-order WENO schemes
Finite volume methodology
We consider the finite volume method for the following one-dimensional scalar hyperbolic conservation laws
$$\frac{\partial u}{\partial t} + \frac{\partial f(u)}{\partial x} = 0, \quad x_l < x < x_r,\ t > 0, \qquad u(x, 0) = u_0(x). \tag{2}$$
For brevity in the description, we assume that the computational domain is discretized into uniform cells $I_j = [x_{j-1/2}, x_{j+1/2}]$, $j = 1, \cdots, N$ with the uniform cell size $\Delta x = \frac{x_r - x_l}{N}$; the associated cell centers and cell boundaries are denoted by $x_j = x_l + (j - 1/2)\Delta x$ and $x_{j\pm1/2} = x_j \pm \Delta x/2$, respectively. Let $\bar{u}(x_j, t) = \frac{1}{\Delta x}\int_{x_{j-1/2}}^{x_{j+1/2}} u(\xi, t)\,d\xi$ be the cell average over $I_j$; then, by integrating Eq. (2) over the control volume $I_j$ and employing some simple mathematical manipulations, we approximate Eq. (2) by the following finite volume conservative formulation
$$\frac{d\bar{u}_j(t)}{dt} \approx -\frac{1}{\Delta x}\Big(\hat{f}\big(u^-_{j+1/2}, u^+_{j+1/2}\big) - \hat{f}\big(u^-_{j-1/2}, u^+_{j-1/2}\big)\Big), \tag{3}$$
where $\bar{u}_j(t)$ is the numerical approximation to the cell average $\bar{u}(x_j, t)$, and the numerical flux $\hat{f}(u^-_{j\pm1/2}, u^+_{j\pm1/2})$, in which $u^\pm_{j\pm1/2}$ refer to the limits of $u$ at the cell boundaries $x_{j\pm1/2}$, is a replacement of the physical flux function $f(u)$. The values of $u^\pm_{j\pm1/2}$ can be obtained by a reconstruction technique such as the WENO reconstructions described later. In this paper, we choose the global Lax-Friedrichs flux $\hat{f}(a, b) = \frac{1}{2}\left[f(a) + f(b) - \alpha(b - a)\right]$, where $\alpha = \max_u |f'(u)|$ is a constant and the maximum is taken over the whole range of $u$. For hyperbolic systems of conservation laws, a local characteristic decomposition is commonly used and more details can be found in [17]. In two-dimensional Cartesian meshes, two classes of finite volume WENO schemes are studied in detail in [35], and we take the one denoted as class A in this paper.
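As a concrete illustration of the flux formula above, here is a minimal Python sketch (our own, not from the paper; the function name is ours) of the global Lax-Friedrichs flux:

```python
def lax_friedrichs_flux(f, a, b, alpha):
    """Global Lax-Friedrichs numerical flux f_hat(a, b): consistent,
    i.e. f_hat(u, u) = f(u), and dissipative for alpha >= max|f'(u)|."""
    return 0.5 * (f(a) + f(b) - alpha * (b - a))
```

For Burgers' flux $f(u) = u^2/2$ with $\alpha = 1$, for instance, `lax_friedrichs_flux(f, 0.0, 1.0, 1.0)` evaluates to −0.25.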
The ordinary differential equation (ODE) system Eq. (3) can be solved using a suitable time discretization; the following explicit, third-order, strong stability preserving (SSP) Runge-Kutta method [30,9,10] is employed in our calculations:
$$\begin{aligned} u^{(1)} &= u^{n} + \Delta t L(u^{n}), \\ u^{(2)} &= \frac{3}{4}u^{n} + \frac{1}{4}u^{(1)} + \frac{1}{4}\Delta t L(u^{(1)}), \\ u^{n+1} &= \frac{1}{3}u^{n} + \frac{2}{3}u^{(2)} + \frac{2}{3}\Delta t L(u^{(2)}), \end{aligned}$$
where
$$L(u_{j}) := -\frac{1}{\Delta x}\Big(\hat{f}\big(u^{-}_{j+1/2}, u^{+}_{j+1/2}\big) - \hat{f}\big(u^{-}_{j-1/2}, u^{+}_{j-1/2}\big)\Big),$$
$u^{(1)}, u^{(2)}$ are the intermediate stages, $u^{n}$ is the value of $u$ at time level $t^{n} = n\Delta t$, and $\Delta t$ is the time step satisfying a proper CFL condition. As mentioned earlier, the WENO reconstructions will be applied to obtain $L(u_{j})$.
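The three-stage update above can be written compactly; the following Python sketch (names ours) advances a state by one SSP-RK3 step, given a right-hand-side operator L:

```python
def ssp_rk3_step(u, dt, L):
    """One explicit third-order SSP Runge-Kutta step (Shu-Osher form)."""
    u1 = u + dt * L(u)                               # first stage
    u2 = 0.75 * u + 0.25 * u1 + 0.25 * dt * L(u1)    # second stage
    return u / 3.0 + (2.0 / 3.0) * u2 + (2.0 / 3.0) * dt * L(u2)
```

Applied to the scalar test equation $u' = -u$ with $\Delta t = 0.1$, one step reproduces $e^{-0.1}$ up to an $O(\Delta t^4)$ local error.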
WENO-JS
We review the process of the fifth-order WENO-JS reconstruction [17]. As the right-biased reconstruction $u^+_{j+1/2}$ can easily be obtained from the left-biased one $u^-_{j+1/2}$ by mirror symmetry with respect to the location $x_{j+1/2}$, we describe only the reconstruction procedure of $u^-_{j+1/2}$. For simplicity of notation, we do not use the subscript "−" in the following content.
For constructing the values of u j+1/2 from known cell average values, the fifth-order WENO-JS scheme uses a 5-point global stencil S 5 = {I j−2 , I j−1 , I j , I j+1 , I j+2 }. Normally, S 5 is subdivided into three 3-point substencils S s = {I j+s−2 , I j+s−1 , I j+s }, s = 0, 1, 2. Explicitly, the third-order approximations of u(x j+1/2 , t) associated with these substencils are given by
$$\begin{aligned} u^{0}_{j+1/2} &= \frac{1}{6}\big(2\bar{u}_{j-2} - 7\bar{u}_{j-1} + 11\bar{u}_{j}\big), \\ u^{1}_{j+1/2} &= \frac{1}{6}\big(-\bar{u}_{j-1} + 5\bar{u}_{j} + 2\bar{u}_{j+1}\big), \\ u^{2}_{j+1/2} &= \frac{1}{6}\big(2\bar{u}_{j} + 5\bar{u}_{j+1} - \bar{u}_{j+2}\big). \end{aligned} \tag{4}$$
The above three third-order approximations are combined in a weighted average to define the fifth-order WENO approximation of u(x j+1/2 , t),
$$u_{j+1/2} = \sum_{s=0}^{2} \omega_{s}\, u^{s}_{j+1/2}, \tag{5}$$
where ω s is the nonlinear weight of the substencil S s . In the classic WENO-JS scheme, ω s is calculated as
$$\omega^{JS}_{s} = \frac{\alpha^{JS}_{s}}{\sum_{l=0}^{2}\alpha^{JS}_{l}}, \quad \alpha^{JS}_{s} = \frac{d_{s}}{(\epsilon + \beta_{s})^{2}}, \quad s = 0, 1, 2, \tag{6}$$
where $d_0 = 0.1$, $d_1 = 0.6$, $d_2 = 0.3$ are the ideal weights of $\omega_s$ satisfying $\sum_{s=0}^{2} d_s u^s_{j+1/2} = u(x_{j+1/2}, t) + O(\Delta x^5)$
in smooth regions, $\epsilon$ is a small positive number introduced to prevent the denominator from becoming zero, and the parameters $\beta_s$ are the smoothness indicators for the third-order approximations $u^s_{j+1/2}$; their explicit forms defined by Jiang and Shu [17] are given as
$$\begin{aligned} \beta_{0} &= \frac{13}{12}\big(\bar{u}_{j-2} - 2\bar{u}_{j-1} + \bar{u}_{j}\big)^{2} + \frac{1}{4}\big(\bar{u}_{j-2} - 4\bar{u}_{j-1} + 3\bar{u}_{j}\big)^{2}, \\ \beta_{1} &= \frac{13}{12}\big(\bar{u}_{j-1} - 2\bar{u}_{j} + \bar{u}_{j+1}\big)^{2} + \frac{1}{4}\big(\bar{u}_{j-1} - \bar{u}_{j+1}\big)^{2}, \\ \beta_{2} &= \frac{13}{12}\big(\bar{u}_{j} - 2\bar{u}_{j+1} + \bar{u}_{j+2}\big)^{2} + \frac{1}{4}\big(3\bar{u}_{j} - 4\bar{u}_{j+1} + \bar{u}_{j+2}\big)^{2}. \end{aligned}$$
The fifth-order WENO-JS scheme is able to achieve optimal order of accuracy in smooth regions without critical points. However, it loses accuracy and its order of accuracy decreases to third-order or even less at critical points. More details can be found in [15].
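Eqs. (4)-(6) and the smoothness indicators can be sketched in a few lines of Python (our own illustration; the function names and the choice $\epsilon = 10^{-6}$ are ours):

```python
import numpy as np

def weno_js_weights(ubar, eps=1e-6):
    """Nonlinear weights of Eq. (6) from the five cell averages
    ubar = [u_{j-2}, u_{j-1}, u_j, u_{j+1}, u_{j+2}]."""
    um2, um1, u0, up1, up2 = ubar
    beta = np.array([
        13/12 * (um2 - 2*um1 + u0)**2 + 0.25 * (um2 - 4*um1 + 3*u0)**2,
        13/12 * (um1 - 2*u0 + up1)**2 + 0.25 * (um1 - up1)**2,
        13/12 * (u0 - 2*up1 + up2)**2 + 0.25 * (3*u0 - 4*up1 + up2)**2])
    alpha = np.array([0.1, 0.6, 0.3]) / (eps + beta)**2
    return alpha / alpha.sum()

def weno_js_reconstruct(ubar, eps=1e-6):
    """Fifth-order WENO-JS approximation of u at x_{j+1/2}, Eqs. (4)-(5)."""
    um2, um1, u0, up1, up2 = ubar
    q = np.array([(2*um2 - 7*um1 + 11*u0) / 6,
                  (-um1 + 5*u0 + 2*up1) / 6,
                  (2*u0 + 5*up1 - up2) / 6])
    return float(weno_js_weights(ubar, eps) @ q)
```

For linear data such as `ubar = [1, 2, 3, 4, 5]`, all three $\beta_s$ coincide, the weights collapse to the ideal weights (0.1, 0.6, 0.3), and the reconstruction returns the exact interface value 3.5.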
WENO-M
It has been indicated [15,2,8,26,6] that a sufficient condition ensuring that the fifth-order WENO schemes retain the optimal order of convergence is simply given by
$$\omega^{\pm}_{s} - d_{s} = O(\Delta x^{3}), \quad s = 0, 1, 2. \tag{7}$$
The condition Eq. (7) may not hold in the case of smooth extrema or at critical points when the fifth-order WENO-JS scheme is used. Henrick et al. [15] proposed a fix for this deficiency in their WENO-M scheme by introducing a mapping function that makes $\omega_s$ approximate the ideal weights $d_s$ with increased accuracy. The mapping function of the nonlinear weights $\omega \in [0, 1]$ is given by
$$g^{M}_{s}(\omega) = \frac{\omega\big(d_{s} + d_{s}^{2} - 3d_{s}\omega + \omega^{2}\big)}{d_{s}^{2} + \omega(1 - 2d_{s})}, \quad s = 0, 1, 2. \tag{8}$$
One can verify that g M s (ω) meets the requirement in Eq. (7), and clearly, this mapping function is a non-decreasing monotone function on [0, 1] with finite slopes which satisfies the following properties. Lemma 1. The mapping function g M s (ω) defined by Eq.(8) satisfies:
C1. $0 \le g^{M}_{s}(\omega) \le 1$, $g^{M}_{s}(0) = 0$, $g^{M}_{s}(1) = 1$;
C2. $g^{M}_{s}(d_{s}) = d_{s}$;
C3. $g'^{M}_{s}(d_{s}) = g''^{M}_{s}(d_{s}) = 0$.
With the mapping function defined by Eq. (8), the nonlinear weights of the WENO-M scheme are defined as
$$\omega^{M}_{s} = \frac{\alpha^{M}_{s}}{\sum_{l=0}^{2}\alpha^{M}_{l}}, \quad \alpha^{M}_{s} = g^{M}_{s}\big(\omega^{JS}_{s}\big), \quad s = 0, 1, 2,$$
where ω JS s are calculated by Eq.(6). In [15], it has been analyzed and proved in detail that the WENO-M scheme can retain the optimal order of accuracy in smooth regions even near the first-order critical points.
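Properties C1-C3 of Lemma 1 are easy to verify numerically; the sketch below (our own, not from the paper) implements Eq. (8) and checks the fixed point $g^{M}_{s}(d_s) = d_s$:

```python
def g_M(w, d):
    """WENO-M mapping function of Eq. (8), for a single weight w in [0, 1]
    with ideal weight d."""
    return w * (d + d * d - 3.0 * d * w + w * w) / (d * d + w * (1.0 - 2.0 * d))
```

One can check that $g_M(0, d) = 0$, $g_M(1, d) = 1$ and $g_M(d, d) = d$ for each ideal weight $d \in \{0.1, 0.6, 0.3\}$, and that a centered finite difference at $\omega = d_s$ is numerically zero, consistent with C3.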
WENO-PMk
Recently, Feng et al. [7] noticed that the mapping function $g^{M}_{s}(\omega)$ in Eq. (8) amplifies the effect from the non-smooth stencils by a factor of $(1 + 1/d_s)$, as its first derivative satisfies $g'^{M}_{s}(0) = 1 + 1/d_s$. They argued that this may cause a potential loss of accuracy near discontinuities or in regions with sharp gradients. To address this issue, Feng et al. [7] added two requirements, namely $g'_{s}(0) = 0$ and $g'_{s}(1) = 0$, to the original criteria shown in Lemma 1. To meet these criteria they proposed a new mapping given by the following piecewise polynomial function
$$g^{PM}_{s}(\omega) = c_{1}(\omega - d_{s})^{k+1}(\omega + c_{2}) + d_{s}, \quad k \ge 2,\ s = 0, 1, 2, \tag{9}$$
where c 1 , c 2 are constants with specified parameters k and d s , taking the following forms
$$c_{1} = \begin{cases} \dfrac{(-1)^{k}(k+1)}{d_{s}^{k+1}}, & 0 \le \omega \le d_{s}, \\[4pt] -\dfrac{k+1}{(1 - d_{s})^{k+1}}, & d_{s} < \omega \le 1, \end{cases} \qquad c_{2} = \begin{cases} \dfrac{d_{s}}{k+1}, & 0 \le \omega \le d_{s}, \\[4pt] \dfrac{d_{s} - (k+2)}{k+1}, & d_{s} < \omega \le 1. \end{cases}$$
Lemma 2. The mapping function g PM s (ω) defined by Eq.(9) satisfies:
C1. $g^{PM}_{s}(\omega) \ge 0$, $\omega \in [0, 1]$;
C2. $g^{PM}_{s}(0) = 0$, $g^{PM}_{s}(1) = 1$, $g^{PM}_{s}(d_{s}) = d_{s}$;
C3. $g'^{PM}_{s}(d_{s}) = \cdots = g^{PM(k)}_{s}(d_{s}) = 0$;
C4. $g'^{PM}_{s}(0) = g'^{PM}_{s}(1) = 0$.
Similarly, with the mapping function defined by Eq. (9) where the parameter k is taken to be 6 as recommended in [7], the WENO-PM6 scheme is proposed by computing the nonlinear weights as
$$\omega^{PM6}_{s} = \frac{\alpha^{PM6}_{s}}{\sum_{l=0}^{2}\alpha^{PM6}_{l}}, \quad \alpha^{PM6}_{s} = g^{PM6}_{s}\big(\omega^{JS}_{s}\big), \quad s = 0, 1, 2.$$
It has been shown by numerical experiments [7,34] that the two additional requirements are effective and the resolution near discontinuities of the WENO-PM6 scheme is significantly higher than those of the WENO-JS and WENO-M schemes, especially for long output times. We refer to [7] for more details.
WENO-IM(k, A)
Feng et al. [8] proposed the WENO-IM(k, A) scheme by rewriting the mapping function of the WENO-M scheme shown in Eq. (8). The broader class of improved mapping functions $g^{IM}_{s}(\omega; k, A)$ is defined by
$$g^{IM}_{s}(\omega; k, A) = d_{s} + \frac{(\omega - d_{s})^{k+1} A}{(\omega - d_{s})^{k} A + \omega(1 - \omega)}, \quad A > 0,\ k = 2n,\ n \in \mathbb{N}^{+},\ s = 0, 1, 2. \tag{10}$$
Then, the associated nonlinear weights are given by
$$\omega^{IM}_{s} = \frac{\alpha^{IM}_{s}}{\sum_{l=0}^{2}\alpha^{IM}_{l}}, \quad \alpha^{IM}_{s} = g^{IM}_{s}\big(\omega^{JS}_{s}; k, A\big), \quad s = 0, 1, 2.$$
It is trivial to show that g M s (ω) belongs to the g IM s (ω; k, A) family of functions as g M s (ω) = g IM s (ω; 2, 1). Actually, the selection of parameters k and A has been discussed carefully in [8], and k = 2, A = 0.1 was recommended.
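The claim that $g^{M}_{s}$ is the $k = 2$, $A = 1$ member of this family is easy to check numerically; the following sketch (our own) implements Eq. (10) alongside Eq. (8) and compares them:

```python
def g_M(w, d):
    """WENO-M mapping, Eq. (8)."""
    return w * (d + d * d - 3.0 * d * w + w * w) / (d * d + w * (1.0 - 2.0 * d))

def g_IM(w, d, k=2, A=0.1):
    """Improved mapping of WENO-IM(k, A), Eq. (10); k must be even."""
    t = w - d
    return d + t**(k + 1) * A / (t**k * A + w * (1.0 - w))
```

For $k = 2$ and $A = 1$ the two functions agree to machine precision on [0, 1], confirming $g^{M}_{s}(\omega) = g^{IM}_{s}(\omega; 2, 1)$.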
Lemma 3. The mapping function $g^{IM}_{s}(\omega; k, A)$ defined by Eq. (10) satisfies:
C1. $g^{IM}_{s}(\omega; k, A) \ge 0$, $\omega \in [0, 1]$;
C2. $g^{IM}_{s}(0; k, A) = 0$, $g^{IM}_{s}(1; k, A) = 1$;
C3. $g^{IM}_{s}(d_{s}; k, A) = d_{s}$;
C4. $g'^{IM}_{s}(d_{s}; k, A) = \cdots = g^{IM(k)}_{s}(d_{s}; k, A) = 0$, $g^{IM(k+1)}_{s}(d_{s}; k, A) \ne 0$.
We refer to [8] for the detailed proof of Lemma 3.
Analysis of the nonlinear weights of the existing mapped WENO schemes
Monotone increasing piecewise mapping function and the generalized WENO-ACM schemes
In [20], the present authors have proposed the fifth-order WENO-ACM scheme. It has been demonstrated that taking narrower transition intervals (standing for the intervals over which the mapping results are in a transition from 0 to d s or from d s to 1) of the mapping function does not bring any adverse effects on the resolutions and convergence orders, and the associated scheme still performs very well even if the transition intervals are infinitely close to 0. Therefore, we can set the transition intervals to be 0 leading to a simpler form of the mapping function as follows
$$g^{MIP-ACM}_{s}(\omega) = \begin{cases} 0, & \omega \in \Omega_{1} = [0, CFS_{s}), \\ d_{s}, & \omega \in \Omega_{2} = [CFS_{s}, \overline{CFS}_{s}], \\ 1, & \omega \in \Omega_{3} = (\overline{CFS}_{s}, 1], \end{cases} \tag{11}$$
where $CFS_s$ is the same as that in [19,20], satisfying $CFS_s \in (0, d_s)$, and $\overline{CFS}_s = 1 - \frac{1 - d_s}{d_s}\,CFS_s$ with $\overline{CFS}_s \in (d_s, 1)$. Clearly, $g^{MIP-ACM}_s(\omega)$ is a discontinuous function with two jump discontinuities, at $\omega = CFS_s$ and $\omega = \overline{CFS}_s$, on the interval [0, 1], while differentiable mapping functions on the interval [0, 1] were required in previously published mapped WENO schemes [15,7,8,34,21,32,33,19,20]. However, after extensive numerical tests, we find that a continuous mapping function is not essential in the design of a mapped WENO scheme. Actually, in the evaluation at $\omega^{JS}_s$ of the Taylor series approximations of the mapping function about the optimal weights $d_s$, which plays the core role in the convergence analysis of the mapped WENO schemes (originally proposed by Henrick in the statement on page 556 of [15]), one only needs the mapping function to be differentiable in a neighborhood of $\omega = d_s$, not over the whole range $\omega \in [0, 1]$. Therefore, we innovatively propose the definition of the monotone increasing piecewise mapping function.

Definition 1. (monotone increasing piecewise mapping function) Let $\Omega = [0, 1]$, and assume that $\Omega$ is divided into a sequence of nonoverlapping intervals $\Omega_i$, $i = 1, 2, \cdots, M$, that is, $\Omega = \Omega_1 \cup \Omega_2 \cup \cdots \cup \Omega_M$ and $\Omega_i \cap \Omega_j = \emptyset$ for $\forall i, j = 1, 2, \cdots, M$ with $i \ne j$. Let $\mathring{\Omega}_i = \{\omega \mid \omega \in \Omega_i \text{ and } \omega \notin \partial\Omega_i\}$, and suppose that $g^{MIP-X}_s(\omega)$ is a mapping function on the interval [0, 1]; then $g^{MIP-X}_s(\omega)$ is called a monotone increasing piecewise mapping function if it satisfies the following conditions: (C1) for $\forall\omega \in \mathring{\Omega}_i$, $i = 1, \cdots, M$, $g^{MIP-X}_s(\omega)$ is differentiable and $g'^{MIP-X}_s(\omega) \ge 0$; (C2) for $\forall\omega_i, \omega_j \in \Omega$, if $\omega_i \ge \omega_j$, then $g^{MIP-X}_s(\omega_i) \ge g^{MIP-X}_s(\omega_j)$.
It is trivial to verify that g MIP−ACM s (ω) defined by Eq.(11) is a monotone increasing piecewise mapping function and it satisfies the following properties.
Lemma 4. The mapping function g MIP−ACM s (ω) defined by Eq.(11) satisfies the following properties:
C1. for $\forall\omega \in \mathring{\Omega}_i$, $i = 1, 2, 3$, $g'^{MIP-ACM}_{s}(\omega) \ge 0$;
C2. for $\forall\omega \in \Omega$, $0 \le g^{MIP-ACM}_{s}(\omega) \le 1$;
C3. $d_{s} \in \Omega_{2}$, $g^{MIP-ACM}_{s}(d_{s}) = d_{s}$, $g'^{MIP-ACM}_{s}(d_{s}) = g''^{MIP-ACM}_{s}(d_{s}) = \cdots = 0$;
C4. $g^{MIP-ACM}_{s}(0) = 0$, $g^{MIP-ACM}_{s}(1) = 1$, $g'^{MIP-ACM}_{s}(0^{+}) = g'^{MIP-ACM}_{s}(1^{-}) = 0$.
Naturally, we can derive a generalized version of the mapping function g MIP−ACM s (ω) as follows
$$g^{MIP-ACMk}_{s}(\omega) = \begin{cases} k_{s}\,\omega, & \omega \in \Omega_{1}, \\ d_{s}, & \omega \in \Omega_{2}, \\ 1 - k_{s}(1 - \omega), & \omega \in \Omega_{3}, \end{cases} \tag{12}$$
where $\Omega_1, \Omega_2, \Omega_3$ are the same as in Eq. (11) and $k_s \in \left[0, \frac{d_s}{CFS_s}\right]$. Obviously, if $k_s$ is taken to be 0, $g^{MIP-ACMk}_s(\omega)$ reduces exactly to $g^{MIP-ACM}_s(\omega)$. Thus, we need only discuss $g^{MIP-ACMk}_s(\omega)$. Similarly, it is easy to verify that $g^{MIP-ACMk}_s(\omega)$ is a monotone increasing piecewise mapping function and that it satisfies the following properties.
Lemma 5. The mapping function g MIP−ACMk s (ω) defined by Eq.(12) satisfies the following properties:
C1. for $\forall\omega \in \mathring{\Omega}_i$, $i = 1, 2, 3$, $g'^{MIP-ACMk}_{s}(\omega) \ge 0$;
C2. for $\forall\omega \in \Omega$, $0 \le g^{MIP-ACMk}_{s}(\omega) \le 1$;
C3. $d_{s} \in \Omega_{2}$, $g^{MIP-ACMk}_{s}(d_{s}) = d_{s}$, $g'^{MIP-ACMk}_{s}(d_{s}) = g''^{MIP-ACMk}_{s}(d_{s}) = \cdots = 0$;
C4. $g^{MIP-ACMk}_{s}(0) = 0$, $g^{MIP-ACMk}_{s}(1) = 1$, $g'^{MIP-ACMk}_{s}(0^{+}) = g'^{MIP-ACMk}_{s}(1^{-}) = k_{s}$.
As the proofs of Lemma 4 and Lemma 5 are very easy, we do not state them here and we can observe these properties intuitively from the g MIP−ACMk s (ω) ∼ ω curves as shown in Fig. 3 below. Now, we give the monotone increasing piecewise approximate-constant-mapped WENO scheme, denoted as MIP-WENO-ACMk , with the mapped weights as follows
$$\omega^{MIP-ACMk}_{s} = \frac{\alpha^{MIP-ACMk}_{s}}{\sum_{l=0}^{2}\alpha^{MIP-ACMk}_{l}}, \quad \alpha^{MIP-ACMk}_{s} = g^{MIP-ACMk}_{s}\big(\omega^{JS}_{s}\big). \tag{13}$$
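The piecewise mapping of Eq. (12) is straightforward to implement; the sketch below (our own, with $CFS_s$ parameterized as a fraction of $d_s$) also lets one check the monotonicity required by Definition 1:

```python
def g_mip_acmk(w, d, ks=0.0, cfs_ratio=0.1):
    """MIP-WENO-ACMk mapping, Eq. (12). CFS_s = cfs_ratio * d lies in (0, d),
    and bar-CFS_s = 1 - (1 - d) / d * CFS_s lies in (d, 1)."""
    cfs = cfs_ratio * d
    cfs_bar = 1.0 - (1.0 - d) / d * cfs
    if w < cfs:                      # Omega_1
        return ks * w
    if w <= cfs_bar:                 # Omega_2
        return d
    return 1.0 - ks * (1.0 - w)      # Omega_3
```

With $k_s = 0$ this reduces to Eq. (11): the map sends $\Omega_1$ to 0, $\Omega_2$ to $d_s$ and $\Omega_3$ to 1. Sampling the map on a fine grid confirms that it is non-decreasing.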
We now present Theorem 1, which shows that the MIP-WENO-ACMk scheme can recover the optimal convergence orders for different values of $n_{cp}$ in smooth regions.

Theorem 1. When $CFS_s < d_s$, for $\forall n_{cp} < r - 1$, the (2r − 1)th-order MIP-WENO-ACMk scheme can achieve the optimal convergence rates of accuracy if the new mapping function $g^{MIP-ACMk}_s(\omega)$ is applied to the weights of the (2r − 1)th-order WENO-JS scheme.
We can prove Theorem 1 by employing the Taylor series analysis and using Lemma 5 of this paper and Lemma 1 and Lemma 2 in the statement of page 456 to 457 in [8], and the detailed proof process is almost identical to the one in [15].
Discussion about the effects of $g'(0)$ of the mapped WENO schemes on resolutions and spurious oscillations
In order to study the effects of $g'(0)$ of the mapped WENO schemes on resolutions and spurious oscillations in simulating problems with discontinuities, we calculate the one-dimensional linear advection equation
$$u_{t} + u_{x} = 0, \tag{14}$$
with the following initial condition [17]
$$u(x, 0) = \begin{cases} \frac{1}{6}\big[G(x, \beta, z - \delta) + 4G(x, \beta, z) + G(x, \beta, z + \delta)\big], & x \in [-0.8, -0.6], \\ 1, & x \in [-0.4, -0.2], \\ 1 - |10(x - 0.1)|, & x \in [0.0, 0.2], \\ \frac{1}{6}\big[F(x, \alpha, a - \delta) + 4F(x, \alpha, a) + F(x, \alpha, a + \delta)\big], & x \in [0.4, 0.6], \\ 0, & \text{otherwise}, \end{cases} \tag{15}$$
where $G(x, \beta, z) = e^{-\beta(x - z)^{2}}$, $F(x, \alpha, a) = \sqrt{\max\big(1 - \alpha^{2}(x - a)^{2}, 0\big)}$, and the constants are $z = -0.7$, $\delta = 0.005$, $\beta = \frac{\log 2}{36\delta^{2}}$, $a = 0.5$ and $\alpha = 10$. Periodic boundary conditions are used and the CFL number is set to 0.1. This problem consists of a Gaussian, a square wave, a sharp triangle and a semi-ellipse. For brevity, we call this linear problem SLP, as it was presented by Shu et al. in [17].
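For reference, a Python sketch of the initial profile in Eq. (15) (our own transcription; the function name is ours) is:

```python
import numpy as np

def slp_u0(x):
    """Initial condition of SLP, Eq. (15): Gaussian, square wave,
    sharp triangle and semi-ellipse."""
    z, delta, a, alpha = -0.7, 0.005, 0.5, 10.0
    beta = np.log(2.0) / (36.0 * delta**2)
    G = lambda x, c: np.exp(-beta * (x - c)**2)
    F = lambda x, c: np.sqrt(np.maximum(1.0 - alpha**2 * (x - c)**2, 0.0))
    u = np.zeros_like(x, dtype=float)
    m = (x >= -0.8) & (x <= -0.6)          # Gaussian
    u[m] = (G(x[m], z - delta) + 4.0 * G(x[m], z) + G(x[m], z + delta)) / 6.0
    m = (x >= -0.4) & (x <= -0.2)          # square wave
    u[m] = 1.0
    m = (x >= 0.0) & (x <= 0.2)            # sharp triangle
    u[m] = 1.0 - np.abs(10.0 * (x[m] - 0.1))
    m = (x >= 0.4) & (x <= 0.6)            # semi-ellipse
    u[m] = (F(x[m], a - delta) + 4.0 * F(x[m], a) + F(x[m], a + delta)) / 6.0
    return u
```

The triangle peaks at exactly $u = 1$ at $x = 0.1$, and the profile vanishes outside the four features.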
The following two groups of mapped WENO schemes with various values of $g'_X(0)$ (X stands for a specific mapped WENO scheme) are employed in the discussion.
Study on the WENO-PM6 and WENO-IM(k, A) schemes
In this subsection, we focus on the performances of the WENO-PM6 scheme [7] and the WENO-IM(k, A) schemes [8] with k = 2 and A = 0.1, 0.5 on solving SLP. A uniform mesh size of N = 400 is used and the output time is set to be t = 200.
From Fig. 1, we can easily see that the values of $g'_X(0)$ satisfy
$$g'_{PM6}(0) < g'_{IM(2,0.5)}(0) < g'_{IM(2,0.1)}(0). \tag{16}$$
Fig. 2 shows the computed results, and Table 1 shows the $L_1$, $L_2$, $L_\infty$ errors and the rank of these errors (in brackets, in descending order), i.e., in the second column, 1 indicates the largest $L_1$ error, 2 indicates the second largest, etc. From Fig. 2 and Table 1, we can observe that: (1) the WENO-IM(2, 0.1) scheme, whose $g'_{IM(2,0.1)}(0)$ is the largest, presents the smallest spurious oscillations (actually, there are no spurious oscillations under the present computing conditions) and gives the smallest $L_1$, $L_2$ and $L_\infty$ errors; (2) the WENO-PM6 scheme, whose $g'_{PM6}(0)$ is the smallest and satisfies $g'_{PM6}(0) = 0$, presents the largest spurious oscillations and gives the second largest $L_1$, $L_2$ and $L_\infty$ errors; (3) the WENO-IM(2, 0.5) scheme shows the lowest resolution and presents evident spurious oscillations at the top of the square wave, and it gives the largest $L_1$, $L_2$ and $L_\infty$ errors, while its $g'_{IM(2,0.5)}(0)$ is neither the largest nor the smallest.
Study on the WENO-MAIM2 and MIP-WENO-ACMk schemes
In this subsection, we focus on the performance of the WENO-MAIM2 [19] and MIP-WENO-ACMk schemes in solving SLP. We still use a uniform mesh size of N = 400 and choose the output time t = 200. As shown in Table 2, six different test schemes (ts-i, $i = 1, \cdots, 6$) of the WENO-MAIM2 and MIP-WENO-ACMk schemes with specified parameters, leading to various values of $g'_{ts-i}(0)$, are used in the discussion. In Fig. 3, we plot the curves of $g_{ts-i}(\omega) \sim \omega$, $i = 1, \cdots, 6$, and we can intuitively observe that the values of $g'_{ts-i}(0)$ satisfy $g'_{ts-3}(0) = g'_{ts-5}(0) < g'_{ts-1}(0) < g'_{ts-4}(0) < g'_{ts-6}(0) < g'_{ts-2}(0)$. Fig. 4 shows the computed results, and Table 3 shows the $L_1$, $L_2$, $L_\infty$ errors and the rank of these errors (in brackets, in descending order). From Fig. 4 and Table 3, we can observe that: (1) all six test schemes present spurious oscillations; (2) ts-2 shows more numerous and larger spurious oscillations than ts-1, and the $L_1$, $L_2$, $L_\infty$ errors of ts-2 are larger than those of ts-1; (3) however, although $g'_{ts-4}(0) > g'_{ts-3}(0)$ and $g'_{ts-3}(0) = 0$, ts-4 shows fewer and smaller spurious oscillations than ts-3, and the $L_1$, $L_2$, $L_\infty$ errors of ts-4 are smaller than those of ts-3; (4) in addition, although $g'_{ts-5}(0) < g'_{ts-6}(0)$ and $g'_{ts-5}(0) = 0$, ts-5 shows spurious oscillations comparable in both number and size with those of ts-6, and the $L_1$, $L_2$, $L_\infty$ errors of ts-5 are very close to those of ts-6; in other words, the $L_1$ error of ts-5 is slightly smaller than that of ts-6, while the $L_2$, $L_\infty$ errors of ts-5 are slightly larger than those of ts-6.
Analysis of the real-time mapping relationship
Definition of order-preserving/non-order-preserving mapping and important numerical experiments
From the discussion above, we can conclude that making the first derivatives of the mapping functions tend to 0 or a small value as $\omega$ approaches 0 is not essential for preventing the corresponding mapped WENO scheme from generating spurious oscillations or causing a potential loss of accuracy near discontinuities. In this subsection, to discover the essential cause of the spurious oscillation generation and the potential loss of accuracy, we make a further analysis of the real-time mapping relationship $g_X(\omega) \sim \omega$, which stands for the mapping relationship obtained from the calculation of a specific problem at a specified output time, not directly from the mapping function. Before conducting the numerical experiments for the analysis, we propose the definitions of order-preserving and non-order-preserving mappings.

Table 2. Six different test schemes (ts-1, · · · , ts-6) of the WENO-MAIM2 and MIP-WENO-ACMk schemes with specified parameters (here $d_0 = 0.1$, $d_1 = 0.6$, $d_2 = 0.3$).

  ts-i   WENO-X          Parameters                                      $g'_X(0)$
  ts-1   WENO-MAIM2      k = 10, A = 1.0e−6, Q = 1.0, CFS_s = 0.05       1
  ts-2   WENO-MAIM2      k = 10, A = 1.0e−6, Q = 0.25, CFS_s = 0.05      1
  ts-3   MIP-WENO-ACMk   k_s = 0, CFS_s = 0.1 d_s                        0
  ts-4   MIP-WENO-ACMk   k_s = 10, CFS_s = 0.1 d_s                       10
  ts-5   MIP-WENO-ACMk   k_s = 0, CFS_s = 0.01 d_s                       0
  ts-6   MIP-WENO-ACMk   k_s = 100, CFS_s = 0.…                          100
Definition 2. (order-preserving/non-order-preserving mapping) Suppose that g X s (ω), s = 0, · · · , r−1 is a monotone increasing piecewise mapping function of the (2r − 1)th-order mapped WENO-X scheme. We say the set of mapping
functions $g^X_s(\omega)$, $s = 0, \cdots, r - 1$, is order-preserving (OP) if, for $\forall \omega_a \ge \omega_b$,
$$g^X_m(\omega_a) \ge g^X_n(\omega_b), \quad \forall m, n \in \{0, \cdots, r - 1\}, \tag{18}$$
where the equality holds if and only if ω a = ω b . Otherwise, we say the set of mapping functions g X s (ω), s = 0, · · · , r − 1 is non-order-preserving (non-OP).
It is trivial to know that, even when the set of mapping functions g X s (ω), s = 0, · · · , r − 1 is non-OP, Eq.(18) may also hold at some points. Therefore, we add the following definition of OP point and non-OP point.
Definition 3. (OP point, non-OP point) Let S_{2r−1} denote the (2r − 1)-point global stencil centered around x_j. Assume that S_{2r−1} is subdivided into r-point substencils {S_0, ..., S_{r−1}} and that ω_s are the nonlinear weights corresponding to the substencils S_s with s = 0, ..., r − 1, which are used as the independent variables by the mapping functions. Suppose that g^X_s(ω), s = 0, ..., r − 1, are the mapping functions of the mapped WENO-X scheme. We say that a non-OP mapping process occurs at x_j if ∃m, n ∈ {0, ..., r − 1} such that

(ω_m − ω_n)(g^X_m(ω_m) − g^X_n(ω_n)) ≤ 0,  if ω_m ≠ ω_n,
g^X_m(ω_m) ≠ g^X_n(ω_n),                    if ω_m = ω_n,    (19)

and in this case we say x_j is a non-OP point. Otherwise, we say x_j is an OP point.
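As a concrete illustration of Definition 3, the pointwise non-OP test of Eq.(19) can be sketched as follows. This is our own sketch, not code from the paper; the sample weights are the values of point C1 in Table 5, and the function and variable names are our own.

```python
import itertools

def is_non_op_point(omega, g_omega):
    """Eq.(19): True if the mapping reverses (or spuriously splits/merges)
    the order of the nonlinear weights at this point."""
    r = len(omega)
    for m, n in itertools.product(range(r), repeat=2):
        if omega[m] != omega[n]:
            # a strict order between two weights must be preserved
            if (omega[m] - omega[n]) * (g_omega[m] - g_omega[n]) <= 0:
                return True
        elif g_omega[m] != g_omega[n]:
            # equal weights must stay equal after the mapping
            return True
    return False

# Point C1 of Table 5: the mapping changes the order of the weights -> non-OP
w_js     = (0.37291, 0.53663, 0.09046)
w_mapped = (0.10939, 0.64825, 0.24236)
print(is_non_op_point(w_js, w_mapped))                     # True
# the same mapped values rearranged into the original order -> OP
print(is_non_op_point(w_js, (0.24236, 0.64825, 0.10939)))  # False
```

A full non-OP-point scan of a numerical solution, as in Fig. 8 and Fig. 9, would simply apply this test at every grid point.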
After extensive numerical experiments, we have discovered that, for almost all previously published mapped WENO schemes, at least as far as we know, the non-OP mapping process will definitely occur when they are used to solve problems with discontinuities. To demonstrate this, we again take the SLP as an example. The WENO-PM6 [7] and WENO-IM(2, 0.1) [8] schemes and the MIP-WENO-ACMk scheme with parameters k_s = 0, CFS_s = d_s/10 are used. A uniform mesh size of N = 400 and two output times t = 2 (short) and t = 200 (long) are taken in all calculations. In Fig. 5 to Fig. 7, we present the real-time mapping relationship g^X(ω) ∼ ω of the considered mapped WENO schemes, where two of the non-OP points are selected and highlighted in solid symbols for demonstration. In Table 4, we present the computed values of the nonlinear weights, both before and after the mapping process, associated with these highlighted non-OP points. We also give the order (in brackets, in descending manner) of these nonlinear weights. It is evident that the order of the nonlinear weights has been changed when the mapping process is implemented at each non-OP point. In addition, as shown in Fig. 8 and Fig. 9, we see that there are many non-OP points in the numerical solutions of the WENO-PM6 scheme for both short and long output times. Similarly, we also find many non-OP points in the results of the WENO-IM(2, 0.1) and MIP-WENO-ACMk schemes, which we do not present here for brevity.
Effects of the non-OP mapping process on the numerical solutions
Without loss of generality, we assume that the weights ω^JS_s, which would be substituted into some mapping function, satisfy ω^JS_0 > ω^JS_1 > ω^JS_2, that the mapped weights ω^{non-OP}_s are computed by a set of mapping functions that is non-OP (so that this order is not preserved), and that the mapped weights ω^{OP}_s are computed by a set of mapping functions that is OP (so that ω^{OP}_0 > ω^{OP}_1 > ω^{OP}_2). The approximation of u(x_{j+1/2}) computed from the non-OP weights can be written as
u^{non-OP}_{j+1/2} = ∑_{s=0}^{2} d^{non-OP}_s u^s_{j+1/2} + ∑_{s=0}^{2} (ω^{non-OP}_s − d^{non-OP}_s) u^s_{j+1/2}.    (20)
In smooth regions, the first term on the right-hand side of Eq.(20) satisfies

∑_{s=0}^{2} d^{non-OP}_s u^s_{j+1/2} = u(x_{j+1/2}) + O(∆x^5).    (21)
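The fifth-order property of the ideal-weight combination (Eq.(21)) can be checked numerically. The sketch below is our own illustration, not the paper's code: it uses the classical fifth-order WENO substencil reconstructions from cell averages (an assumption: these are the standard Jiang-Shu coefficients, consistent with d = (0.1, 0.6, 0.3)) and estimates the observed convergence order for u(x) = sin(πx).

```python
import math

def substencil_values(ubar, j):
    # standard 3-point reconstructions of u(x_{j+1/2}) from cell averages
    u0 = (2*ubar[j-2] - 7*ubar[j-1] + 11*ubar[j]) / 6.0
    u1 = ( -ubar[j-1] + 5*ubar[j]   +  2*ubar[j+1]) / 6.0
    u2 = (2*ubar[j]   + 5*ubar[j+1] -    ubar[j+2]) / 6.0
    return u0, u1, u2

def ideal_error(N):
    d = (0.1, 0.6, 0.3)                 # ideal weights
    h = 1.0 / N
    # exact cell averages of sin(pi x) over [x_i - h/2, x_i + h/2], x_i = i*h
    ubar = [(math.cos(math.pi*(i*h - h/2)) - math.cos(math.pi*(i*h + h/2)))
            / (math.pi * h) for i in range(-2, N + 3)]
    j = 2 + N // 4                      # interior cell centred at x = 0.25
    us = substencil_values(ubar, j)
    exact = math.sin(math.pi * ((j - 2) * h + h / 2))
    return abs(sum(ds * u for ds, u in zip(d, us)) - exact)

e1, e2 = ideal_error(40), ideal_error(80)
order = math.log(e1 / e2, 2)            # observed order, close to 5
```

Halving the mesh size reduces the error by roughly a factor of 2^5, confirming Eq.(21).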
Similarly, for the OP weights we have
u^{OP}_{j+1/2} = ∑_{s=0}^{2} d^{OP}_s u^s_{j+1/2} + ∑_{s=0}^{2} (ω^{OP}_s − d^{OP}_s) u^s_{j+1/2}.    (22)
So the second term on the right-hand side of Eq.(20) or Eq.(22) must be at least an O(∆x^6) quantity to ensure that the convergence rate remains fifth-order, and this is the key point on which the mapped WENO methods have focused. However, in the parts of the solution with discontinuities, Eq.(21) and Eq.(23) usually do not hold. Furthermore, it is easy to see that the non-OP mapping process amplifies the effect of the relatively non-smooth substencils and decreases the effect of the relatively smooth ones, so that we can probably get the following inequality
|∑_{s=0}^{2} d^{non-OP}_s u^s_{j+1/2} − u(x_{j+1/2})| > |∑_{s=0}^{2} d^{OP}_s u^s_{j+1/2} − u(x_{j+1/2})|.    (24)
Now, we analyze the effect of the second term on the right-hand side of Eq.(20) or Eq.(22). Suppose that g^{non-OP}_s(d^{non-OP}_s) = d^{non-OP}_s and g^{non-OP}_s'(d^{non-OP}_s) = ··· = g^{non-OP(n−1)}_s(d^{non-OP}_s) = 0 with g^{non-OP(n)}_s(d^{non-OP}_s) ≠ 0. Then, evaluation at ω^JS_s of the Taylor series approximation of g^{non-OP}_s(ω) about d^{non-OP}_s yields

α^{non-OP}_s = g^{non-OP}_s(d^{non-OP}_s) + g^{non-OP}_s'(d^{non-OP}_s)(ω^JS_s − d^{non-OP}_s) + ··· + (g^{non-OP(n)}_s(d^{non-OP}_s)/n!)(ω^JS_s − d^{non-OP}_s)^n + O((ω^JS_s − d^{non-OP}_s)^{n+1})
             = d^{non-OP}_s + (g^{non-OP(n)}_s(d^{non-OP}_s)/n!)(ω^JS_s − d^{non-OP}_s)^n + O((ω^JS_s − d^{non-OP}_s)^{n+1}).    (25)
Similarly, when g^{OP}_s(d^{OP}_s) = d^{OP}_s, g^{OP}_s'(d^{OP}_s) = ··· = g^{OP(n−1)}_s(d^{OP}_s) = 0 and g^{OP(n)}_s(d^{OP}_s) ≠ 0, we have

α^{OP}_s = g^{OP}_s(d^{OP}_s) + g^{OP}_s'(d^{OP}_s)(ω^JS_s − d^{OP}_s) + ··· + (g^{OP(n)}_s(d^{OP}_s)/n!)(ω^JS_s − d^{OP}_s)^n + O((ω^JS_s − d^{OP}_s)^{n+1})
         = d^{OP}_s + (g^{OP(n)}_s(d^{OP}_s)/n!)(ω^JS_s − d^{OP}_s)^n + O((ω^JS_s − d^{OP}_s)^{n+1}).    (26)
Then, from Eq.(25) and Eq.(26), we obtain
(α^{non-OP}_s − d^{non-OP}_s)/(α^{OP}_s − d^{OP}_s) ≈ (g^{non-OP(n)}_s(d^{non-OP}_s)/g^{OP(n)}_s(d^{OP}_s)) × ((ω^JS_s − d^{non-OP}_s)/(ω^JS_s − d^{OP}_s))^n.    (27)
One may probably get

|ω^JS_s − d^{non-OP}_s| > |ω^JS_s − d^{OP}_s|

at a non-OP point where a non-OP mapping process occurs. Then, as n is a possibly large positive integer, Eq.(27) yields

|α^{non-OP}_s − d^{non-OP}_s| ≫ |α^{OP}_s − d^{OP}_s|.    (28)
Thus, according to Eq.(46) in [15], it follows that

|ω^{non-OP}_s − d^{non-OP}_s| ≫ |ω^{OP}_s − d^{OP}_s|.    (29)
Now, from Eqs.(20), (22), (24) and (29), and considering the accumulation of errors as t grows, we can conclude that the non-OP mapping process is probably the essential cause of the spurious oscillation generation and potential loss of accuracy when mapped WENO schemes are used to simulate problems with discontinuities for long output times.
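The Taylor-expansion argument of Eqs.(25)-(27) can be illustrated numerically. The sketch below is our own check (not the paper's computation): it uses the classical WENO-M mapping of Henrick et al. [15], for which g(d) = d and g'(d) = g''(d) = 0, so that n = 3, and confirms that α − d scales like (ω^JS − d)^3.

```python
import math

def g_m(w, d):
    """Henrick et al. WENO-M mapping; satisfies g(d)=d, g'(d)=g''(d)=0."""
    return w * (d + d*d - 3.0*d*w + w*w) / (d*d + w*(1.0 - 2.0*d))

d = 0.3
eps1, eps2 = 1e-2, 1e-3
delta1 = abs(g_m(d + eps1, d) - d)
delta2 = abs(g_m(d + eps2, d) - d)
# observed exponent of |g(d+eps) - d| ~ C * eps^n; should be close to n = 3
n_obs = math.log(delta1 / delta2) / math.log(eps1 / eps2)
```

The larger |ω^JS − d| is, as at a non-OP point, the larger the nth-power factor in Eq.(27) becomes, in line with Eq.(28).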
To illustrate this, and for brevity in the discussion but without loss of generality, we assume that there is a global stencil S_5 which is divided into 3 substencils S_0, S_1, S_2, and that there is an isolated discontinuity on S_2, as shown in Fig. 10. We suppose u_L = 1, u_R = −1 and u^0_{j+1/2} = u^1_{j+1/2} = 1, u^2_{j+1/2} = −1. According to Eq.(5), we calculate the approximation of u(x_{j+1/2}) on the stencil S_5 by applying the mapped weights of the non-OP points C1, A2, C3 in Table 4 to the corresponding substencils in Fig. 10, respectively. For comparison, we also calculate the results by applying the weights of the WENO-JS scheme and the mapped weights of some OP points. Here, for simplicity but without loss of generality, we directly use the same values as the mapped weights of the non-OP points C1, A2, C3 in Table 4, but change their order, to set the values of the mapped weights of the OP points. We present the computing conditions and results in detail in Table 5. From Table 5, we find that the errors computed by using the mapped weights of the non-OP points are much larger than those computed by using the mapped weights of the OP points. Although we only present a pseudo-test example here, it is highly helpful for describing and understanding the way the non-OP mapping process causes the potential loss of accuracy. In practical tests with short output times, this phenomenon, as well as the spurious oscillation, may not be easy to notice. However, the errors accumulate and become evident when the output time gets larger, and then the spurious oscillation and potential loss of accuracy can be observed. We will show these through numerical experiments in Section 5.
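The pseudo-test above can be reproduced directly. Following Fig. 10 we take u^0_{j+1/2} = u^1_{j+1/2} = 1 and u^2_{j+1/2} = −1, with the smooth-side value u(x_{j+1/2}) = 1 as the reference (our assumption for this sketch), and compare the interface values produced by the non-OP and OP weight orderings of point C1 in Table 5.

```python
u_sub = (1.0, 1.0, -1.0)   # substencil values; the discontinuity lies on S2
u_ref = 1.0                # smooth-side reference value (assumed)

def interface_value(weights):
    return sum(w * u for w, u in zip(weights, u_sub))

w_non_op = (0.10939, 0.64825, 0.24236)   # point C1, non-OP order (Table 5)
w_op     = (0.24236, 0.64825, 0.10939)   # same values in the OP order

err_non_op = abs(interface_value(w_non_op) - u_ref)   # ~0.4847
err_op     = abs(interface_value(w_op)     - u_ref)   # ~0.2188
```

The non-OP ordering roughly doubles the error, in line with Table 5: the larger weight on the discontinuous substencil S_2 drags the interface value away from the smooth-side state.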
Design and properties of the order-preserving mapping functions
Design of the new mapping function
To design a new mapped WENO scheme that can prevent the non-OP mapping process, we devise a set of mapping functions that is OP.
Let D = {d_0, d_1, ..., d_{r−1}} be the array of all the ideal weights of the (2r − 1)th-order WENO schemes. We build a new array D̂ = {d̂_0, d̂_1, ..., d̂_{r−1}} by sorting the elements of D in ascending order. In other words, the arrays D and D̂ have the same elements in different arrangements, and the elements of D̂ satisfy

d̂_0 < d̂_1 < ··· < d̂_{r−1}.    (30)
The following notations are introduced to simplify the expressions:

Ω̂_1 = [0, CFS_0),  Ω̂_2 = [CFS_0, (d̂_0 + d̂_1)/2),  ···,  Ω̂_{r+1} = [(d̂_{r−2} + d̂_{r−1})/2, CFS_1],  Ω̂_{r+2} = (CFS_1, 1],    (31)
where 0 < CFS_0 ≤ d̂_0 and d̂_{r−1} ≤ CFS_1 ≤ 1. It is easy to verify that: (1) Ω = Ω̂_1 ∪ Ω̂_2 ∪ ··· ∪ Ω̂_{r+2}; (2) if i ≠ j, then Ω̂_i ∩ Ω̂_j = ∅ for all i, j = 1, 2, ..., r + 2. Now, we give a new mapping function as follows:
g^{MOP-ACMk}_s(ω) =
    k_0 ω,            ω ∈ Ω̂_1,
    d̂_0,              ω ∈ Ω̂_2,
    d̂_1,              ω ∈ Ω̂_3,
    ···
    d̂_{r−1},          ω ∈ Ω̂_{r+1},
    1 − k_1 (1 − ω),  ω ∈ Ω̂_{r+2},    (32)

where

k_0 ∈ [0, d̂_0/CFS_0],  k_1 ∈ [0, (1 − d̂_{r−1})/(1 − CFS_1)].    (33)
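The construction of Eqs.(30)-(33) can be sketched as follows. This is our own illustration (function names are ours); the interval breakpoints follow Eq.(31).

```python
def make_mop_acmk(d, cfs0, cfs1, k0=0.0, k1=0.0):
    """Build g^{MOP-ACMk} of Eq.(32) for ideal weights d (in any order)."""
    d_hat = sorted(d)                                    # Eq.(30)
    # breakpoints between plateaus: midpoints of consecutive sorted weights
    mids = [(a + b) / 2.0 for a, b in zip(d_hat, d_hat[1:])]
    def g(w):
        if w < cfs0:                                     # Omega_1
            return k0 * w
        if w > cfs1:                                     # Omega_{r+2}
            return 1.0 - k1 * (1.0 - w)
        for dh, m in zip(d_hat, mids):                   # Omega_2 .. Omega_r
            if w < m:
                return dh
        return d_hat[-1]                                 # Omega_{r+1}
    return g

# fifth-order case: d = (0.1, 0.6, 0.3), parameters as in Fig. 11
g = make_mop_acmk((0.1, 0.6, 0.3), cfs0=0.04, cfs1=0.92, k0=2.5, k1=5.0)
```

With these parameters g(0.05) = 0.1, g(0.3) = 0.3 and g(0.5) = 0.6, matching the plateaus of Fig. 11; since it is a single monotone function of ω, the resulting set of mapping functions is automatically OP.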
Actually, we can verify that g^{MOP-ACMk}_s(ω) is independent of the parameter s, that is, g^{MOP-ACMk}_0(ω) = ··· = g^{MOP-ACMk}_{r−1}(ω), which is significantly different from the previously published mapping functions. Thus, for simplicity, we drop the subscript s of g^{MOP-ACMk}_s(ω) in the following.

Lemma 6. The mapping function g^{MOP-ACMk}(ω) defined by Eq.(32) is order-preserving (OP).

Proof. According to Eqs.(30)-(32), we can easily verify that g^{MOP-ACMk}(ω) satisfies Definition 2.
Properties of the new mapping function
As mentioned earlier, g MOP−ACMk (ω) is a monotone increasing piecewise mapping function. It satisfies the following properties.
Lemma 7. Let Ω̃_i = {ω : ω ∈ Ω̂_i and ω ∉ ∂Ω̂_i}; then the mapping function g^{MOP-ACMk}(ω) defined by Eq.(32) satisfies the following properties:

C1. for ∀ω ∈ Ω̃_i, i = 1, ..., r + 2, g^{MOP-ACMk}'(ω) ≥ 0;
C2. for ∀ω ∈ Ω, 0 ≤ g^{MOP-ACMk}(ω) ≤ 1;
C3. for ∀s ∈ {0, 1, ..., r − 1}, d̂_s ∈ Ω̃_{s+2}, g^{MOP-ACMk}(d̂_s) = d̂_s, g^{MOP-ACMk}'(d̂_s) = g^{MOP-ACMk}''(d̂_s) = ··· = 0;
C4. g^{MOP-ACMk}(0) = 0, g^{MOP-ACMk}(1) = 1, g^{MOP-ACMk}'(0) = k_0, g^{MOP-ACMk}'(1) = k_1.
As the proof of Lemma 7 is straightforward, we do not state it here; these properties can be observed intuitively in Fig. 11. Now, we give the new mapped WENO scheme with monotone increasing piecewise and order-preserving mapping, denoted as MOP-WENO-ACMk. The mapped weights are given by
ω^{MOP-ACMk}_s = α^{MOP-ACMk}_s / ∑_{l=0}^{r−1} α^{MOP-ACMk}_l,  α^{MOP-ACMk}_s = g^{MOP-ACMk}(ω^JS_s).    (34)
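Eq.(34) then normalizes the mapped values. A sketch for the fifth-order scheme, hard-coding the explicit plateau intervals of Eq.(35) with the parameters used in the experiments below (k_0 = k_1 = 0, CFS_0 = 0.01, CFS_1 = 0.94); the function names are ours.

```python
def g_mop_acmk5(w):
    """Explicit fifth-order mapping of Eq.(35) with k0 = k1 = 0,
    CFS0 = 0.01, CFS1 = 0.94."""
    if w < 0.01:
        return 0.0
    if w < 0.2:
        return 0.1
    if w < 0.45:
        return 0.3
    if w <= 0.94:
        return 0.6
    return 1.0

def mop_acmk_weights(w_js):
    """Eq.(34): map the WENO-JS weights and renormalize."""
    alpha = [g_mop_acmk5(w) for w in w_js]
    total = sum(alpha)
    return [a / total for a in alpha]

# e.g. WENO-JS weights near a discontinuity
w = mop_acmk_weights((0.37291, 0.53663, 0.09046))   # ~ (0.3, 0.6, 0.1)
```

Note that the ordering of the inputs (ω^JS_1 > ω^JS_0 > ω^JS_2) is preserved by the mapped weights, in contrast with the non-OP mappings of Table 5.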
We present Theorem 2, which will show that the MOP-WENO-ACMk scheme can recover the optimal convergence rates for different values of n cp in smooth regions.
Theorem 2. When CFS_0 < d̂_0 and CFS_1 > d̂_{r−1}, for ∀n_cp < r − 1, the (2r − 1)th-order MOP-WENO-ACMk scheme can achieve the optimal convergence rate of accuracy if the mapping function g^{MOP-ACMk}(ω) is applied to the weights of the (2r − 1)th-order WENO-JS scheme.
We can prove Theorem 2 by employing the Taylor series analysis and using Lemma 7 of this paper together with Lemma 1 and Lemma 2 stated on pages 456-457 of [8]. The detailed proof is almost identical to the one in [15].
We can get the fifth-order MOP-WENO-ACMk scheme by choosing r = 3 in Eq.(32) and Eq.(34). In this case, it is trivial to obtain the arrays D = {0.1, 0.6, 0.3} and D̂ = {0.1, 0.3, 0.6}, that is, d̂_0 = 0.1, d̂_1 = 0.3, d̂_2 = 0.6. Then, we can write the mapping function explicitly as follows:

g^{MOP-ACMk}(ω) =
    k_0 ω,            ω ∈ Ω̂_1,
    0.1,              ω ∈ Ω̂_2,
    0.3,              ω ∈ Ω̂_3,
    0.6,              ω ∈ Ω̂_4,
    1 − k_1 (1 − ω),  ω ∈ Ω̂_5,    (35)

where Ω̂_1 = [0, CFS_0), Ω̂_2 = [CFS_0, 0.2), Ω̂_3 = [0.2, 0.45), Ω̂_4 = [0.45, CFS_1], Ω̂_5 = (CFS_1, 1], with 0 < CFS_0 ≤ 0.1, 0.6 ≤ CFS_1 < 1, k_0 ∈ [0, 0.1/CFS_0], k_1 ∈ [0, 0.4/(1 − CFS_1)].
In Fig. 11, we plot the curve of g MOP−ACMk (ω) varying with ω by taking CFS 0 = 0.04, CFS 1 = 0.92 and k 0 = 2.5, k 1 = 5.
Numerical experiments
In this section, we compare the numerical performance of the MOP-WENO-ACMk scheme with the WENO-JS scheme [17] and its various versions with mapping: WENO-M [15], WENO-PM6 [7], WENO-IM(2, 0.1) [8], and the MIP-WENO-ACMk scheme proposed in subsection 3.1. In all the numerical experiments below, MOP-WENO-ACMk refers to the definition in Eq.(34) and Eq.(35) with k_0 = k_1 = 0, CFS_0 = 0.01, CFS_1 = 0.94, and the parameters in the MIP-WENO-ACMk scheme are chosen to be k_s = 0, CFS_s = d_s/10. The numerical presentation of this section starts with the accuracy test of the one-dimensional linear advection equation with four kinds of initial conditions, followed by the solutions at long output times of the one-dimensional linear advection equation with two kinds of initial conditions with discontinuities, and finishes with 2D calculations on the two-dimensional Riemann problem and the shock-vortex interaction.
Accuracy test
Example 1. (Accuracy test without critical points [8]) We consider the one-dimensional linear advection equation Eq.(14) with the periodic boundary condition and the following initial condition

u(x, 0) = sin(πx).    (36)
It is easy to know that the initial condition in Eq.(36) has no critical points. The CFL number is set to be (∆x) 2/3 to prevent the convergence rates of error from being influenced by time advancement. The L 1 , L 2 , L ∞ norms of the error are given as
L_1 = h · ∑_j |u^exact_j − (u_h)_j|,  L_2 = (h · ∑_j (u^exact_j − (u_h)_j)^2)^{1/2},  L_∞ = max_j |u^exact_j − (u_h)_j|,
where h = ∆x is the uniform spatial step size, (u_h)_j is the numerical solution and u^exact_j is the exact solution. The L_1, L_2, L_∞ errors and corresponding convergence orders of the various considered WENO schemes for Example 1 at output time t = 2.0 are shown in Table 6. The results in the three rows are the L_1-, L_2- and L_∞-norm errors and convergence orders in turn (similarly hereinafter). Unsurprisingly, the MOP-WENO-ACMk scheme attains fifth-order convergence like the other considered schemes. It can be found that the MOP-WENO-ACMk, MIP-WENO-ACMk, WENO-M, WENO-PM6 and WENO-IM(2, 0.1) schemes give more accurate numerical solutions than the WENO-JS scheme in general.

Table 6. Convergence properties of considered schemes solving u_t + u_x = 0 with initial condition u(x, 0) = sin(πx).

Example 2. (Accuracy test with first-order critical points [8]) We consider the one-dimensional linear advection equation Eq.(14) with the periodic boundary condition and the following initial condition
u(x, 0) = sin(πx − sin(πx)/π).    (37)
The particular initial condition Eq.(37) has two first-order critical points, which both have a non-vanishing third derivative. As mentioned earlier, the CFL number is set to be (∆x)^{2/3}. Table 7 shows the L_1, L_2, L_∞ errors and corresponding convergence orders of the various considered WENO schemes at output time t = 2.0. We can see that the L_∞ convergence order of the WENO-JS scheme drops by almost 2 orders, leading to an overall accuracy loss shown by the L_1 and L_2 convergence orders. It is evident that the MOP-WENO-ACMk, MIP-WENO-ACMk, WENO-M, WENO-PM6 and WENO-IM(2, 0.1) schemes can retain the optimal orders even in the presence of critical points. It is noteworthy that when the grid number is too small, like N ≤ 40, the MOP-WENO-ACMk scheme provides less accurate results than the MIP-WENO-ACMk scheme. The cause of this kind of accuracy loss is that the mapping function of the MOP-WENO-ACMk scheme has narrower optimal weight intervals (standing for the intervals about ω = d_s over which the mapping process attempts to use the corresponding optimal weights, see [19,20]) than the MIP-WENO-ACMk scheme. However, this issue can surely be overcome by increasing the grid number. Therefore, we can find that, as expected, the MOP-WENO-ACMk scheme gives numerical solutions as accurate as those of the MIP-WENO-ACMk scheme when the grid number N ≥ 80.

Table 7. Convergence properties of considered schemes solving u_t + u_x = 0 with initial condition u(x, 0) = sin(πx − sin(πx)/π).

Example 3. (Accuracy and resolution performance test with high-order critical points) We consider the one-dimensional linear advection equation Eq.(14) with the periodic boundary condition and the following initial condition [7]

u(x, 0) = sin^9(πx).    (38)
It is trivial to verify that the initial condition in Eq.(38) has high-order critical points. Again, the CFL number is set to be (∆x)^{2/3}. We calculate this problem using the MOP-WENO-ACMk, MIP-WENO-ACMk, WENO-JS and WENO-M schemes. Table 8 shows the L_1, L_2, L_∞ errors of these WENO schemes at several output times with a uniform mesh size of ∆x = 1/200. We also present the corresponding increased errors (in percentage) compared to the errors of the MIP-WENO-ACMk scheme, which gives the most accurate results. Taking the L_1-norm error as an example, its increased error at output time t is calculated by
(L^X_1(t) − L^*_1(t)) / L^*_1(t) × 100%,

where L^*_1(t) and L^X_1(t) are the L_1-norm errors of the MIP-WENO-ACMk scheme and the WENO-X scheme (X = WENO-JS, WENO-M, or MOP-WENO-ACMk) at output time t. Clearly, the WENO-JS scheme gives the largest increased errors for both short and long output times. At short output times, like t ≤ 100, the solutions computed by the WENO-M scheme are closest to those of the MIP-WENO-ACMk scheme, leading to the smallest increased errors. However, when the output time increases to t ≥ 200, the solutions computed by the MOP-WENO-ACMk scheme are closest to those of the MIP-WENO-ACMk scheme. Furthermore, when the output time is large, the errors of the WENO-M scheme increase significantly, leading to evidently larger increased errors, while the increased errors of the MOP-WENO-ACMk scheme do not grow, as it provides comparably small errors. The errors of the MOP-WENO-ACMk scheme are not as small as those of the MIP-WENO-ACMk scheme. The cause of this kind of accuracy loss is the same as stated in Example 2: the mapping function of the MOP-WENO-ACMk scheme has narrower optimal weight intervals than the MIP-WENO-ACMk scheme. As mentioned earlier, one can surely address this issue by increasing the grid number. In order to verify this, we calculate this problem using the same schemes at the same output times but with a larger grid number of N = 800, and the results are shown in Table 9. From Table 9, we can see that the errors of the MOP-WENO-ACMk scheme are closer to those of the MIP-WENO-ACMk scheme when the grid number increases from N = 200 to N = 800, resulting in a significant decrease of the increased errors. Moreover, for long output times, the increased errors of the MOP-WENO-ACMk scheme are much smaller than those of the WENO-JS and WENO-M schemes. Actually, it is an important advantage of the MOP-WENO-ACMk scheme that it can maintain comparably high resolution for long output times. In the next subsection we will demonstrate this again.

Fig. 12 shows the performance of the WENO-JS, WENO-M, MIP-WENO-ACMk and MOP-WENO-ACMk schemes at output time t = 1000 with a uniform mesh size of ∆x = 1/200. Clearly, the MIP-WENO-ACMk and MOP-WENO-ACMk schemes give the highest resolution, followed by the WENO-M scheme, whose resolution decreases significantly. The WENO-JS scheme shows the lowest resolution.
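The error measures used throughout this section, the discrete norms defined in Example 1 and the increased-error percentage defined above, can be sketched as follows (function names are ours):

```python
import math

def error_norms(u_num, u_exact, h):
    """Discrete L1, L2 and L-infinity error norms on a uniform grid."""
    diffs = [abs(a - b) for a, b in zip(u_num, u_exact)]
    return (h * sum(diffs),                             # L1
            math.sqrt(h * sum(d * d for d in diffs)),   # L2
            max(diffs))                                 # L-infinity

def increased_error(err_x, err_ref):
    """Percentage increase of a scheme's error over the reference
    (MIP-WENO-ACMk) error at the same output time."""
    return (err_x - err_ref) / err_ref * 100.0
```

For instance, an error of 2.0 against a reference error of 1.6 corresponds to an increased error of 25%.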
Example 4. (Accuracy test with discontinuous initial condition) We consider the SLP modeled by the one-dimensional linear advection equation Eq.(14) with the initial condition Eq.(15). In this problem, the CFL number is taken to be 0.1. Table 10 shows the L_1, L_2, L_∞ errors and corresponding convergence orders of the various considered WENO schemes for this example at output times t = 2 and t = 2000. At the short output time t = 2, we find that: (1) for all considered schemes, the L_1 and L_2 orders are approximately 1.0 and 0.4 to 0.5, respectively, and the L_∞ orders are all negative; (2) the MIP-WENO-ACMk, WENO-M, WENO-PM6 and WENO-IM(2, 0.1) schemes present more accurate results than the MOP-WENO-ACMk and WENO-JS schemes. At the long output time t = 2000, we find that: (1) for the WENO-JS and WENO-M schemes, the L_1, L_2 orders decrease to very small values and even become negative; (2) however, for the MOP-WENO-ACMk, MIP-WENO-ACMk, WENO-PM6 and WENO-IM(2, 0.1) schemes, their L_1 orders are clearly larger than 1.0, and their L_2 orders increase to approximately 0.6 to 0.9; (3) for all considered schemes, the L_∞ orders are very small and even become negative; (4) in terms of accuracy, on the whole, the MOP-WENO-ACMk scheme produces accurate and comparable results like the other considered mapped WENO schemes except the WENO-M scheme. However, if we take a closer look, we can find that: (1) the resolution of the result computed by the WENO-M scheme is significantly lower than that of the MOP-WENO-ACMk scheme; (2) the WENO-PM6, WENO-IM(2, 0.1) and MIP-WENO-ACMk schemes generate spurious oscillations but the MOP-WENO-ACMk scheme does not. More results will be presented carefully to demonstrate this in the following subsection.

Table 8. Performance of various considered schemes solving u_t + u_x = 0 with u(x, 0) = sin^9(πx), ∆x = 1/200.
Linear advection examples with discontinuities at long output times for comparison
In this subsection, we make a further study of calculating linear advection examples with discontinuities at long output times by the various considered WENO schemes. The objective is to demonstrate the advantage of the MOP-WENO-ACMk scheme, namely that it can obtain high resolution and does not generate spurious oscillations, especially for long output time simulations.
The one-dimensional linear advection problem Eq.(14) was used in this study. It was solved with the following two initial conditions.

Case 1. (SLP) The initial condition is given by Eq.(15) in subsection 3.2.

Case 2. (BiCWP) The initial condition consists of several constant states separated by sharp discontinuities at x = ±0.8, ±0.6, ±0.4, ±0.2.

The periodic boundary condition is used in the two directions for both cases. Case 1 is the SLP used earlier in this paper. We call Case 2 BiCWP for brevity in the presentation, as the profile of the exact solution for this problem looks like the Breach in a City Wall.
We use the uniform mesh of N = 800 with the output time t = 2000, and use N = 1600, 3200, 6400 with the output time t = 200, respectively, to solve both SLP and BiCWP by all considered WENO schemes. Fig. 13 and Fig. 14 show the comparison of the various schemes when t = 2000 and N = 800. We can observe that: (1) the MOP-WENO-ACMk scheme provides numerical results with a significantly higher resolution than those of the WENO-JS and WENO-M schemes, and it does not generate spurious oscillations while the WENO-PM6 and MIP-WENO-ACMk schemes do, when solving both SLP and BiCWP; (2) when solving SLP under the present computing conditions, the WENO-IM(2, 0.1) scheme does not seem to generate spurious oscillations and it gives better resolution than the MOP-WENO-ACMk scheme in most of the region; (3) however, from Fig. 13(b), we observe that the MOP-WENO-ACMk scheme gives a better resolution of the Gaussian than the WENO-IM(2, 0.1) scheme, and if we take a closer look, we can see that the WENO-IM(2, 0.1) scheme generates a very slight spurious oscillation near x = −0.435, as shown in Fig. 13(c); (4) the corresponding behavior for BiCWP is evident in Fig. 14. It should be noted that the WENO-JS scheme could be treated as a mapped WENO scheme whose mapping function is the identical mapping, that is, g^JS_s(ω) = ω, s = 0, 1, 2, and it is trivial to verify that the set of mapping functions g^JS_s(ω), s = 0, 1, 2, is OP while its optimal weight intervals are zero. Actually, there are many non-OP points for all considered mapped schemes whose mapping functions are non-OP, like the WENO-M, WENO-PM6, WENO-IM(2, 0.1) and MIP-WENO-ACMk schemes. As expected, there are no non-OP points for the MOP-WENO-ACMk and WENO-JS schemes for all computing cases here; we do not show the results of the non-OP points for all computing cases just for simplicity of illustration. Thus, to sum up, we can conclude that a set of mapping functions which is OP can help to improve the resolution of the corresponding mapped WENO scheme and prevent it from generating spurious oscillations in the simulation of problems with discontinuities, especially for long output times. And in the upcoming follow-up study of this article, we will provide more examples and evidence to further verify this conclusion.
Two-dimensional Euler system
In this subsection, we solve the two-dimensional Euler system of gas dynamics. We consider the numerical solutions of the 2D Riemann problem [28,27,18] and the shock-vortex interaction problem [4,23,25]. The two-dimensional Euler system is given by the following strong conservation form
U t + F U x + G U y = 0,(40)
where U = (ρ, ρu, ρv, E)^T, F(U) = (ρu, ρu² + p, ρuv, u(E + p))^T, G(U) = (ρv, ρvu, ρv² + p, v(E + p))^T, and ρ, u, v, p and E are the density, the components of velocity in the x and y coordinate directions, the pressure and the total energy, respectively. The relation between the pressure, the total energy and the velocity components is defined by the equation of state for an ideal polytropic gas, taking the form
p = (γ − 1)(E − ρ(u² + v²)/2),
where γ is the ratio of specific heats and we choose γ = 1.4 here. In all numerical examples of this subsection, the CFL number is set to be 0.5.
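The conversion between the primitive variables (ρ, u, v, p) and the conserved vector U of Eq.(40), using the ideal-gas equation of state above with γ = 1.4, can be sketched as follows (function names are ours):

```python
GAMMA = 1.4  # ratio of specific heats, as chosen above

def conserved(rho, u, v, p):
    """U = (rho, rho*u, rho*v, E) from primitive variables."""
    E = p / (GAMMA - 1.0) + 0.5 * rho * (u * u + v * v)
    return (rho, rho * u, rho * v, E)

def pressure(U):
    """Invert the equation of state: p = (gamma - 1)(E - rho(u^2 + v^2)/2)."""
    rho, mu, mv, E = U
    return (GAMMA - 1.0) * (E - 0.5 * (mu * mu + mv * mv) / rho)
```

For example, round-tripping one of the constant states of the 2D Riemann problem below recovers its pressure exactly (up to rounding).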
Example 5. (2D Riemann problem)
The series of 2D Riemann problems proposed in [28,27] have become favorable test cases for the resolution of numerical methods [18,21,24]. The problem is calculated over a unit square domain [0, 1] × [0, 1] and initially involves constant states of the flow variables over each quadrant, obtained by dividing the computational domain using the lines x = x_0 and y = y_0. Configuration 4 in [18] is taken here for the test. The initial condition of this configuration is given by

(ρ, u, v, p)(x, y, 0) =
    (1.1, 0.0, 0.0, 1.1),          0.5 ≤ x ≤ 1.0, 0.5 ≤ y ≤ 1.0,
    (0.5065, 0.8939, 0.0, 0.35),   0.0 ≤ x ≤ 0.5, 0.5 ≤ y ≤ 1.0,
    (1.1, 0.8939, 0.8939, 1.1),    0.0 ≤ x ≤ 0.5, 0.0 ≤ y ≤ 0.5,
    (0.5065, 0.0, 0.8939, 0.35),   0.5 ≤ x ≤ 1.0, 0.0 ≤ y ≤ 0.5.

The transmission boundary condition is used on all boundaries. The numerical solutions are calculated using the considered WENO schemes on 800 × 800 cells, and the computations proceed to t = 0.25.
In Fig. 22, we have shown the numerical results of density obtained by using the WENO-JS, WENO-M, WENO-PM6, WENO-IM(2, 0.1), MIP-WENO-ACMk and MOP-WENO-ACMk schemes. We can see that all considered schemes can capture the main structure of the solution. However, we can also observe that there are obvious numerical oscillations (as marked by the pink boxes), which are unfavorable for the fidelity of the results, in the solutions of the WENO-M, WENO-PM6, WENO-IM(2, 0.1) and MIP-WENO-ACMk schemes. These numerical oscillations can be seen more clearly from the cross-sectional slices of density profile along the plane y = 0.5 as presented in Fig. 23, where the reference solution is obtained by using the WENO-JS scheme with a uniform mesh size of 3000 × 3000. Noticeably, there are almost no numerical oscillations in the solutions of the MOP-WENO-ACMk and WENO-JS schemes, and this should be an advantage of the mapped WENO schemes whose mapping functions are OP.
Example 6. (Shock-vortex interaction)
We solve the shock-vortex interaction problem [4,23,25] that consists of the interaction of a left moving shock wave with a right moving vortex. The initial condition is given by
(ρ, u, v, p)(x, y, 0) = { U_L, x < 0.5;  U_R, x ≥ 0.5, }
where the left state is taken as U_L = (ρ_L, u_L, v_L, p_L) = (1, √γ, 0, 1), and the right state U_R = (ρ_R, u_R, v_R, p_R) is given by

p_R = 1.3,  ρ_R = ρ_L ((γ − 1) + (γ + 1)p_R)/((γ + 1) + (γ − 1)p_R),  u_R = u_L + √2 (1 − p_R)/√(γ − 1 + p_R(γ + 1)),  v_R = 0.
A vortex given by the following perturbations is superimposed onto the left state U_L:

δρ = (ρ_L²/((γ − 1)p_L)) δT,  δu = ε ((y − y_c)/r_c) e^{α(1−r²)},  δv = −ε ((x − x_c)/r_c) e^{α(1−r²)},  δp = (γρ_L²/((γ − 1)ρ_L)) δT,

where ε = 0.3, r_c = 0.05, α = 0.204, x_c = 0.25, y_c = 0.5, r = √((x − x_c)² + (y − y_c)²)/r_c and δT = −(γ − 1)ε² e^{2α(1−r²)}/(4αγ). The transmissive boundary condition is used on all boundaries.

The problem has been calculated by the considered WENO schemes with a uniform mesh size of 800 × 800 and the output time is taken as t = 0.35. The final structures of the shock and vortex in the density profile are shown in Fig. 24. It is observed that all the considered WENO schemes perform well in capturing the main structure of the shock and vortex after the interaction. We can see that there are clear numerical oscillations in the solutions of the WENO-IM(2, 0.1) and MIP-WENO-ACMk schemes, and numerical oscillations can also be observed in the solutions of the WENO-M and WENO-PM6 schemes, although they are not as severe as those of the WENO-IM(2, 0.1) and MIP-WENO-ACMk schemes. However, in the solutions of the MOP-WENO-ACMk and WENO-JS schemes, we almost do not find numerical oscillations. To further demonstrate this, we have plotted the cross-sectional slices of the density profile along the plane y = 0.65 in Fig. 25. The reference solution is obtained using the WENO-JS scheme with a uniform mesh size of 1600 × 1600. It is evident that the MIP-WENO-ACMk scheme produces the numerical oscillations with the biggest amplitudes, followed by those of the WENO-IM(2, 0.1) scheme. The WENO-PM6 and WENO-M schemes also generate clear numerical oscillations with amplitudes slightly smaller than that of the WENO-IM(2, 0.1) scheme. Obviously, the solutions of the MOP-WENO-ACMk and WENO-JS schemes generate almost no numerical oscillations, or only some imperceptible ones, and their solutions are closest to the reference solution. Again, we argue that this should be an advantage of the mapped WENO schemes whose mapping functions are OP.
Conclusions
This paper has proposed a new mapped weighted essentially non-oscillatory scheme named MOP-WENO-ACMk. The motivation for designing this new scheme is that: (1) the WENO-JS and WENO-M schemes generate numerical solutions with very low resolution when solving hyperbolic problems with discontinuities for long output times; (2) although various existing improved mapped WENO schemes can successfully address this drawback, as far as we know, almost all of them introduce unfavorable spurious oscillations because a non-OP mapping process occurs in their mappings. By introducing a set of mapping functions that is order-preserving (OP), the MOP-WENO-ACMk scheme can prevent the non-OP mapping process, which should be the essential cause of the spurious oscillation generation and potential loss of accuracy. Therefore, the MOP-WENO-ACMk scheme has a significant advantage in that it not only obtains comparably high resolution but also avoids generating spurious oscillations when solving problems with discontinuities, especially for long output times. Numerical experiments have shown that the proposed scheme yields lower dissipation and higher resolution near discontinuities than the WENO-JS and WENO-M schemes, especially for long output times, and it enjoys better robustness than the WENO-PM6, WENO-IM(2, 0.1) and MIP-WENO-ACMk schemes.
Lemma 3. The mapping function g^{IM}_s(ω; k, A) defined by Eq.(10) satisfies:
Fig. 1. The mapping functions of the WENO-PM6, WENO-IM(2, 0.1) and WENO-IM(2, 0.5) schemes, d_0 = 0.1, d_1 = 0.6, d_2 = 0.3.
Fig. 2. Performance of the fifth-order WENO-PM6, WENO-IM(2, 0.1) and WENO-IM(2, 0.5) schemes for SLP with N = 400 at long output time t = 200.

Table 1. The L_1, L_2, L_∞ errors for SLP with N = 400 at long output time t = 200, computed by the WENO-PM6, WENO-IM(2, 0.1) and WENO-IM(2, 0.5) schemes.
Fig. 3. The mapping functions of the test schemes shown in Table 2.
Fig. 4. Performance of the test schemes shown in Table 2 for the SLP with N = 400 at long output time t = 200.
Fig. 5. The real-time mapping relationship g^{PM6}(ω) ∼ ω of the SLP. A uniform mesh size of N = 400 and two output times t = 2 (left) and t = 200 (right) are used.

Fig. 6. The real-time mapping relationship g^{IM(2,0.1)}(ω) ∼ ω of the SLP. A uniform mesh size of N = 400 and two output times t = 2 (left) and t = 200 (right) are used.

Fig. 7. The real-time mapping relationship g^{ACMk}(ω) ∼ ω of the SLP. A uniform mesh size of N = 400 and two output times t = 2 (left) and t = 200 (right) are used.
Table 4. The mapping results of the SLP on highlighted non-OP points, computed by WENO-PM6, WENO-IM(2, 0.1) and MIP-WENO-ACMk, with a uniform mesh size of N = 400 and output times t = 2, 200.
Fig. 8. The non-OP points in the numerical solutions of the WENO-PM6 scheme. A uniform mesh size of N = 400 is used and the output time is t = 2.

Fig. 9. The non-OP points in the numerical solutions of the WENO-PM6 scheme. A uniform mesh size of N = 400 is used and the output time is t = 200.

Fig. 10. Schematic of fifth-order WENO stencils with an isolated discontinuity on one of the substencils.
Lemma 6. The mapping function g^{MOP-ACMk}(ω) defined by Eq.(32) is order-preserving (OP).
Fig. 11. The mapping function of the MOP-WENO-ACMk scheme with CFS_0 = 0.04, CFS_1 = 0.92, k_0 = 2.5, k_1 = 5.
Fig. 12. Performance of the MOP-WENO-ACMk, MIP-WENO-ACMk, WENO-JS and WENO-M schemes for Example 3 at output time t = 1000 with a uniform mesh size of ∆x = 1/200.
Fig. 13. Performance of the fifth-order MOP-WENO-ACMk, MIP-WENO-ACMk, WENO-JS, WENO-M, WENO-PM6 and WENO-IM(2, 0.1) schemes for the SLP with N = 800 at output time t = 2000.
Fig. 14. Performance of the fifth-order MOP-WENO-ACMk, MIP-WENO-ACMk, WENO-JS, WENO-M, WENO-PM6 and WENO-IM(2, 0.1) schemes for the BiCWP with N = 800 at long output time t = 2000.

Fig. 15. Performance of the fifth-order MOP-WENO-ACMk, MIP-WENO-ACMk, WENO-JS, WENO-M, WENO-PM6 and WENO-IM(2, 0.1) schemes for the SLP with N = 1600 at long output time t = 200.
Fig. 16. Performance of the fifth-order MOP-WENO-ACMk, MIP-WENO-ACMk, WENO-JS, WENO-M, WENO-PM6 and WENO-IM(2, 0.1) schemes for the SLP with N = 3200 at long output time t = 200.

Fig. 17. Performance of the fifth-order MOP-WENO-ACMk, MIP-WENO-ACMk, WENO-JS, WENO-M, WENO-PM6 and WENO-IM(2, 0.1) schemes for the SLP with N = 6400 at long output time t = 200.
Fig. 18. Performance of the fifth-order MOP-WENO-ACMk, MIP-WENO-ACMk, WENO-JS, WENO-M, WENO-PM6 and WENO-IM(2, 0.1) schemes for the BiCWP with N = 1600 at long output time t = 200.

Fig. 19. Performance of the fifth-order MOP-WENO-ACMk, MIP-WENO-ACMk, WENO-JS, WENO-M, WENO-PM6 and WENO-IM(2, 0.1) schemes for the BiCWP with N = 3200 at long output time t = 200.
Fig. 20. Performance of the fifth-order MOP-WENO-ACMk, MIP-WENO-ACMk, WENO-JS, WENO-M, WENO-PM6 and WENO-IM(2, 0.1) schemes for the BiCWP with N = 6400 at long output time t = 200.
Fig. 21. The non-OP points in the numerical solutions of SLP computed by the WENO-M and MOP-WENO-ACMk schemes with N = 3200, t = 200, and the non-OP points in the numerical solutions of BiCWP computed by the MIP-WENO-ACMk and MOP-WENO-ACMk schemes with N = 6400, t = 200.
Fig. 22. Density plots for the 2D Riemann problem using 30 contour lines with range from 0.5 to 1.9, computed using the WENO-JS, WENO-M, WENO-PM6, WENO-IM(2, 0.1), MIP-WENO-ACMk and MOP-WENO-ACMk schemes.
Fig. 23. The cross-sectional slices of density plot along the plane y = 0.5, computed using the WENO-JS, WENO-M, WENO-PM6, WENO-IM(2, 0.1), MIP-WENO-ACMk and MOP-WENO-ACMk schemes.
Fig. 24. Density plots for the shock-vortex interaction using 30 contour lines with range from 0.9 to 1.4, computed using the WENO-JS, WENO-M, WENO-PM6, WENO-IM(2, 0.1), MIP-WENO-ACMk and MOP-WENO-ACMk schemes.

Fig. 25. The cross-sectional slices of density plot along the plane y = 0.65, computed using the WENO-JS, WENO-M, WENO-PM6, WENO-IM(2, 0.1), MIP-WENO-ACMk and MOP-WENO-ACMk schemes.
Table 3. The L1, L2, L∞ errors for the SLP with N = 400 at long output time t = 200, computed by the test schemes shown in Table 2.

Schemes, ts-i   L1 error          L2 error          L∞ error
ts-1            5.71367e-02(5)    1.06259e-01(5)    4.76278e-01(3)
ts-2            6.80647e-02(1)    1.09276e-01(1)    4.91997e-01(1)
ts-3            5.91473e-02(4)    1.07220e-01(4)    4.89545e-01(2)
ts-4            5.55635e-02(6)    1.04530e-01(6)    4.52208e-01(6)
ts-5            6.60760e-02(3)    1.08378e-01(2)    4.74908e-01(4)
ts-6            6.62020e-02(2)    1.08289e-01(3)    4.69467e-01(5)
Table 5. The computing conditions and results for comparison.

                                                 WENO-PM6, Point C1           WENO-IM(2,0.1), Point A2     MIP-WENO-ACMk, Point C3
(ω0^JS, ω1^JS, ω2^JS)                            (0.37291, 0.53663, 0.09046)  (0.57568, 0.38416, 0.04016)  (0.54547, 0.39684, 0.05769)
(ω0^non-OP, ω1^non-OP, ω2^non-OP)                (0.10939, 0.64825, 0.24236)  (0.14069, 0.59737, 0.26194)  (0.10000, 0.60000, 0.30000)
(ω0^OP, ω1^OP, ω2^OP)                            (0.24236, 0.64825, 0.10939)  (0.59737, 0.26194, 0.14069)  (0.60000, 0.30000, 0.10000)
(u_{j+1/2}^JS, u_{j+1/2}^non-OP, u_{j+1/2}^OP)   (0.81908, 0.51528, 0.78122)  (0.91968, 0.47611, 0.71862)  (0.88462, 0.40000, 0.80000)
|u_{j+1/2}^JS − u(x_{j+1/2})|                    0.18092(18.09%)              0.08031(8.03%)               0.11538(11.54%)
|u_{j+1/2}^non-OP − u(x_{j+1/2})|                0.48472(48.47%)              0.52389(52.39%)              0.60000(60.00%)
|u_{j+1/2}^OP − u(x_{j+1/2})|                    0.21878(21.88%)              0.28138(28.14%)              0.20000(20.00%)
In Figs. 15, 16, 17 and Figs. 18, 19, 20, we show the comparison of various schemes when t = 200 and N = 1600, 3200, 6400 for SLP and BiCWP, respectively. We also find that, when solving BiCWP, the WENO-IM(2, 0.1) scheme generates spurious oscillations.
Table 9. Performance of various considered schemes solving u t + u x = 0 with u(x, 0) = sin 9 (πx), ∆x = 1/800.
MIP-WENO-ACMk   8.75629e-02(-)
WENO-IM(2,0.1)  4.40293e-02(-)  2.02331e-02(1.1217)  1.01805e-02(0.9909)  2.17411e-01(-)  1.12590e-01(0.9493)  5.18367e-02(1.1190)
MIP-WENO-ACMk   4.45059e-02(-)  2.03667e-02(1.1278)  1.02183e-02(0.9951)  2.21312e-01(-)  1.10365e-01(1.0038)  4.76589e-02(1.2115)  9.24356e-02(-)  6.70230e-02(0.4638)  4.96081e-02(0.4341)  2.28433e-01(-)
From these solutions computed with larger grid numbers and a shortened but still long output time, we observe that, as the grid number increases: (1) the WENO-IM(2, 0.1) scheme generates spurious oscillations but the MOP-WENO-ACMk scheme does not, while providing an improved resolution, when solving SLP; (2) although the resolutions of the results computed by the WENO-JS and WENO-M schemes are significantly improved for both SLP and BiCWP, the MOP-WENO-ACMk scheme still evidently provides better resolutions than those of these two schemes; (3) the spurious oscillations generated by the WENO-PM6, WENO-IM(2, 0.1) and MIP-WENO-ACMk schemes appear to be more evident and more intense when the grid number gets larger, while the MOP-WENO-ACMk scheme can still prevent generating spurious oscillations and obtain a great improvement of the resolution, when solving both SLP and BiCWP. As examples, in Fig. 21, we present the non-OP points in the numerical solutions of SLP with a uniform mesh size of N = 3200 computed by the WENO-M and MOP-WENO-ACMk schemes, and the non-OP points in the numerical solutions of BiCWP with a uniform mesh size of N = 6400 computed by the MIP-WENO-ACMk and MOP-WENO-ACMk schemes. We can see that there are a great many non-OP points in the solutions of the SLP computed by the WENO-M scheme and in the solutions of the BiCWP computed by the MIP-WENO-ACMk scheme, while the numbers of the non-OP points in the solutions of these two cases computed by the MOP-WENO-ACMk scheme
NEGLIGIBLE VARIATION AND THE CHANGE OF VARIABLES THEOREM

Vermont Rutherfoord, Yoram Sagher

19 Dec 2012 · arXiv:1212.4574 · doi:10.1512/iumj.2012.61.4532

In this note we prove a necessary and sufficient condition for the change of variables formula for the HK integral, with implications for the change of variables formula for the Lebesgue integral. As a corollary, we obtain a necessary and sufficient condition for the Fundamental Theorem of Calculus to hold for the HK integral.
(1)  ∫_{g(α)}^{g(β)} f(u) du = ∫_α^β f(g(s)) g′(s) ds

holds for all α, β in [a, b] if and only if F • g is absolutely continuous, where F(x) := ∫_{g(a)}^x f(u) du.
K. Krzyzewski [Krz1] and G. Goodman [Goodman] proved several sufficient but not necessary conditions for (1) to hold for the Denjoy and Perron integrals. Both integrals are equivalent to the Henstock-Kurzweil (HK) integral, for which we establish necessary and sufficient conditions for the change of variables theorem. As a consequence, we obtain the optimal condition under which (1) holds for a fixed α, β, without the requirement that it hold for every subinterval. Furthermore, we show that even when ∫_{g(α)}^{g(β)} f(u) du is a Lebesgue integral, (1) holds under weaker conditions than those of Theorem 1.
Preliminaries
For excellent presentations of the HK integral, see [Bartle], [Lee & Vyborny], and [Gordon]. For the reader's convenience, we include the basic definitions necessary to follow the exposition below.
We denote the closed interval [a_j, b_j] by I_j and set |I_j| = b_j − a_j.

Definition 1. A tagged partition P of [a, b], denoted P[a, b], is a finite set of the form {(x_j, I_j) : 1 ≤ j ≤ n} such that ⋃_{j=1}^n I_j = [a, b], x_j ∈ I_j for all j, and i ≠ j implies that (a_i, b_i) ∩ (a_j, b_j) = ∅.
Definition 2. A gauge for the interval [a, b] is a function from [a, b] to the positive real numbers, R + .
Definition 3. A tagged partition P [a, b] = {(x j , I j ) : 1 ≤ j ≤ n} is said to be subordinate to the gauge δ if I j ⊆ (x j − δ (x j ) , x j + δ (x j )) for every j.
Definition 4. Given f : [a, b] → R and a tagged partition P[a, b] = {(x_j, I_j) : 1 ≤ j ≤ n}, R_P f := Σ_{j=1}^n f(x_j)·|I_j| is called their Riemann sum.
Definition 5. A function f : [a, b] → R is said to be HK integrable over [a, b] if there exists a number, (HK)∫_a^b f(x) dx, so that for any ǫ > 0 there exists a gauge δ on [a, b] such that any tagged partition P[a, b] subordinate to δ satisfies |R_P f − (HK)∫_a^b f(x) dx| < ǫ. We also define (HK)∫_b^a f(x) dx = −(HK)∫_a^b f(x) dx.
Since two integrands which differ only on a set of measure zero have the same HK integral [Lee & Vyborny, Theorem 2.5.6], we adopt the convention that, given a function f that is defined almost everywhere, its HK integral is the integral of the function that is equal to f where f is defined, and is 0 where f is not defined.
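The gauge-and-partition machinery in Definitions 1-5 can be illustrated numerically. The sketch below is illustrative only (all names are mine): it uses a constant gauge, for which subordinate partitions are simply partitions of small mesh and HK integrability reduces to Riemann integrability, and evaluates the Riemann sum of Definition 4 over a uniform midpoint-tagged partition:

```python
def riemann_sum(f, tagged_partition):
    # R_P f = sum over (tag, interval) of f(tag) * |interval|  (Definition 4)
    return sum(f(x) * (b - a) for x, (a, b) in tagged_partition)

def uniform_midpoint_partition(a, b, n):
    # A tagged partition of [a, b] with midpoint tags; it is subordinate to any
    # constant gauge delta > (b - a) / (2 n), since each I_j lies within h/2
    # of its tag x_j.
    h = (b - a) / n
    return [(a + (j + 0.5) * h, (a + j * h, a + (j + 1) * h)) for j in range(n)]

P = uniform_midpoint_partition(0.0, 1.0, 1000)
R = riemann_sum(lambda x: x * x, P)
# For f(x) = x^2 on [0, 1] these Riemann sums approach 1/3 as the gauge shrinks.
```

The point of the HK definition is that δ may vary from point to point; a constant gauge is just the simplest case.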
Definition 6. A function f has negligible variation on a set E ⊆ [a, b] if for any ǫ > 0 there exists a gauge δ on [a, b] such that for any tagged partition P[a, b] = {(x_j, I_j) : 1 ≤ j ≤ n} subordinate to δ, Σ_{x_j∈E} |f(b_j) − f(a_j)| < ǫ.
From this point on, we will denote ∆_j f = f(b_j) − f(a_j).
The definition of negligible variation, together with a broad range of applications, was introduced in [Vyborny]. One of them is the following theorem, which was proven in this necessary and sufficient form in Theorem 5.12 of [Bartle].
Theorem 2 (Fundamental Theorem of Calculus for the HK Integral). F(x) − F(a) = (HK)∫_a^x f(s) ds for all x ∈ [a, b] if and only if there exists a set E ⊆ [a, b] such that F′(x) = f(x) for all x ∈ E and [a, b]\E is a set of measure zero on which F has negligible variation.
In the context of the HK integral, negligible variation plays a role analogous to that played by absolute continuity in the context of the Lebesgue integral. The following relationship between the two is proved in Corollary 14.8 of [Bartle].
Theorem 3. F is absolutely continuous on [a, b] if and only if F is of bounded variation on [a, b] and F has negligible variation on every subset of [a, b] that has measure zero.
Change of Variables on a Single Interval
Definition 7. A function f has negligible conditional variation on a set E ⊆ [a, b] if for any ǫ > 0 there exists a gauge δ on [a, b] such that for any tagged partition P[a, b] = {(x_j, I_j) : 1 ≤ j ≤ n} subordinate to δ, |Σ_{x_j∈E} ∆_j f| < ǫ.
Some functions may have negligible conditional variation but not negligible variation on the set of points where they fail to be differentiable; a simple example is the indicator function of an open interval contained in [a, b]. We will present a continuous function with this property in Example 1 of Section 3.
The same examples show that, although a function that has negligible variation on a set also has negligible variation on all its subsets, this is not true for negligible conditional variation.
We will need the following theorem, which was proven in [Krz1] and [S&V]. For a stronger version of the theorem, see Theorem 7.
Theorem 4. If g has a derivative (finite or infinite) on a set E and g (E) has measure zero, then g ′ = 0 almost everywhere on E.
Lemma 1. Assume that both g : [a, b] → D and F : D → R have derivatives almost everywhere and that f = F ′ almost everywhere. Then g ′ (x) = 0 at almost every x ∈ [a, b] where the equality
(2) (F • g) ′ (x) = (f • g · g ′ ) (x)
fails, that is to say where (2) is false or either side is undefined.
Proof. Let Z be the null set where F does not have a derivative equal to f. By Theorem 4, g′(x) = 0 for almost every x ∈ g^{−1}(Z). On the complement of g^{−1}(Z), (2) holds at all x where g′(x) exists, and so almost everywhere.
Theorem 5. Assume that g : [a, b] → R has a derivative almost everywhere and that f : R → R is HK integrable on every interval with endpoints in the range of g.² Define F(x) := (HK)∫_{g(a)}^x f(u) du. Then (f • g) · g′ is HK integrable on [a, b] and the change of variables formula

(HK)∫_{g(a)}^{g(b)} f(u) du = (HK)∫_a^b f(g(s)) g′(s) ds

holds if and only if F • g has negligible conditional variation on the set where (F • g)′ = f • g · g′ fails.

² If the HK integral of f exists on an interval then it also exists on every subinterval [Bartle, Corollary 3.8]. It is therefore sufficient to require that f be HK integrable over an interval containing the range of g.
Proof. Let B be the set where (F • g)′ = f • g · g′ fails and assume F • g has negligible conditional variation there. Let h(x) = 0 if x ∈ B and h(x) = g′(x) otherwise.

By Theorem 2, F has a derivative equal to f almost everywhere, and so by Lemma 1, g′ = h = 0 almost everywhere on B. Consequently,

(HK)∫_a^b (f • g · g′)(s) ds = (HK)∫_a^b (f • g · h)(s) ds.

Since F • g has a derivative on the complement of B, there exists for any ǫ > 0 a function η_ǫ : [a, b]\B → R⁺ such that if y ∈ [x − η_ǫ(x), x + η_ǫ(x)] ∩ [a, b] then

|(F • g)′(x)·(y − x) − ((F • g)(y) − (F • g)(x))| < ǫ|y − x|/(b − a).

Also, because F • g has negligible conditional variation on B, there exists a gauge δ₁ on [a, b] such that for any tagged partition P[a, b] = {(x_j, I_j) : 1 ≤ j ≤ n} subordinate to δ₁, |Σ_{x_j∈B} ∆_j(F • g)| < ǫ/2. Let δ be a gauge on [a, b] so that δ(x) = η_{ǫ/2}(x) if x ∉ B and δ(x) = δ₁(x) if x ∈ B.

Consider a tagged partition P[a, b] = {(x_j, I_j) : 1 ≤ j ≤ n} subordinate to δ. The Riemann sum of f • g · h corresponding to this tagged partition is

R_P(f • g · h) = Σ_{x_j∈B} (f • g · h)(x_j)·|I_j| + Σ_{x_j∉B} (f • g · h)(x_j)·|I_j|
  = Σ_{x_j∉B} (F • g)′(x_j)·|I_j|    (the first sum vanishes since h = 0 on B)
  = Σ_{x_j∉B} [(F • g)′(x_j)·|I_j| − ∆_j(F • g)] + Σ_{x_j∉B} ∆_j(F • g)
  = Σ_{x_j∉B} [(F • g)′(x_j)·|I_j| − ∆_j(F • g)] + Σ_{x_j∈[a,b]} ∆_j(F • g) − Σ_{x_j∈B} ∆_j(F • g)
  = Σ_{x_j∉B} [(F • g)′(x_j)·|I_j| − ∆_j(F • g)] + (HK)∫_{g(a)}^{g(b)} f(u) du − Σ_{x_j∈B} ∆_j(F • g).

And so

(3)  R_P(f • g · h) − (HK)∫_{g(a)}^{g(b)} f(u) du = Σ_{x_j∉B} [(F • g)′(x_j)·|I_j| − ∆_j(F • g)] − Σ_{x_j∈B} ∆_j(F • g).

Since F • g has negligible conditional variation on B and δ is chosen accordingly,

|Σ_{x_j∈B} ∆_j(F • g)| < ǫ/2.

Also, for any x_j ∉ B,

|(F • g)′(x_j)·|I_j| − ∆_j(F • g)| < ǫ|I_j|/(2(b − a)),

and so |Σ_{x_j∉B} [(F • g)′(x_j)·|I_j| − ∆_j(F • g)]| < ǫ/2. Therefore

|R_P(f • g · h) − (HK)∫_{g(a)}^{g(b)} f(u) du| < ǫ,

proving that (HK)∫_a^b f(g(s)) g′(s) ds exists and is equal to (HK)∫_{g(a)}^{g(b)} f(u) du.

Conversely, choose ǫ > 0, let B, h, and η_ǫ be defined as above, and assume (HK)∫_{g(a)}^{g(b)} f(u) du = (HK)∫_a^b f(g(s)) h(s) ds. Thus there exists a gauge δ₁ on [a, b] so that for any tagged partition P subordinate to δ₁,

(4)  |R_P(f • g · h) − (HK)∫_{g(a)}^{g(b)} f(u) du| < ǫ/2.

Let δ be a gauge on [a, b] so that δ(x) = min{δ₁(x), η_{ǫ/2}(x)} if x ∉ B, and δ(x) = δ₁(x) if x ∈ B. Choose any tagged partition P[a, b] = {(x_j, [a_j, b_j]) : 1 ≤ j ≤ n} subordinate to δ. Consequently it is also subordinate to δ₁, and so (4) holds. By (3),

|Σ_{x_j∉B} [(F • g)′(x_j)·|I_j| − ∆_j(F • g)] − Σ_{x_j∈B} ∆_j(F • g)| < ǫ/2.

Also, |Σ_{x_j∉B} [(F • g)′(x_j)·|I_j| − ∆_j(F • g)]| < ǫ/2. Therefore

|Σ_{x_j∈B} ∆_j(F • g)| < ǫ,

proving that F • g has negligible conditional variation on B.
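In the smooth, non-pathological case covered by Theorem 5 the change of variables formula can be sanity-checked numerically. The sketch below is an illustration only (not part of the proof): it verifies by midpoint quadrature that, for f(u) = cos u and g(s) = s², the two sides of the formula agree, both being equal to sin(1):

```python
import math

def midpoint_integral(f, a, b, n=2000):
    # Plain midpoint-rule quadrature; adequate for these smooth integrands.
    h = (b - a) / n
    return sum(f(a + (j + 0.5) * h) for j in range(n)) * h

f = math.cos                  # integrand f(u)
g = lambda s: s * s           # substitution g(s), with g'(s) = 2 s

lhs = midpoint_integral(f, g(0.0), g(1.0))                      # integral of f over [g(0), g(1)]
rhs = midpoint_integral(lambda s: f(g(s)) * 2.0 * s, 0.0, 1.0)  # integral of f(g(s)) g'(s) over [0, 1]
```

Here F • g is continuously differentiable, so the negligible-conditional-variation hypothesis is vacuous; the interest of Theorem 5 is precisely the cases where it is not.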
By taking F (x) = x, we obtain as a corollary the following necessary and sufficient condition for the Fundamental Theorem of Calculus for the HK integral to hold for a particular interval, rather than all subintervals.
Corollary 1. Assume that g : [a, b] → R is differentiable almost everywhere on [a, b]. Then g′ is HK integrable on [a, b] and g(b) − g(a) = (HK)∫_a^b g′(s) ds if and only if g has negligible conditional variation on the set where it is not differentiable.
Theorem 5 complements Theorem 1 by obtaining a necessary and sufficient condition for change of variables to hold on a single interval. However, even when one side of (1) is a Lebesgue integral, the integral on the other side sometimes must be taken in the HK sense. For example, take g as any function that is not an indefinite Lebesgue integral but which satisfies the condition of Corollary 1 and let f (x) = 1.
Change of Variables on All Subintervals
The following recasting of the Saks-Henstock Lemma for negligible variation provides a corollary to Theorem 5 where change of variables holds for each subinterval and, in this sense, provides the precise HK analog of the Serrin and Varberg theorem.
Lemma 2. Let f be a real-valued function on [a, b] and E a subset of [a, b]. Assume that f has negligible conditional variation on E∩[α, β] for every [α, β] ⊆ [a, b]. Then f has negligible variation on E.
Proof. Choose ǫ > 0 and let δ be a gauge on [a, b] such that for any tagged partition P[a, b] = {(x_j, I_j) : 1 ≤ j ≤ n} subordinate to δ,

|Σ_{x_j∈E} ∆_j f| < ǫ.

Fix such a tagged partition P and choose ǫ′ > 0. Since f has negligible conditional variation on E ∩ I_j for each I_j, there exists a gauge δ_j such that if {(x_{j,k}, I_{x_{j,k}}) : 1 ≤ k ≤ n_j} is a tagged partition of [a_j, b_j] subordinate to δ_j, then |Σ_{x_{j,k}∈E} ∆_{j,k} f| < ǫ′/n. Let Q_j = {(x_{j,k}, I_{x_{j,k}}) : 1 ≤ k ≤ n_j} be a tagged partition of I_j subordinate to min{δ, δ_j}. Also let L be the subset of {1, . . . , n} such that if j ∈ L then ∆_j f ≥ 0. Now let R = {(x_j, I_j) : j ∈ L} ∪ ⋃_{j∉L} Q_j. Since R is a tagged partition of [a, b] subordinate to δ,

ǫ > |Σ_{(x,[α,β])∈R, x∈E} (f(β) − f(α))| = |Σ_{j∈L, x_j∈E} ∆_j f + Σ_{j∉L} Σ_{x_{j,k}∈E} ∆_{j,k} f|.

Also, because each Q_j is subordinate to δ_j,

ǫ′ > Σ_{j∉L} |Σ_{x_{j,k}∈E} ∆_{j,k} f| ≥ |Σ_{j∉L} Σ_{x_{j,k}∈E} ∆_{j,k} f|.

Consequently,

ǫ + ǫ′ > |Σ_{j∈L, x_j∈E} ∆_j f| = Σ_{j∈L, x_j∈E} |∆_j f|.

Since the choice of ǫ′ > 0 was arbitrary, it must be true that ǫ ≥ Σ_{j∈L, x_j∈E} |∆_j f|. Similarly, ǫ ≥ Σ_{j∉L, x_j∈E} |∆_j f|. Therefore

2ǫ ≥ Σ_{x_j∈E} |∆_j f|,

proving that f has negligible variation on E.
Corollary 2. Assume that g : [a, b] → R is differentiable almost everywhere and that f : R → R is HK integrable on every interval with endpoints in the range of g. Define F(x) := (HK)∫_{g(a)}^x f(u) du. Then (f • g) · g′ is HK integrable on every [α, β] ⊆ [a, b] and the change of variables formula (1) holds for all α, β in [a, b] if and only if F • g has negligible variation on the set where (F • g)′ = f • g · g′ fails.
The necessary and sufficient condition that F • g have negligible variation on the set where (F • g) ′ = f • g · g ′ fails is clearly justified by the preceding lemma; in Theorem 6 of the next section, we prove that it is equivalent to the condition that F • g have negligible variation on each null set and on the set where g ′ = 0.
Remark 1. A tempting possibility to investigate is whether HK integrals automatically satisfy the requirements of the substituting function in the change of variables formula. In other words, if g is an indefinite HK integral, is it true for any HK integrable f that (HK)∫_{g(α)}^{g(β)} f(u) du = (HK)∫_α^β f(g(s)) g′(s) ds? If this were true, then the composition of two indefinite HK integrals would be an indefinite HK integral. However, this is not the case, as the following two indefinite Lebesgue integrals (and hence also HK integrals) fail to have a composition which is an HK integral.

Let S be the Smith-Volterra-Cantor set of measure 1/2 [Bressoud], constructed on a unit interval through the usual method of deleting an open interval of length 4^{−n} from the center of each of the 2^{n−1} intervals at step n, leaving a closed nowhere-dense set of positive measure in the limit. Let G(x) = dist(x, S) and F(x) = x^{1/4}. Consequently F and G are both indefinite Riemann integrals, yet for any x ∈ S and n ∈ N there exists y ∈ (x − 2^{−n}, x + 2^{−n}) such that [y − 2^{−2n−3}, y + 2^{−2n−3}] ⊆ S^c. Thus

|F(G(y)) − F(G(x))| / |y − x| = F(G(y))/|y − x| > (2^{−2n−3})^{1/4} / 2^{−n} = (2^{2n−3})^{1/4}.

Therefore F • G has no derivative on S, a set of positive measure, and so cannot be the indefinite HK integral of any function. Note that G is differentiable except on the endpoints and midpoint of each deleted interval, which is a countable set, and that the construction of G could be altered so that those points are differentiable as well, with the same result.
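The Smith-Volterra-Cantor construction of Remark 1 can be carried out explicitly for finitely many steps. The sketch below is illustrative (function and variable names are mine): it builds the closed intervals remaining after N deletion steps and checks that their total length is 1/2 + 2^{−N−1}, which tends to the stated measure 1/2:

```python
def svc_intervals(N):
    # Smith-Volterra-Cantor ("fat Cantor") construction on [0, 1]: at step n,
    # delete an open middle interval of length 4**(-n) from each of the
    # 2**(n-1) remaining closed intervals.
    intervals = [(0.0, 1.0)]
    for n in range(1, N + 1):
        cut = 4.0 ** (-n)
        nxt = []
        for a, b in intervals:
            mid = (a + b) / 2.0
            nxt.append((a, mid - cut / 2.0))
            nxt.append((mid + cut / 2.0, b))
        intervals = nxt
    return intervals

ints = svc_intervals(10)
measure = sum(b - a for a, b in ints)
# Removed measure after N steps: sum_{n=1}^{N} 2**(n-1) * 4**(-n) = (1 - 2**(-N))/2,
# so the remaining measure is 1/2 + 2**(-N-1), which tends to 1/2.
```

The endpoints of the remaining intervals all lie in S, which is why dist(x, S) on a deleted interval is just the distance to the nearer endpoint.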
Negligible Variation
Example 1. The following is a continuous function that has negligible conditional variation but not negligible variation on a set.

Let C denote the Cantor set and c the Cantor-Lebesgue function on [0, 1]. Let D = C ∪ (−C). Let us denote

δ_D(x) = 1 if x ∈ D, and δ_D(x) = dist(x, D) if x ∉ D.

If {(x_j, I_j) : 1 ≤ j ≤ n} is a tagged partition of [−1, 1] subordinate to δ_D, then each I_j whose tag x_j lies outside D is contained in a single complementary interval of D, on which c(|·|) is constant, so ∆_j c(|·|) = 0 whenever x_j ∉ D. Hence

0 = |c(|1|) − c(|−1|)| = |Σ_{x_j∈D} ∆_j c(|·|) + Σ_{x_j∉D} ∆_j c(|·|)| = |Σ_{x_j∈D} ∆_j c(|·|)|.

Thus c(|·|) has negligible conditional variation on D.

Suppose there exists a gauge δ so that for every P[−1, 1] = {(x_j, I_j) : 1 ≤ j ≤ n} subordinate to δ, Σ_{x_j∈D} |∆_j c(|·|)| < 1. Form a tagged partition P[−1, 1] = {(x_j, I_j) : 1 ≤ j ≤ n} as a union of Q₁[−1, 0] and Q₂[0, 1], each subordinate to min{δ_D(x), δ(x)}. These three partitions are a fortiori partitions subordinate to δ. Then, as above, the sums over tags x_j ∉ D vanish, and since c(|·|) is monotone on each of [−1, 0] and [0, 1],

2 = |c(|−1|) − c(|0|)| + |c(|0|) − c(|1|)| = Σ_{x_j∈D, x_j∈[−1,0]} |∆_j c(|·|)| + Σ_{x_j∈D, x_j∈[0,1]} |∆_j c(|·|)| = Σ_{x_j∈D} |∆_j c(|·|)|,

contradicting Σ_{x_j∈D} |∆_j c(|·|)| < 1. Therefore c(|·|) does not have negligible variation on D.
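The Cantor-Lebesgue function c used in Example 1 can be computed from the ternary expansion of its argument. The following sketch is an illustration (the truncation depth is an arbitrary choice of mine): ternary digits 0 and 2 map to binary digits 0 and 1, and the binary expansion ends at the first ternary digit 1:

```python
def cantor(x, depth=40):
    # Cantor-Lebesgue function c(x) on [0, 1] via ternary digits of x.
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    result, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3.0
        digit = int(x)          # next ternary digit, 0, 1 or 2
        x -= digit
        if digit == 1:
            return result + scale   # first digit 1: append binary digit 1 and stop
        result += scale * (digit // 2)  # map ternary 0 -> binary 0, ternary 2 -> binary 1
        scale /= 2.0
    return result
```

For instance c(1/3) = c(2/3) = c(1/2) = 1/2 and c(1/4) = 1/3, reflecting that c is constant on each complementary interval of the Cantor set.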
Conditions That Imply Negligible Variation.
Lemma 3. Let f : [a, b] → R be such that f′(x) = 0 for all x ∈ D ⊆ [a, b]. Then f has negligible variation on D.

Proof. Choose ǫ > 0. Let η_ǫ : D → R⁺ be a function such that if y ∈ [x − η_ǫ(x), x + η_ǫ(x)] ∩ [a, b] then |f(y) − f(x)| ≤ ǫ|y − x|/(b − a), and let

δ(x) = η_ǫ(x) if x ∈ D, and δ(x) = 1 if x ∉ D.

Choose a tagged partition P[a, b] = {(x_j, I_j) : 1 ≤ j ≤ n} subordinate to δ. Then

Σ_{x_j∈D} |∆_j f| ≤ (ǫ/(b − a)) Σ_{x_j∈D} |I_j| ≤ ǫ.

Lemma 4. Let Z ⊆ [a, b] be a set of measure zero and let f : [a, b] → R be such that Df(x) := lim sup_{y→x} |f(y) − f(x)|/|y − x| is finite at every x ∈ Z. Then f has negligible variation on Z.

Proof. Let ǫ > 0 and let Z_n = Z ∩ {x : Df(x) ∈ [n, n + 1)}. Also, let η₁ : Z → R⁺ be a function such that if y ∈ [x − η₁(x), x + η₁(x)] ∩ [a, b] then |f(y) − f(x)| ≤ (1 + Df(x))|y − x|. Let C_n ⊇ Z_n be open sets with measure less than ǫ/(2^{n+1}(n + 2)). Define

δ(x) = min{η₁(x), dist(x, C_n^c)} if x ∈ Z_n, and δ(x) = 1 if x ∉ Z.

Choose a tagged partition P[a, b] = {(x_j, I_j) : 1 ≤ j ≤ n} subordinate to δ. If x_j ∈ Z_n for some n, then Df(x_j) ∈ [n, n + 1), so

|∆_j f| ≤ |f(b_j) − f(x_j)| + |f(x_j) − f(a_j)| ≤ (n + 2)·|I_j|.

Since I_j ⊆ C_n whenever x_j ∈ Z_n, and the intervals I_j are non-overlapping, Σ_{x_j∈Z_n} |I_j| ≤ λ(C_n). Therefore

Σ_{x_j∈Z} |∆_j f| = Σ_{n=0}^∞ Σ_{x_j∈Z_n} |∆_j f| ≤ Σ_{n=0}^∞ (n + 2)·λ(C_n) ≤ ǫ,

proving that f has negligible variation on Z.
Clearly, if a function has negligible variation on a set N , then it has negligible conditional variation on S ⊇ N if and only if it has negligible conditional variation on S\N .
Consider Theorem 5, where B is the set where (F • g)′ = f • g · g′ fails. Let A_F and A_g be the sets where F and g fail to have derivatives, and set A = g^{−1}(A_F) ∪ A_g. By Lemma 1, g′(x) = 0 for almost every x ∈ B. Similarly, A_F and A_g have measure zero and g′ is defined almost everywhere, so g′(x) = 0 at almost every x ∈ A. Furthermore,

(F • g)′(x) = (F′ • g · g′)(x) if x ∈ B\A, and (F • g)′(x) = (f • g · g′)(x) if x ∈ A\B,

so (F • g)′ is zero almost everywhere on (B\A) ∪ (A\B).

Theorem 6. Assume that g : [a, b] → D and F : D → R have derivatives almost everywhere and that f = F′ almost everywhere. Then F • g has negligible variation on the set where (F • g)′ = f • g · g′ fails if and only if F • g has negligible variation on each null set and on the set where g′ is zero.
Proof. Let B again be the set where (F • g) ′ = f • g · g ′ fails and assume F • g has negligible variation on B. It then has negligible variation on all subsets of B as well,
including B ∩ {x : g ′ (x) = 0}. Since (F • g) ′ = f • g · g ′ on the complement of B,
then, by Lemma 3, F • g has negligible variation on {x : g ′ (x) = 0} \B. Therefore F • g has negligible variation on the set {x : g ′ (x) = 0}.
Similarly, for any null set Z, F • g will have negligible variation on Z ∩ B, since that is a subset of B. Also, by Lemma 4, F • g will have negligible variation on Z\B. Therefore F • g has negligible variation on Z.
Conversely, by Lemma 1, there exists a null set Z and a set E ⊆ {x : g ′ (x) = 0} such that B = Z ∪ E. Consequently if F • g has negligible variation on each null set, it must have it on Z in particular. Also, if it has negligible variation on {x : g ′ (x) = 0}, then it has it on its subsets such as E. Therefore F •g has negligible variation on B.
We may therefore restate Corollary 2 in the following equivalent form.
Corollary 3. Assume that g : [a, b] → R is differentiable almost everywhere and that f : R → R is HK integrable on every interval with endpoints in the range of g. Define F(x) := (HK) ∫_{g(a)}^{x} f(u) du. Then (f • g) · g′ is HK integrable on [a, b] and the change of variables formula

(HK) ∫_{g(α)}^{g(β)} f(u) du = (HK) ∫_{α}^{β} f(g(s)) g′(s) ds

holds for every [α, β] ⊆ [a, b] if and only if F • g has negligible variation on each null set and on the set where g′ is zero.
Implications of Negligible Variation.
Functions that satisfy the conditions of Theorem 4 on a set E have a derivative equal to zero almost everywhere on E, so it follows from Lemma 3 that these functions have negligible variation on a subset of E of full measure. We show next (Theorem 7, stated below) that the conclusion of Theorem 4 holds for this larger class of functions.

Proof of Theorem 7. Let D+g(x) := lim sup_{h→0+} (g(x + h) − g(x))/h and E_γ = {x ∈ E\{b} : D+g(x) > γ}. For reasons of symmetry, it suffices to show that λ(E_0) = 0. Furthermore, λ*(E_0) ≤ Σ_{n=1}^{∞} λ*(E_{1/n}), so it is sufficient to show λ*(E_γ) = 0 for every γ > 0. Choose γ, ε > 0. Let δ be a gauge such that for any tagged partition P[a, b] = {(x_j, I_j) : 1 ≤ j ≤ n} subordinate to δ, Σ_{x_j ∈ E} |Δ_j g| < εγ/4. Let

C = { [α, β] ⊆ [a, b] : 0 < β − α < δ(α), α ∈ E_γ, and (g(β) − g(α))/(β − α) > γ/2 }.
Since C is a Vitali cover of E_γ, there is a finite collection D of disjoint intervals in C such that λ*(E_γ \ ∪D) < ε/2. Tagging each [α, β] ∈ D at α and completing it, by Cousin's Lemma, to a tagged partition of [a, b] subordinate to δ (the construction is displayed below), we get εγ/4 > Σ_{I_j ∈ D} |Δ_j g| ≥ Σ_{I_j ∈ D} |I_j| · γ/2. Therefore λ(∪D) = Σ_{I_j ∈ D} |I_j| < ε/2, and so λ*(E_γ) ≤ λ(∪D) + λ*(E_γ \ ∪D) < ε. Since ε was arbitrary, λ(E_γ) = 0.

Proof of Theorem 8 (stated below). We can clearly assume that a, b ∉ E.
Choose ε > 0 and let δ be a gauge such that for any tagged partition P[a, b] = {(x_j, I_j) : 1 ≤ j ≤ m} subordinate to δ, Σ_{x_j ∈ E} |Δ_j g| < ε. Let η_x = min{b − x, x − a, δ(x)}. Thus for x ∈ E, sup_{|h| ≤ η_x} |g(x + h) − g(x)| ≤ ε, and we denote this finite-valued supremum as s(x). For every x ∈ E, choose h_x so that |g(x + h_x) − g(x)| ≥ s(x)/2 and |h_x| ≤ η_x. Define C(T) = {(x − |h_x|, x + |h_x|) : x ∈ T}. Then C(E) is a Besicovitch cover of E, so there exist two sequences of distinct points from E, {y_i} and {z_i}, such that C({y_i}) and C({z_i}) each consist of disjoint intervals and C({y_i}) ∪ C({z_i}) covers E. For each n, tagging the intervals [y_i − |h_{y_i}|, y_i] and [y_i, y_i + |h_{y_i}|] at y_i and completing, via Cousin's Lemma, to a tagged partition P_n[a, b] subordinate to δ, we obtain

ε > Σ_{i=1}^{n} |g(y_i + h_{y_i}) − g(y_i)| ≥ Σ_{i=1}^{n} s(y_i)/2.

Define D_n = ∪_{i=1}^{n} [g(y_i) − s(y_i), g(y_i) + s(y_i)]. Then D_{n+1} ⊇ D_n and, from the inequality above, 4ε > λ(D_n) for all n. Additionally, ∪_{n=1}^{∞} D_n ⊇ g(∪ C({y_i})). Thus 4ε ≥ λ(∪_{n=1}^{∞} D_n) ≥ λ*(g(∪ C({y_i}))). The same argument applies for {z_i}, so

8ε ≥ λ*(g(∪ [C({y_i}) ∪ C({z_i})])) ≥ λ*(g(E)).
3 While the statement of Besicovitch's Covering Theorem is usually given in R d and with only rough bounds for the number of sequences necessary, it is not too hard to show that two sequences suffice for R 1 .
Define F(x) := (HK) ∫_{g(a)}^{x} f(u) du. Then (f • g) · g′ is HK integrable on [a, b] and the change of variables formula

(HK) ∫_{g(α)}^{g(β)} f(u) du = (HK) ∫_{α}^{β} f(g(s)) g′(s) ds

holds for every [α, β] ⊆ [a, b] if and only if F • g has negligible variation on the set where (F • g)′ = f • g · g′ fails.
Thus c (|·|) does not have negligible variation on D. This argument also shows that c (|·|) does not have negligible conditional variation on D ∩ [0, 1].
Lemma 4. Let f : [a, b] → R have finite upper and lower Dini derivatives on a null set Z; that is to say, Df(x) := lim sup_{y→x} (f(y) − f(x))/(y − x) < ∞ for all x ∈ Z. Then f has negligible variation on Z.
Thus (F • g)′ is zero almost everywhere on (B\A) ∪ (A\B). By Lemma 3, F • g has negligible variation on (B\A) ∪ (A\B). This proves that if F • g has negligible conditional variation on any set S such that B ∩ A ⊆ S ⊆ B ∪ A, then it has it on every other such set.
Theorem 7. If g : [a, b] → R has negligible variation on E ⊆ [a, b], then Dg(x) = 0 almost everywhere on E. Proof. Let D+g(x) := lim sup_{h→0+} (g(x + h) − g(x))/h.
Theorem 7 and Lemma 3 show that f : [a, b] → R has negligible variation on a set E ⊆ [a, b] if and only if there exists a null set Z ⊆ E such that f has negligible variation on Z and f′(x) = 0 for all x ∈ E\Z.

Theorem 8. If g : [a, b] → R has negligible variation on E ⊆ [a, b], then λ(g(E)) = 0.
C(E) is a Besicovitch cover of E, so there exist two sequences³ (possibly finite) of distinct points from E, {y_i} and {z_i}, such that C({y_i}) and C({z_i}) each consist of disjoint intervals and C({y_i}) ∪ C({z_i}) covers E. Since the closure of C({y_i}_{i=1}^{n}) is a finite union of closed intervals, there exists a finite collection O_n of disjoint open (in [a, b]) intervals complementing it. Also, for each (α, β) ∈ O_n there exists a tagged partition Q[α, β] subordinate to δ. Let P_n[a, b] = ∪_{i=1}^{n} {(y_i, [y_i − |h_{y_i}|, y_i]), (y_i, [y_i, y_i + |h_{y_i}|])} ∪ ∪_{(α,β) ∈ O_n} Q[α, β].
Because D is a finite collection of closed intervals, there exists a finite collection O of disjoint open (in [a, b]) intervals complementing it. By Cousin's Lemma, for each (α, β) ∈ O there exists a tagged partition Q[α, β] subordinate to δ. Let P[a, b] = {(α, [α, β]) : [α, β] ∈ D} ∪ ∪_{(α,β) ∈ O} Q[α, β]. Hence P[a, b] is a tagged partition {(x_j, I_j) : 1 ≤ j ≤ n} subordinate to δ, so

εγ/4 > Σ_{x_j ∈ E} |Δ_j g| ≥ Σ_{I_j ∈ D} |Δ_j g| ≥ Σ_{I_j ∈ D} |I_j| · γ/2.
Since ǫ was arbitrarily small, λ (g (E)) = 0.
Robert G. Bartle, A modern theory of integration, Graduate Studies in Mathematics, vol. 32, American Mathematical Society, Providence, RI, 2001. MR1817647 (2002d:26001)

David M. Bressoud, A radical approach to Lebesgue's theory of integration, MAA Textbooks, Cambridge University Press, Cambridge, 2008. MR2380238 (2008j:00001)

Gerald S. Goodman, Integration by substitution, Proc. Amer. Math. Soc. 70 (1978), no. 1, 89-91. MR0476952 (57 #16497)

Russell A. Gordon, The integrals of Lebesgue, Denjoy, Perron, and Henstock, Graduate Studies in Mathematics, vol. 4, American Mathematical Society, Providence, RI, 1994. MR1288751 (95m:26010)

K. Krzyżewski, On change of variable in the Denjoy-Perron integral. I, Colloq. Math. 9 (1962), 99-104. MR0132816 (24 #A2652)

K. Krzyżewski, On change of variable in the Denjoy-Perron integral. II, Colloq. Math. 9 (1962), 317-323. MR0142714 (26 #283)

Peng Yee Lee and Rudolf Výborný, Integral: an easy approach after Kurzweil and Henstock, Australian Mathematical Society Lecture Series, vol. 14, Cambridge University Press, Cambridge, 2000. MR1756319 (2001h:26011)

James Serrin and Dale E. Varberg, A general chain rule for derivatives and the change of variables formula for the Lebesgue integral, Amer. Math. Monthly 76 (1969), 514-520. MR0247011 (40 #280)

Rudolf Výborný, Some applications of Kurzweil-Henstock integration, Math. Bohem. 118 (1993), no. 4, 425-441. MR1251885 (94k:26014)
Vermont Rutherfoord, Yoram Sagher, Florida Atlantic University, Department of Mathematical Sciences, 777 Glades Road, Boca Raton, FL 33431. E-mail address: [email protected], [email protected]
|
[] |
[
"Bistability induced by two cross-correlated Gaussian white noises",
"Bistability induced by two cross-correlated Gaussian white noises"
] |
[
"A N Vitrenko *[email protected] \nDepartment of General and Theoretical Physics\nSumy State University\n2, Rimskiy-Korsakov Street40007SumyUkraine\n"
] |
[
"Department of General and Theoretical Physics\nSumy State University\n2, Rimskiy-Korsakov Street40007SumyUkraine"
] |
[] |
A prototype model of a stochastic one-variable system with a linear restoring force driven by two cross-correlated multiplicative and additive Gaussian white noises was considered earlier [S. I. Denisov et al., Phys. Rev. E 68, 046132 (2003)]. The multiplicative factor was assumed to be quadratic in the vicinity of a stable equilibrium point. It was determined that a negative cross-correlation can induce nonequilibrium transitions. In this paper, we investigate this model in more detail and calculate explicit expressions of the stationary probability density. We construct a phase diagram and show that both additive and multiplicative noises can also generate bimodal probability distributions of the state variable in the presence of anti-correlation. We find the order parameter and determine that the additive noise has a disordering effect and the multiplicative noise has an ordering effect. We explain the mechanism of this bistability and specify its key ingredients.
| null |
[
"https://arxiv.org/pdf/1612.03442v1.pdf"
] | 118,971,528 |
1612.03442
|
08538d0ede2212308da860e83b747dbdf4693766
|
Bistability induced by two cross-correlated Gaussian white noises
11 Dec 2016
A N Vitrenko *[email protected]
Department of General and Theoretical Physics
Sumy State University
2, Rimskiy-Korsakov Street40007SumyUkraine
Bistability induced by two cross-correlated Gaussian white noises
11 Dec 2016. PACS numbers: 05.10.Gg, 05.40.-a, 05.70.Fh
A prototype model of a stochastic one-variable system with a linear restoring force driven by two cross-correlated multiplicative and additive Gaussian white noises was considered earlier [S. I. Denisov et al., Phys. Rev. E 68, 046132 (2003)]. The multiplicative factor was assumed to be quadratic in the vicinity of a stable equilibrium point. It was determined that a negative cross-correlation can induce nonequilibrium transitions. In this paper, we investigate this model in more detail and calculate explicit expressions of the stationary probability density. We construct a phase diagram and show that both additive and multiplicative noises can also generate bimodal probability distributions of the state variable in the presence of anti-correlation. We find the order parameter and determine that the additive noise has a disordering effect and the multiplicative noise has an ordering effect. We explain the mechanism of this bistability and specify its key ingredients.
I. INTRODUCTION
Abstract (prototype, "toy") models of nonlinear systems play an important role in different fields of natural science and engineering, such as physics, chemistry, biology, electronics, and nanotechnology. On the one hand, they are simple enough to treat analytically in comparison with realistic models. On the other, they clearly demonstrate remarkable features of a system's behavior. Furthermore, abstract models are usually not related to any particular experimental system, but they can help us answer the question "What is possible?" [1]. As a few outstanding examples one may mention the Brusselator model for chemical oscillations [2], a series of models proposed by Rössler for chaotic dynamics [3], and coupled map lattices for a variety of phenomena in complex systems [4]. A macroscopic nonequilibrium system is inherently open and subject to fluctuations of its environment, or so-called "external noise" [5]. The former can often be described by several variables, and an equivalent noise-driven dynamical system exhibits stochastic behavior [6,7]. It is generally difficult to determine its statistical properties. Moreover, noise in nonlinear systems can lead not only to disorder, which is obvious, but also to temporal or spatial order, which is counterintuitive. In the latter case it plays a constructive role and induces numerous phenomena that are impossible in the underlying deterministic dynamics.
They are called noise-induced [8,9] and their examples include stochastic resonance [10][11][12], Brownian ratchets [13,14], noise-induced transitions [15], noise-induced phase transitions [16,17], etc.
The classical theory of noise-induced transitions was developed by Horsthemke and Lefever [15]. It is based on assumptions that considerably simplify its construction.

They are as follows: (i) the macroscopic system is spatially homogeneous (it is called zero-dimensional); (ii) the system can be described by one state variable; (iii) the external noise is stationary, commonly Gaussian and white. Consequently, the state variable is a Markovian diffusion process governed by Langevin and Fokker-Planck equations [18,19], and its stationary probability density (SPD) can be obtained exactly. A noise-induced transition occurs if the SPD of the state variable changes qualitatively as the noise intensity exceeds a critical value. The genetic model [20] and Hongler's model [21] are relevant examples demonstrating this phenomenon. Their SPDs change in shape from unimodal to bimodal, and two preferential states appear. The corresponding probabilistic potential becomes bistable, whereas the deterministic one is monostable.
Subsequently, noise-induced transitions are discussed in many contexts. It is known that they arise from the multiplicative nature of the external noise. Additive white noise does not modify qualitatively the SPD of one-variable systems, but it can give rise to transitions in nonlinear two-variable oscillators [22][23][24]. Small-size systems generate intrinsic multiplicative noise and discreteness-induced transitions can emerge [25][26][27][28][29]. They have two different mechanisms, one of them is closely related to the classical theory. Noise-induced transitions in zero-dimensional systems are not phase ones in the thermodynamic sense. Noise-induced phase transitions are found in spatially extended systems [16,17] in which temporal components either do not exhibit [30,31] or exhibit [32] noise-induced bistability.
Systems can be driven by two external cross-correlated noises [33, and references therein].
The influence of the cross-correlation on the stationary behavior of zero-dimensional one-variable systems is briefly studied in Ref. [34]. A simple model with a monostable deterministic potential driven by additive and multiplicative Gaussian white noises is considered.

It is shown that the SPD of the state variable may change from unimodal to bimodal as the strength of a negative correlation between the noises is varied. Moreover, neither noise induces transitions individually. In this paper, we provide a more detailed investigation of this model. We are interested in how it is related to other known models that demonstrate noise-induced transitions. To this end, we explicitly calculate the SPD of the state variable and find its asymptotics. It is unclear why a negative cross-correlation can lead to a qualitative change of the stationary behavior of the system and what role the noises play in this phenomenon. We expect that both additive and multiplicative noises can induce bistability when a negative cross-correlation is present. To confirm this, we plot a phase diagram and find the order parameter for transitions, which corresponds to the extreme points of the SPD. We analyze a statistically equivalent one-noise model to understand the mechanism of bistability induced by two cross-correlated Gaussian white noises and to specify its key ingredients.
The paper is organized as follows. We present the model in Sec. II. In Sec. III, we obtain explicitly the SPD of the state variable and find its asymptotics. In Sec. IV, we construct the phase diagram, find the order parameter, and discuss the mechanism of bistability. We summarize our results in Sec. V.
II. MODEL
The considered dimensionless Langevin equation has the form [34] (we slightly change the notation and rescale the equation in order to reduce the number of parameters)
ẋ = −x + σ1 x²(1 + x²)^{−1} ξ1(t) + σ2 ξ2(t), (2.1)
where the dot denotes the derivative with respect to t, x(t) is the state variable, and ξ1(t) and ξ2(t) are cross-correlated multiplicative and additive Gaussian white noises. They have zero mean, intensities σ1²/2 and σ2²/2, respectively, and correlation functions
⟨ξ1(t)ξ1(t′)⟩ = δ(t − t′), ⟨ξ2(t)ξ2(t′)⟩ = δ(t − t′), ⟨ξ1(t)ξ2(t′)⟩ = rδ(t − t′), (2.2)
where the brackets ⟨·⟩ denote a statistical average, δ(t) is the Dirac delta function, and r is the coefficient of the correlation between the noises, |r| ≤ 1. The Stratonovich interpretation of the Langevin equation (2.1) is used. We assume natural boundary conditions for the state variable x(t); it takes values both positive and negative, including zero.
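As a brief computational aside (not from the paper), a discrete-time realization of two Gaussian white noises with cross-correlation coefficient r can be generated from two independent normal sequences; a minimal Python sketch, with our own helper name:

```python
import numpy as np

def correlated_white_noise(n, r, seed=0):
    """Two unit-variance Gaussian sequences with cross-correlation r."""
    rng = np.random.default_rng(seed)
    eta1 = rng.standard_normal(n)
    eta2 = rng.standard_normal(n)
    xi1 = eta1
    xi2 = r * eta1 + np.sqrt(1.0 - r**2) * eta2   # <xi1 xi2> = r by construction
    return xi1, xi2

xi1, xi2 = correlated_white_noise(200_000, r=-0.8)
print(np.corrcoef(xi1, xi2)[0, 1])   # close to -0.8
```

In a direct simulation of Eq. (2.1), such increments would additionally be scaled by sqrt(dt) at each time step.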
The linear restoring force f(x) = −x corresponds to the parabolic potential V(x) = x²/2, which is monostable with an equilibrium point at x = 0. The multiplicative factor has the specific form g(x) = x²/(1 + x²). It is quadratic for small values of |x| [35] and constant for large ones. The latter means that g(x) does not grow faster than linearly with x, so the stochastic process x(t) does not explode [15]. Though this multiplicative factor arises here from mathematical considerations, it finds applications, for example, in models of ecological outbreak dynamics [36], genetic regulatory systems [37,38], and tumor-immune system interactions [39,40]. The Langevin equation (2.1) can be interpreted as describing the one-dimensional motion of an overdamped particle in the potential V(x) subject to multiplicative ξ1(t) and additive ξ2(t) noises [41,42]. This simple model demonstrates the interesting phenomenon of transitions induced by two cross-correlated noises, when the SPD of x(t) becomes bimodal even though the associated deterministic dynamics (σ1 = 0 and σ2 = 0) is monostable. If σ1 = 0, then x(t) is the well-known Ornstein-Uhlenbeck process and its SPD is bell-shaped.
The Fokker-Planck equation for the probability density P (x, t) of x(t), statistically equivalent to Eqs. (2.1) and (2.2), is given by
∂P(x, t)/∂t = −∂[A(x)P(x, t)]/∂x + ∂²[B(x)P(x, t)]/∂x², (2.3)
where the drift A(x) and diffusion 2B(x) coefficients have the form
A(x) = −x + (1/2)B′(x), B(x) = (1/2)σ1²{[x²(1 + x²)^{−1} + α]² + β²}. (2.4)
Here the prime denotes the derivative with respect to x, and, for convenience, we introduce new parameters as follows:
α = rν, β = ν√(1 − r²), ν = σ2/σ1. (2.5) The diffusion coefficient (2.4) is everywhere positive if −1 < r ≤ 1, or if r = −1 and σ2 ≥ σ1.
In these cases, the state variable is unbounded, x(t) ∈ (−∞, ∞). If r = −1 and σ2 < σ1, the diffusion coefficient is equal to zero at the points ±√(σ2/(σ1 − σ2)). They are natural boundaries, and the domain of the state variable is restricted to the interval

( −√(σ2/(σ1 − σ2)), √(σ2/(σ1 − σ2)) ). (2.6)
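These properties of the diffusion coefficient are easy to verify numerically. The sketch below (helper names are ours) evaluates B(x) from Eqs. (2.4)-(2.5) and checks that for r = −1 and σ2 < σ1 it vanishes exactly at the natural boundaries ±√(σ2/(σ1 − σ2)), while for |r| < 1 it stays positive everywhere:

```python
import numpy as np

def B(x, sigma1, sigma2, r):
    """Diffusion coefficient B(x) of Eq. (2.4) with the parameters (2.5)."""
    nu = sigma2 / sigma1
    alpha, beta = r * nu, nu * np.sqrt(1.0 - r**2)
    g = x**2 / (1.0 + x**2)
    return 0.5 * sigma1**2 * ((g + alpha)**2 + beta**2)

sigma1, sigma2 = 2.0, 1.0                     # take r = -1 with sigma2 < sigma1
xb = np.sqrt(sigma2 / (sigma1 - sigma2))      # boundary of the interval (2.6)
print(B(xb, sigma1, sigma2, -1.0))            # 0.0
```

The zero of B at x = xb is what turns these points into natural boundaries of the process.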
We assume above that σ2 ≠ 0. Otherwise B(x) = 0 at x = 0, where the restoring force vanishes as well. It is easy to see that this boundary point is absorbing (it can also be supported by an analysis of the boundary conditions [15, pp. 104-107]). Indeed, for large |x| (|x| ≫ 1), the noise ξ1(t) is additive and the restoring force is sufficiently strong to drive the system toward the steady state x = 0. For small |x| (|x| ≪ 1), the multiplicative factor g(x) is proportional to x² and the restoring force again dominates the stochastic force. If the particle reaches the absorbing point x = 0, it remains there forever. The SPD of x(t)
becomes the Dirac delta function. The additive noise ξ 2 (t) destroys this boundary, keeping the system away from the steady state.
III. STATIONARY PROBABILITY DENSITY
We write the stationary solution of the Fokker-Planck equation (2.3) as
P(x) = N [B(x)] −1/2 exp[−Φ(x)], (3.1)
where N is the normalizing factor and Φ(x) is the modified potential. Up to a constant, the latter is given by
Φ(x) = (1/σ1²) ∫^{x²} (y² + 2y + 1) dy / [μ²y² + 2(ν² + α)y + ν²], (3.2)
where
µ 2 = 1 + 2rν + ν 2 . (3.3)
Using a table of integrals [43], we calculate Eq. (3.2) and explicitly express Φ(x). There are three different cases:
(i) for |r| < 1,

Φ(x) = γ^{−2} { x² + (1/β − 2β/μ²) arctan[(μ²x² + ν² + α)/β] + [(1 + α)/μ²] ln[μ²x⁴ + 2(ν² + α)x² + ν²] }, (3.4)

where

γ² = (σ1μ)²; (3.5)

(ii) for r = ±1 (α ≠ −1),

Φ(x) = γ^{−2} { x² − 1/[μ(μx² + α)] + (2/μ) ln|μx² + α| }, (3.6)

where μ = 1 + α;

(iii) for r = −1 and ν = 1 (α = −1),

Φ(x) = (1/3σ1²)(1 + x²)³. (3.7)

Substituting Eqs. (2.4), (3.4), (3.6), and (3.7) into Eq. (3.1), we respectively obtain the explicit expressions for the SPD of x(t):

(i) for |r| < 1,

P(x) = N (1 + x²) [μ²x⁴ + 2(ν² + α)x² + ν²]^{−1/2 − (1+α)/(μγ)²} exp{ −γ^{−2} [ x² + (1/β − 2β/μ²) arctan((μ²x² + ν² + α)/β) ] }, (3.8)

(ii) for r = ±1 (α ≠ −1),

P(x) = N (1 + x²) |μx² + α|^{−1 − 2/(μγ²)} exp{ −γ^{−2} [ x² − 1/(μ(μx² + α)) ] }, (3.9)

(iii) for r = −1 and ν = 1 (α = −1),

P(x) = N (1 + x²) exp[ −(1/3σ1²)(1 + x²)³ ]. (3.10)

The SPDs (3.8)-(3.10) can also be written, for instance, in terms of (σ1, α, β); in this case, μ² = (1 + α)² + β² and ν² = α² + β². The corresponding Langevin equation reads

ẋ = −x + σ1 G(x) ξ(t), (3.11)

where ξ(t) is Gaussian white noise, ⟨ξ(t)⟩ = 0 and ⟨ξ(t)ξ(t′)⟩ = δ(t − t′), and the multiplicative factor G(x) takes the form

G(x) = { [x²(1 + x²)^{−1} + α]² + β² }^{1/2}. (3.12)

Here we convert the two-noise Langevin equation (2.1) to the statistically equivalent one-noise Langevin equation (3.11).
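The closed form (3.4) can be cross-checked against direct quadrature of the integral (3.2). The sketch below (our helper names; the lower integration limit only shifts the additive constant, so we compare differences Φ(x2) − Φ(x1)):

```python
import numpy as np

def phi_closed(x, s1, s2, r):
    """Closed form (3.4), valid for |r| < 1, up to an additive constant."""
    nu = s2 / s1
    alpha = r * nu
    beta = nu * np.sqrt(1.0 - r**2)
    mu2 = 1.0 + 2.0 * r * nu + nu**2
    gamma2 = s1**2 * mu2                      # Eq. (3.5)
    arc = np.arctan((mu2 * x**2 + nu**2 + alpha) / beta)
    log = np.log(mu2 * x**4 + 2.0 * (nu**2 + alpha) * x**2 + nu**2)
    return (x**2 + (1.0/beta - 2.0*beta/mu2) * arc
            + (1.0 + alpha) / mu2 * log) / gamma2

def phi_numeric(x1, x2, s1, s2, r, n=200_001):
    """Phi(x2) - Phi(x1) by trapezoidal quadrature of Eq. (3.2)."""
    nu = s2 / s1
    alpha = r * nu
    mu2 = 1.0 + 2.0 * r * nu + nu**2
    y = np.linspace(x1**2, x2**2, n)
    f = (y**2 + 2*y + 1) / (mu2 * y**2 + 2.0*(nu**2 + alpha)*y + nu**2)
    return np.sum((f[1:] + f[:-1]) * np.diff(y)) / 2.0 / s1**2

s1, s2, r = 2.0, 1.0, -0.5
d_closed = phi_closed(1.2, s1, s2, r) - phi_closed(0.3, s1, s2, r)
print(abs(d_closed - phi_numeric(0.3, 1.2, s1, s2, r)))  # should be ~ 0 (quadrature error)
```

The agreement confirms the grouping of the arctan and logarithm terms in Eq. (3.4) as written above.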
The SPD (3.8) is unrestricted (the state variable ranges over the whole real line) and has Gaussian tails, i.e., P(x) ∝ exp(−x²/γ²) as |x| ≫ 1.
Indeed, for large |x|, the system (2.1) is driven by two additive ξ 1 (t) and ξ 2 (t) noises. The quantity γ 2 /2 is the intensity of the stochastic process σ 1 ξ 1 (t) + σ 2 ξ 2 (t). For small |x|, Eq. (3.8) can be reduced to the asymptotic form
P(x) ∝ [1 + ((x² + α)/β)²]^{−1/2} exp[ −(1/βσ1²) arctan((x² + α)/β) ], |x| ≪ 1, (3.13)
which does not contain the parameter γ and is completely determined by the parameters α, β, and σ 1 . The dynamics of the system (2.1) is non-Gaussian in the neighborhood of the point x = 0. We do not find the probability density (3.13) in a table of distributions, but it can formally be obtained from the unnormalized probability density of the Pearson type IV distribution [44] by substituting x 2 instead of x.
The SPD (3.9), if r = 1 or r = −1 and σ 2 > σ 1 , is also unrestricted with Gaussian tails. But, if r = −1 and σ 2 < σ 1 , it is restricted to the interval (2.6). For small |x|, the asymptotics of Eq. (3.9) can be written as
P(x) ∝ (1/|x² + α|) exp[ 1/(σ1²(x² + α)) ], |x| ≪ 1. (3.14)
If r = −1 and σ 2 = σ 1 , Eq. (3.14) is the unnormalized SPD of the genetic model [15,20,45].
So, the considered model (2.1) is close to it in this specific case.
The SPD (3.10) (r = −1 and σ2 = σ1) is unrestricted, but its tails are non-Gaussian: P(x) ∝ x² exp(−x⁶/3σ1²) as |x| ≫ 1. This asymptotics agrees with that obtained for a system driven by Gaussian white noise whose amplitude depends on the state variable x as x^{−2} [46]. For large |x|, the additive noises ξ1(t) and ξ2(t) exactly cancel each other, γ² = 0, and the system's dynamics is deterministic. The Langevin equation (3.11) related to Eq. (3.10) takes the form

ẋ = −x + σ1(1 + x²)^{−1} ξ1(t). (3.15)
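The two-noise/one-noise equivalence behind Eqs. (3.11)-(3.12), and the collapse of G(x) to 1/(1 + x²) in the case r = −1, ν = 1, can both be verified directly; a short check with our helper name:

```python
import numpy as np

def G(x, sigma1, sigma2, r):
    """Effective multiplicative factor of Eq. (3.12)."""
    nu = sigma2 / sigma1
    alpha, beta = r * nu, nu * np.sqrt(1.0 - r**2)
    g = x**2 / (1.0 + x**2)
    return np.sqrt((g + alpha)**2 + beta**2)

x = np.linspace(-3.0, 3.0, 301)
g = x**2 / (1.0 + x**2)
s1, s2, r = 2.0, 1.5, -0.7
# sigma1^2 G^2 must equal the total two-noise intensity of Eq. (2.1):
assert np.allclose(s1**2 * G(x, s1, s2, r)**2,
                   s1**2 * g**2 + 2*r*s1*s2*g + s2**2)
# For r = -1 and sigma2 = sigma1, G(x) reduces to 1/(1 + x^2), cf. Eq. (3.15):
assert np.allclose(G(x, 2.0, 2.0, -1.0), 1.0 / (1.0 + x**2))
print("identities hold")
```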
The multiplicative factor G(x) = 1/(1 + x²) coincides with the field-dependent kinetic coefficient of the spatially extended systems [32,47], whose zero-dimensional units demonstrate noise-induced transitions. However, if the noises are not sufficiently strong (σ1 < 1 and σ2 < 1), or if one noise is strong and the other one is so weak that σ1σ2 < 1, then, by changing the strength r of the correlation between the noises, the SPD remains unimodal. In these cases, the cross-correlation cannot lead to transitions.
Therefore, we say that bistability is induced by two cross-correlated Gaussian white noises.
The maximum points x_m± of the bimodal SPDs (3.8), (3.9), and (3.10) are found from the extremum condition dP(x)/dx = 0 [35]. With y = 1 + x², it yields

y³ + 3py + 2q = 0 (y ≥ 1), (4.2)

where 3p = rσ1σ2 + σ1² and 2q = −σ1². If D ≤ D_c, the solution of the cubic equation (4.2) can be written as follows (see, for instance, Ref. [48]):
(i) for p < 0 and q² + p³ ≤ 0,

y = 2√|p| cos[ (1/3) arccos(−q/√(|p|³)) ], (4.3)

(ii) for p < 0 and q² + p³ > 0,

y = 2√|p| cosh[ (1/3) arcosh(−q/√(|p|³)) ], (4.4)

(iii) for p > 0,

y = 2√p sinh[ (1/3) arsinh(−q/√(p³)) ], (4.5)

(iv) for p = 0 (r = −1 and σ1 = σ2),

y = (σ1²)^{1/3}, (4.6)

and

x_m± = ±√(y − 1). (4.7)

The role that the noises play in the considered phenomenon is not just creating bistability.
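The closed-form branches of the cubic (including the p = 0 case as reconstructed here) are easy to sanity-check numerically; the sketch below (our helper name) picks parameters realizing each branch and verifies y³ + 3py + 2q = 0 with y ≥ 1:

```python
import math

def y_root(sigma1, sigma2, r):
    """Root y >= 1 of y^3 + 3p y + 2q = 0 with 3p = r*s1*s2 + s1^2 and
    2q = -s1^2, following Eqs. (4.3)-(4.6)."""
    p = (r * sigma1 * sigma2 + sigma1**2) / 3.0
    q = -sigma1**2 / 2.0
    if p < 0 and q**2 + p**3 <= 0:                        # Eq. (4.3)
        return 2*math.sqrt(-p) * math.cos(math.acos(-q / math.sqrt(-p**3)) / 3)
    if p < 0:                                             # Eq. (4.4)
        return 2*math.sqrt(-p) * math.cosh(math.acosh(-q / math.sqrt(-p**3)) / 3)
    if p > 0:                                             # Eq. (4.5)
        return 2*math.sqrt(p) * math.sinh(math.asinh(-q / math.sqrt(p**3)) / 3)
    return sigma1 ** (2.0 / 3.0)                          # Eq. (4.6), p = 0

for s1, s2, r in [(1.0, 5.0, -1.0), (3.0, 4.0, -1.0),
                  (2.0, 2.0, -1.0), (2.0, 1.0, -0.9)]:   # all have D < -1
    y = y_root(s1, s2, r)
    p, q = (r*s1*s2 + s1**2)/3.0, -s1**2/2.0
    assert y >= 1.0 and abs(y**3 + 3*p*y + 2*q) < 1e-9
    print(round(math.sqrt(y - 1.0), 4))                   # order parameter m
```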
The additive noise also has a disordering effect: it spreads the SPD, as illustrated in Fig. 2(a). The order parameter m = |x_m±| is equal to zero if σ2 < σ2c, where σ2c = −1/rσ1, and increases monotonically with increasing σ2 if σ2 > σ2c [see Fig. 3(a)]. The distance 2m between the two maxima of P(x) tends to infinity as the intensity of the additive noise increases indefinitely. The additive noise, correlated with the multiplicative noise, first induces bistable states and then destroys them in the end; but formally the SPD remains bimodal.
In contrast to the additive noise, the multiplicative noise has an ordering effect: by increasing its intensity, the SPD is narrowed [see Fig. 2(b)] and two sharp peaks appear [the curve (3)]. The order parameter m is zero if σ1 < σ1c, where σ1c = −1/rσ2, and first increases to a maximum value, then starts to decrease with σ1 if σ1 > σ1c [see Fig. 3(b)].
According to Eqs. (4.5) and (4.7), the asymptotics of m for large σ1 takes the form

m ≃ [1/(1 + rν) − 1]^{1/2} (ν → 0). (4.8)

If σ1 < σ1c, m = 0. If σ1 > σ1c, m first increases to a maximum value, then decreases with σ1, and m → 0 as σ1 → ∞.
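The large-σ1 asymptote above (with the square root as reconstructed here) can be cross-checked against the exact root of Eq. (4.2); a sketch with our helper name, using only the p > 0 branch, which is the relevant one for large σ1:

```python
import math

def m_exact(sigma1, sigma2, r):
    """Order parameter m = sqrt(y - 1) from Eqs. (4.2), (4.5), (4.7),
    assuming the p > 0 branch (valid for large sigma1)."""
    p = (r * sigma1 * sigma2 + sigma1**2) / 3.0
    q = -sigma1**2 / 2.0
    y = 2*math.sqrt(p) * math.sinh(math.asinh(-q / math.sqrt(p**3)) / 3)
    return math.sqrt(y - 1.0)

sigma2, r = 1.0, -0.5
for sigma1 in (1e2, 1e3):
    nu = sigma2 / sigma1
    m_asym = math.sqrt(1.0 / (1.0 + r*nu) - 1.0)   # Eq. (4.8)
    print(sigma1, m_exact(sigma1, sigma2, r), m_asym)
```

The relative difference shrinks as σ1 grows, consistent with Eq. (4.8) being a ν → 0 asymptote.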
To understand the mechanism of the phenomenon of bistability induced by two cross-correlated Gaussian white noises, we analyze the one-noise Langevin equation (3.11). It has two key ingredients: (1) the linear restoring force, f(x) = −x; (2) the multiplicative noise term, σ1G(x)ξ(t).
The latter is the result of the cooperative interaction of the two cross-correlated multiplicative and additive Gaussian white noises in Eq. (2.1). The Maclaurin series expansion of G(x) (3.12) is given by

G(x) = ν + rx² + O(x⁴). (4.11)
Eq. (4.11) indicates that if the cross-correlation is negative, the multiplicative factor G(x) has the appropriate form for the emergence of noise-induced transitions.
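The expansion (4.11) can be verified numerically by checking that the remainder G(x) − (ν + rx²) is fourth order in x; a minimal sketch (helper name ours, bound constant chosen empirically for these parameters):

```python
import numpy as np

def G(x, sigma1, sigma2, r):
    """Multiplicative factor of Eq. (3.12)."""
    nu = sigma2 / sigma1
    alpha, beta = r * nu, nu * np.sqrt(1.0 - r**2)
    g = x**2 / (1.0 + x**2)
    return np.sqrt((g + alpha)**2 + beta**2)

s1, s2, r = 2.0, 1.0, -0.5
nu = s2 / s1
for x in (0.2, 0.1, 0.05):
    residual = abs(G(x, s1, s2, r) - (nu + r * x**2))   # remainder of Eq. (4.11)
    print(x, residual)
    assert residual < 2.0 * x**4    # fourth-order remainder
```

For r < 0 the quadratic term ν + rx² decreases away from x = 0, which is exactly the shape that favors noise-induced bistability.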
V. CONCLUSIONS
The prototype model of a stochastic zero-dimensional one-variable system with a linear restoring force driven by two cross-correlated multiplicative and additive Gaussian white noises has been studied in Ref. [34]. The multiplicative factor has been assumed to be quadratic for small absolute values of the state variable and constant for large ones. It has been shown that by changing the strength of a negative cross-correlation the SPD of the state variable may undergo a transition from unimodal to bimodal distribution. In this paper, we have investigated this model in more detail. We have calculated explicitly the SPD of the state variable and found its asymptotics. It has been determined that if the noises are perfectly anti-correlated (r = −1) with equal intensities, the considered model is reduced to the well-known genetic model for small absolute values of the state variable. We have constructed the phase diagram and shown that by increasing both the intensity of the multiplicative noise and the intensity of the additive noise, bistability may emerge when a negative cross-correlation is present. We have obtained the order parameter for transitions corresponding to the extreme points of the SPD. We determined that the additive noise has a disordering effect on the stationary behavior of the system: it spreads the SPD and tends to destroy the deterministic steady state and the bistable states. The multiplicative noise has an ordering effect: it narrows the SPD and tends to stabilize the deterministic steady state, and a second critical point occurs. The key ingredients of bistability have been specified.

They are: (1) a linear restoring force; (2) additive Gaussian white noise; (3) multiplicative Gaussian white noise, whose amplitude depends on the state variable x as x² in the vicinity of the steady state x = 0; (4) a negative cross-correlation. Anti-correlation between the noises means that if one noise increases, the other decreases, and vice versa. Their cooperative interplay produces an effective stochastic force whose amplitude takes a maximum value at x = 0 and decreases as |x| increases. The deterministic restoring force drives the system toward the steady state, the effective stochastic force away from it, and bistability may occur as a result of the balance between the forces. Although the model is abstract, we expect that the same phenomenon can be observed in real systems.
For |x| ≫ 1, G(x) vanishes and the noise ξ(t) does not affect the system. For |x| ≪ 1, G(x) ∼ 1 − x², and Eq. (3.15) is the Langevin equation of the genetic model.

IV. PHASE DIAGRAM AND ORDER PARAMETER

According to Ref. [34], the shape of the SPDs (3.8), (3.9), and (3.10) is characterized by the following parameter:

D = rσ1σ2 (4.1)

with the critical value D_c = −1. Eq. (4.1) is used to plot the phase diagram of the steady-state behavior of the model (2.1) in the (σ2, σ1) plane for different values of r (FIG. 1). The extreme points of the SPD are the order parameter for transitions. If D > D_c, the SPD of x(t) is bell-shaped with a single maximum at x = 0; the preferential state coincides with the deterministic steady state. If D < D_c, P(x) has a crater-like form with two maxima at x_m± and a local minimum at x = 0; two preferential states arise, whereas the underlying deterministic dynamics has only one stable state. Hence at D = D_c the SPD undergoes a transition from unimodal to bimodal distribution. It is possible only if anti-correlation exists, −1 ≤ r < 0, as indicated by the minus sign in D_c. In this case, by increasing both the amplitude σ1 of the multiplicative noise and the amplitude σ2 of the additive noise, bistable states are created (FIG. 2). Thus, both additive and multiplicative noises can induce transitions in the presence of negative cross-correlation.

FIG. 1. Phase diagram in the (σ2, σ1) plane for Eq. (4.1). The region above the corresponding critical curve is bimodal.
(iv) for p = 0 (r = −1 and σ1 = σ2), … and

x_m± = ±√(y − 1). (4.7)

FIG. 2. Plot of the SPD P(x) vs the state variable x for Eq. (3.8), r = −0.99. (a) The amplitude of the multiplicative noise is fixed, σ1 = 2. The amplitude of the additive noise increases: (1) σ2 = 0.1, (2) σ2 = 2, (3) σ2 = 4, (4) σ2 = 10². The curve (1) is unimodal. The curve (4) is bimodal, but its two maxima are indistinguishable and spaced a considerable distance apart. (b) The amplitude of the additive noise is fixed, σ2 = 2. The amplitude of the multiplicative noise increases: (1) σ1 = 0.1, (2) σ1 = 4, (3) σ1 = 40, (4) σ1 = 10⁵. The curve (4) is bimodal, but its two maxima are close enough to each other.
zero as σ1 → ∞. Two sharp peaks approach each other and converge into a single peak around x = 0. In this case, the SPD is still bimodal, but its two maxima coincide. It is important that the phase diagram in FIG. 1 does not show any reentrant transitions. The multiplicative noise, correlated with the additive noise, induces bistability with a critical point at x = 0 and σ1 = σ1c where the SPD has a double maximum. In fact, a second critical point occurs at x = 0 and σ1 → ∞, which is an unexpected result. It can be interpreted by analyzing Eqs. (3.11) and (3.12). The parameters (2.5) α and β tend to zero as σ1 ≫ σ2. As a result, the Langevin equation (3.11) is reduced to the Langevin equation (2.1) with ξ2(t) → 0. In this case, as mentioned in Sec. II, the point x = 0 is absorbing and the SPD is the Dirac delta function. An ordering effect of the multiplicative noise is unusual. Obviously, we expect a disordering effect from any external noise [15], as in our case r = −1 and σ2 = σ1 [see Eqs. …], σ1 → ∞ and the SPD (3.10) is broadened. It should be noted that the multiplicative factor G(x) = 1/(1 + x²) does not vanish at x = 0 and the absorbing boundary does not appear. It is known that noise-induced bistability is the competitive effect between a deterministic restoring force and a multiplicative random force. The former drives the system toward the steady state, x = x0, the latter away from it. The multiplicative factor must have a maximum at x = x0 and decrease as x deviates from x0 [9, pp. 219-220]. More concretely, if the deterministic force is linear, f(x) = −x + o(x), then the multiplicative factor must be of the form g(x) = 1 − x² + o(x²) [31]. When the noise is sufficiently strong, the bimodal SPD of x may occur as a result of the balance between deterministic and random forces.

FIG. 3. Order parameter m = |x_m±| vs (a) the amplitude σ2 of the additive noise or (b) the amplitude σ1 of the multiplicative noise for Eq. (4.7). The coefficient r of the correlation between the noises is varied: (1) r = −0.1, (2) r = −0.2, (3) r = −0.4, (4) r = −0.99. (a) The parameter σ1 is fixed, σ1 = 2. The critical value is σ2c = −1/(rσ1). If σ2 < σ2c, m = 0. If σ2 > σ2c, m increases monotonically with σ2. (b) The parameter σ2 is fixed, σ2 = 2. The critical value is σ1c = −1/(rσ2).
J. A. Pelesko, Self Assembly: The Science of Things That Put Themselves Together (Taylor & Francis, Boca Raton, 2007) pp. 211-251.
G. Nicolis and I. Prigogine, Self-Organization in Nonequilibrium Systems: From Dissipative Structures to Order through Fluctuations (Wiley, New York, 1977).
O. Gurel and D. Gurel, in Oscillations in Chemical Reactions, Topics in Current Chemistry, Vol. 118, edited by F. L. Boschke (Springer, Berlin, 1983) pp. 1-73.
K. Kaneko and I. Tsuda, Complex Systems: Chaos and Beyond: A Constructive Approach with Applications in Life Sciences (Springer, Berlin, 2001).
F. Moss and P. V. E. McClintock, eds., Noise in Nonlinear Dynamical Systems (Cambridge University Press, Cambridge, 1989).
L. Schimansky-Geier and T. Pöschel, eds., Stochastic Dynamics, Lecture Notes in Physics, Vol. 484 (Springer, Berlin, 1997).
V. S. Anishchenko, V. Astakhov, A. Neiman, T. Vadivasova, and L. Schimansky-Geier, Nonlinear Dynamics of Chaotic and Stochastic Systems: Tutorial and Modern Developments, 2nd ed., Springer Series in Synergetics (Springer, Berlin, 2007).
H. S. Wio and K. Lindenberg, in Modern Challenges in Statistical Mechanics, AIP Conference Proceedings, Vol. 658, edited by V. M. Kenkre and K. Lindenberg (American Institute of Physics, New York, 2003) pp. 1-60.
L. Ridolfi, P. D'Odorico, and F. Laio, Noise-Induced Phenomena in the Environmental Sciences (Cambridge University Press, New York, 2011).
L. Gammaitoni, P. Hänggi, P. Jung, and F. Marchesoni, Rev. Mod. Phys. 70, 223 (1998).
T. Wellens, V. Shatokhin, and A. Buchleitner, Rep. Prog. Phys. 67, 45 (2004).
S. Rajasekar and M. A. F. Sanjuan, Nonlinear Resonances, Springer Series in Synergetics (Springer, Cham, 2016).
P. Reimann, Phys. Rep. 361, 57 (2002).
D. Cubero and F. Renzoni, Brownian Ratchets: From Statistical Physics to Bio and Nanomotors (Cambridge University Press, Cambridge, 2016).
W. Horsthemke and R. Lefever, Noise-Induced Transitions: Theory and Applications in Physics, Chemistry, and Biology, Springer Series in Synergetics, Vol. 15 (Springer, Berlin, 1984).
J. García-Ojalvo and J. M. Sancho, Noise in Spatially Extended Systems, Institute for Nonlinear Science (Springer, New York, 1999).
F. Sagués, J. M. Sancho, and J. García-Ojalvo, Rev. Mod. Phys. 79, 829 (2007).
W. Coffey and Y. P. Kalmykov, The Langevin Equation: With Applications to Stochastic Problems in Physics, Chemistry and Electrical Engineering, 3rd ed., World Scientific Series in Contemporary Chemical Physics, Vol. 27 (World Scientific, Singapore, 2012).
H. Risken, The Fokker-Planck Equation: Methods of Solution and Applications, 2nd ed., Springer Series in Synergetics, Vol. 18 (Springer, Berlin, 1989).
L. Arnold, W. Horsthemke, and R. Lefever, Z. Phys. B 29, 367 (1978).
M.-O. Hongler, Helv. Phys. Acta 52, 280 (1979).
L. Schimansky-Geier, A. V. Tolstopjatenko, and W. Ebelin, Phys. Lett. A 108, 329 (1985).
P. S. Landa, A. A. Zaikin, V. G. Ushakov, and J. Kurths, Phys. Rev. E 61, 4809 (2000).
V. V. Semenov, A. B. Neiman, T. E. Vadivasova, and V. S. Anishchenko, Phys. Rev. E 93, 052210 (2016).
Y. Togashi and K. Kaneko, Phys. Rev. Lett. 86, 2459 (2001).
T. Biancalani, L. Dyson, and A. J. McKane, Phys. Rev. Lett. 112, 038101 (2014).
B. Houchmandzadeh and M. Vallade, Phys. Rev. E 91, 022115 (2015).
N. Saito and K. Kaneko, Phys. Rev. E 91, 022707 (2015).
Y. Saito, Y. Sughiyama, K. Kaneko, and T. J. Kobayashi, Phys. Rev. E 94, 022140 (2016).
C. Van den Broeck, J. M. R. Parrondo, and R. Toral, Phys. Rev. Lett. 73, 3395 (1994).
C. Van den Broeck, J. M. R. Parrondo, R. Toral, and R. Kawai, Phys. Rev. E 55, 4084 (1997).
M. Ibañes, J. García-Ojalvo, R. Toral, and J. M. Sancho, Phys. Rev. Lett. 87, 020601 (2001).
V. Méndez, S. I. Denisov, D. Campos, and W. Horsthemke, Phys. Rev. E 90, 012116 (2014).
S. I. Denisov, A. N. Vitrenko, and W. Horsthemke, Phys. Rev. E 68, 046132 (2003).
The extrema of the SPD for the system with the force f(x) = −x subject to cross-correlated multiplicative g(x)ξ1(t) and additive ξ2(t) noises are found from [34].
By analyzing this equation, one can conclude that g(x) = x² is the simplest case in which the number of extrema changes from 1 (one maximum) to 3 (two maxima and one minimum) as the coefficient of the cross-correlation r is varied. Moreover, if r = 0 no changes are possible.
Y. Sharma, K. C. Abbott, P. S. Dutta, and A. K. Gupta, Theor. Ecol. 8, 163 (2015).
X.-M. Liu, H.-Z. Xie, L.-G. Liu, and Z.-B. Li, Phys. A 388, 392 (2009).
Y. Sharma, P. S. Dutta, and A. K. Gupta, Phys. Rev. E 93, 032404 (2016).
T. Bose and S. Trimper, Phys. Rev. E 79, 051903 (2009).
A. d'Onofrio, in Systems Biology of Tumor Dormancy, Advances in Experimental Medicine and Biology, Vol. 734, edited by H. Enderling, N. Almog, and L. Hlatky (Springer, New York, 2013) pp. 111-143.
M. Gitterman, J. Phys. A 32, L293 (1999).
M. Gitterman, Oscillator and Pendulum with a Random Mass (World Scientific, Singapore, 2015) p. 40.
I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, 8th ed. (Academic Press, Waltham, 2015).
H. Jeffreys, Theory of Probability, 3rd ed., Oxford Classic Texts in the Physical Sciences (Oxford University Press, Oxford, 1961) p. 75.
R. Mahnke, J. Kaupužs, and I. Lubashevsky, Physics of Stochastic Processes: How Randomness Acts in Time, Physics Textbook (Wiley-VCH, Weinheim, 2009) p. 364.
S. I. Denisov and W. Horsthemke, Phys. Rev. E 65, 031105 (2002).
J. Buceta, M. Ibañes, J. M. Sancho, and K. Lindenberg, Phys. Rev. E 67, 021113 (2003).
I. N. Bronshtein, K. A. Semendyayev, G. Musiol, and H. Mühlig, Handbook of Mathematics, 6th ed. (Springer, Berlin, 2015) pp. 40-42.
|
[] |
[
"Predictions of α-decay half-lives for neutron-deficient nuclei with the aid of artificial neural network",
"Predictions of α-decay half-lives for neutron-deficient nuclei with the aid of artificial neural network"
] |
[
"A A Saeed \nDepartment of Physics and Materials Science\nKwara State University\nMaleteNigeria\n",
"W A Yahya \nDepartment of Physics and Materials Science\nKwara State University\nMaleteNigeria\n",
"O K Azeez \nDepartment of Physics and Materials Science\nKwara State University\nMaleteNigeria\n"
] |
[
"Department of Physics and Materials Science\nKwara State University\nMaleteNigeria",
"Department of Physics and Materials Science\nKwara State University\nMaleteNigeria",
"Department of Physics and Materials Science\nKwara State University\nMaleteNigeria"
] |
[] |
In recent years, artificial neural network (ANN) has been successfully applied in nuclear physics and some other areas of physics. This study begins with the calculations of α-decay half-lives for some neutron-deficient nuclei using the Coulomb and proximity potential model (CPPM), the temperature-dependent Coulomb and proximity potential model (CPPMT), the Royer empirical formula, the new Ren B (NRB) formula, and a trained artificial neural network model (T_ANN). By comparison with experimental values, the ANN model is found to give very good descriptions of the half-lives of the neutron-deficient nuclei. Moreover, CPPMT is found to perform better than CPPM, indicating the importance of employing a temperature-dependent nuclear potential. Furthermore, to predict the α-decay half-lives of unmeasured neutron-deficient nuclei, another ANN algorithm is trained to predict the Q_α values. The results of the Q_α predictions are compared with the Weizsäcker-Skyrme-4+RBF (WS4+RBF) formula. The half-lives of unmeasured neutron-deficient nuclei are then predicted using CPPM, CPPMT, Royer, NRB, and T_ANN, with Q_α values predicted by ANN as inputs. This study concludes that half-lives of α-decay from neutron-deficient nuclei can successfully be predicted using ANN, and this can contribute to the determination of nuclei at the driplines.
|
10.5506/aphyspolb.53.1-a4
|
[
"https://arxiv.org/pdf/2201.13129v1.pdf"
] | 246,074,649 |
2201.13129
|
c21ac8b3c107b937694fbff347b4003af2e81192
|
Predictions of α-decay half-lives for neutron-deficient nuclei with the aid of artificial neural network
A A Saeed
Department of Physics and Materials Science
Kwara State University
MaleteNigeria
W A Yahya
Department of Physics and Materials Science
Kwara State University
MaleteNigeria
O K Azeez
Department of Physics and Materials Science
Kwara State University
MaleteNigeria
Predictions of α-decay half-lives for neutron-deficient nuclei with the aid of artificial neural network
PACS numbers: 27.90.+b, 23.60.+e, 21.10.Tg, 23.70.+j
In recent years, artificial neural network (ANN) has been successfully applied in nuclear physics and some other areas of physics. This study begins with the calculations of α-decay half-lives for some neutron-deficient nuclei using the Coulomb and proximity potential model (CPPM), the temperature-dependent Coulomb and proximity potential model (CPPMT), the Royer empirical formula, the new Ren B (NRB) formula, and a trained artificial neural network model (T_ANN). By comparison with experimental values, the ANN model is found to give very good descriptions of the half-lives of the neutron-deficient nuclei. Moreover, CPPMT is found to perform better than CPPM, indicating the importance of employing a temperature-dependent nuclear potential. Furthermore, to predict the α-decay half-lives of unmeasured neutron-deficient nuclei, another ANN algorithm is trained to predict the Q_α values. The results of the Q_α predictions are compared with the Weizsäcker-Skyrme-4+RBF (WS4+RBF) formula. The half-lives of unmeasured neutron-deficient nuclei are then predicted using CPPM, CPPMT, Royer, NRB, and T_ANN, with Q_α values predicted by ANN as inputs. This study concludes that half-lives of α-decay from neutron-deficient nuclei can successfully be predicted using ANN, and this can contribute to the determination of nuclei at the driplines.
Introduction
α-decay is one of the most important types of radioactive decay in the study of nuclei [1], owing to its ability to provide insights on nuclear structure and stability of nuclei [2]. It was discovered by Ernest Rutherford in 1899 as one of three components of the radiation emitted by the uranium nucleus [3]. In 1928, Gamow [4], and Gurney and Condon [5,6] gave a theoretical explanation of the Geiger-Nuttall law, which derives its basis from the quantum tunneling effect and was the first successful attempt at a quantum description of nuclear phenomena. Since then, many theoretical models and empirical formulas have been proposed to calculate α-decay half-lives of nuclei. Some of the theoretical models include the effective liquid drop model (ELDM) [7,8], the generalized liquid drop model (GLDM) [9][10][11], the modified generalized liquid drop model (MGLM) [12,13], the preformed cluster model (PCM) [14,15], the fission-like model [16], and so on. Some of the theoretical models use phenomenological potentials while others use microscopic potentials [17,18]. Some of the empirical formulas that have been successful in the investigation of α-decay half-lives are the Royer formula [19][20][21], the Denisov and Khudenko formula [22], the Viola and Seaborg formula (VSS) [23], the Ren formulas [24] (and the modified Ren formulas [25]), the universal decay law (UDL) [26], the Akrawy and Poenaru formula [27], etc.
† email: [email protected]
α-decay is a dominant radioactive decay mode for unstable nuclei, particularly neutron-deficient nuclei. An atomic nucleus is said to be neutron-deficient if it consists of more protons than neutrons; such nuclei are also called proton-rich nuclei, and are close to the proton drip-line. Most of the observed neutron-deficient nuclei with mass number A ≥ 150 can undergo α-decay. The study of the half-lives of neutron-deficient nuclei can contribute to the determination of nuclei at the driplines. This contribution to the determination of nuclei at the driplines has motivated some research on nuclei with Z > N [28][29][30][31][32]. This study calculates the α-decay half-lives of neutron-deficient nuclei using the Coulomb and proximity potential model and two empirical formulas. It is known that the Coulomb and proximity potential model (CPPM) and the temperature-dependent Coulomb and proximity potential model (CPPMT) are successful models in the investigation of α-decay half-lives [33][34][35][36]. The two empirical formulas to be employed are the Royer formula [19,20] and the new Ren B formula [25]. These formulas have been known to be successful in the calculation of α-decay half-lives of nuclei.
In recent years, machine learning has grown in popularity in the physics community due to its ability to learn from data and arrive at reasonable conclusions. The two commonly used techniques in machine learning are supervised and unsupervised learning. In supervised learning, data with labels are used to train the model with the goal of predicting outcomes as accurately as possible. In unsupervised learning, data with no labels are fed into the model with the goal of finding hidden patterns in the data and arriving at reasonable conclusions. An artificial neural network (ANN), which is an algorithm under supervised machine learning, is a large system that is created and programmed to mimic the human brain [37], and operates by using dense layers made up of neurons to process information. These neurons are also known as units and are arranged in series. The data go through the input layer of the ANN from external sources for the system to learn from, and this information is processed in the hidden layers connected by weights, which then becomes the outcome in the output layer. There have been some successful applications of machine learning in nuclear physics. For example, machine learning and deep learning have been employed in the study of nuclear charge radii [38,39], in the predictions of nuclear β-decay half-lives [40], in the extraction of electron scattering cross-sections from swarm data [41], in the shell model calculations for proton-rich zinc (Zn) isotopes [42], and in the prediction of α-decay Q_α values [43].
In this paper, we employ the Coulomb and proximity potential model (CPPM) in the calculation of the α-decay half-lives of some measured neutron-deficient nuclei. Since it is known that the use of a temperature-dependent potential can improve the prediction of α-decay half-lives, we have also used the temperature-dependent Coulomb and proximity potential model (termed CPPMT). Moreover, an artificial neural network (specifically, a multilayer feed-forward neural network) is also used to predict the half-lives. Two empirical formulas, viz. the Royer and new Ren B formulas, have also been used to determine the performance accuracy of the CPPM, CPPMT, and ANN models. Since the NUBASE2020 [44,45] database is now publicly available, the data used in the study have been extracted from this database. New coefficients for the two empirical formulas are determined using a least-squares fit scheme with input data from the NUBASE2020 database. The study also predicts the half-lives of α-decay from some unmeasured neutron-deficient nuclei. To achieve this, one requires the Q_α values for the α-decay processes. Since there are no experimental Q_α values for the unmeasured neutron-deficient nuclei, an artificial neural network (ANN) has been trained to predict Q_α values using about 1021 Q_α values from the NUBASE2020 database. The trained ANN model was then used to predict the Q_α values for unmeasured neutron-deficient nuclei. The results obtained are compared to the theoretical WS4 and WS4+RBF [46] Q_α values. The predicted Q_α values are then used as inputs to predict the α-decay half-lives of unmeasured neutron-deficient nuclei.
The paper is presented as follows: the theoretical models used are introduced in Section 2. In Section 3, the results of the calculations are presented and discussed, and in Section 4, the conclusion is presented.
Theoretical Formalism
Coulomb and proximity potential model (CPPM)
In this model, the total interaction potential between the α particle and the daughter nucleus can be expressed as the summation of the proximity potential, Coulomb potential, and centrifugal potential for both the touching configuration and separated fragments. That is [33]:
V = V_C(r) + V_P(z) + ℏ²ℓ(ℓ + 1)/(2µr²), (1)
where the last term is the centrifugal potential, ℓ is the angular momentum carried by the α particle, and the Coulomb potential V_C is given by:
V_C(r) = Z1Z2e²/r for r ≥ R_C,
V_C(r) = (Z1Z2e²/2R_C)[3 − (r/R_C)²] for r ≤ R_C. (2)
Here, Z 1 and Z 2 represent the charge number of the α particle emitted and the daughter nucleus, respectively, and r is the distance between the fragments centres. R C is known as the radial distance and is given by
R C = 1.24(R 1 + R 2 ), where R 1 and R 2 are defined below.
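Eq. (2) translates directly into code; a sketch in MeV and fm, with e² = 1.44 MeV·fm (function names are ours, and the radius formula is the semi-empirical Eq. (8) given later in this section):

```python
E2 = 1.4399764  # e^2 in MeV*fm

def radius(A):
    """Sharp radius of Eq. (8), in fm."""
    return 1.28 * A**(1/3) - 0.76 + 0.8 * A**(-1/3)

def coulomb_potential(r, Z1, Z2, R_C):
    """Coulomb potential of Eq. (2): point charge outside R_C,
    uniformly charged sphere inside."""
    if r >= R_C:
        return Z1 * Z2 * E2 / r
    return Z1 * Z2 * E2 / (2 * R_C) * (3 - (r / R_C)**2)

# alpha particle + 208Pb daughter; R_C = 1.24 (R1 + R2)
R_C = 1.24 * (radius(4) + radius(208))
V_surface = coulomb_potential(R_C, 2, 82, R_C)
```

The two branches agree at r = R_C, and the value at the center is 1.5 times the surface value, as expected for a uniformly charged sphere.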
The first recorded implementation of the proximity potential was in 1987 by Shi and Swiatecki where nuclear deformation influence, and the shell effects on the half-life of exotic radioactivity were estimated [47]. Two years later, Malik et al. applied the proximity potential model in the preformed cluster model [48]. The calculation of the strength of the interaction of the daughter and emitted α particle yields the proximity potential V P (z) provided by Blocki et al [49], and is given as:
V_P(z) = 4πγbR̄φ(z/b) MeV, (3)
where the nuclear surface potential γ is given as
γ = 1.460734[1 − 4((N − Z)/(N + Z))²] MeV/fm². (4)
Here N and Z denote the neutron number and proton number of the parent nucleus, respectively. φ is the universal proximity potential, given by [50]:
φ(ε) = −(1/2)(ε − 2.54)² − 0.0852(ε − 2.54)³ for ε ≤ 1.2511,
φ(ε) = −3.437 exp(−ε/0.75) for ε ≥ 1.2511, (5)
where R̄ is called the mean curvature radius, and it depends on the shapes of both nuclei. It can be expressed as:
R̄ = C1C2/(C1 + C2), (6)
C 1 and C 2 , known as the Süsmann central radii, are calculated using
C_i = R_i − b²/R_i, (7)
where R i can be obtained with the aid of a semi-empirical formula in terms of mass number A i [49]:
R_i = 1.28A_i^(1/3) − 0.76 + 0.8A_i^(−1/3). (8)
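Eqs. (3)-(8) can be assembled into a single routine; a sketch in MeV and fm. The surface width b is not stated in the text; b = 1 fm (close to the commonly used 0.99 fm) is our assumption here, and the function names are ours:

```python
import math

def radius(A):
    """Eq. (8): sharp radius in fm."""
    return 1.28 * A**(1/3) - 0.76 + 0.8 * A**(-1/3)

def susmann_radius(A, b=1.0):
    """Eq. (7): Sussmann central radius C = R - b^2/R."""
    R = radius(A)
    return R - b**2 / R

def universal_phi(eps):
    """Eq. (5): Blocki universal proximity function."""
    if eps <= 1.2511:
        return -0.5 * (eps - 2.54)**2 - 0.0852 * (eps - 2.54)**3
    return -3.437 * math.exp(-eps / 0.75)

def proximity_potential(z, A1, A2, N, Zp, b=1.0):
    """Eq. (3): V_P(z) = 4*pi*gamma*b*Rbar*phi(z/b) in MeV.

    N and Zp are the neutron and proton numbers of the parent (Eq. (4))."""
    gamma = 1.460734 * (1 - 4 * ((N - Zp) / (N + Zp))**2)  # MeV/fm^2, Eq. (4)
    C1, C2 = susmann_radius(A1, b), susmann_radius(A2, b)
    Rbar = C1 * C2 / (C1 + C2)                             # Eq. (6)
    return 4 * math.pi * gamma * b * Rbar * universal_phi(z / b)
```

For 212Po → 208Pb + α (A1 = 4, A2 = 208, N = 128, Zp = 84), the nuclear part is attractive (negative) near the touching configuration, as expected; the two branches of φ(ε) also match closely at ε = 1.2511.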
The penetration probability of the α particle through the potential barrier can be determined with the aid of the WKB approximation [34,51]:
P = exp(−(2/ℏ) ∫_{R_i}^{R_o} √(2µ[V(r) − Q]) dr), (9)
where µ = A1A2/A is the reduced mass, A1 and A2 are the mass numbers of the emitted α particle and the daughter nucleus, respectively, A is the mass number of the parent nucleus, and R_i and R_o are the classical turning points, obtained via:
V (R i ) = V (R o ) = Q.(10)
The α-decay half-life can finally be calculated via:
T 1/2 = ln 2 λ ,(11)
where λ = νP, and ν = 10²⁰ s⁻¹ is known as the assault frequency.
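Eqs. (9)-(11) amount to a one-dimensional action integral. A rough numerical sketch with a pure Coulomb barrier between the turning points (a toy choice for illustration only; the full CPPM barrier also includes the proximity and centrifugal terms, and the inner radius r_in here is a hand-picked touching distance):

```python
import math

HBARC = 197.3269   # hbar*c in MeV*fm
AMU   = 931.494    # atomic mass unit in MeV
E2    = 1.4399764  # e^2 in MeV*fm

def half_life_wkb(Z1, Z2, A1, A2, Q, r_in, nu=1e20, n=4000):
    """T_1/2 = ln2 / (nu * P), with P from the WKB integral of Eq. (9).

    Toy barrier: V(r) = Z1*Z2*e^2/r for r_in <= r <= r_out = Z1*Z2*e^2/Q."""
    mu = A1 * A2 / (A1 + A2) * AMU   # reduced mass in MeV
    r_out = Z1 * Z2 * E2 / Q         # outer classical turning point
    h = (r_out - r_in) / n
    action = 0.0
    for i in range(n + 1):           # trapezoid rule for the action integral
        r = r_in + i * h
        k = math.sqrt(max(2 * mu * (Z1 * Z2 * E2 / r - Q), 0.0)) / HBARC
        action += k * (h if 0 < i < n else h / 2)
    P = math.exp(-2 * action)
    return math.log(2) / (nu * P)

# 212Po -> 208Pb + alpha: Q ~ 8.95 MeV, touching distance ~ 9 fm
T = half_life_wkb(2, 82, 4, 208, 8.95, r_in=9.0)
```

This gives only the order of magnitude; the well-known extreme sensitivity of T_1/2 to Q can be seen by raising Q slightly and watching the half-life drop.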
Temperature dependent Coulomb and proximity potential model (CPPMT)
The temperature dependent proximity potentials can be written as:
V P (r, T ) = 4πγ(T )b(T )R(T )φ(ξ).(12)
Here φ(ξ) is still the universal function; the temperature-dependent forms of the other parameters in equation (12) are given by [34,[52][53][54]:
γ(T) = γ(0)[1 − (T − T_b)/T_b]^(3/2), (13)
b(T) = b(0)(1 + 0.009T²), (14)
R̄(T) = R̄(0)(1 + 0.0005T²). (15)
Here T_b is the temperature associated with near-Coulomb-barrier energies. A different version of the temperature-dependent surface energy coefficient, given by γ(T) = γ(0)(1 − 0.07T)² [34], has been used in this work. The temperature T (MeV) can be derived from: [55,56]
E* = E_kin + Q_in = (1/9)AT² − T, (16)
where E * denotes the parent nucleus excitation energy, and A is its mass number. Q in represents the entrance channel Q-value of the system. E kin is the kinetic energy of the α particle emitted and can be obtained using [34]:
E kin = (A 2 /A)Q.(17)
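Eq. (16) is a quadratic in T, so the temperature follows in closed form; a sketch combining it with the scalings of Eqs. (14)-(15) and the alternative γ(T) used in this work (the zero-temperature parameter values passed in below are placeholders, not the paper's inputs):

```python
import math

def temperature(A, E_star):
    """Solve E* = (1/9)*A*T^2 - T (Eq. (16)) for the positive root T in MeV."""
    # (A/9)*T^2 - T - E* = 0  ->  T = (1 + sqrt(1 + 4*A*E*/9)) * 9 / (2*A)
    return (1 + math.sqrt(1 + 4 * A * E_star / 9)) * 9 / (2 * A)

def b_of_T(T, b0):
    """Eq. (14)."""
    return b0 * (1 + 0.009 * T**2)

def R_of_T(T, R0):
    """Eq. (15)."""
    return R0 * (1 + 0.0005 * T**2)

def gamma_of_T(T, gamma0):
    """Surface-energy scaling used in this work: gamma(T) = gamma(0)*(1 - 0.07*T)^2."""
    return gamma0 * (1 - 0.07 * T)**2

A, E_star = 212, 9.0
T = temperature(A, E_star)
# plugging T back into Eq. (16) recovers E*
```

For small E* the correction factors stay close to 1, so CPPMT reduces smoothly to CPPM in the cold limit.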
Royer empirical formula
In the year 2000, Royer [19] proposed an analytical formula for the calculation of the α-decay half-lives of nuclei, by applying a fitting procedure to some α emitters. The proposed formula did not contain dependence on the angular momentum carried by the α particle. In the year 2010, Royer proposed an improved formula for calculating the α-decay half-lives, which is explicitly dependent on the angular momentum ( ) carried by the α particle. The angular momentum for even-even nuclei was taken to be zero. It was observed that the agreement with experimental data was better than what was earlier recorded. The proposed formula is given for even-even, even-odd, odd-even, and odd-odd nuclei as [20]:
log₁₀[T] = −25.752 − 1.15055A^(1/6)√Z + 1.5913Z/√Q, (18)

log₁₀[T] = −27.750 − 1.1138A^(1/6)√Z + 1.6378Z/√Q + 1.7383×10⁻⁶ANZ[ℓ(ℓ + 1)]^(1/4)/Q + 0.002457A[1 − (−1)^ℓ], (19)

log₁₀[T] = −27.915 − 1.1292A^(1/6)√Z + 1.6531Z/√Q + 8.9785×10⁻⁷ANZ[ℓ(ℓ + 1)]^(1/4)/Q + 0.002513A[1 − (−1)^ℓ], (20)

log₁₀[T] = −26.448 − 1.1023A^(1/6)√Z + 1.5967Z/√Q + 1.6961×10⁻⁶ANZ[ℓ(ℓ + 1)]^(1/4)/Q + 0.00101A[1 − (−1)^ℓ], (21)
respectively. The short form of equations (18) -(21) can be written as:
log₁₀[T] = a + bA^(1/6)√Z + cZ/√Q + d×10⁻⁶ANZ[ℓ(ℓ + 1)]^(1/4)/Q + eA[1 − (−1)^ℓ], (22)
where a, b, c, d, e are the coefficients given in equations (18) -(21) for eveneven, even-odd, odd-even, and odd-odd nuclei, respectively. For even-even nuclei, d = e = 0.
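Eq. (22) with the original Royer coefficients of Eqs. (18)-(21) can be implemented as follows (this paper refits the coefficients in Table 1, which is not reproduced here; we also assume the usual convention that "even-odd" means even Z and odd N):

```python
# (a, b, c, d, e) keyed by (Z parity, N parity) of the parent, from Eqs. (18)-(21).
ROYER_COEFFS = {
    ("even", "even"): (-25.752, -1.15055, 1.5913, 0.0,     0.0),
    ("even", "odd"):  (-27.750, -1.1138,  1.6378, 1.7383,  0.002457),
    ("odd",  "even"): (-27.915, -1.1292,  1.6531, 0.89785, 0.002513),
    ("odd",  "odd"):  (-26.448, -1.1023,  1.5967, 1.6961,  0.00101),
}

def log10_T_royer(A, Z, N, Q, ell=0):
    """Royer half-life estimate, Eq. (22); A, Z, N of the parent, Q in MeV."""
    parity = ("even" if Z % 2 == 0 else "odd",
              "even" if N % 2 == 0 else "odd")
    a, b, c, d, e = ROYER_COEFFS[parity]
    val = a + b * A**(1/6) * Z**0.5 + c * Z / Q**0.5
    val += d * 1e-6 * A * N * Z * (ell * (ell + 1))**0.25 / Q
    val += e * A * (1 - (-1)**ell)
    return val
```

As a sanity check, 212Po (A = 212, Z = 84, N = 128, Q ≈ 8.95 MeV) comes out near log₁₀ T ≈ −7, in the right region for its sub-microsecond half-life.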
New Ren B (NRB) formula
In 2018, Akrawy et al. [25] studied the influence of nuclear isospin and angular momentum on α-decay half-lives. The existing Ren B formula of Ref. [57] was improved by including asymmetry and angular momentum terms. With the aid of a least-squares fit and experimental values of 365 nuclei, the authors obtained new coefficients for the Ren B formula. The new Ren B formula yielded better results in the calculation of α-decay half-lives than the existing Ren B formula when compared with the experimental data [25]. The new Ren B formula is given as:
log₁₀ T_{1/2}^{NRB} = a√µ Z1Z2 Q^(−1/2) + b√(µZ1Z2) + c + dI + eI² + f[ℓ(ℓ + 1)], (23)
where µ is the reduced mass and I = (N − Z)/A is the nuclear isospin asymmetry. The two α-decay empirical formulas used in this work are the Royer formula and the new Ren B formula.
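Eq. (23) in code. The fitted coefficients a-f belong to Table 2 of the paper, which is not reproduced in the text, so they are left as an argument here; we also take µ = A1A2/A in mass-number units and Z1 = 2 for the α particle, which are our assumptions:

```python
def log10_T_nrb(A, Z, N, Zd, Q, ell, coeffs):
    """New Ren B estimate, Eq. (23), for a parent (A, Z, N) and daughter charge Zd.

    coeffs = (a, b, c, d, e, f) from the least-squares fit (Table 2)."""
    A1, A2 = 4, A - 4
    mu = A1 * A2 / A              # reduced mass number
    I = (N - Z) / A               # isospin asymmetry of the parent
    a, b, c, d, e, f = coeffs
    chi = mu**0.5 * 2 * Zd / Q**0.5   # sqrt(mu) * Z1 * Z2 * Q^(-1/2)
    rho = (mu * 2 * Zd)**0.5          # sqrt(mu * Z1 * Z2)
    return a * chi + b * rho + c + d * I + e * I**2 + f * ell * (ell + 1)
```

With c = 1 and all other coefficients zero the function returns 1, and with a = 1 alone it returns the χ term directly, which makes the term-by-term structure of Eq. (23) easy to verify.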
Artificial Neural Network (ANN)
ANN is a multilayer neural network made up of an input layer, hidden layers, and an output layer. We label the structure of our ANN network as [M 1 , M 2 , · · · , M n ], where M i is the number of neurons in the ith layer. i = 1 denotes the input layer while i = n denotes the output layer. The outputs from the ith hidden layer are calculated using the formula:
h(θ_i, X) = ReLU(0.01w^(i) h(θ_{i−1}, X) + 0.01b^(i)), (24)
where h(θ_{i−1}, X) denotes the outputs from the previous layer, w^(i) and b^(i) represent the parameters of the network, and ReLU is the activation function used in the hidden layers. ReLU is a non-linear function that helps improve the performance of the model. It has been chosen as the activation function for the hidden layers in this work because of its ability to mitigate the vanishing-gradient problem. The outputs h(θ₁, X) of the input layer are simply the input data X. For a regression problem like ours, an activation function is not required in the output layer; therefore the output of the ANN network can be expressed as:
y = g(θ, X) = w^(n) h(θ_{n−1}, X) + b^(n), (25)
where θ = {w^(1), b^(1), · · · , w^(n), b^(n)} represents the network parameters, h(θ_{n−1}, X)
represents the outputs of the hidden layers, and X denotes the inputs. In this work, ANN models have been trained to predict both half-life and Q_α values for some neutron-deficient nuclei. For the ANN model trained to predict the half-lives, M1 = 4 (consisting of the mass number, charge number, orbital angular momentum, and Q_α value) and Mn = 1; for the ANN model trained to predict Q_α values, M1 = 2 (consisting of the mass number and charge number of the nuclei) and Mn = 1. The output layer has Mn = 1 because we are dealing with a regression problem.
It is important to observe how well the ANN model performs during the training phase; a cost function is used to achieve this. The cost function evaluates the performance of the model by measuring the difference between the predicted and actual values. Learning takes place by reducing the cost function to the barest minimum; this is achieved with the aid of an optimization algorithm. Adam is one of the most widely used optimization algorithms. It has been employed in this work to derive the best values for the parameters of the ANN network, by modifying the parameters w^(i) and b^(i) for i = 1, · · · , n until an acceptable agreement between the predicted and actual outputs is achieved. The root mean square error has been used as the cost function in this study. It can be expressed as:
RMSE(θ) = √[ (1/N) Σ_{i=1}^{N} (Y_i^expt − g(θ, X_i))² ],  (26)
where Y_i^expt are the experimental values, g(θ, X_i) are the predicted output values, and N is the size of the training data set.
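The cost function of Eq. (26) is straightforward to implement; the sketch below is only an illustration of the formula, not the training code used in the paper.

```python
import numpy as np

def rmse(y_expt, y_pred):
    """Root mean square error of Eq. (26): the square root of the mean
    squared difference between experimental and predicted values."""
    y_expt = np.asarray(y_expt, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_expt - y_pred) ** 2))

# One prediction off by 2 out of three points: RMSE = sqrt(4/3) ≈ 1.1547
print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))
```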
In training the ANN model to predict the half-lives, a total of N = 549 nuclei from the NUBASE2020 database has been used, and a network structure of [4, 50, 100, 50, 1] has been chosen. During the training phase, the dataset was split into an 80% training set and a 20% test set. The test set has been used to validate the performance of the trained model. The performance of the model can be improved, if necessary, by tweaking the parameters of the model before using it for predictions.
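The 80/20 split described above can be sketched as a shuffled hold-out; the function name and seed are illustrative assumptions, but with 549 nuclei this split reproduces the 439/110 partition quoted later in the text.

```python
import numpy as np

def train_test_split(X, y, test_frac=0.2, seed=0):
    """Shuffle the dataset and hold out test_frac of it for validation,
    mirroring the 80%/20% split described in the text."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(round(test_frac * len(X)))
    test, train = idx[:n_test], idx[n_test:]
    return X[train], y[train], X[test], y[test]

# 549 measured alpha emitters -> 439 train / 110 test, as in the text
X = np.zeros((549, 4))   # placeholder features (A, Z, l, Q_alpha)
y = np.zeros(549)        # placeholder targets log10 T_1/2
X_tr, y_tr, X_te, y_te = train_test_split(X, y)
print(len(X_tr), len(X_te))   # 439 110
```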
To predict the half-lives of unmeasured neutron-deficient nuclei, the Q α values are required as part of the input data, but these nuclei have no experimental Q α values. Machine learning is therefore used to predict the Q α values, which can subsequently be used to calculate the half-lives of the unmeasured neutron-deficient nuclei. To achieve this, an ANN model is trained using 1021 Q α values of measured nuclei in the NUBASE2020 database. The dataset is again split into an 80% training set and a 20% test set. Given the number of data instances, a network structure of [2, 120, 120, 120, 1] has been chosen, and the performance is again assessed using the root mean square error.
Results and Discussion
The results of the calculations of the α-decay half-lives of some neutron-deficient nuclei are presented and discussed here. The calculations have been carried out using the Coulomb and proximity potential model (CPPM), the temperature-dependent Coulomb and proximity potential model (CPPMT), the Royer empirical formula (Royer), the new Ren B empirical formula (NRB), and a trained artificial neural network (ANN).
The coefficients given in Ref. [20] for the Royer formula and Ref. [25] for the new Ren B formula were obtained with the aid of a fitting procedure applied to the α-decay half-lives in previous NUBASE databases. In this study, new coefficients have been obtained for the two formulas by applying a least-squares fitting scheme to 549 α emitters in the NUBASE2020 database, comprising 189 even-even, 150 even-odd, 117 odd-even, and 93 odd-odd nuclei. The new coefficients obtained are given in Table 1 for the Royer formula and Table 2 for the new Ren B (NRB) formula. The root mean square error (RMSE) values obtained are 0.5411 for the Royer formula and 0.5538 for the NRB formula. To calculate the α-decay half-lives of some neutron-deficient nuclei using the ANN, the algorithm is trained on the data of the 549 α emitters. The training set contains 439 nuclei while the test set contains 110 nuclei. After training and optimization, the root mean square errors obtained for the training and test sets are shown in Table 3. Having successfully trained the ANN model to predict α-decay half-lives, the trained ANN model, CPPM, CPPMT, Royer, and NRB are now used to calculate the α-decay half-lives of some neutron-deficient nuclei. Table 4 presents the results of the calculations. It can be observed that the values obtained from the five models are in good agreement with the experimental values. To evaluate the performance of the models quantitatively, the root mean square error (RMSE) is calculated. The experimental α-decay half-lives are retrieved from Refs. [44, 45]. Table 5 presents the computed root mean square error for all the models. It can be observed that the ANN model gives the lowest RMSE, with a value of 0.3843. The CPPMT (RMSE = 0.4963) is found to give a lower RMSE than the CPPM (RMSE = 0.5946), indicating the importance of using a temperature-dependent potential. The NRB formula is also found to give a lower RMSE value than the Royer formula.
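A least-squares refit of an empirical half-life formula can be sketched as a linear problem. The functional form below (log10 T_1/2 = a + b A^(1/6)√Z + c Z/√Q, a Royer-type expression) and the synthetic data are illustrative assumptions; the actual fit in the paper uses the 549 NUBASE2020 emitters and separate coefficient sets per parity class.

```python
import numpy as np

def fit_royer(A, Z, Q, logT):
    """Least-squares fit of coefficients (a, b, c) of a Royer-type formula
    log10 T_1/2 = a + b * A**(1/6) * sqrt(Z) + c * Z / sqrt(Q).
    Returns the fitted coefficients and the RMSE of the fit."""
    M = np.column_stack([
        np.ones_like(Q),                 # constant term a
        A ** (1.0 / 6.0) * np.sqrt(Z),   # term multiplying b
        Z / np.sqrt(Q),                  # Coulomb-like term multiplying c
    ])
    coef, *_ = np.linalg.lstsq(M, logT, rcond=None)
    rmse = np.sqrt(np.mean((M @ coef - logT) ** 2))
    return coef, rmse

# Synthetic check: data generated from known coefficients is recovered
rng = np.random.default_rng(1)
A = rng.uniform(170, 300, 50)
Z = rng.uniform(80, 120, 50)
Q = rng.uniform(4, 12, 50)
true = np.array([-26.0, 1.2, 1.6])
logT = true[0] + true[1] * A ** (1 / 6) * np.sqrt(Z) + true[2] * Z / np.sqrt(Q)
coef, err = fit_royer(A, Z, Q, logT)
print(np.allclose(coef, true))
```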
The calculated temperature values (in MeV) for the neutron-deficient nuclei in the CPPMT model are plotted against the mass number (A) of these nuclei in Figure (2). The energy released (Q α ) during an α-decay process is one of the important input parameters required to calculate α-decay half-lives. To predict the half-lives of α-decay from unmeasured neutron-deficient nuclei, the Q α values are required. Previously, the Weizsäcker-Skyrme-4 (WS4) and Weizsäcker-Skyrme-4+RBF (WS4+RBF) [46] formulas were used to predict the Q α values of unmeasured neutron-deficient nuclei [29]. In this work, however, we are motivated to use an ANN to predict the Q α values, following its success in predicting the α-decay half-lives. To achieve this, an artificial neural network (ANN) is trained using 1021 Q α values of measured nuclei in the NUBASE2020 database. The Q α values are again split into training (80% of the data) and test (20% of the data) sets. A procedure similar to that followed in the training of the half-lives is used. After training and optimization, the root mean square errors obtained on the training and test sets are given in Table 6. To compare the performance of the ANN predictions of Q α values with existing theories, we have used the ANN model to predict Q α values for 69 neutron-deficient nuclei. The outputs are compared with the predictions of the WS4 and WS4+RBF models. Table 7 presents the root mean square error values obtained when the predictions of ANN, WS4, and WS4+RBF are compared with experimental values. With an RMSE of 0.1475, the ANN model is found to give a slightly lower value than the WS4+RBF theoretical model. The WS4+RBF, as expected, performs better than the WS4 formula. Figure (3) shows the plots of the Q α values predicted by the WS4, WS4+RBF, and ANN models for the neutron-deficient nuclei.
Since the trained ANN model performed very well in predicting the Q α values, we are now in a position to predict the α-decay half-lives of unmeasured neutron-deficient nuclei. The Q α values predicted by the ANN model (denoted Q α^ANN) are used as part of the input values. The α-decay half-lives of the neutron-deficient nuclei within the range 80 ≤ Z ≤ 120 and 169 ≤ A ≤ 296 are then predicted using the CPPM, CPPMT, Royer, NRB, and trained artificial neural network (denoted T ANN) models. The angular momentum carried by the emitted α particle has been taken to be zero for all nuclei. Table 8 presents the predicted half-lives for α-decay of the 126 unmeasured neutron-deficient nuclei using the various theoretical models and T ANN. The third to fifth columns of the table show the Q α values predicted by the WS4 (Q α^WS4), WS4+RBF (Q α^WS4+RBF), and ANN (Q α^ANN) models. The sixth to tenth columns show the predictions using CPPM, CPPMT, Royer, NRB, and T ANN. The last column gives the previous theoretical calculations of Cui et al. [29] using the generalized liquid drop model (GLDM). A careful observation of the values indicates that the results are close to those predicted earlier by Cui et al. [29]. Figure (4) shows the plots of the predicted log[T 1/2 (s)] values using the various models.

Fig. 4: Plots of the predicted α-decay half-lives for the neutron-deficient nuclei using the various models.
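The two-stage pipeline described above (predict Q α from (A, Z), then feed it into a half-life model with l = 0) can be sketched as follows. Both `predict_q` and `predict_logT` are placeholder functions standing in for the trained ANN models; their formulas are arbitrary and illustrative only.

```python
import numpy as np

def predict_q(A, Z):
    # Placeholder for the trained Q_alpha ANN (inputs: A, Z); not a real model
    return 0.05 * Z - 0.01 * A

def predict_logT(A, Z, l, Q):
    # Placeholder for the trained half-life ANN (inputs: A, Z, l, Q_alpha)
    return -25.0 + 1.5 * Z / np.sqrt(Q)

def half_life_unmeasured(A, Z):
    """Two-stage prediction for an unmeasured nucleus: stage 1 supplies
    Q_alpha^ANN, stage 2 predicts log10 T_1/2 with l = 0 for all nuclei."""
    Q_ann = predict_q(A, Z)
    return predict_logT(A, Z, 0.0, Q_ann)

print(round(half_life_unmeasured(200, 100), 2))   # 61.6 with these placeholders
```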
Conclusion
In this study, α-decay half-lives of some neutron-deficient nuclei within the range 80 ≤ Z ≤ 118 have been calculated using the Coulomb and proximity potential model (CPPM), the temperature-dependent Coulomb and proximity potential model (CPPMT), the Royer formula, the new Ren B (NRB) formula, and a trained artificial neural network (T ANN) model. New coefficients were obtained for the Royer and NRB empirical formulas with the aid of a least-squares fitting scheme and input data from the NUBASE2020 database. When compared with the experimental data, all models are found to give very good predictions of the half-lives. The CPPMT was found to perform better than the CPPM, indicating the importance of using temperature-dependent nuclear potentials. With a root mean square error of 0.3843, the T ANN model is found to give the best performance in predicting the half-lives of the neutron-deficient nuclei. The second stage of the study was to predict the half-lives of α-decay from unmeasured neutron-deficient nuclei. To achieve this, the Q α values were required as inputs. Following the success of the ANN in predicting the half-lives, we were motivated to train another ANN to predict Q α values (denoted Q α^ANN). When compared to experimental Q α values and those predicted theoretically by the WS4 and WS4+RBF formulas, the ANN model is found to give very good descriptions of the Q α values. The Q α^ANN values were then used as inputs to predict the half-lives of α-decay from unmeasured neutron-deficient nuclei using the CPPM, CPPMT, improved Royer formula, improved NRB formula, and the T ANN model. The predicted half-lives from our models are found to be in good agreement with those predicted using the generalized liquid drop model (GLDM). This study concludes that half-lives of α-decay from neutron-deficient nuclei can successfully be predicted using an ANN, and this can contribute to the determination of nuclei at the driplines.
Figure (1) shows the plots of the log[T 1/2 (s)] values for the 69 neutron-deficient nuclei obtained using the various models. The experimental data are included for comparison. It is observed from the plot that the predicted values agree with the values obtained from experiment.
Fig. 1: Plots of the experimental and theoretically calculated α-decay half-lives for some neutron-deficient nuclei using CPPM, CPPMT, Royer, NRB, and ANN models.
Fig. 2: Plot of the calculated temperature (MeV) values in the CPPMT for 69 neutron-deficient nuclei with respect to their mass number (A).
Fig. 3: Plots of the experimental and predicted Q α values for 69 neutron-deficient nuclei using WS4, WS4+RBF, and ANN models.
Table 1: New coefficients for the Royer formula.
Table 2: New coefficients for the new Ren B formula.

  Nuclei      a        b        c         d        e         f
  even-even   0.4095   -1.4111  -15.2260   8.6687  -49.5989  0.0000
  even-odd    0.4095   -1.3815  -14.8570  -9.5116   27.5012  0.0323
  odd-even    0.4203   -1.4093  -15.3943  -7.2587    7.1358  0.0298
  odd-odd     0.4135   -1.4380  -14.5336   1.7049    1.2728  0.0086
Table 3: The root mean square errors (σ) obtained for the training set and test set after training the ANN to predict the half-lives.

  Artificial Neural Network (ANN)   σ
  Train                             0.3876
  Test                              0.5719
Table 4: The experimental and predicted log[T 1/2 (s)] values for 69 neutron-deficient nuclei within the range of 80 ≤ Z ≤ 118.
Table 5: The calculated RMSE values obtained for the neutron-deficient nuclei using CPPM, CPPMT, Royer, NRB, and ANN.

  Models (Formulas)   σ
  CPPM                0.5946
  CPPMT               0.4963
  Royer               0.4608
  NRB                 0.4413
  ANN                 0.3843
Table 6: The σ values between the experimental and predicted Q α values for the training and test set.

  Artificial Neural Network (ANN)   σ
  Train                             0.1684
  Test                              0.1802
Table 7: The computed root mean square errors (σ) obtained using WS4, WS4+RBF, and ANN models.

  Models     σ
  WS4        0.2038
  WS4+RBF    0.1565
  ANN        0.1475
Table 8: The predicted log[T 1/2 (s)] values for 126 unmeasured neutron-deficient nuclei within the range of 80 ≤ Z ≤ 120, using Q α values predicted by ANN (Q α^ANN). Previous theoretical predictions by Cui et al. [29] using GLDM are included for comparison. The Q α^WS4+RBF values have been taken
α -decay half-lives of lead isotopes within a modified generalized liquid drop model. K P Santhosh, Dashty T Akrawy, H Hassanabadi, Ali H Ahmed, Tinu Ann Jose , Physical Review C. 10164610K. P. Santhosh, Dashty T. Akrawy, H. Hassanabadi, Ali H. Ahmed, and Tinu Ann Jose. α -decay half-lives of lead isotopes within a modified gen- eralized liquid drop model. Physical Review C, 101:064610, 2020.
Systematic study of α decay half-lives based on gamow-like model with a screened electrostatic barrier. J H Cheng, J L Chen, J G Deng, X J Wu, X H Li, P C Chu, Nuclear Physics A. 987J H Cheng, J L Chen, J G Deng, X J Wu, X H Li, and P C Chu. System- atic study of α decay half-lives based on gamow-like model with a screened electrostatic barrier. Nuclear Physics A, 987:350-368, 2019.
Half-lives for α and cluster radioactivity within a gamow-like model. A Zdeb, M Warda, K Pomorski, Physical Review C. 8724308A. Zdeb, M. Warda, and K. Pomorski. Half-lives for α and cluster radioactivity within a gamow-like model. Physical Review C, 87:024308, 2013.
The quantum theory of nuclear disintegration. G Gamow, Nature. 122G. Gamow. The quantum theory of nuclear disintegration. Nature, 122:805- 806, 1928.
Wave mechanics and radioactive disintegration. R W Gurney, E U Condon, Nature. 122R. W. Gurney and E. U. Condon. Wave mechanics and radioactive disinte- gration. Nature, 122:439-439, 1928.
Quantum mechanics and radioactive disintegration. R W Gurney, E U Condon, Physical Review. 33R. W. Gurney and E. U. Condon. Quantum mechanics and radioactive disin- tegration. Physical Review, 33:127-140, 1929.
Effective liquid drop description for alpha decay of atomic nuclei. O A P Tavares, S B Duarte, O Rodríguez, M Guzmán, F Gonçalves, García, Journal of Physics G. 24O A P Tavares, S B Duarte, O Rodríguez, F Guzmán, M Gonçalves, and F García. Effective liquid drop description for alpha decay of atomic nuclei. Journal of Physics G, 24:1757-1775, 1998.
Improved effective liquid drop model for α-decay half-lives. J P Cui, Y H Gao, Y Z Wang, J Z Gu, Nuclear Physics A. J.P. Cui, Y.H. Gao, Y.Z. Wang, and J.Z. Gu. Improved effective liquid drop model for α-decay half-lives. Nuclear Physics A, 1017:122341, 2021.
Static and dynamic fusion barriers in heavy-ion reactions. G Royer, B Remaud, Nuclear Physics A. 444G. Royer and B. Remaud. Static and dynamic fusion barriers in heavy-ion reactions. Nuclear Physics A, 444:477-497, 1985.
Systematical calculation of α decay half-lives with a generalized liquid drop model. X Bao, H Zhang, H Zhang, G Royer, J Li, Nuclear Physics A. 921X. Bao, H. Zhang, H. Zhang, G. Royer, and J. Li. Systematical calculation of α decay half-lives with a generalized liquid drop model. Nuclear Physics A, 921:85-95, 2014.
Alpha-decay half-life with a generalized liquid drop model by using a precise decay energy. N N Ma, H F Zhang, J M Dong, H F Zhang, N. N. Ma, H. F. Zhang, J. M. Dong, and H. F. Zhang. Alpha-decay half-life with a generalized liquid drop model by using a precise decay energy, pages 105-108. 2016.
Half-lives of cluster radioactivity using the modified generalized liquid drop model with a new preformation factor. K P Santhosh, Jose Tinu Ann, Physical Review C. 9964604K. P. Santhosh and Tinu Ann Jose. Half-lives of cluster radioactivity using the modified generalized liquid drop model with a new preformation factor. Physical Review C, 99:064604, 2019.
α-decay half-lives of some superheavy nuclei within a modified generalized liquid drop model. D T Akrawy, K P Santhosh, H Hassanabadi, Physical Review C. 10034608D. T. Akrawy, K. P. Santhosh, and H. Hassanabadi. α-decay half-lives of some superheavy nuclei within a modified generalized liquid drop model. Physical Review C, 100:034608, 2019.
Cluster radioactivity. R K Gupta, W Greiner, International Journal of Modern Physics E. 03R. K. Gupta and W. Greiner. Cluster radioactivity. International Journal of Modern Physics E, 03:335-433, 1994.
Cluster radioactive decay within the preformed cluster model using relativistic mean-field theory densities. B Singh, S K Patra, R K Gupta, Physical Review C. 8214607B. Singh, S. K. Patra, and R. K. Gupta. Cluster radioactive decay within the preformed cluster model using relativistic mean-field theory densities. Physical Review C, 82:014607, 2010.
Improvement of a fission-like model for nuclear α decay. Y J Wang, H F Zhang, W Zuo, J Q Li, 2762103Y. J. Wang, H. F. Zhang, W. Zuo, and J. Q. Li. Improvement of a fission-like model for nuclear α decay. 27:062103, 2010.
Alpha decay study of thorium isotopes using double folding model with NN interactions derived from relativistic mean field theory. W A Yahya, B J Falaye, Nuclear Physics A. W. A. Yahya and B. J. Falaye. Alpha decay study of thorium isotopes using double folding model with NN interactions derived from relativistic mean field theory. Nuclear Physics A, 1015:122311, 2021.
Calculations of the alpha decay halflives of some polonium isotopes using the double folding model. W A Yahya, K J Oyewunmi, Acta Physica Polonica B. 52W. A. Yahya and K. J. Oyewunmi. Calculations of the alpha decay half- lives of some polonium isotopes using the double folding model. Acta Physica Polonica B, 52:1357-1372, 2021.
Alpha emission and spontaneous fission through quasi-molecular shapes. G Royer, Journal of Physics G: Nuclear and Particle Physics. 26G Royer. Alpha emission and spontaneous fission through quasi-molecular shapes. Journal of Physics G: Nuclear and Particle Physics, 26:1149-1170, 2000.
Analytic expressions for alpha-decay half-lives and potential barriers. G Royer, Nuclear Physics A. 848G. Royer. Analytic expressions for alpha-decay half-lives and potential barri- ers. Nuclear Physics A, 848:279-291, 2010.
Improved empirical formula for α-decay half-lives. J G Deng, H F Zhang, G Royer, Physical Review C. 10134307J. G. Deng, H. F. Zhang, and G. Royer. Improved empirical formula for α-decay half-lives. Physical Review C, 101:034307, 2020.
Erratum: α-decay half-lives: Empirical relations. V Yu Denisov, A A Khudenko, Physical Review C. 8254614 V. Yu. Denisov and A. A. Khudenko. Erratum: α-decay half-lives: Empirical relations. Physical Review C, 82:054614, 2010.
Nuclear systematics of the heavy elements-II lifetimes for alpha, beta and spontaneous fission decay. V E Viola, G T Seaborg, Journal of Inorganic and Nuclear Chemistry. 28V.E. Viola and G.T. Seaborg. Nuclear systematics of the heavy elements-II lifetimes for alpha, beta and spontaneous fission decay. Journal of Inorganic and Nuclear Chemistry, 28:741-761, 1966.
New perspective on complex cluster radioactivity of heavy nuclei. Z Ren, C Xu, Z Wang, Physical Review C. 7034304Z. Ren, C. Xu, and Z. Wang. New perspective on complex cluster radioactivity of heavy nuclei. Physical Review C, 70:034304, 2004.
Influence of nuclear isospin and angular momentum on α-decay half-lives. D T Akrawy, H Hassanabadi, Y Qian, K P Santhosh, Nuclear Physics A. 983D. T. Akrawy, H. Hassanabadi, Y. Qian, and K.P. Santhosh. Influence of nuclear isospin and angular momentum on α-decay half-lives. Nuclear Physics A, 983:310-320, 2018.
Microscopic mechanism of charged-particle radioactivity and generalization of the geiger-nuttall law. C Qi, F R Xu, R J Liotta, R Wyss, M Y Zhang, C Asawatangtrakuldee, D Hu, Physical Review C. 8044326C. Qi, F. R. Xu, R. J. Liotta, R. Wyss, M. Y. Zhang, C. Asawatangtrakuldee, and D. Hu. Microscopic mechanism of charged-particle radioactivity and gen- eralization of the geiger-nuttall law. Physical Review C, 80:044326, 2009.
Alpha decay calculations with a new formula. D N D T Akrawy, Poenaru, Journal of Physics G: Nuclear and Particle Physics. 44105105D T Akrawy and D N Poenaru. Alpha decay calculations with a new formula. Journal of Physics G: Nuclear and Particle Physics, 44:105105, 2017.
Competition between α decay and proton radioactivity of neutron-deficient nuclei. Y Z Wang, J P Cui, Y L Zhang, S Zhang, J Z Gu, Physical Review C. 9514302Y. Z. Wang, J. P. Cui, Y. L. Zhang, S. Zhang, and J. Z. Gu. Competition between α decay and proton radioactivity of neutron-deficient nuclei. Physical Review C, 95:014302, 2017.
α-decay half-lives of neutrondeficient nuclei. J P Cui, Y Xiao, Y H Gao, Y Z Wang, Nuclear Physics A. 987J.P. Cui, Y. Xiao, Y.H. Gao, and Y.Z. Wang. α-decay half-lives of neutron- deficient nuclei. Nuclear Physics A, 987:99-111, 2019.
Alpha decay as a probe for the structure of neutron-deficient nuclei. C Qi, Reviews in Physics. 1C. Qi. Alpha decay as a probe for the structure of neutron-deficient nuclei. Reviews in Physics, 1:77-89, 2016.
Cluster radioactivity of neutron-deficient nuclei in trans-tin region. Y Gao, J Cui, Y Wang, J Gu, Scientific Reports. 109119Y. Gao, J. Cui, Y. Wang, and J. Gu. Cluster radioactivity of neutron-deficient nuclei in trans-tin region. Scientific Reports, 10:9119, 2020.
Proton radioactivity and α-decay of neutrondeficient nuclei. A , A R Abdulghany, Physica Scripta. 96125314A. Adel and A. R. Abdulghany. Proton radioactivity and α-decay of neutron- deficient nuclei. Physica Scripta, 96:125314, 2021.
Alpha decay properties of superheavy nuclei z = 126. H C Manjunatha, Nuclear Physics A. 945H.C. Manjunatha. Alpha decay properties of superheavy nuclei z = 126. Nuclear Physics A, 945:42-57, 2015.
Alpha decay half-lives of 171−189 Hg isotopes using modified gamow-like model and temperature dependent proximity potential. W A Yahya, Journal of the Nigerian Society of Physical Sciences. 2W.A. Yahya. Alpha decay half-lives of 171−189 Hg isotopes using modified gamow-like model and temperature dependent proximity potential. Journal of the Nigerian Society of Physical Sciences, 2:250-256, 2020.
Calculation of α-decay and cluster half-lives for 197-226fr using temperature-dependent proximity potential model. V Zanganah, Dashty T Akrawy, H Hassanabadi, S S Hosseini, Shagun Thakur, Nuclear Physics A. 997121714V. Zanganah, Dashty T. Akrawy, H. Hassanabadi, S.S. Hosseini, and Sha- gun Thakur. Calculation of α-decay and cluster half-lives for 197-226fr us- ing temperature-dependent proximity potential model. Nuclear Physics A, 997:121714, 2020.
Cluster decay half-lives of trans-lead nuclei within the coulomb and proximity potential model. K P Santhosh, B Priyanka, M S Unnikrishnan, Nuclear Physics A. 889K.P. Santhosh, B. Priyanka, and M.S. Unnikrishnan. Cluster decay half-lives of trans-lead nuclei within the coulomb and proximity potential model. Nuclear Physics A, 889:29-50, 2012.
Multilayer perceptrons. L Vanneschi, M Castelli, Encyclopedia of Bioinformatics and Computational Biology. Elsevier1L. Vanneschi and M. Castelli. Multilayer perceptrons. In Encyclopedia of Bioinformatics and Computational Biology, volume 1-3, pages 612-620. Else- vier, 2019.
An artificial neural network application on nuclear charge radii. S Akkoyun, S O Bayram, A Kara, Sinan, Journal of Physics G: Nuclear and Particle Physics. 4055106S Akkoyun, T Bayram, S O Kara, and A Sinan. An artificial neural network application on nuclear charge radii. Journal of Physics G: Nuclear and Particle Physics, 40:055106, 2013.
Calculation of nuclear charge radii with a trained feed-forward neural network. Di Wu, C L Bai, H Sagawa, H Q Zhang, Physical Review C. 10254323Di Wu, C. L. Bai, H. Sagawa, and H. Q. Zhang. Calculation of nuclear charge radii with a trained feed-forward neural network. Physical Review C, 102:054323, 2020.
Predictions of nuclear β -decay half-lives with machine learning and their impact on r -process nucleosynthesis. Z M Niu, H Z Liang, B H Sun, W H Long, Y F Niu, Physical Review C. 9964307Z. M. Niu, H. Z. Liang, B. H. Sun, W. H. Long, and Y. F. Niu. Predictions of nuclear β -decay half-lives with machine learning and their impact on r -process nucleosynthesis. Physical Review C, 99:064307, 2019.
Extracting electron scattering cross sections from swarm data using deep neural networks. V Jetly, B Chaudhury, Machine Learning: Science and Technology. 235025V. Jetly and B. Chaudhury. Extracting electron scattering cross sections from swarm data using deep neural networks. Machine Learning: Science and Technology, 2:035025, 2021.
Shell model calculations for proton-rich zn isotopes via new generated effective interaction by artificial neural networks. S Akkoyun, T Bayram, Cumhuriyet Science Journal. 40S. Akkoyun and T. Bayram. Shell model calculations for proton-rich zn iso- topes via new generated effective interaction by artificial neural networks. Cumhuriyet Science Journal, 40:570-577, 2019.
Alpha half-lives calculation of superheavy nuclei with q α -value predictions based on the bayesian neural network approach. U Baños Rodríguez, C Vargas, M Gonçalves, S B Duarte, F Guzmán, Journal of Physics G: Nuclear and Particle Physics. 46115109U. Baños Rodríguez, C. Zuñiga Vargas, M. Gonçalves, S. B. Duarte, and F. Guzmán. Alpha half-lives calculation of superheavy nuclei with q α -value predictions based on the bayesian neural network approach. Journal of Physics G: Nuclear and Particle Physics, 46:115109, 2019.
The nubase2020 evaluation of nuclear physics properties. F G Kondev, M Wang, W J Huang, S Naimi, G Audi, Chinese Physics C. 4530001F.G. Kondev, M. Wang, W.J. Huang, S. Naimi, and G. Audi. The nubase2020 evaluation of nuclear physics properties. Chinese Physics C, 45:030001, 2021.
A deep-forest based approach for detecting fraudulent online transaction. L Wang, Z Zhang, X Zhang, X Zhou, P Wang, Y Zheng, Advances in Computers. Elsevier120L. Wang, Z. Zhang, X. Zhang, X. Zhou, P. Wang, and Y. Zheng. A deep-forest based approach for detecting fraudulent online transaction. In Advances in Computers, volume 120, pages 1-38. Elsevier, 2021.
Surface diffuseness correction in global mass formula. N Wang, M Liu, X Wu, J Meng, Physics Letters B. 734N. Wang, M. Liu, X. Wu, and J. Meng. Surface diffuseness correction in global mass formula. Physics Letters B, 734:215-219, 2014.
Estimates of the influence of nuclear deformations and shell effects on the lifetimes of exotic radioactivities. Y.-J Shi, W J Swiatecki, Nuclear Physics A. 464Y.-J. Shi and W.J. Swiatecki. Estimates of the influence of nuclear deforma- tions and shell effects on the lifetimes of exotic radioactivities. Nuclear Physics A, 464:205-222, 1987.
Theory of cluster radioactive decay and of cluster formation in nuclei. S S Malik, R K Gupta, Physical Review C. 39S. S. Malik and R. K. Gupta. Theory of cluster radioactive decay and of cluster formation in nuclei. Physical Review C, 39:1992-2000, 1989.
Proximity forces. J Blocki, J Randrup, W J Swiactecki, C F Tsang, Annals of Physics. 105J. Blocki, J. Randrup, W.J. Swiactecki, and C.F. Tsang. Proximity forces. Annals of Physics, 105:427-462, 1977.
Temperature-dependent potential in clusterdecay process. R Gharaei, V Zanganeh, Nuclear Physics A. 952R. Gharaei and V. Zanganeh. Temperature-dependent potential in cluster- decay process. Nuclear Physics A, 952:28-40, 2016.
Decay width and the shift of a quasistationary state. S A Gurvitz, G Kalbermann, Physical Review Letters. 59S. A. Gurvitz and G. Kalbermann. Decay width and the shift of a quasista- tionary state. Physical Review Letters, 59:262-265, 1987.
The influence of the dependence of surface energy coefficient to temperature in the proximity model. M Salehi, O N Ghodsi, Chinese Physics Letters. 3042502M. Salehi and O. N. Ghodsi. The influence of the dependence of surface energy coefficient to temperature in the proximity model. Chinese Physics Letters, 30:042502, 2013.
Thermal properties of nuclei. G Sauer, H Chandra, U Mosel, Nuclear Physics A. 264G. Sauer, H. Chandra, and U. Mosel. Thermal properties of nuclei. Nuclear Physics A, 264:221-243, 1976.
Temperature and mass dependence of level density parameter. S Shlomo, J B Natowitz, Physical Review C. 44S. Shlomo and J. B. Natowitz. Temperature and mass dependence of level density parameter. Physical Review C, 44:2878-2880, 1991.
Influence of the nuclear surface diffuseness on exotic cluster decay half-life times. R K Gupta, S Singh, R K Puri, Sandulescu, W Greiner, Scheid, Journal of Physics G: Nuclear and Particle Physics. 18R K Gupta, S Singh, R K Puri, A Sandulescu, W Greiner, and W Scheid. Influence of the nuclear surface diffuseness on exotic cluster decay half-life times. Journal of Physics G: Nuclear and Particle Physics, 18:1533-1542, 1992.
Alpha-cluster transfer process in colliding s-d shell nuclei using the energy density formalism. R K Puri , R K Gupta, Journal of Physics G: Nuclear and Particle Physics. 18R K Puri and R K Gupta. Alpha-cluster transfer process in colliding s-d shell nuclei using the energy density formalism. Journal of Physics G: Nuclear and Particle Physics, 18:903-915, 1992.
Unified formula of half-lives for α decay and cluster radioactivity. D Ni, Z Ren, T Dong, C Xu, Physical Review C. 7844310D. Ni, Z. Ren, T. Dong, and C. Xu. Unified formula of half-lives for α decay and cluster radioactivity. Physical Review C, 78:044310, 2008.
|
[] |
[
"A model independent approach to the dark energy equation of state",
"A model independent approach to the dark energy equation of state"
] |
[
"P S Corasaniti \nCentre for Theoretical Physics\nUniversity of Sussex\nBN1 9QJBrightonUnited Kingdom\n",
"E J Copeland \nCentre for Theoretical Physics\nUniversity of Sussex\nBN1 9QJBrightonUnited Kingdom\n"
] |
[
"Centre for Theoretical Physics\nUniversity of Sussex\nBN1 9QJBrightonUnited Kingdom",
"Centre for Theoretical Physics\nUniversity of Sussex\nBN1 9QJBrightonUnited Kingdom"
] |
[] |
The consensus of opinion in cosmology is that the Universe is currently undergoing a period of accelerated expansion. With current and proposed high precision experiments it offers the hope of being able to discriminate between the two competing models that are being suggested to explain the observations, namely a cosmological constant or a time dependent 'Quintessence' model. The latter suffers from a plethora of scalar field potentials all leading to similar late time behaviour of the universe, hence to a lack of predictability. In this paper, we develop a model independent approach which simply involves parameterizing the dark energy equation of state in terms of known observables. This allows us to analyse the impact dark energy has had on cosmology without the need to refer to particular scalar field models and opens up the possibility that future experiments will be able to constrain the dark energy equation of state in a model independent manner.PACS numbers: 98.70.Vc,98.80.Cq
|
10.1103/physrevd.67.063521
|
[
"https://arxiv.org/pdf/astro-ph/0205544v3.pdf"
] | 118,959,614 |
astro-ph/0205544
|
4e9c9f5e57006e6c561362b4d5cdeae524042fbf
|
A model independent approach to the dark energy equation of state
27 Jan 2003 (Dated: November 2, 2018)
P S Corasaniti
Centre for Theoretical Physics
University of Sussex
BN1 9QJBrightonUnited Kingdom
E J Copeland
Centre for Theoretical Physics
University of Sussex
BN1 9QJBrightonUnited Kingdom
INTRODUCTION
Current cosmological observations suggest that the energy density of the universe is dominated by an unknown type of matter, called 'dark energy' [1,2,3,4,5,6]. If correct, it is characterized by a negative value for the equation of state parameter w (close to −1) and is responsible for the present accelerating expansion of the Universe. The simplest dark energy candidate is the cosmological constant, but in alternative scenarios the accelerated phase is driven by a dynamical scalar field called 'Quintessence' [7,8,9,10]. Although recent analysis of the data provides no evidence for the need of a quintessential type contribution [11,12,13], the nature of the dark energy remains elusive. In fact the present value of its equation of state w Q is constrained to be close to the cosmological constant one, but the possibility of a time dependence of w Q or a coupling with cold dark matter for example [14] cannot be excluded. Recent studies have analysed our ability to estimate w Q with high redshift object observations [15,16,17,18,19] and to reconstruct both the time evolution of w Q [20,21,22,23,24,25,26] as well as the scalar field potential V (Q) [27,28,29]. An alternative method for distinguishing different forms of dark energy has been introduced in [30], but generally it appears that it will be difficult to really detect such time variations of w Q even with the proposed SNAP satellite [31,32]. One of the problems that has been pointed out is that, previously, unphysical fitting functions for the luminosity distance have been used, making it difficult to accurately reproduce the properties of a given quintessence model from a simulated data sample [20,26,34]. A more efficient approach consists of using a time expansion of w Q at low redshifts.
For instance, in [21,25,26] a polynomial fit in redshift space z was proposed, while in [33,34] a logarithmic expansion in z was proposed to take into account a class of quintessence models with slow variation in the equation of state. However, even these two expansions are limited, in that they cannot describe models with a rapid variation in the equation of state, and the polynomial expansion introduces a number of unphysical parameters whose values are not directly related to the properties of a dark energy component. The consequence is that their application is limited to low redshift measurements and cannot be extended, for example, to the analysis of CMB data. An interesting alternative to the fitting expansion approach has recently been proposed [35], in which the time behaviour of the equation of state can be reconstructed from cosmological distance measurements without assuming the form of its parametrization. In spite of the efficiency of such an approach, it does not take into account the effects of the possible clustering properties of dark energy, which become manifest at higher redshifts. Hence its application has to be limited to the effects dark energy can produce on the expansion rate of the universe at low redshifts. On the other hand, it has been argued that dark energy does not leave a detectable imprint at higher redshifts, since it has only recently become the dominant component of the universe. Such a statement, however, is model dependent; on the face of it there is no reason why the dark energy should be negligible deep in the matter dominated era. For instance, CMB observations constrain the dark energy density at decoupling to be less than 30 per cent of the critical one [36]. Such a non negligible contribution can be realized in a large class of models and therefore cannot be a priori excluded.
Consequently it is of crucial importance to find an appropriate parametrization for the dark energy equation of state that allows us to take into account the full impact dark energy has on different types of cosmological observations. In this paper we attempt to address this issue by providing a formula for w_Q that can accommodate most of the proposed dark energy models in terms of physically motivated parameters. We will then be in a position to benefit from the fact that the cosmological constraints on these parameters will allow us to infer general information about the properties of dark energy.
DARK ENERGY EQUATION OF STATE
The dynamics of the quintessence field Q for a given potential V is described by the system of equations:
\ddot{Q} + 3H\dot{Q} + \frac{dV}{dQ} = 0,    (1)
and
H^2 = \frac{8\pi G}{3}\left[\rho_m + \rho_r + \frac{\dot{Q}^2}{2} + V(Q)\right],    (2)
where ρ_m and ρ_r are the matter and radiation energy densities respectively, and an overdot denotes a derivative with respect to cosmic time.
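The system (1)-(2) is what is solved numerically to produce fig. 1. As a rough sketch (the inverse power law potential, parameter values, units with 8πG/3 = 1, and the crude Euler stepping are all illustrative assumptions of ours, not the authors' code), one can evolve the field and record w_Q = (Q̇²/2 − V)/(Q̇²/2 + V):

```python
import math

def evolve_quintessence(alpha=2.0, M4=1e-7, a_init=1e-4):
    """Euler integration of Eqs. (1)-(2) for V(Q) = M4 / Q**alpha,
    in units where 8*pi*G/3 = 1; Om, Or are the a = 1 densities."""
    Om, Or = 0.3, 8e-5
    Q, dQ, a = 1.0, 0.0, a_init          # field value, velocity, scale factor
    w_history = []
    while a < 1.0:
        V = M4 / Q**alpha
        dVdQ = -alpha * M4 / Q**(alpha + 1)
        H = math.sqrt(Om / a**3 + Or / a**4 + 0.5 * dQ**2 + V)  # Eq. (2)
        dt = 1e-3 / H                    # step a fixed fraction of a Hubble time
        dQ += (-3.0 * H * dQ - dVdQ) * dt                       # Eq. (1)
        Q += dQ * dt
        a += a * H * dt
        K = 0.5 * dQ**2
        w_history.append((K - V) / (K + V))                     # w_Q at this step
    return w_history
```

Only the bound |w_Q| ≤ 1 is guaranteed by this sketch; reproducing the tracker curves of fig. 1 would require a higher-order integrator and tuned initial conditions.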
The specific evolution of w_Q(a), where a is the scale factor, depends on the shape of the potential; however, there are some common features in its behaviour that can be described in a model independent manner and which allow us to introduce some physical parameters. As a first approach, we notice that a large number of quintessence models are characterized by the existence of the so-called 'tracker regime'. It consists of a period during which the scalar field, while it is approaching a late time attractor, evolves with an almost constant equation of state whose value can track that of the background component. The necessary conditions for the existence of tracker solutions arising from scalar field potentials have been studied in [9,37]. In this paper, we consider a broad class of tracking potentials. These include models for which w_Q(a) evolves smoothly, as with the inverse power law [38], V(Q) ∼ 1/Q^α (INV), and the supergravity inspired potential [39], V(Q) ∼ e^{Q²/2}/Q^α (SUGRA). Late time rapidly varying equations of state arise in potentials with two exponential functions [40], V ∼ e^{−αQ} + e^{βQ} (2EXP), in the so-called 'Albrecht & Skordis' model [41] (AS) and in the model proposed by Copeland et al. [42] (CNR). To show this in more detail, in fig. 1 we plot the equation of state obtained by numerically solving Eq. (1) and Eq. (2) for each of these potentials. There are some generic features that appear to be present, and which we can make use of in our attempts to parameterize w_Q. For a large range of initial conditions of the quintessence field, the tracking phase starts before matter-radiation equality. In such a scenario w_Q(a) has three distinct phases, separated by two 'phase transitions'. Deep in both the radiation and matter dominated eras the equation of state, w_Q(a), takes on the values w_Q^r and w_Q^m respectively, values that are related to the equation of state of the background component w_B through [9,37]:
w_Q \approx \frac{w_B - 2(\Gamma - 1)}{2\Gamma - 1},    (3)
where Γ = V''V/(V')² and V' ≡ dV/dQ, etc. For the case of an exponential potential Γ = 1, with w_Q = w_B, but in general w_Q ≠ w_B. Therefore, if we do not specify the quintessence potential, the values of w_Q^r and w_Q^m should be considered as free parameters.
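A quick numerical illustration of Eq. (3) (the function name below is ours): an exponential potential has Γ = 1 and tracks the background exactly, while an inverse power law V ∝ Q^{−α} has Γ = 1 + 1/α.

```python
def tracker_w(w_B, Gamma):
    """Tracker equation of state, Eq. (3): w_Q = (w_B - 2(Gamma - 1)) / (2 Gamma - 1)."""
    return (w_B - 2.0 * (Gamma - 1.0)) / (2.0 * Gamma - 1.0)

# Exponential potential (Gamma = 1) tracks the background exactly:
print(tracker_w(1.0 / 3.0, 1.0))  # w_B = 1/3 in the radiation era
# Inverse power law V ~ Q**-2 has Gamma = 1 + 1/2 = 1.5:
print(tracker_w(0.0, 1.5))        # -0.5 in the matter era
```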
The two transition phases can each be described by two parameters: the value of the scale factor a_c^{r,m} when the equation of state w_Q begins to change, and the width Δ^{r,m} of the transition. Since Γ is constant or slowly varying during the tracker regime, the transition from w_Q^r to w_Q^m is always smooth and is similar for all the models (see fig. 1). To be more precise, we have found that a_c^r ∼ 10^{-5} and Δ^r ∼ 10^{-4} during this transition, the former number expected from the time of matter-radiation equality and the latter from the duration of the transition from radiation to matter domination. However, when considering the transition in w_Q from w_Q^m to the present day value w_Q^0, we see from fig. 1 that this can be slow (0 < a_c^m/Δ^m < 1) or rapid (a_c^m/Δ^m > 1) according to the slope of the quintessence potential. For instance, in models with a steep slope followed by a flat region or by a minimum, as in the case of the two exponentials, the AS potential or the CNR model, the scalar field evolves towards a solution that approaches the late time attractor, finally deviating from the tracking regime with the parameter Γ rapidly varying. In contrast, the inverse power law potential always has a slower transition, since Γ is constant for all times. Given these general features we conclude that the behaviour of w_Q(a) can be described in terms of functions, w_Q^p(a), involving the following parameters: w_Q^0, w_Q^m, w_Q^r, a_c^m and Δ^m. The authors of [43] have recently used an expansion in terms of a Fermi-Dirac function in order to constrain a class of dark energy models with rapid late time transitions. In what follows we find that a generalisation of this, involving a linear combination of such functions, allows a wider range of models to be investigated. To be more precise, we propose the following formula for w_Q^p(a):
w_Q^p(a) = F_1 f_r(a) + F_2 f_m(a) + F_3,    (4)

where the Fermi-Dirac functions are

f_{r,m}(a) = \left[1 + e^{-(a - a_c^{r,m})/\Delta^{r,m}}\right]^{-1}.    (5)
The coefficients F_1, F_2 and F_3 are determined by demanding that w_Q^p(a) takes on the respective values w_Q^r, w_Q^m, w_Q^0 during radiation (a_r) and matter (a_m) domination as well as today (a_0). Solving the algebraic equations that follow we have:
F_1 = \frac{(w_Q^m - w_Q^r)\left[f_m(a_0) - f_m(a_r)\right] - (w_Q^0 - w_Q^r)\left[f_m(a_m) - f_m(a_r)\right]}{\left[f_r(a_m) - f_r(a_r)\right]\left[f_m(a_0) - f_m(a_r)\right] - \left[f_r(a_0) - f_r(a_r)\right]\left[f_m(a_m) - f_m(a_r)\right]},    (6)

F_2 = \frac{w_Q^0 - w_Q^r - F_1\left[f_r(a_0) - f_r(a_r)\right]}{f_m(a_0) - f_m(a_r)},    (7)

F_3 = w_Q^r - F_1 f_r(a_r) - F_2 f_m(a_r),    (8)
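The linear solve for F_1, F_2, F_3 can be verified numerically. In the sketch below the Fermi-Dirac building block and all transition parameter values are illustrative assumptions (SUGRA-like numbers), and the function names are ours; by construction the resulting w_Q^p hits the three anchor values w_Q^r, w_Q^m, w_Q^0 at a_r, a_m, a_0.

```python
import math

def fermi(a, ac, dm):
    # Assumed Fermi-Dirac building block of Eqs. (4)-(5)
    return 1.0 / (1.0 + math.exp(-(a - ac) / dm))

def coefficients(w0, wm, wr, a0=1.0, am=1e-3, ar=1e-5,
                 ac_r=1e-5, d_r=1e-4, ac_m=0.1, d_m=0.7):
    """Solve Eqs. (6)-(8) for F1, F2, F3; also return f_r and f_m."""
    fr = lambda a: fermi(a, ac_r, d_r)
    fm = lambda a: fermi(a, ac_m, d_m)
    num = (wm - wr) * (fm(a0) - fm(ar)) - (w0 - wr) * (fm(am) - fm(ar))
    den = ((fr(am) - fr(ar)) * (fm(a0) - fm(ar))
           - (fr(a0) - fr(ar)) * (fm(am) - fm(ar)))
    F1 = num / den                                            # Eq. (6)
    F2 = (w0 - wr - F1 * (fr(a0) - fr(ar))) / (fm(a0) - fm(ar))  # Eq. (7)
    F3 = wr - F1 * fr(ar) - F2 * fm(ar)                       # Eq. (8)
    return F1, F2, F3, fr, fm

F1, F2, F3, fr, fm = coefficients(w0=-0.82, wm=-0.18, wr=0.10)
wp = lambda a: F1 * fr(a) + F2 * fm(a) + F3   # Eq. (4)
```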
where a_0 = 1, and the values of a_r and a_m can be chosen arbitrarily in the radiation and matter eras because of the almost constant nature of w_Q during those epochs. For example, in our simulations we assumed a_r = 10^{-5} and a_m = 10^{-3}. In table I we present the best fit parameters obtained by minimizing a chi-square for the different models of fig. 1, and in fig. 2 we plot the associated functions w_Q^p(a). It is encouraging to see how accurately the Fermi-Dirac functions mimic the exact time behaviour of w_Q(a) for the majority of the potentials. In fig. 3 we plot the absolute value of the difference Δw(a) between w_Q(a) and w_Q^p(a). The discrepancy is less than 1% for redshifts z < 10, where the energy density of the dark energy can produce observable effects in this class of models, and it remains below 9% between decoupling and matter-radiation equality. Only the CNR case is not accurately described by w_Q^p(a), due to the high frequency oscillations of the scalar field which occur at low redshift as it fluctuates around the minimum of its potential. In fact these oscillations are not detectable; rather, it is the time-average of w_Q(a) which is seen in the cosmological observables, and this can be described by the corresponding w_Q^p(a). There are a number of impressive features associated with the use of w_Q^p(a) in Eq. (4). For instance, it can reproduce not only the behaviour of models characterized by the tracker regime but also more general ones. As an example, in fig. 4 we plot w_Q^p(a) corresponding to three cases: a K-essence model [44] (blue solid line); a rapid late time transition [45] (red dash-dot line); and finally one with an equation of state w_Q^0 < −1 (green dash line). The observational constraints on w_Q^0, w_Q^m, w_Q^r, a_c^m and Δ^m lead to constraints on a large number of dark energy models, while at the same time providing us with model independent information on the evolution of the dark energy.
It could be argued that the five dimensional parameter space we have introduced is too large to be reliably constrained. Fortunately it can be further reduced without losing any of the essential details arising from tracker solutions in these Quintessence models. In fact, nucleosynthesis places tight constraints on the allowed energy density of any dark energy component, generally forcing it to be negligible in the radiation era [47,48]. The real impact of dark energy occurs after matter-radiation equality, so we can set w_Q^r = w_Q^m. Consequently we end up with four parameters: w_Q^0, w_Q^m, a_c^m and Δ^m. Although they enlarge the already large parameter space of cosmology, they are necessary if we are to answer fundamental questions about the nature of the dark energy. The parameters make sense: if w_Q(a) evolves in time, we need to know when it changed (a_c^m), how rapidly (Δ^m), and what its value was when it changed (w_Q^m). Neglecting effects during the radiation dominated era, it proves useful to provide a shorter version of Eq. (4); since the transition from the radiation to the matter dominated era can be neglected, the linear combination Eq. (4) can be rewritten as*:
w_Q^p(a) = w_Q^0 + (w_Q^m - w_Q^0) \times \frac{1 + e^{a_c^m/\Delta_m}}{1 + e^{-(a - a_c^m)/\Delta_m}} \times \frac{1 - e^{-(a-1)/\Delta_m}}{1 - e^{1/\Delta_m}}.    (9)
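By construction, Eq. (9) returns w_Q^0 today (a = 1, where the last factor vanishes) and w_Q^m deep in the matter era (a → 0, where both factors reduce to unity). A minimal check (function name and the SUGRA-like sample parameter values are ours):

```python
import math

def w_p(a, w0, wm, ac, dm):
    """Reduced four-parameter equation of state, Eq. (9)."""
    t1 = (1 + math.exp(ac / dm)) / (1 + math.exp(-(a - ac) / dm))
    t2 = (1 - math.exp(-(a - 1) / dm)) / (1 - math.exp(1 / dm))
    return w0 + (wm - w0) * t1 * t2

# Limiting values built into the construction:
print(w_p(1.0, -0.82, -0.18, 0.1, 0.7))  # today: returns w0
print(w_p(0.0, -0.82, -0.18, 0.1, 0.7))  # a -> 0: returns wm
```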
As we can see in fig. 5, the relative difference between the exact solution w_Q(a) of the Klein-Gordon equation and Eq. (9) is smaller than 5% for redshifts z < 1000; it therefore provides a very good approximation to the evolution of the quintessence equation of state. Both Eq. (4) and Eq. (9) are very useful in that they allow us to take into account the clustering properties of dark energy (see for instance [46]) and to combine low redshift measurements with large scale structure and CMB data. We would like to point out that a key result of this paper is that our expansion for the equation of state accommodates a broader range of redshift experiments than has previously been proposed.

* We thank Eric Linder for pointing this out to us.
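As a sketch of how best fit parameters like those in table I could be obtained, the toy grid search below fits (a_c^m, Δ_m) of the reduced formula Eq. (9) to a synthetic, noise-free w_Q(a) sample. The sampling grid, the fixed w_Q^0 and w_Q^m, and the SUGRA-like truth values are illustrative assumptions, and Eq. (9) is re-implemented so the snippet is self-contained:

```python
import math

def w_p(a, w0, wm, ac, dm):
    # Reduced four-parameter form, Eq. (9)
    t1 = (1 + math.exp(ac / dm)) / (1 + math.exp(-(a - ac) / dm))
    t2 = (1 - math.exp(-(a - 1) / dm)) / (1 - math.exp(1 / dm))
    return w0 + (wm - w0) * t1 * t2

# Synthetic "data": a smooth equation of state sampled at low redshift
a_grid = [0.05 * i for i in range(1, 21)]            # a = 0.05 ... 1.0
truth = dict(w0=-0.82, wm=-0.18, ac=0.10, dm=0.70)   # SUGRA-like values
data = [w_p(a, **truth) for a in a_grid]

# Brute-force chi-square minimisation over (ac, dm); w0, wm held fixed
best = None
for ac in [0.02 * i for i in range(1, 26)]:          # ac = 0.02 ... 0.50
    for dm in [0.05 * j for j in range(1, 21)]:      # dm = 0.05 ... 1.00
        chi2 = sum((w_p(a, truth["w0"], truth["wm"], ac, dm) - d) ** 2
                   for a, d in zip(a_grid, data))
        if best is None or chi2 < best[0]:
            best = (chi2, ac, dm)
print(best)  # chi-square ~ 0 at the input values ac = 0.10, dm = 0.70
```

A real analysis would of course use noisy distance data and a proper minimiser, but the structure of the fit is the same.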
CONCLUSIONS
The evidence for a present day accelerating universe appears to be mounting, and accompanying it is the need to understand the nature of the dark energy that many believe to be responsible for this phenomenon. Of the two possibilities proposed to date, a cosmological constant or a Quintessence scalar field, the latter suffers from the fact that a plethora of models have been proposed, all of which satisfy the late time feature of an accelerating universe. Yet there is no definitive particle physics inspired model for the dark energy, so another route should be explored: determining the equation of state of the dark energy in a model independent manner. If this were to work, it would allow us to discuss the impact dark energy has had on cosmology without the need to refer to a particular dark energy scenario. In this paper, we have begun addressing this approach. We have introduced a parameterization of the dark energy equation of state, w_Q^p(a), which involves five parameters, and shown how well it reproduces a wide range of dark energy models. By estimating the values of these parameters from cosmological data we can constrain the dark energy in a model independent way. This could be important in future years when high precision CMB and large scale structure observations begin to probe medium to large redshifts, regions where differences in the features of the Quintessence models begin to emerge. Although the low redshift data indicate that it is currently impossible to discriminate between a cosmological constant and a Quintessence model, moving to higher redshifts may yield evidence for a time varying equation of state, in which case we need to be in a position to determine such an equation of state in a model independent manner. We believe this paper helps towards this goal.
FIG. 1: Evolution of w_Q against the scale factor for an inverse power law model (solid blue line), SUGRA model [39] (dash red line), two exponential potential model [40] (solid magenta line), AS model [41] (solid green line) and CNR model [42] (dot orange line).
FIG. 2: Plot of the w_Q^p(a) best fit for the different potentials.
FIG. 3: Absolute value of the difference between w_Q(a) and w_Q^p(a) for the models of fig. 1.
FIG. 4: Time evolution of w_Q^p(a) for a K-essence model (blue solid line), a late time transition (red dash-dot line) and a model with w_Q^0 < −1 (green dash line).
FIG. 5: Absolute value of the difference between w_Q(a) and the low redshift formula Eq. (9) for the models of fig. 1.
TABLE I: Best fit values of the parameters of the expansion (4).

          w_Q^0    w_Q^m    w_Q^r    a_c^m    Δ_m
INV      -0.40    -0.27    -0.02    0.18     0.5
SUGRA    -0.82    -0.18     0.10    0.1      0.7
2EXP     -1        0.01     0.31    0.19     0.043
AS       -0.96    -0.01     0.31    0.53     0.13
CNR      -1.0      0.1      0.32    0.15     0.016
We would like to thank Carlo Ungarelli, Nelson J. Nunes, Michael Malquarti and Sam Leach for useful discussions. In particular we are grateful to Eric Linder for the useful suggestions he made. PSC is supported by a University of Sussex bursary.
[1] Perlmutter, S. et al. 1999, Astrophys. J., 517, 565
[2] Riess, A. et al. 1999, Astrophys. J., 117, 707
[3] De Bernardis, P. et al. 2000, Nature, 404, 955-959
[4] Netterfield, C. B. et al. 2001, astro-ph/0104460
[5] Pryke, C. et al. 2001, Astrophys. J., 568, 46
[6] Efstathiou, G. et al. 2001, astro-ph/0109152
[7] Wetterich, C. 1988, Nucl. Phys. B, 302, 645
[8] Ratra, B. and Peebles, P. J. E. 1988, Phys. Rev. D, 37, 3406
[9] Caldwell, R. R., Dave, R. and Steinhardt, P. J. 1998, Phys. Rev. Lett., 80, 1582-1585
[10] Ferreira, P. G. and Joyce, M. 1998, Phys. Rev. D, 58, 023503
[11] Corasaniti, P. S. and Copeland, E. J. 2002, Phys. Rev. D, 65, 043004
[12] Bean, R. and Melchiorri, A. 2002, Phys. Rev. D, 65, 041302
[13] Hannestad, S. and Mortsell, S. 2002, astro-ph/020509
[14] Amendola, L. et al. 2002, astro-ph/0205097
[15] Waga, I. and Miceli, A. P. 1999, Phys. Rev. D, 59, 103507
[16] Newman, J. A. and Davis, M. 2000, Astrophys. J. Lett., 534, L11
[17] Newman, J. A. and Davis, M. 2002, Astrophys. J., 564, 567
[18] Lima, J. A. S. and Alcaniz, J. S. 2002, Astrophys. J., 566, 15
[19] Calvao, M. O., de Mello Neto, J. R. T. and Waga, I. 2002, Phys. Rev. Lett., 88, 9
[20] Huterer, D. and Turner, M. 2001, Phys. Rev. D, 64, 123527
[21] Astier, P. 2001, Phys. Lett. B, 500, 8
[22] Barger, V. and Marfatia, D. 2001, Phys. Lett. B, 498, 67
[23] Goliath, M. et al. 2001, astro-ph/0104009
[24] Maor, I., Brustein, R. and Steinhardt, P. 2001, Phys. Rev. Lett., 86, 6
[25] Weller, J. and Albrecht, A. 2001, Phys. Rev. Lett., 86, 1939
[26] Weller, J. and Albrecht, A. 2002, Phys. Rev. D, 65, 103512
[27] Saini, T. D. et al. 2000, Phys. Rev. Lett., 85, 1162
[28] Huterer, D. and Turner, M. 1999, Phys. Rev. D, 60, 081301
[29] Chiba, T. and Nakamura, T. 2000, Phys. Rev. D, 62, 121301
[30] Sahni, V. et al. 2002, astro-ph/0201498
[31] Kujat, J. et al. 2001, astro-ph/0112221, Astrophys. J. in press
[32] Maor, I. et al. 2001, astro-ph/0112526
[33] Efstathiou, G. 2000, MNRAS, 342, 810
[34] Gerke, B. F. and Efstathiou, G. 2002, astro-ph/0201336, MNRAS in press
[35] Huterer, D. and Starkman, G. 2002, astro-ph/0207517
[36] Bean, R., Hansen, S. H. and Melchiorri, A. 2001, Phys. Rev. D, 64, 103508
[37] Steinhardt, P. J., Wang, L. and Zlatev, I. 1999, Phys. Rev. D, 59, 123504
[38] Zlatev, I., Wang, L. and Steinhardt, P. J. 1999, Phys. Rev. Lett., 82, 896-899
[39] Brax, P. and Martin, J. 1999, Phys. Lett. B, 468, 40-45
[40] Barreiro, T., Copeland, E. J. and Nunes, N. J. 2000, Phys. Rev. D, 61, 127301
[41] Albrecht, A. and Skordis, C. 1999, Phys. Rev. Lett., 84, 2076-2079
[42] Copeland, E. J., Nunes, N. J. and Rosati, F. 2000, Phys. Rev. D, 62, 123503
[43] Bassett, B. et al. 2002, astro-ph/0203383
[44] Armendariz-Picon, C., Mukhanov, V. and Steinhardt, P. J. 2000, Phys. Rev. Lett., 85, 4438
[45] Parker, L. and Raval, A. 1999, Phys. Rev. D, 60, 123502
[46] Dave, R., Caldwell, R. R. and Steinhardt, P. J. 2002, Phys. Rev. D, 66, 023516
[47] Bean, R., Hansen, S. H. and Melchiorri, A. 2001, Phys. Rev. D, 64, 103508
[48] Yahiro, M. et al. 2002, Phys. Rev. D, 65, 063502
"The [C II] emission as a molecular gas mass tracer in galaxies at low and high redshift",
"The [C II] emission as a molecular gas mass tracer in galaxies at low and high redshift"
] |
A. Zanella (European Southern Observatory, Karl Schwarzschild Straße 2, 85748 Garching, Germany), E. Daddi (AIM, CEA, IRFU, Université Paris-Saclay, Université Paris Diderot, Sorbonne Paris Cité, CNRS, F-91191 Gif-sur-Yvette, France), G. Magdis (Dawn Cosmic Center and Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Denmark), T. Diaz-Santos (Núcleo de Astronomía, Facultad de Ingeniería, Universidad Diego Portales, Santiago, Chile), D. Cormier (CEA Saclay), D. Liu (Max Planck Institute for Astronomy, Heidelberg, Germany), M. Pannella (Faculty of Physics, Ludwig-Maximilians Universität, Munich, Germany), F. Bournaud (CEA Saclay), F. Walter (MPIA Heidelberg), T. Wang (Institute of Astronomy, the University of Tokyo, and National Observatory of Japan), D. Elbaz (CEA Saclay), R. T. Coogan (CEA Saclay and University of Sussex, Brighton, UK), A. Cibinel (University of Sussex), R. Gobat (Institute for Advanced Study, Seoul, Korea, and Pontificia Universidad Católica de Valparaíso, Chile), M. Dickinson (National Optical Astronomy Observatory, Tucson, AZ, USA), M. Sargent (University of Sussex), G. Popping (MPIA Heidelberg), S. C. Madden (CEA Saclay), M. Bethermin (Aix Marseille Univ, CNRS, LAM, Marseille, France), T. M. Hughes (Universidad de Valparaíso; University of Science and Technology of China; Chinese Academy of Sciences South America Center for Astronomy), F. Valentino (Niels Bohr Institute, Copenhagen), W. Rujopakarn (Chulalongkorn University, Bangkok; National Astronomical Research Institute of Thailand; Kavli IPMU, The University of Tokyo)
We present ALMA Band 9 observations of the [C II] 158 µm emission for a sample of 10 main-sequence galaxies at redshift z ∼ 2, with typical stellar masses (log M*/M⊙ ∼ 10.0-10.9) and star formation rates (∼ 35-115 M⊙ yr^−1). Given the strong and well understood evolution of the interstellar medium from the present to z = 2, we investigate the behaviour of the [C II] emission and empirically identify its primary driver. We detect [C II] from six galaxies (four secure, two tentative) and estimate ensemble averages including non-detections. The [C II]-to-infrared luminosity ratio (L_[CII]/L_IR) of our sample is similar to that of local main-sequence galaxies (∼ 2 × 10^−3), and ∼ 10 times higher than that of starbursts. The [C II] emission has an average spatial extent of 4-7 kpc, consistent with the optical size. Complementing our sample with literature data, we find that the [C II] luminosity correlates with galaxies' molecular gas mass, with a mean absolute deviation of 0.2 dex and without evident systematics: the [C II]-to-H2 conversion factor (α_[CII] ∼ 30 M⊙/L⊙) is largely independent of galaxies' depletion time, metallicity, and redshift. [C II] therefore seems a convenient tracer for estimating galaxies' molecular gas content regardless of their starburst or main-sequence nature, extending to metal-poor galaxies at low and high redshift. The dearth of [C II] emission reported for z > 6-7 galaxies might suggest either a high star formation efficiency or a small fraction of UV light from star formation reprocessed by dust.
DOI: 10.1093/mnras/sty2394 · arXiv: 1808.10331, 30 Aug 2018 (https://arxiv.org/pdf/1808.10331v1.pdf)
Preprint 31 August 2018, compiled using the MNRAS LATEX style file v3.0. Key words: galaxies: evolution - galaxies: high-redshift - galaxies: ISM - galaxies: star formation - galaxies: starburst - submillimetre: galaxies
INTRODUCTION
A tight correlation between the star formation rates (SFR) and stellar masses (M*) in galaxies seems to be in place both in the local Universe and at high redshift (at least up to redshift z ∼ 7, e.g. Bouwens et al. 2012, Steinhardt et al. 2014, Salmon et al. 2015): the so-called "main-sequence" (MS; e.g. Noeske et al. 2007, Elbaz et al. 2007, Daddi et al. 2007, Stark et al. 2009, followed by many others). The normalization of this relation increases with redshift. At fixed stellar mass (∼ 10^10 M⊙), z ∼ 1 galaxies have SFRs comparable to local Luminous Infrared Galaxies (LIRGs); at z ∼ 2 their SFR is further enhanced and they form stars at rates comparable to local Ultra Luminous Infrared Galaxies (ULIRGs). However, the smooth dynamical disk structure of high-redshift main-sequence sources, together with the tightness of the SFR - M* relation, disfavour the hypothesis that the intense star formation activity of these galaxies is triggered by major mergers, as by contrast happens at z = 0 for ULIRGs (e.g., Armus et al. 1987, Sanders & Mirabel 1996, Bushouse et al. 2002). The high SFRs in the distant Universe seem instead to be sustained by secular processes (e.g. cold gas inflows) producing more stable star formation histories (e.g., Noeske et al. 2007, Davé et al. 2012).
Main-sequence galaxies are responsible for ∼ 90% of the cosmic star formation rate density (e.g. Rodighiero et al. 2011, Sargent et al. 2012), whereas the remaining ∼ 10% of the cosmic SFR density is due to sources strongly deviating from the main sequence, showing enhanced SFRs and extreme infrared luminosities. Similarly to local ULIRGs, star formation in these starburst (SB) galaxies is thought to be ignited by major merger episodes (e.g., Elbaz et al. 2011, Nordon et al. 2012, Hung et al. 2013, Schreiber et al. 2015, Puglisi et al. 2017). Throughout this paper we will consider as starbursts all the sources that fall > 4 times above the main sequence (Rodighiero et al. 2011).
To understand the mechanisms triggering star formation, it is crucial to know the molecular gas reservoir in galaxies, which forms the main fuel for star formation (e.g. Bigiel et al. 2008), at the peak of the cosmic star formation history (z ∼ 2). Due to their high luminosities, starbursts have long been the main sources studied, although they only represent a small fraction of the population of star-forming galaxies. Only recently has it become possible to gather large samples of z ∼ 1-2 main-sequence sources and investigate their gas content thanks to their CO and dust emission (e.g. Genzel et al. 2010, Carilli & Walter 2013, Tacconi et al. 2013, Combes et al. 2013, Scoville et al. 2015, Daddi et al. 2015, Walter et al. 2016, Dunlop et al. 2017). Observing the CO transitions at higher redshift, however, becomes challenging since the line luminosity dims with cosmological distance, the contrast against the CMB becomes lower (e.g. da Cunha et al. 2013), and it weakens as metallicity decreases (as expected at high z). Some authors describe the latter effect stating that a large fraction of molecular gas becomes "CO dark", meaning that the CO line no longer traces H2 (e.g. Wolfire et al. 2010, Shi et al. 2016, Madden et al. 2016, Amorín et al. 2016, Glover & Smith 2016) and therefore the CO luminosity per unit gas mass is much lower on average for these galaxies. Similarly, the dust content of galaxies decreases with metallicity and therefore it might not be a suitable tracer of molecular gas at high redshift. An alternative possibility is to use other rest-frame far-infrared (IR) lines instead. Recently [C I] has been proposed as a molecular gas tracer (e.g., Papadopoulos & Greve 2004, Walter et al. 2011, Bothwell et al. 2016, Popping et al. 2017), although it is fainter than many CO transitions and this is still an open field of research.
Alternatively the [C II] 2 P 3/2 -2 P 1/2 transition at 158 µm might be a promising tool to investigate the gas physical conditions in the distant Universe (e.g. Carilli & Walter 2013).
[C II] has been identified as one of the brightest fine structure lines emitted from star-forming galaxies. It has a lower ionization potential than H I (11.3 eV instead of 13.6 eV) and therefore it can be produced in the cold atomic interstellar medium (ISM), molecular, and ionized gas. However, several studies have argued that the bulk of galaxies' [C II] emission originates in the external layers of molecular clouds heated by the far-UV radiation emitted from hot stars, with 60-95% of the total [C II] luminosity arising from photodissociation regions (PDRs, e.g. Stacey et al. 1991, Sargsyan et al. 2012, Rigopoulou et al. 2014, Croxall et al. 2017). In particular, Pineda et al. (2013) and Velusamy & Langer (2014) showed that ∼ 75% of the [C II] emission in the Milky Way comes from the molecular gas; this is in good agreement with simulations showing that 60%-85% of the [C II] luminosity emerges from the molecular phase (Olsen et al. 2017, Accurso et al. 2017b, Vallini et al. 2015). There are also observational and theoretical models suggesting that [C II] is a good tracer of the putative "CO dark" gas. The main reason for this is the fact that in the outer regions of molecular clouds, where the bulk of the gas-phase carbon resides, H2 is shielded either by dust or self-shielded from UV photodissociation, whereas CO is more easily photodissociated into C and C+. This H2 is therefore not traced by CO, but it mainly emits in [C II] (e.g. Maloney & Black 1988, Madden et al. 1993, Poglitsch et al. 1995, Wolfire et al. 2010, Pineda et al. 2013, Nordon & Sternberg 2016, Fahrion et al. 2017, Glover & Smith 2016). Another advantage of using the [C II] emission line is the fact that it possibly traces also molecular gas with moderate density. In fact, the critical density needed to excite the [C II] emitting level through electron impacts is low (∼ 5-50 cm−3).
For comparison, the critical density needed for CO excitation is higher (∼ 1000 H/cc), so low-density molecular gas can emit [C II], but not CO (e.g. Goldsmith et al. 2012, Narayanan & Krumholz 2017). This could be an important contribution given the fact that ∼ 30% of the molecular gas in high-redshift galaxies has a density < 50 H/cc (Bournaud et al. in prep. 2017), although detailed simulations of the [C II] emission in turbulent disks are still missing and observational constraints are currently lacking.
The link between the [C II] emission and star-forming regions is further highlighted by the well-known relation between the [C II] and IR luminosities (L [C II] and L IR respectively, e.g. De Looze et al. 2010, De Looze et al. 2014, Popping et al. 2014, Herrera-Camus et al. 2015, Popping et al. 2016, Olsen et al. 2016, Vallini et al. 2016), since the IR luminosity is considered a good indicator of the SFR (Kennicutt 1998). However, this relation is not unique and different galaxies show distinct L [C II] /L IR ratios.

Figure 1. HST and ALMA observations of our sample galaxies (ID9347, ID6515, ID10076, ID9834, ID9681, ID10049, ID2861, ID2910, ID7118, ID8490). For each source we show the HST /WFC3 image taken with the F160W filter, the stellar mass map, the star formation rate map, and the radio observations taken with VLA. The overplotted black contours, when present, show the > 3σ [C II] emission. The green contours indicate the > 3σ 850 µm continuum. The color scale in all panels is linear and it is chosen to show galaxies' features at best. The units of the color bars are the following: counts s−1 for F160W, 10^9 M⊙ for the stellar mass maps, M⊙ yr−1 for the SFR maps, and Jy for the radio.

In fact,
in the local Universe main-sequence sources show a constant L [C II] /L IR ∼ 0.002-0.004, although with substantial scatter (e.g., Stacey et al. 1991, Malhotra et al. 2001, Cormier et al. 2015, Smith et al. 2017). When local starburst galaxies (LIRGs and ULIRGs) with L IR > 10^11 L⊙ are also included, the [C II]/IR luminosity ratio drops significantly, by up to an order of magnitude (e.g. Malhotra et al. 1997, Farrah et al. 2013). These sources are usually referred to as "[C II] deficient"; Zhao et al. (2013) also find a deficit when starbursts are considered. This is likely related to the enhanced star formation efficiency (SFE = SFR/M mol ) of starbursts with respect to local main-sequence galaxies, consistent with the results by Daddi et al. (2010) and Genzel et al. (2010). This relation between L [C II] /L IR and galaxies' SFE could be due to the fact that the average properties of the interstellar medium in main-sequence and starburst sources are significantly different: the highly compressed and more efficient star formation in starbursts could enhance the ionization parameters and drive lower line-to-continuum ratios (Graciá-Carpio et al. 2011). At high redshift, observations become more challenging, mainly due to the fainter fluxes of the targets: so far z > 1 studies have mainly targeted IR-selected sources (e.g., the most luminous sub-millimeter galaxies and quasars), whereas measurements for IR-fainter main-sequence targets are still limited (e.g., Stacey et al. 2010, Ivison et al. 2010, Swinbank et al. 2012, Riechers et al. 2014, Huynh et al. 2014, Brisbin et al. 2015). Therefore it is not yet clear whether high-z main-sequence galaxies, which have SFRs similar to those of (U)LIRGs, are expected to be [C II] deficient. With our sample we start to push the limit of current observations up to redshift z ∼ 2.
The goal of this paper is to understand whether main-sequence, z ∼ 2 galaxies are [C II] deficient and to investigate which physical parameters the [C II] emission line is primarily sensitive to. Interestingly, we find that its luminosity traces galaxies' molecular gas mass and could therefore be used as an alternative to other proxies (e.g. CO, [C I], or dust emission). Given its brightness and the fact that it remains luminous at low metallicities where the CO largely fades, this emission line might become a valuable resource to explore galaxies' gas content at very high redshift. Hence understanding the [C II] behaviour in z ∼ 2 main-sequence galaxies, whose physical properties are nowadays relatively well constrained, will lay the groundwork for future explorations of the ISM at higher redshift.
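As an illustration of how such a [C II]-based gas mass estimate would work in practice, the sketch below converts a hypothetical [C II] line flux into a molecular gas mass using the standard solar-luminosity line conversion (Solomon & Vanden Bout 2005) together with the α [C II] ∼ 30 M⊙/L⊙ factor discussed above. The input flux, redshift, and luminosity distance are illustrative assumptions, not measurements from this paper.

```python
# Sketch: molecular gas mass from a [C II] line flux, assuming the
# alpha_[CII] ~ 30 Msun/Lsun conversion factor discussed in the text.
# The input flux, redshift, and luminosity distance are hypothetical.

NU_REST_CII_GHZ = 1900.54          # [C II] rest frequency (GHz)

def cii_line_luminosity(flux_jy_kms, z, d_l_mpc):
    """Line luminosity in Lsun (Solomon & Vanden Bout 2005):
    L = 1.04e-3 * S_dv [Jy km/s] * nu_obs [GHz] * D_L^2 [Mpc^2]."""
    nu_obs = NU_REST_CII_GHZ / (1.0 + z)
    return 1.04e-3 * flux_jy_kms * nu_obs * d_l_mpc**2

def molecular_gas_mass(l_cii_lsun, alpha_cii=30.0):
    """M_mol [Msun] = alpha_[CII] * L_[CII]."""
    return alpha_cii * l_cii_lsun

# Hypothetical example: 5 Jy km/s at z = 1.85 (D_L ~ 14100 Mpc for the
# adopted flat LCDM cosmology with H0 = 70, Om = 0.3).
L_cii = cii_line_luminosity(5.0, 1.85, 14100.0)   # ~7e8 Lsun
M_mol = molecular_gas_mass(L_cii)                 # ~2e10 Msun
```

The resulting gas mass (∼ 2 × 10^10 M⊙ for this hypothetical flux) is in the range typical of massive z ∼ 2 main-sequence galaxies.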
The paper is structured as follows: in Section 2 we present our observations, sample selection, and data analysis; in Section 3 we discuss our results; in Section 4 we conclude and summarize. Throughout the paper we use a flat ΛCDM cosmology with Ω m = 0.3, Ω Λ = 0.7, and H 0 = 70 km s −1 Mpc −1 . We assumed a Chabrier (2003) initial mass function (IMF) and, when necessary, we accordingly converted literature results obtained with different IMFs.
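For reference, the angular scale implied by the adopted cosmology at the redshift of our targets can be computed directly. The short sketch below (pure Python, simple trapezoidal integration of the Friedmann equation) is only a numerical illustration of the quoted parameters, not part of the paper's analysis.

```python
import math

# Flat LCDM with the paper's parameters.
H0 = 70.0          # km/s/Mpc
OM, OL = 0.3, 0.7
C_KMS = 299792.458

def comoving_distance_mpc(z, n=2000):
    """D_C = (c/H0) * integral_0^z dz'/E(z'), trapezoidal rule,
    with E(z) = sqrt(Om*(1+z)^3 + OL) for a flat universe."""
    dz = z / n
    zs = [i * dz for i in range(n + 1)]
    inv_e = [1.0 / math.sqrt(OM * (1 + zz)**3 + OL) for zz in zs]
    integral = dz * (sum(inv_e) - 0.5 * (inv_e[0] + inv_e[-1]))
    return C_KMS / H0 * integral

def kpc_per_arcsec(z):
    """Proper transverse scale subtended by 1 arcsec, in kpc."""
    d_a = comoving_distance_mpc(z) / (1 + z)   # angular diameter distance
    return d_a * (math.pi / 180 / 3600) * 1000.0

scale = kpc_per_arcsec(1.85)   # ~8.4 kpc/arcsec at the sample's typical z
```

At z ∼ 1.85 one arcsecond corresponds to roughly 8.4 proper kpc, which sets the physical scale of the angular sizes quoted throughout the paper.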
OBSERVATIONS AND DATA ANALYSIS
In this Section we discuss how we selected the sample and we present our ALMA observations together with available ancillary data. We also report the procedure we used to estimate the [C II] and continuum fluxes of our sources. Finally, we describe the literature data that we used to complement our observations, for which full details are given in the Appendix.
Sample selection and ancillary data
To study the ISM properties of high-redshift main-sequence galaxies, we selected targets in the GOODS-S field (Giavalisco et al. 2004, Nonino et al. 2009), which benefits from extensive multi-wavelength coverage.
Our sample galaxies were selected on the basis of the following criteria: 1) having a spectroscopic redshift in the range 1.73 < z < 1.94, to target the [C II] emission line in ALMA Band 9. We made sure that the selected galaxies would be observed in a frequency region of Band 9 with good atmospheric transmission. Also, to minimize overheads, we selected our sample so that multiple targets could be observed with the same ALMA frequency setup; 2) being detected in the available Herschel data; 3) having SFRs and stellar masses typical of main-sequence galaxies at this redshift, as defined by Rodighiero et al. (2014); they all have sSFR/sSFR_MS < 1.7; 4) having undisturbed morphologies, with no clear indications of ongoing mergers, as inferred from the visual inspection of HST images. Although some of the optical images of these galaxies might look disturbed, their stellar mass maps are in general smooth (Figure 1), indicating that the irregularities visible in the imaging are likely due to star-forming clumps rather than major mergers (see, e.g., Cibinel et al. 2015).
Our sample therefore consists of 10 typical star-forming, main-sequence galaxies at redshift 1.73 ≤ z ≤ 1.94. Given the high ionization lines present in its optical spectrum, one of them (ID10049) appears to host an active galactic nucleus (AGN). This source was not detected in [C II] and retaining it or not in our final sample does not impact the implications of this work.
Deep Hubble Space Telescope (HST ) observations at optical (HST /ACS F435W, F606W, F775W, F814W, and F850LP filters) and near-IR (HST /WFC3 F105W, F125W, and F160W filters) wavelengths are available from the CANDELS survey (Koekemoer et al. 2011, Grogin et al. 2011). Spitzer and Herschel mid-IR and far-IR photometry in the wavelength range 24 µm - 500 µm is also available (Elbaz et al. 2011, Wang et al. in prep. 2017). Finally, radio observations at ∼ 5 cm (6 GHz) were taken with the Karl G. Jansky Very Large Array (VLA) with 0.3" × 0.6" resolution (Rujopakarn et al. 2016).
Thanks to these multiwavelength data, we created resolved stellar mass and SFR maps for our targets, following the method described by Cibinel et al. (2015). In brief, we performed pixel-by-pixel spectral energy distribution (SED) fitting considering all the available HST filters mentioned above, after having convolved all the images with the PSF of the matched H F160W band, useful also to increase the signal-to-noise ratio (S/N). We considered Bruzual & Charlot (2003) templates with constant SFR to limit the degeneracy with dust extinction. We corrected the fluxes for dust extinction following the prescriptions by Calzetti et al. (2000).

Figure 2. ALMA spectra of the [C II] detections of our sample. Left panels: ALMA 2D maps of the [C II] emission line. The black solid and dashed contours indicate respectively the positive and negative 3σ, 4σ, and 5σ levels. The beam is reported as the black filled ellipse. Each stamp has a size of 4" × 4". The black cross indicates the galaxy center, as estimated from the HST F160W imaging. Some tapering has been done for illustrative purposes, although we used the untapered maps for the analysis. Right panels: 1D spectra of the [C II] detected sources extracted using a PSF to maximize the S/N (notice that in this figure we did not scale the fluxes of the spectra extracted with a PSF to match those obtained when using an exponential function with larger size as reported in Table 2). The dark grey shaded areas indicate the 1σ velocity range over which the flux has been measured. The frequencies corresponding to the optical and [C II] redshifts are marked with arrows. The horizontal bars indicate the 1σ uncertainty associated with the optical (light gray) and [C II] (dark gray) redshift estimates. For illustrative purposes we also report the Gaussian fit of the emission lines: it was not used to estimate the line fluxes, but only as an alternative estimate of the galaxies' redshift (Section 2.3).
The stellar population age in the models varied between 100 Myr and 2 Gyr, assuming fixed solar metallicity. In Figure 1 we show the resulting SFR and stellar mass maps, together with the HST H F160W -band imaging. The stellar mass computed summing up all the pixels of our maps is in good agreement with that estimated by Santini et al. (2014) fitting the global ultraviolet (UV) to IR SED (they differ < 30% with no systematic trends). In the following we use the stellar masses obtained from the global galaxies' SED, but our conclusions would not change considering the estimate from the stellar mass maps instead.
Spectroscopic redshifts for our sources are all publicly available and were determined in different ways: 5 of them are from the GMASS survey (Kurk et al. 2013), one from the K20 survey (Cimatti et al. 2002, Mignoli et al. 2005), 2 were determined by Popesso et al. (2009) from VLT/VIMOS spectra, one was estimated from our rest-frame UV Keck/LRIS spectroscopy as detailed below, and one had a spectroscopic redshift estimate determined by Pope et al. (2008) from PAH features in the Spitzer /IRS spectrum. With the exception of three sources 1 , all the redshifts were estimated from rest-frame UV absorption lines. This is a notoriously difficult endeavour especially when, given the faint UV magnitudes of the sources, the signal-to-noise ratio (S/N) of the UV continuum is moderate, as for our targets. We note that having accurate spectroscopic redshifts is crucial for data such as those presented here: ALMA observations are carried out using four, sometimes adjacent, sidebands (SBs) covering 1.875 GHz each, corresponding to only 800 km s−1 rest-frame in Band 9 (or equivalently ∆z = 0.008). This implies that the [C II] emission line might be outside the covered frequency range for targets with inaccurate spectroscopic redshifts. In general we used at least two adjacent SBs (and up to all 4 in one favourable case) targeting, when possible, galaxies at comparable redshifts (Table 1).
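The quoted velocity and redshift coverage of a single sideband follows from simple arithmetic; the sketch below reproduces the ∼ 800 km s−1 and ∆z ≈ 0.008 figures. The exact values depend on where within Band 9 (645-696 GHz) the line falls, so the numbers are approximate.

```python
# Velocity and redshift span of one ALMA sideband for the redshifted
# [C II] line. Illustrative arithmetic only.
C_KMS = 299792.458
NU_REST_CII_GHZ = 1900.54
SB_WIDTH_GHZ = 1.875

def sideband_velocity_coverage(nu_obs_ghz):
    """Velocity width (km/s) spanned by one 1.875 GHz sideband:
    dv = c * dnu / nu_obs."""
    return C_KMS * SB_WIDTH_GHZ / nu_obs_ghz

def sideband_redshift_coverage(z):
    """Redshift interval spanned by one sideband at redshift z:
    dz = (1 + z) * dnu / nu_obs."""
    nu_obs = NU_REST_CII_GHZ / (1 + z)
    return (1 + z) * SB_WIDTH_GHZ / nu_obs

dv = sideband_velocity_coverage(696.0)   # ~810 km/s at the low-z end
dz = sideband_redshift_coverage(1.85)    # ~0.008
```

This is why a redshift error of only a few ×10−3 can push the line outside the observed frequency range.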
Given the required accuracy in the redshift estimate, before the finalization of the observational setups we carefully re-analyzed all the spectra of our targets to check and possibly refine the redshifts already reported in the literature. To this purpose, we applied to our VLT/FORS2 and Keck UV rest-frame spectra the same approach described in Gobat et al. (2017b), although both the templates we used and the wavelength range of our data are different. Briefly, we modelled the ∼ 4000-7000 Å range of the spectra using standard Lyman break galaxy templates from Shapley et al. (2003), convolved with a Gaussian to match the resolution of our observations. The redshifts were often revised with respect to those published 2 , with variations up to ∼ a few ×10−3. Our new values, reported in Table 2, match those measured in the independent work of Tang et al. (2014) and have formal uncertainties of 1-2×10−3 ( 100-200 km s−1), corresponding to an accuracy in the estimate of the [C II] observed frequency of ∼ 0.25 GHz.

1 ID2910 had an IRS spectrum, ID10049 is an AGN, and ID7118 has a spectrum from the K20 survey whose redshift was measured from the Hα emission line.
2 At this stage we discovered that one of the literature redshifts was actually wrong, making [C II] unobservable in Band 9. This target was dropped from the observational setups, and so we ended up observing a sample of 10 galaxies instead of the 11 initially allocated to our project.
Details of ALMA observations
We carried out ALMA Band 9 observations for our sample during Cycle 1 (PI: E. Daddi, Project ID: 2012.1.00775.S) with the goal of detecting the [C II] emission line at restframe 158 µm (ν rest−frame = 1900.54 GHz) and the underlying continuum, redshifted in the frequency range ν obs = 645 -696 GHz. Currently this is the largest sample of galaxies observed with ALMA at this redshift with available [C II] measurements given the difficulty to carry out such observations in Band 9. We observed each galaxy, depending on its IR luminosity, for 8 -13 minutes including overheads to reach a homogeneous sensitivity of 1.5 -2 mJy/beam over a bandwidth of 350 km s −1 . We set a spectral resolution of 0.976 MHz (0.45 km s −1 -later binning the data to substantially lower velocity resolutions) and we requested observations with a spatial resolution of about 1" (configuration C32-1) to get integrated flux measurements of our sources. However, the observations were taken in the C32-3 configuration with a synthesized beam FWHM = 0.3" × 0.2" and a maximum recoverable scale of ∼ 3.5". Our sources were therefore resolved. To check if we could still correctly estimate total [C II] fluxes, we simulated with CASA (McMullin et al. 2007) observations in the C32-3 configuration of extended sources with sizes comparable to those of our galaxies, as detailed in Appendix A. We concluded that, when fitting the sources in the uv plane, we could measure their correct total fluxes, but with substantial losses in terms of effective depth of the data. Figure A1 in Appendix A shows how the total flux error of a source increases, with respect to the case of unresolved observations, as a function of its size expressed in units of the PSF FWHM (see also Equation A1 that quantifies the trend). Given that our targets are 3 -4 times larger than the PSF, we obtained a flux measurement error 5 -10 times higher than expected, hence correspondingly lower S/N ratios. 
The depth of our data, taken with 0.2" resolution, is therefore equivalent to only 10-30 s of integration if taken with 1" resolution. However, when preparing the observations we considered conservative estimates of the [C II] flux and therefore several targets were detected despite the higher effective noise.
As part of the same ALMA program, besides the Band 9 data, we also requested additional observations in Band 7 to detect the 850 µm continuum, which is important to estimate dust masses for our targets (see Section 2.4). For each galaxy we reached a sensitivity of 140 µJy/beam on the continuum, with an integration time of ∼ 2 minutes on source. The synthesised beam has FWHM = 1" × 0.5" and the maximum recoverable scale is ∼ 6".
We note that there is an astrometry offset between our ALMA observations and the HST data released in the GOODS-S field (Appendix B). Although it is negligible in right ascension (∆RA = 0.06"), it is instead significant in declination (∆DEC = −0.2", > 3σ significant), in agreement with estimates reported by other studies (e.g. Dunlop et al. 2017, Rujopakarn et al. 2016, Barro et al. 2016, Aravena et al. 2016b, Cibinel et al. 2017). We accounted for this offset when interpreting our data by shifting the HST coordinate system to match that of ALMA. In Figure 1 we show the astrometry-corrected HST stamps. However, in Table 2 we report the uncorrected HST coordinates to allow an easier comparison with previous studies. The ALMA target positions are consistent with those from VLA.

Figure 3. ALMA maps of the continuum detections at 850 µm (ID9347, ID6515, ID9681, ID10049, ID2861, ID7118). The black contours indicate the 3σ, 4σ, and 5σ levels. The beam is reported as the black filled ellipse. Each stamp has a size of 10" × 10". The black cross indicates the galaxy center, as estimated from the HST imaging. Some tapering has been done for illustrative purposes, although we used the untapered maps for the analysis.
[C II] emission line measurements
The data were reduced with the standard ALMA pipeline based on the CASA software (McMullin et al. 2007). The calibrated data cubes were then converted to uvfits format and analyzed with the software GILDAS (Guilloteau & Lucas 2000).
To create the velocity-integrated [C II] line maps for our sample galaxies it was necessary to determine the spectral range over which to integrate the spectra. This in turn requires a 1D spectrum, which needs to be extracted at some spatial position and with a source surface brightness distribution model (PSF or extended). We carried out the following iterative procedure, similar to that described in Daddi et al. (2015 and in preparation) and Coogan et al. (2018).
We fitted, in the uv plane, a given source model (PSF, but also Gaussian and exponential profiles, tailored to the HST size of the galaxies) to all four sidebands, channel by channel, with a fixed spatial position determined from the astrometry-corrected HST images. We looked for positive emission line signal in the resulting spectra. When a signal was present, we averaged the data over the channels maximizing the detection S/N and we fitted the resulting single channel dataset to obtain the best-fitting line spatial position. If this was different from the spatial position of the initial extraction, we proceeded to a new spectral extraction at the new position, and iterated the procedure until convergence was reached.
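The channel-averaging step of this procedure (finding the contiguous channel range that maximizes the detection S/N) can be illustrated with a toy example. This is a simplified stand-in for the actual uv-plane fitting done with GILDAS: it operates on an already-extracted 1D spectrum with hypothetical numbers and assumes equal, independent noise per channel.

```python
import math

def best_snr_window(flux, sigma_per_channel, min_width=2, max_width=10):
    """Return (start, stop, snr) of the contiguous channel window that
    maximizes S/N, assuming independent channels with equal noise:
    S/N = sum(flux) / (sigma * sqrt(n_channels))."""
    best = (0, 0, -math.inf)
    for start in range(len(flux)):
        for width in range(min_width, max_width + 1):
            stop = start + width
            if stop > len(flux):
                break
            snr = sum(flux[start:stop]) / (sigma_per_channel * math.sqrt(width))
            if snr > best[2]:
                best = (start, stop, snr)
    return best

# Toy spectrum: a flat-topped 3 mJy line in channels 10-13, no noise
# added so the result is deterministic.
spectrum = [0.0] * 30
for ch in range(10, 14):
    spectrum[ch] = 3.0
start, stop, snr = best_snr_window(spectrum, sigma_per_channel=1.0)
# The optimal window brackets exactly the line channels.
```

Widening the window beyond the line only adds noise (the sqrt(n) penalty), which is why the S/N-maximizing range naturally brackets the emission line.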
Individual [C II] detections
Four galaxies converged to secure detections (Figure 2): they have emission line significance > 5σ in the optimal channel range. The detections are robust against the model used for the extraction of the 1D spectra: the frequency range used for the lines' identification would not change if we extracted the 1D spectra with a Gaussian or exponential model instead of a PSF. The best-fitting spatial positions for the spectral extractions were consistent with the HST peak positions, typically within the PSF FWHM (Figure 2), and the spectra extracted with Gaussian or exponential models were in any case invariant with respect to such small spatial adjustments.
We estimated the redshift of the four detections in two ways, both giving consistent results (redshift differences < 0.001) and similar formal redshift uncertainties: 1) we computed the signal-weighted average frequency within the line channels, and 2) we fitted the 1D spectrum with a Gaussian function. Following the Coogan et al. (2018) simulations of a similar line detection procedure, and given the S/N of these detections, we concluded that redshift uncertainties estimated in this way are reliable. We compared our redshift estimates for these sources with those provided by our VLT and Keck data analysis, and in the literature (Section 2). They generally agree, with no significant systematic difference and a median absolute deviation (MAD) of 200 km s−1 (MAD z = 0.002). This accuracy is fully within the expected uncertainties of both our optical and [C II] redshifts (see Table 2), thus increasing the reliability of the detections considering that the line search was carried out over a total ∆z = 0.035. Given the fact that our sources are extended, we estimated their total [C II] flux by fitting their average emission line maps in the uv plane with exponential models (whereas by using a PSF model instead we would have underestimated the fluxes). We used the following procedure. Our sample is composed of disk-like galaxies as shown in Figure 1. Although in some cases (e.g. ID7118) some clumps of star formation are visible both in the HST imaging and in the spatially resolved SFR maps, the resolved stellar mass maps are smooth, as expected for unperturbed sources, and mainly show the diffuse disk seen also in our ALMA observations. We therefore determined the size of the galaxy disks by fitting the stellar mass maps with an exponential profile (Freeman 1970), using the GALFIT algorithm (Peng et al. 2010). We checked that there were no structured residuals when subtracting the best-fit model from the stellar mass maps.
We then extracted the [C II] flux by fitting the ALMA data in the uv plane, using the Fourier Transform of the 2D exponential model, with the GILDAS task uv_fit. We fixed the size and center of the model on the basis of the effective radius and peak coordinates derived from the optical images, corrected for the astrometric offset determined as in Appendix B. As a result, we obtained the total [C II] flux of our sources. Given the larger uncertainties associated with extended source models with respect to the PSF case (Appendix B), this procedure returns > 3σ total flux measurements for the four sources (even if the original detections were > 5σ). We checked that fluxes and uncertainties determined with the uvmodelfit task provided by CASA would give consistent results. We also checked the robustness of our flux measurements against the assumed functional form of the model: fitting the data with a Gaussian profile instead of an exponential would give consistent [C II] fluxes. Finally, we verified that the uncertainties associated with the flux measurement in each channel are consistent with the channel-to-channel fluctuations, after accounting for the continuum emission and excluding emission lines.
However, the returned fluxes critically depend on the model size that we used and that we determined from the optical images. If we were to use a smaller (larger) size, the inferred flux would be correspondingly lower (higher). Unfortunately, the size of the emission cannot be constrained from the data on individual sources, given the limited S/N ratio. There have been claims that sizes estimated from optical data could be larger than those derived from IR observations (e.g. Psychogyios et al. 2016). This could possibly bias our analysis and in particular our flux estimates to higher values. As a check, we aligned our [C II] detections at the HST positions and stacked them (coadding all visibilities) to increase the S/N (Figure 4). In the uv space the overall significance of the stacked detection is ∼ 10σ. The probability that the signal is not resolved (i.e., a point source, which would have constant amplitude versus uv distance) is < 10−5. We then fitted the stacked data with an exponential profile, leaving its size free to vary during the fit. We get an exponential scale length for the [C II] emission of 0.65" ± 0.15" (corresponding to ∼ 4-7 kpc), corrected for the small broadening that could affect the stack due to the uncertainties in the determination of the sources' exact position, and with a significance of S/N(size) ∼ 4σ. The reported size uncertainty was estimated by GILDAS in the fit, and the modelling of the signal amplitude versus uv distance shows that it is reliable (Figure 4). This indicates that on average the optical sizes that we used in the analysis are appropriate for the fit of our ALMA data and that these four galaxies are indeed quite extended (the average optical size of the four galaxies is ∼ 0.7", in good agreement, within 2σ, with what is measured in the [C II] stack).
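The size measurement from the stacked visibilities relies on the fact that a resolved source's visibility amplitude declines with uv distance. For a face-on exponential disk of scale length r_s, the azimuthally averaged visibility has the analytic form V(q) = [1 + (2π q r_s)²]^(−3/2) (the Hankel transform of an exponential profile). The sketch below evaluates it for a 0.65" scale length; it is an illustrative model, not the actual GILDAS fit.

```python
import math

ARCSEC_TO_RAD = math.pi / (180 * 3600)

def exp_disk_visibility(q_klambda, r_s_arcsec):
    """Normalized visibility amplitude of a face-on exponential disk:
    V(q) = [1 + (2*pi*q*r_s)^2]^(-3/2), with the uv distance q in
    kilo-lambda and r_s the exponential scale length in arcsec."""
    r_s_rad = r_s_arcsec * ARCSEC_TO_RAD
    x = 2 * math.pi * (q_klambda * 1e3) * r_s_rad
    return (1 + x * x) ** -1.5

# For a ~0.65" scale length the amplitude has already dropped
# substantially by q ~ 100 klambda, which is what makes the size
# measurable from the amplitude-vs-uv-distance trend.
v0 = exp_disk_visibility(0.0, 0.65)      # 1.0 by construction
v100 = exp_disk_visibility(100.0, 0.65)  # well below the point-source value
```

A point source, by contrast, would keep V(q) = 1 at all uv distances, which is why the flat-amplitude hypothesis can be rejected at high significance for the stack.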
We also used the stack of our four detected sources to further check our [C II] flux estimates. We compared the flux measured by fitting the stack with that obtained by averaging the fluxes of the individual detections. As mentioned above, the flux of the stack critically depends on the adopted model size, but in any case the measurement was highly significant (S/N > 5) even when leaving the size free to vary during the fit. When fitting the stack with a model having an exponential scale length ∼ 0.6" we obtained estimates consistent with the average flux of the individual sources.
Tentative and non detections
In our sample, six sources were not individually detected by the procedure discussed in the previous section. In these cases we searched for the presence of weaker [C II] signal in the data by evaluating the recovered signal when eliminating all degrees of freedom in the line search, namely measuring at fixed HST position, using exponential models with the fixed optical size for each galaxy and conservatively averaging the signal over a large velocity range tailored to the optical redshifts. In particular, we created emission line maps by averaging channels over 719 km s −1 , around the frequency corresponding to the optical redshift. This velocity width is obtained by summing in quadrature 3 times the MAD redshift accuracy (obtained considering optical and [C II] redshifts, as discussed above for the four detections) and the average FWHM of the detected emission lines. We find weak signal from two galaxies at S/N> 2.3 (ID9681 and ID8490, see Table 2) and no significant signal from the others. Given that with this approach there are no degrees of freedom, the probability of obtaining each tentative detection (namely the probability of having a > 2.3σ signal) is Gaussian and equal to ∼ 0.01. Furthermore, when considering the six sources discussed above, we expect to find < 0.1 false detections. We therefore conclude that the 2.3σ signal found for our two tentative detections is real.
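The 719 km s−1 integration window quoted above is built by summing in quadrature 3 times the MAD redshift accuracy (∼ 200 km s−1) with the average FWHM of the detected lines. The average FWHM used below (∼ 400 km s−1) is an assumption chosen to reproduce the quoted number, since its value is not stated explicitly in this section.

```python
import math

MAD_KMS = 200.0          # MAD redshift accuracy in km/s (Section 2.3)
AVG_FWHM_KMS = 400.0     # assumed average FWHM of the detected lines

def search_window_kms(mad_kms, fwhm_kms, n_sigma=3.0):
    """Quadrature sum of n_sigma times the redshift scatter and the
    average line FWHM, as used for the tentative/non-detection maps."""
    return math.hypot(n_sigma * mad_kms, fwhm_kms)

window = search_window_kms(MAD_KMS, AVG_FWHM_KMS)  # ~720 km/s
```

The wide window guarantees that essentially all of the line flux falls inside the averaged channels even for the least accurate optical redshifts.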
For the four sources with no detected signal we considered 3σ flux upper limits, as estimated from emission line maps integrated over a 719 km s −1 bandwidth. There are different possible reasons why these galaxies do not show any signal. Two of them (ID7118 and ID2861) have substantially worse data quality, probably due to the weather stability and atmospheric transparency during the observations, with about 3 times higher noise than the rest of the sample. Their L [C II] /L IR upper limits are not very stringent and are substantially higher than those of the rest of the sample (Table 2). Possible reasons for the other two non-detections (ID2910 and ID10049) are the following. (i) These sources might be more extended than the others, and therefore their signal might be further suppressed. However, this is unlikely, as their optical sizes are smaller than the average size of the detected sources (Table 3). (ii) They might have fainter IR luminosities than the other sample galaxies: the L IR that we used to predict the [C II] luminosity for these two undetected sources was overestimated before the observations. However, using the current L IR values (Section 2.4), we obtain L [C II] /L IR upper limits comparable with the ratios estimated for the detected sources. (iii) A wrong optical redshift estimate can also explain the lack of signal from one of these undetected galaxies: ID10049 is an AGN with broad lines 3 , and the determination of its systemic redshift obtained considering narrow line components (z = 1.920) is possibly more uncertain than the redshift range covered by our ALMA observations (z = 1.9014-1.9098 and z = 1.9158-1.9242; Table 1; for comparison, the original literature redshift was 1.906). For ID2910 instead the optical spectrum seems to yield a solid redshift and the covered redshift range is the largest (Table 1), so the [C II] line should have been observed. This source probably has a fainter [C II] luminosity than the others (i.e. a lower L [C II] /L IR ).
Finally, we stacked the four [C II] non-detections in the uv plane and fitted the data with an exponential profile, with size fixed to the average optical size of the sources entering the stacking. This still did not yield a detection. Since two non-detections have shallower data than the others and at least one might have a wrong optical redshift, in the rest of the analysis we do not consider the average [C II] flux obtained from the stacking of these sources.
[Table 1. Observation log — columns: (1) ID; (2) Date; (3)-(6) redshift ranges covered by the spectral windows z SB1 -z SB4 ; (7) exposure time t exp (min); (8) noise r.m.s. (mJy/beam).]
The coordinates, sizes, [C II] fluxes, and luminosities of our sample galaxies are presented in Table 2. We subtracted from the [C II] fluxes the contribution of the underlying 158 µm rest-frame continuum as measured in our ALMA Band 9 data (Section 2.4). For galaxies with no detected continuum at 450 µm (see Section 2.4), we computed the predicted 158 µm rest-frame continuum flux from the best-fit IR SEDs and reduced the [C II] fluxes accordingly.
Average [C II] signal
We have previously stacked the four detections to measure their average size, compare it with the optical one, and understand if we were reliably estimating the fluxes of our sources (Section 2.3.1). Now we want to estimate the average [C II] signal of our sample to investigate its mean behaviour. We therefore add to the previous stack also the two tentative detections and one non-detected source. We report in the following the method that we used to stack these galaxies and the reasons why we excluded from the stack three non-detected sources.
We aligned the detections and tentative detections and stacked them, coadding all visibilities. We also coadded the non-detected galaxy ID2910, but we did not include the other three sources, for the reasons outlined above. We fitted the resulting map with an exponential model with size fixed to the average optical size of the sources entering the stacking. We finally subtracted the contribution of the rest-frame 158 µm continuum by decreasing the estimated flux by 10% (namely the average continuum correction applied to the sources of our sample, see Section 2.4). We obtained a ∼ 10σ detection that we report in Table 2.
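The uv-plane co-addition can be sketched as an inverse-variance weighted average of aligned visibilities. The toy below (real-valued data stands in for complex visibilities, and real ALMA stacking operates on calibrated measurement sets) is an illustration of the principle, not the actual pipeline:

```python
import numpy as np

def stack_visibilities(vis_list, weight_list):
    """Inverse-variance weighted co-add of visibilities that have been
    phase-shifted to each source position, so all stacks are aligned.
    For a point source at phase centre, the weighted mean visibility
    equals the source flux."""
    vis = np.concatenate(vis_list)
    w = np.concatenate(weight_list)
    return np.sum(w * vis) / np.sum(w)

# Toy example: three datasets sharing a 1.0 mJy point source
rng = np.random.default_rng(0)
vis_list = [1.0 + 0.2 * rng.standard_normal(100) for _ in range(3)]
weight_list = [np.full(100, 1.0 / 0.2**2) for _ in range(3)]  # w = 1/sigma^2
flux = stack_visibilities(vis_list, weight_list)
```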
[Table 2. Coordinates, [C II] fluxes, and luminosities — columns: ID, RA [deg], DEC [deg], z opt , z [C II] , F 450µm , F 850µm , F [C II] , L [C II] , log(L IR ), L [C II] /L IR , ∆v.]
Columns (1) Galaxy ID;
(2) Right ascension; (3) Declination; (4) Redshift obtained from optical spectra; (5) Redshift estimated by fitting the [C II] emission line (when detected) with a Gaussian in our 1D ALMA spectra. The uncertainty that we report is the formal error obtained from the fit; (6) Observed-frame 450µm continuum emission flux; (7) Observed-frame 850µm continuum flux; (8) [C II] emission line flux. We report upper limits for sources with S/N < 2; (9) [C II] emission line luminosity; (10) IR luminosity integrated over the wavelength range 8 -1000 µm as estimated from SED fitting (Section 2.4); (11) [C II]-to-bolometric infrared luminosity ratio; (12) Line velocity width. Notes a ID10049 is a broad line AGN, its systemic redshift is uncertain and it might be outside the frequency range covered by Band 9. The redshift of ID7118 is based on a single line identified as Hα. If this is correct the redshift uncertainty is < 0.001. b Stack of the 7 galaxies of our sample with reliable [C II] measurement (namely, ID9347, ID6515, ID10076, ID9834, ID9681, ID8490, ID2910, see Section 2.3.3 for a detailed discussion). We excluded from the stack ID2861 and ID7118 since the quality of their data is worse than for the other galaxies and their [C II] upper limits are not stringent. We also excluded ID10049 since it is an AGN and, given that its redshift estimate from optical spectra is highly uncertain, the [C II] emission might be outside the redshift range covered by our ALMA observations. See Section 2.3.2 for a detailed discussion.
The average L [C II] /L IR ratio obtained from the stacking of the seven targets mentioned above is (1.94 +0.34 −0.32 ) × 10 −3 . This is in agreement with the value obtained by averaging the individual ratios of the same seven galaxies, L [C II] /L IR = (1.96 +0.19 −0.10 ) × 10 −3 . In particular, since the [C II] flux of ID2910 is an upper limit, we considered the case of a flux equal to 1σ (giving the average L [C II] /L IR = 1.96 × 10 −3 ) and the two extreme cases of a flux equal to 0 or to 3σ, from which the quoted uncertainties derive. Throughout our analysis and in the plots we consider the value
L [C II] /L IR = (1.94 +0.34 −0.32 ) × 10 −3 .
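The bracketing of the censored source (flux set to 0, 1σ, or 3σ before averaging) can be illustrated as follows. The individual ratios below are hypothetical placeholders, not the measured values:

```python
import numpy as np

# Hypothetical per-galaxy L[CII]/LIR ratios (in units of 1e-3); one
# source is undetected, so its ratio is bracketed between flux = 0
# (lower case), 1 sigma (fiducial case) and 3 sigma (upper case).
detected = np.array([2.1, 1.8, 2.3, 1.9, 1.7, 2.0])
sigma_ratio = 0.6  # 1-sigma limit for the undetected source

def mean_ratio(censored_value):
    """Sample mean with the censored source set to a trial value."""
    return np.append(detected, censored_value).mean()

low, fid, high = (mean_ratio(v) for v in (0.0, sigma_ratio, 3 * sigma_ratio))
print(f"{fid:.2f} (+{high - fid:.2f} / -{fid - low:.2f}) x 1e-3")
```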
Continuum emission at observed-frame 450 µm and 850 µm
Our ALMA observations cover the continuum at ∼ 450 µm (Band 9 data) and 850 µm (Band 7 data). We created averaged continuum maps by integrating the full spectral range for the observations at 850 µm. For the 450 µm continuum maps instead we made sure to exclude the channels where the flux is dominated by the [C II] emission line. We extracted the continuum flux by fitting the data with an exponential profile, adopting the same procedure described in Section 2.3. The results are provided in Table 2, where 3σ upper limits are reported in case of non-detection.
The estimated continuum fluxes were used, together with the available Spitzer and Herschel data (Elbaz et al. 2011), to properly sample the IR wavelengths, perform SED fitting, and reliably determine parameters such as the infrared luminosity and the dust mass (M dust ). The Spitzer and Herschel data were deblended using prior sources to overcome the blending problems arising from the large PSFs and to allow reliable photometry of individual galaxies (Béthermin et al. 2010, Roseboom et al. 2010, Elbaz et al. 2011, Lee et al. 2013, Béthermin et al. 2015, Liu et al. 2017). Following the method presented in Magdis et al. (2012), we fitted the IR photometry with Draine & Li (2007) models, supplemented by a single-temperature modified black body (MBB) fit to derive a representative dust temperature of the ISM. In these fits we considered the measured Spitzer, Herschel, and ALMA fluxes (even when S/N < 3, i.e. with no detection) along with the corresponding uncertainties, instead of adopting upper limits. The contribution of each photometric point to the best fit is weighted by its associated uncertainty. Had we used upper limits in these fits instead, our conclusions would not have changed. The IR SEDs of our targets are shown in Figure 5 and the derived parameters are summarized in Table 3. We note that our method to estimate dust masses is based on the fit of the full far-IR SED of the galaxies, not on scaling a single-band luminosity in the Rayleigh-Jeans regime (e.g. as suggested by Scoville et al. 2017). This fact, together with the high-quality photometry at shorter wavelengths, allowed us to properly constrain the fitted parameters also for galaxies with highly uncertain 850 µm measurements. We also determined the average radiation field intensity as U = L IR /(125M dust ) (Magdis et al. 2012). Uncertainties on L IR and M dust were quantified using Monte Carlo simulations, as described by Magdis et al. (2012).
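A single-temperature modified black body of the kind used in these fits can be sketched as below. This is a minimal optically thin approximation with β = 2.0 (as adopted in the text); the normalization is arbitrary:

```python
import numpy as np

def mbb_flux(nu_ghz, t_dust, beta=2.0, norm=1.0):
    """Single-temperature modified black body, S_nu ∝ nu^beta * B_nu(T),
    in the optically thin approximation. `norm` absorbs the distance
    and dust-mass factors, so only the SED shape is meaningful here."""
    h_over_k = 0.0479924  # h/k_B in K/GHz
    nu = np.asarray(nu_ghz, dtype=float)
    bnu = nu**3 / np.expm1(h_over_k * nu / t_dust)  # Planck-function shape
    return norm * nu**beta * bnu

# For T_dust = 35 K and beta = 2 the SED shape peaks near 3600 GHz (~83 um)
sed = mbb_flux(np.linspace(500.0, 8000.0, 2001), t_dust=35.0)
```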
The IR luminosities we estimated (L IR = L[8-1000 µm]) for our sample galaxies lie between 3.5 × 10 11 and 1.2 × 10 12 L , with a median value of 7.1 × 10 11 L , and we probe a range of dust masses between 7.0 × 10 7 and 1.2 × 10 9 M , with a median value of 3.0 × 10 8 M . Our median estimates of both L IR and M dust are in excellent agreement with literature estimates for main-sequence galaxies at similar redshift (e.g. L IR = 6 × 10 11 L and M dust = 3 × 10 8 M at redshift 1.75 < z < 2.00 in Béthermin et al. 2015, for a mass-selected sample with an average M comparable to that of our galaxies). The U parameters that we determined range between 6 and 45, consistent with the estimates provided by Magdis et al. (2012) and Béthermin et al. (2015) for main-sequence galaxies at a similar redshift. Finally, we estimated the molecular gas masses of our galaxies with a twofold approach. (1) Given their stellar mass and the mass-metallicity relation of Zahid et al. (2014), we estimated their gas-phase metallicity. We then determined the gas-to-dust conversion factor (δ GDR ) for each source, depending on its metallicity, as prescribed by Magdis et al. (2012), and finally estimated their molecular gas masses as M mol = δ GDR × M dust , given the dust masses obtained from the SED fitting.
(2) Given the galaxies' SFRs and the integrated Schmidt-Kennicutt relation for main-sequence sources reported by Sargent et al. (2014), we estimated their molecular gas masses. The uncertainties take into account the SFR uncertainties and the dispersion of the Schmidt-Kennicutt relation. For the galaxies detected in the ALMA 850 µm data, which allow us to obtain accurate dust masses, the two methods give consistent results (see Table 3). In the following we use the M mol obtained from the Schmidt-Kennicutt relation since, given our in-hand data, it is more robust, especially for galaxies with no 850 µm detection. Furthermore, it allows a more consistent comparison with other high-z literature measurements (e.g. the gas masses for the sample of Capak et al. 2015 have been derived using the same Schmidt-Kennicutt relation, as reported in Appendix C).
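The dust-based method can be sketched in a few lines. The ⟨U⟩ = L IR /(125 M dust ) definition comes from the text; the δ GDR calibration coefficients (log δ GDR = 10.54 − 0.99 × (12 + log O/H)) are the ones commonly quoted from Magdis et al. (2012) and should be treated as an assumption here, and the input values are the sample medians, used only for illustration:

```python
def mean_radiation_field(l_ir, m_dust):
    """<U> = L_IR / (125 * M_dust), solar units (Magdis et al. 2012)."""
    return l_ir / (125.0 * m_dust)

def gas_mass_from_dust(m_dust, metallicity):
    """M_mol = delta_GDR * M_dust, with a metallicity-dependent
    gas-to-dust ratio: log(delta_GDR) = 10.54 - 0.99 * (12 + log O/H)
    (coefficients as commonly quoted from Magdis et al. 2012)."""
    delta_gdr = 10 ** (10.54 - 0.99 * metallicity)
    return delta_gdr * m_dust

# Median-like sample values, for illustration only:
print(f"<U> = {mean_radiation_field(7.1e11, 3.0e8):.1f}")
print(f"M_mol = {gas_mass_from_dust(3.0e8, 8.7):.2e} Msun")
```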
Other samples from the literature
To explore a larger parameter space and gain a more comprehensive view, we complemented our observations with multiple [C II] datasets from the literature, both at low and high redshift (Gullberg et al. 2015, Huynh et al. 2014, Schaerer et al. 2015, Brisbin et al. 2015, Hughes et al. 2017). In Appendix C we briefly present these additional samples and discuss how the physical parameters that are relevant for our analysis (namely the redshift, the [C II], IR, and CO luminosities, the molecular gas mass, the sSFR, and the gas-phase metallicity) have been derived; in Table C1 we report them.
RESULTS AND DISCUSSION
The main motivation of this work is to understand which is the dominant physical parameter affecting the [C II] luminosity of galaxies through cosmic time. In the following we investigate whether our z ∼ 2 sources are [C II] deficient and if the [C II]-to-IR luminosity ratio depends on galaxies' distance from the main-sequence. We also investigate whether the [C II] emission can be used as a molecular gas mass tracer for main-sequence and starburst galaxies both at low and high redshift. Finally we discuss the implications of our results on the interpretation and planning of z ≳ 5 observations.

[Table 3 notes.] Columns: (1) Galaxy ID; (2) Star formation rate as calculated from the IR luminosity: SFR = 10 −10 L IR (Kennicutt 1998). Only the star-forming component contributing to the IR luminosity was used to estimate the SFR, as the contribution from a dusty torus was subtracted; (3) Stellar mass. The typical uncertainty is ∼ 0.2 dex; (4) Dust mass; (5) Gas mass estimated from the integrated Schmidt-Kennicutt relation (Sargent et al. 2014, Equation 4). The measured dispersion of the relation is 0.2 dex. Given that the errors associated with the SFR are < 0.1 dex, for M SK mol we consider typical uncertainties of 0.2 dex; (6) Gas mass estimated from the dust mass considering a gas-to-dust conversion factor dependent on metallicity (Magdis et al. 2012); (7) Gas mass estimated from the observed [C II] luminosity considering a [C II]-to-H 2 conversion factor α [C II] = 31 M /L . The uncertainties that we report do not account for the α [C II] uncertainty; they only reflect the [C II] luminosity uncertainty; (8) Distance from the main sequence as defined by Rodighiero et al. (2014); (9) Average radiation field intensity; (10) Galaxy size as measured from the optical HST images; (11) Gas-phase metallicity 12 + log(O/H). Notes: a Stack of the 7 galaxies of our sample with reliable [C II] measurements (namely ID9347, ID6515, ID10076, ID9834, ID9681, ID8490, ID2910). We excluded from the stack ID2861 and ID7118 since the quality of their data is worse than for the other galaxies and their [C II] upper limits are not stringent. We also excluded ID10049 since it is an AGN and, given that its redshift estimate from optical spectra is highly uncertain, the [C II] emission might be outside the redshift range covered by our ALMA observations. See Section 2.3.2 for a detailed discussion.
The [C II] deficit
In the local Universe, the majority of main-sequence galaxies have [C II] luminosities that scale linearly with their IR luminosity, showing a constant L [C II] /L IR ratio, although substantial scatter is present (e.g., Stacey et al. 1991, Malhotra et al. 2001, Smith et al. 2017). However, local (U)LIRGs appear to behave differently: they are typically [C II] deficient with respect to their IR luminosity, namely they have lower L [C II] /L IR ratios than main-sequence galaxies (e.g., Malhotra et al. 1997, Farrah et al. 2013). Furthermore, the L [C II] /L IR ratio correlates with the dust temperature, with the ratio decreasing for more luminous galaxies that have higher dust temperatures (e.g. Malhotra et al. 2001, Gullberg et al. 2015). This relation also implies that L [C II] /L IR correlates with U , as the dust temperature is proportional to the intensity of the radiation field ( U ∝ T 4+β dust ; e.g., Magdis et al. 2012). It is now well established that for main-sequence galaxies the dust temperature rises with redshift, following the trend (1+z) 1.8 (Magdis et al. 2012, Béthermin et al. 2015, Schreiber et al. 2017a), as do their IR luminosity and sSFR. Our sample is made of z ∼ 2 main-sequence galaxies, with SFRs comparable to those of (U)LIRGs and an average U seven times larger than that of local spirals of comparable mass. Therefore, if the local relation between the L [C II] /L IR ratio and the dust temperature (and/or the IR luminosity, and/or the sSFR) holds at higher redshift, we would expect our sample to be [C II] deficient, showing a [C II]-to-IR luminosity ratio similar to that of local (U)LIRGs.
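The ⟨U⟩ ∝ T 4+β dust scaling quoted above directly sets the expected contrast in radiation field intensity between two dust temperatures; a minimal sketch, with illustrative temperatures only, is:

```python
def u_ratio_from_tdust(t1, t2, beta=2.0):
    """Ratio of radiation field intensities implied by two dust
    temperatures, using <U> ∝ T_dust^(4+beta)."""
    return (t1 / t2) ** (4.0 + beta)

# Illustrative: a 35 K population versus a 25 K one (beta = 2.0)
print(round(u_ratio_from_tdust(35.0, 25.0), 1))
```

With these (hypothetical) temperatures the contrast is close to an order of magnitude, in line with the factor-of-several ⟨U⟩ difference quoted in the text between z ∼ 2 main-sequence galaxies and local spirals.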
To investigate this, we compare the [C II] and IR luminosities of our sources with a compilation of measurements from the literature in Figure 6. Our sample shows a L [C II] /L IR ratio comparable to that observed for local main-sequence sources, although it is shifted toward higher IR luminosities, as expected given the higher SFRs with respect to local galaxies. The average L [C II] /L IR ratio of our data is ∼ 1.9 × 10 −3 , with a scatter of ∼ 0.15 dex, consistent with the subsample of z ∼ 1-2 main-sequence galaxies from Stacey et al. (2010, filled grey stars in Figure 6). The z ∼ 1.8 sample of Brisbin et al. (2015) shows even higher ratios, surprisingly larger than all the other literature samples at any redshift and IR luminosity. The [C II] fluxes of these galaxies were obtained from ZEUS data and ALMA observations will be needed to confirm them. At fixed L IR , the L [C II] /L IR ratio of our sample is also higher than that of the intermediate-redshift starbursts from Magdis et al. (2014) and of the subsample of z ∼ 1-2 starbursts from Stacey et al. (2010, empty grey stars in Figure 6). This suggests that main-sequence galaxies have similar L [C II] /L IR ratios independently of their redshift and stellar mass, and points toward the conclusion that the L [C II] /L IR ratio is mainly set by the mode of star formation (major mergers for starbursts and smooth accretion in extended disks for main-sequence galaxies), as suggested by Stacey et al. (2010) and Brisbin et al. (2015).

[Figure 6 caption (excerpt): We note that we are plotting the de-magnified IR luminosity for the sample of lensed galaxies by Gullberg et al. (2015): we considered that the [C II] emission line is magnified by the same factor as the IR (see discussion in the text and Gullberg et al. 2015). The magnification factors are taken from Spilker et al. (2016). Similarly, the sources by Brisbin et al. (2015) might be lensed, but the magnification factors are unknown and therefore we plot the observed values.]
We already knew that L [C II] does not universally scale with L IR , simply because of the existence of the [C II] deficit. However, our results now also imply that the L [C II] /L IR ratio does not only depend on L IR : our z = 2 main-sequence galaxies have similar L IR as local (U)LIRGs, but they have brighter [C II]. For similar reasons we can then conclude that the L [C II] /L IR ratio does not depend on the dust temperature, sSFR, or intensity of the radiation field only, and if such relations exist they are not fundamental, as they depend at least on redshift and likely on galaxies' star formation mode (e.g. merger-driven for starbursts, or maintained by secular processes for main-sequence galaxies). In Figure 7 we show the relation between the L [C II] /L IR ratio and the intensity of the radiation field for our sample and other local and high-redshift galaxies from the literature.
We note that U has been estimated in different ways for the various samples reported in Figure 7, depending on the available data and measurements, and therefore some systematics might be present when comparing the various datasets. In particular, for our galaxies and those from Cormier et al. (2015) and Madden et al. in prep. (2017) it was obtained through the fit of the IR SED, as detailed in Section 2.4 and in Rémy-Ruyer et al., whereas for other samples only a far-IR colour (R 64−158 ) or the dust temperature (Gullberg et al. 2015, T dust ) was available. We therefore generated Draine & Li (2007) models with various U in the range 2-200 and fitted them with a modified black body template with fixed β = 2.0 (the same as used in the SED fitting for our sample galaxies). We used them to derive the following relations between U and R 64−158 or T dust , and to estimate the radiation field intensity for these datasets: log < U > = 1.144 + 1.807 log R 64−158 + 0.540(log R 64−158 ) 2 and log < U > = −10.151 + 7.498 log T dust .
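The first of these calibrations can be evaluated as below (the polynomial coefficients are those quoted in the text; the R 64−158 = 1 input is only an illustration):

```python
import math

def mean_u_from_color(r64_158):
    """<U> from the 64-to-158 um flux ratio, using the polynomial
    calibrated in the text on Draine & Li (2007) models:
    log<U> = 1.144 + 1.807*log R + 0.540*(log R)^2."""
    lr = math.log10(r64_158)
    return 10 ** (1.144 + 1.807 * lr + 0.540 * lr**2)

print(round(mean_u_from_color(1.0), 1))  # log R = 0 -> <U> = 10**1.144
```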
The local galaxies follow a relation between L [C II] /L IR and U (Equation 1) with a dispersion of 0.3 dex. However, high-redshift sources and local dwarfs deviate from this relation, indicating that the correlation between L [C II] /L IR and U is not universal, but also depends on other physical quantities, like redshift and/or the galaxies' star formation mode. Our high-redshift main-sequence galaxies in fact show radiation field intensities similar to those of local (U)LIRGs, but typically higher L [C II] /L IR ratios. This could be due to the fact that in the former the star formation is spread out in extended disks, leading to less intense star formation and higher L [C II] /L IR , whereas in the latter the star formation, collision-induced by major mergers, is concentrated in smaller regions, leading to more intense star formation and lower L [C II] /L IR , as suggested by Brisbin et al. (2015). This also implies that, since L [C II] /L IR does not depend only on the intensity of the radiation field, and U ∝ L IR /M dust , L [C II] does not simply scale with M dust either 4 .

4 We note that the intensity of the radiation field U that we use for our analysis is different from the incident far-UV radiation field (G 0 ) that other authors report (e.g. Abel et al. 2009, Brisbin et al. 2015, Gullberg et al. 2015). However, according to PDR modelling, when the number of ionizing photons (G 0 ) increases, more hydrogen atoms are ionized and the gas opacity decreases (e.g. Abel et al. 2009). More photons can therefore be absorbed by dust, and the dust temperature increases. As the radiation field's intensity depends on the dust temperature ( U ∝ T α dust ), U is expected to increase with G 0 as well.

[C II] as a tracer of molecular gas

Analogously to what we have discussed so far, using a sample of local sources and distant starburst galaxies, Graciá-Carpio et al. (2011) showed that starbursts display a similar [C II] deficit at all epochs, but that at high redshift the knee of the L [C II] /L IR -L IR relation is shifted toward higher IR luminosities; a universal relation including all local and distant galaxies could be obtained by plotting the [C II] (or other lines') deficit versus the star formation efficiency (or, equivalently, the depletion time t dep = 1/SFE).

With our sample of z = 2 main-sequence galaxies in hand, we would now like to proceed a step forward and test whether the [C II] luminosity can be used as a tracer of the molecular gas mass: L [C II] ∝ M mol . In this case the L [C II] /L IR ratio would simply be proportional to M mol /SFR (given that L IR ∝ SFR) and would thus measure the galaxies' depletion time. The [C II] deficit in starbursts and/or mergers would then just reflect their shorter depletion time (and enhanced SFE) with respect to main-sequence galaxies.

In fact, the average L [C II] /L IR ratio of our z ∼ 2 galaxies is ∼ 1.5 times lower than the average of local main-sequence sources, consistent with the modest decrease of the depletion time from z ∼ 0 to z ∼ 2 (e.g. Genzel et al. 2015). Although the scatter of the local and high-redshift measurements of the [C II] and IR luminosities makes this estimate quite noisy, it indicates once more that the [C II] luminosity correlates with the galaxies' molecular gas mass.

To test whether this is indeed the case, as a first step we complemented our sample with all the literature data we could assemble (both main-sequence and starburst sources at low and high redshift) with available [C II] and molecular gas mass estimates from other commonly used tracers (see the Appendix for details).

We find that L [C II] and M mol are indeed linearly correlated, independently of the main-sequence or starburst nature of the sources, following the relation

log L [C II] = −1.28(±0.21) + 0.98(±0.02) log M mol (2)

with a dispersion of 0.3 dex (Figure 8). The Pearson test yields a coefficient ρ = 0.97, indicating a statistically significant correlation between these two parameters. Given the linear correlation between the [C II] luminosity and the molecular gas mass, we can constrain the L [C II] -to-H 2 conversion factor. In the following we refer to it as

α [C II] = M mol /L [C II] (3)

by analogy with the widely used CO-to-H 2 conversion factor, α CO . In Figure 8 we report α [C II] as a function of redshift.

[Figure 8 caption (start truncated): symbols as in the Figure 6 caption, but we only show the samples with available M mol estimates. In the legend we highlight the nature of the galaxies in each sample (e.g. main-sequence, starburst, Ly break analogs). The fit of the data is reported (black solid line) together with the standard deviation (black dashed lines). Bottom panels: the [C II]-to-H 2 conversion factor (α [C II] ) as a function of redshift. The average α [C II] for main-sequence galaxies is reported (black solid line) together with the standard deviation (black dashed lines). The median and median absolute deviation of each sample are plotted (green large symbols). The difference between the left and right panels concerns how the molecular gas mass was estimated for the sample of local galaxies from Diaz-Santos et al. (2017, light gray crosses). Since CO observations for this sample are not available, we estimated M mol , given the sSFR of each source, considering the relation between the depletion time and sSFR of galaxies. In the left panel we report the estimates obtained by averaging the trends reported by Sargent et al. (2014) and Scoville et al. (2017), whereas in the right panel we report the estimates obtained considering the trend by Scoville et al. (2017) only (see Section 3.3 for a more detailed discussion).]

Main-sequence galaxies
Considering only the data available for main-sequence galaxies, we obtain a median α [C II] = 31 M /L with a median absolute deviation of 0.2 dex (and a standard deviation of 0.3 dex). We also computed the median α [C II] separately for the low- and high-redshift main-sequence samples (Table 4): the two consistent estimates suggest that the [C II]-to-H 2 conversion factor is likely invariant with redshift. Furthermore, the medians of the individual galaxy samples (green symbols in Figure 8) differ by less than a factor of 2 from one another and are all consistent with the estimated value of α [C II] ∼ 30 M /L .
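The median/MAD statistics quoted above, and the use of the median as a conversion factor, can be sketched as follows; the per-galaxy α values are hypothetical placeholders, not the measured ones:

```python
import numpy as np

def median_and_mad_dex(alpha):
    """Median conversion factor and its scatter, expressed as the
    median absolute deviation of log10(alpha), i.e. in dex."""
    med = np.median(alpha)
    mad_dex = np.median(np.abs(np.log10(alpha) - np.log10(med)))
    return med, mad_dex

# Hypothetical per-galaxy alpha_[CII] values (Msun/Lsun):
alpha = np.array([18.0, 25.0, 31.0, 31.0, 40.0, 55.0, 75.0])
med, mad = median_and_mad_dex(alpha)

# Using the median as a [CII]-to-H2 conversion factor, M_mol = alpha * L_[CII]:
m_mol = med * 1.0e9  # gas mass (Msun) for a galaxy with L_[CII] = 1e9 Lsun
```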
Starburst galaxies
To further test the possibility to use the estimated α [C II] not only for main-sequence sources, but also for starbursts we considered the sample observed with the South Pole Telescope (SPT) by Vieira et al. (2010) and Carlstrom et al. (2011). They are strongly lensed, dusty, star-forming galaxies at redshift z ∼ 2 -6 selected on the basis of their bright flux at mm wavelengths (see Section 2.5 for more details).
[C II] (Gullberg et al. 2015) and CO (Aravena et al. 2016a) observations are available for these targets. As Gullberg et al. (2015) note, the similar [C II] and CO line velocity profiles suggest that these emission lines are likely not affected by differential lensing and therefore their fluxes can be directly compared. We obtained a median α [C II] = 22 M /L for this sample, consistent with that obtained for the main-sequence datasets at both low and high redshift, as shown in Figure 8. As this SPT sample is likely a mix of main-sequence and starburst galaxies (Weiß et al. 2013), we suggest that the [C II]-to-H 2 conversion factor is unique and independent of the source's mode of star formation. Similarly, we considered the starbursts at z ∼ 0.2 analyzed by Magdis et al. (2014) with available [C II] and CO observations, and the sample of main-sequence and starburst galaxies from the VALES survey (Hughes et al. 2017). The M mol /L [C II] ratios of these samples are on average consistent with those of local and high-redshift main-sequence galaxies, as shown in Figure 8.
Finally, we complemented our sample with the local galaxies observed by Diaz-Santos et al. (2017), which are, for the great majority, (U)LIRGs. Molecular gas masses have not been published for these sources and CO observations are not available. We therefore estimated M mol considering the dependence of galaxies' depletion time on their specific star formation rate, as parametrized by Sargent et al. (2014) and Scoville et al. (2017). Given the difference between the two models, especially in the starburst regime (see Section 3.3), we estimated the gas masses for this sample (i) adopting the mean depletion time obtained by averaging the two models, and (ii) considering the model reported by Scoville et al. (2017) only. We report the results in Figures 8 and 9 (left and right bottom panels). If we adopt the gas masses obtained with the first method, the α [C II] conversion factor decreases by 0.3 dex for the most extreme starbursts, whereas if only the model by Scoville et al. (2017) is considered the α [C II] conversion factor remains constant independently of the main-sequence or starburst behaviour of galaxies (see also Figure 9, bottom panels). Future observations will be needed to explore the most extreme starburst regime in a more robust way.
All in all our results support the idea that the α [C II] conversion factor is the same for main-sequence sources and starbursts, although the gas conditions in these two galaxy populations are different (e.g. starbursts have higher gas densities and harder radiation fields than main-sequence galaxies). Possible reasons why, despite the different conditions, [C II] correlates with the molecular gas mass for both populations might include the following: (i) different parameters might impact the L [C II] /M mol ratio in opposite ways and balance, therefore having an overall negligible effect; (ii) the gas conditions in the PDRs might be largely similar in all galaxies, with variations in the [C II]/CO ratio smaller than a factor ∼ 2 and most of the [C II] produced in the molecular ISM (De Looze et al. 2014, Hughes et al. 2015, Schirm et al. 2017).
Finally, we investigated the main cause of the scatter of the α [C II] measurements. We considered only the galaxies with M mol determined homogeneously from the CO luminosity and estimated the scatter of the L [C II] -L CO relation. The mean absolute deviation of the relation is ∼ 0.2 dex, similar to that of the L [C II] -M mol relation. This is mainly because, to convert the CO luminosity into molecular gas mass, an α CO conversion factor is commonly adopted that is very similar for all galaxies (it mainly depends on metallicity, which is actually very similar for all the galaxies that we considered, as shown in Figure 10). More interestingly, the mean absolute deviation of the L [C II] -L CO relation is comparable to that of α [C II] . We therefore conclude that the scatter of the [C II]-to-molecular gas conversion factor is mainly dominated by the intrinsic scatter of the [C II]-to-CO luminosity relation, although the latter correlation is not always linear (e.g. see Figure 2 in Accurso et al. 2017a), likely because [C II] traces molecular gas even in regimes where CO does not.
The dependence of the [C II]-to-IR ratio on galaxies' distance from the main-sequence
As the next step, we explicitly investigated whether indeed L [C II] ∝ M mol when systematically studying galaxies on and off the main-sequence, thus spanning a large range of sSFR and SFE, up to merger-dominated systems. In fact, when comparing low- and high-redshift sources in bins of IR luminosity (Figure 6) we might be mixing, in each bin, galaxies with very different properties (e.g. high-z main-sequence sources with local starbursts). On the contrary, this does not happen when considering bins of distance from the main-sequence (namely, sSFR/sSFR MS ). We considered samples with available sSFR measurements and in Figure 9 we plot the L [C II] /L IR ratio in bins of sSFR, normalized to the sSFR of the main-sequence at each redshift (Rodighiero et al. 2014). Our sample has a L [C II] /L IR ratio comparable to that reported in the literature for main-sequence galaxies at lower redshift (e.g. the subsample of main-sequence galaxies from Diaz-Santos et al. 2017) and at higher redshift 5 .

[Figure 9 caption: Correlation between the [C II] luminosity and galaxies' distance from the main-sequence. Top panel: [C II]-to-IR luminosity ratio as a function of the galaxy distance from the main sequence. The symbols are the same as reported in the Figure 6 caption. Additionally, we include the average of the local star-forming galaxies from Stacey et al. (1991, cyan star). In particular, the sources by Brisbin et al. (2015) might be lensed, but the magnification factors are unknown and therefore we plot the observed values. We also show the running mean computed considering all the plotted datapoints apart from the sample from Contursi et al. (2017, black solid line). Finally we report the models by Sargent et al. (2014, yellow curve) and Scoville et al. (2017, green curve), showing the trend of the depletion time as a function of the sSFR, renormalized to match the observed L [C II] /L IR ratios (the standard deviations of the models are marked as dashed curves).]
Bottom panels: dependence of α [C II] on galaxies' distance from the main sequence. The difference between the left and right panels concerns how the molecular gas mass was estimated for the sample of local galaxies from Diaz-Santos et al. (2017, light gray crosses). Since CO observations for this sample are not available, we estimated M mol , given the sSFR of each source, considering the relation between the depletion time and sSFR of galaxies. In the left panel we report the estimates obtained by averaging the trends reported by Sargent et al. (2014) and Scoville et al. (2017), whereas in the right panel we report the estimates obtained considering the trend by Scoville et al. (2017) only (see Section 3.3 for a more detailed discussion).

This is up to ∼ 10 times higher than the typical L [C II] /L IR ratio of starbursts, defined as falling > 4 times above the main sequence (Rodighiero et al. 2011). Given the fact that the IR luminosity is commonly used as a SFR tracer and the [C II] luminosity seems to correlate with the galaxies' molecular gas mass, we expect the L [C II] /L IR ratio to depend on galaxies' gas depletion time (τ dep = M mol /SFR). This seems to be substantiated by the fact that the depletion time in main-sequence galaxies is on average ∼ 10 times higher than in starbursts (e.g. Sargent et al. 2014), similarly to what is observed for the L [C II] /L IR ratio. To make this comparison more quantitative, we considered two models predicting how the depletion time of galaxies changes as a function of their distance from the main sequence and rescaled them to match the L [C II] /L IR observed for main-sequence galaxies. This scaling factor mainly depends on the [C II]-to-CO luminosity ratio and, given the shift we applied to the Sargent et al. (2014) and Scoville et al. (2017) models, we estimated L [C II] /L CO ∼ 6000. This is in good agreement with the typical values reported in the literature, ranging between 2000 and 10000 (e.g. Rigopoulou et al. 2018).
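The rescaling step described above can be sketched as follows. The power-law slope of the depletion-time trend below is a placeholder (the actual Sargent et al. 2014 and Scoville et al. 2017 model shapes are not reproduced here); only the main-sequence normalization L [C II]/L IR ∼ 2 × 10⁻³ is taken from the text, and the function names are ours.

```python
def tdep_trend(ssfr_ratio, slope=-0.5):
    """Toy depletion-time trend: tau_dep decreases as galaxies move above
    the main sequence. The slope is a placeholder, not a published value."""
    return ssfr_ratio ** slope

def rescaled_lcii_lir(ssfr_ratio, lcii_lir_ms=2e-3, slope=-0.5):
    """Rescale the depletion-time trend so that it matches the observed
    L_[CII]/L_IR of main-sequence galaxies (ssfr_ratio = 1)."""
    return lcii_lir_ms * tdep_trend(ssfr_ratio, slope) / tdep_trend(1.0, slope)

# A starburst 10x above the main sequence has a shorter depletion time and
# hence a lower predicted [CII]-to-IR luminosity ratio:
ratio_ms = rescaled_lcii_lir(1.0)
ratio_sb = rescaled_lcii_lir(10.0)
```

Any published trend of τ dep versus sSFR/sSFR MS can be substituted for `tdep_trend` without changing the normalization logic.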
We compare the rescaled models with observations in Figure 9. Given the higher number of main-sequence sources than starbursts, uncertainties on the stellar-mass estimates entering galaxies' sSFR tend to systematically bias the distribution of L [C II] /L IR towards higher ratios as the distance from the main sequence increases (similarly to the Eddington bias affecting source luminosities in surveys). To take this observational bias into account, we convolved the models by Sargent et al. (2014) and Scoville et al. (2017) with a Gaussian function with FWHM ∼ 0.2 dex (the typical uncertainty affecting stellar masses). Qualitatively, the drop of the depletion time that both models show with increasing sSFR reproduces well the trend of the [C II]-to-IR luminosity ratio with sSFR/sSFR MS observed in Figure 9. Considering that τ dep = M mol /SFR, and that the IR luminosity is a proxy for the SFR, the agreement between models and observations suggests that [C II] correlates reasonably well with the molecular gas mass, taking into account the limitations of this exercise (there are still lively ongoing debates on how to best estimate the gas mass of off-main-sequence galaxies, as reflected in the differences between the models we adopted). In this framework, the [C II] deficiency of starbursts can be explained as mainly due to their higher star formation efficiency, and hence stronger far-UV fields, with respect to main-sequence sources. This is consistent with the invariance found by Graciá-Carpio et al. (2011), but conceptually extends it to the possibility that [C II] is directly proportional to the molecular gas mass, at least empirically.
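The convolution step can be sketched as below. The model curve is a toy placeholder (not the published Sargent or Scoville shapes), but the kernel width, FWHM = 0.2 dex corresponding to the typical stellar-mass uncertainty, follows the text.

```python
import numpy as np

# Toy model of L_[CII]/L_IR sampled on a regular grid in log(sSFR/sSFR_MS);
# flat on the main sequence, declining above it (illustrative shape only).
log_ssfr = np.linspace(-1.0, 1.5, 251)                       # 0.01 dex steps
model_ratio = 2e-3 * 10 ** (-0.5 * np.clip(log_ssfr, 0.0, None))

# Gaussian kernel with FWHM = 0.2 dex, converted to grid pixels.
step = log_ssfr[1] - log_ssfr[0]
sigma = (0.2 / 2.355) / step                                 # FWHM = 2.355 sigma
half_width = 4 * int(np.ceil(sigma))
x = np.arange(-half_width, half_width + 1)
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()

# The smoothed model mimics the scatter induced by stellar-mass errors.
smoothed = np.convolve(model_ratio, kernel, mode="same")
```

Note that `mode="same"` zero-pads the edges, so only the interior of the grid should be compared with data.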
However, quantitatively some discrepancies between models and observations are present. The model by Sargent et al. (2014) accurately reproduces the observations, at least up to sSFR/sSFR MS ∼ 4, but some inconsistencies are found at high sSFR/sSFR MS . On the contrary, the model by Scoville et al. (2017) reproduces the observations for galaxies on and above the main sequence, even if some discrepancies are present at sSFR/sSFR MS < 1, a regime that is not yet well tested (but see Schreiber et al. 2017b, Gobat et al. 2017a). Some possible explanations for the discrepancy between the observations and the model by Sargent et al. (2014) are the following: (i) starbursts might have higher gas fractions than currently predicted by the Sargent et al. (2014) model, in agreement with the Scoville et al. (2017) estimate; (ii) the [C II] luminosity, at fixed stellar mass, is expected to increase with more intense radiation fields such as those characteristic of starbursts (Narayanan & Krumholz 2017, Madden et al. in prep. 2017), possibly leading to too high [C II]-to-IR luminosity ratios with respect to the model by Sargent et al. (2014); (iii) if the fraction of [C II] emitted by molecular gas decreases when the sSFR increases (e.g. for starbursts), as indicated by the model from Accurso et al. (2017b), then the [C II]-to-IR luminosity ratio would be higher than the expectations from the model by Sargent et al. (2014). However, to reconcile the observations of the most extreme starbursts (sSFR/sSFR MS ∼ 10) with the model, the [C II] fraction emitted from molecular gas should drop to ∼ 30%, which is much lower than the predictions from Accurso et al. (2017b); (iv) we might also be facing an observational bias: starbursts with relatively high [C II] luminosities might have been preferentially observed so far.
Future deeper observations will allow us to understand whether this mismatch is indeed due to an observational bias or whether it is real. In the latter case it would show that α [C II] is not actually constant in the strong starburst regime.
We also notice that some local Lyman break analogs observed by Contursi et al. (2017) show L [C II] /L IR ratios higher than expected from both models, given their sSFR (Figure 9). Although these sources have sSFRs typical of local starbursts, their SFEs are main sequence-like, as highlighted by Contursi et al. (2017). They are likely exceptional sources that do not follow the usual relation between sSFR and SFE. Given the fact that they show [C II]-to-IR luminosity ratios compatible with the average of main-sequence galaxies (Figure 6), we conclude that also in this case the SFE is the main parameter setting L [C II] /L IR , suggesting that the [C II] luminosity correlates with galaxies' molecular gas mass.
Invariance of α [C II] with gas phase metallicity
In this Section we investigate the dependence of the α [C II] conversion factor on gas phase metallicity. Understanding whether [C II] traces the molecular gas also in low-metallicity galaxies is relevant for observations of high-redshift galaxies, which are expected to be metal-poor (Ouchi et al. 2013; Vallini et al. 2015).
In Figure 10 we show literature samples with available measurements of metallicity, CO, and [C II] luminosities. To properly compare different samples we converted all metallicity estimates to the calibration by Pettini & Pagel (2004) using the parametrizations by Kewley & Ellison (2008). We converted the CO luminosity into gas mass by assuming the following α CO -metallicity dependence:
log α CO = 0.64 − 1.5 [12 + log(O/H) − 8.7]    (4)
that yields the Galactic α CO at solar metallicity and has a slope in between those found in the literature, typically ranging between −1 and −2 (e.g. Genzel et al. 2012, Schruba et al. 2012, Tan et al. 2014, Sargent et al. in prep. 2017). Adopting an α CO -metallicity dependence with a slope of −1 or −2 instead would not change our conclusions.

Figure 10. Metallicity dependence of α [C II] for multiple samples with available metallicity estimates, all homogenized to the Pettini & Pagel (2004) calibration using the parametrizations by Kewley & Ellison (2008). The symbols are the same as reported in the Figure 6 caption and legend. Left panel: ratio of the CO and [C II] luminosity as a function of the galaxies' gas phase metallicity. The linear fit of the two samples is reported (black solid line). Right panel: [C II]-to-H 2 conversion factor as a function of metallicity. The average measurement for our sample (red empty circle) is reported only in this panel since no CO measurements are available for our sources. The gas mass for our galaxies was estimated considering the integrated Schmidt-Kennicutt relation (see Section 2.4). We note that one of the galaxies by Cormier et al. (2015) is an outlier to the L [C II] -M mol relation (and therefore to the α [C II] -metallicity estimate) due to its very low [C II] luminosity with respect to the CO one. We kept this galaxy in the sample for consistency with the literature, although there might be some issues with its [C II] and/or CO measurements.
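Equation (4) and the resulting gas-mass estimate can be written as a short helper; the function names are ours, and L_CO is assumed to be the CO(1-0) line luminosity in K km s⁻¹ pc².

```python
def alpha_co(logOH):
    """Eq. (4): log(alpha_CO) = 0.64 - 1.5 * [12 + log(O/H) - 8.7].
    Recovers the Galactic value (10**0.64 ~ 4.4 Msun / (K km/s pc^2))
    at solar metallicity, 12 + log(O/H) = 8.7."""
    return 10 ** (0.64 - 1.5 * (logOH - 8.7))

def mmol_from_co(L_co, logOH):
    """Molecular gas mass (Msun) from the CO luminosity and metallicity."""
    return alpha_co(logOH) * L_co
```

A metal-poor galaxy at 12 + log(O/H) = 8.2 gets a conversion factor about 5.6 times larger than a solar-metallicity one, reflecting the CO-faintness of low-metallicity gas.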
We show the ratio between the CO and [C II] luminosity as a function of metallicity in Figure 10 (left panel). This plot was first shown by Accurso et al. (2017a) (see their Figure 2) and here we add some more literature datapoints. Over the metallicity range spanned by these samples (12 + log O/H ∼ 7.8 - 9), the CO luminosity drops by a factor of ∼ 20 compared to [C II]. The fact that the L CO /M dust ratio is overall constant with metallicity (given that both the gas-to-dust ratio and α CO depend similarly on metallicity) implies that L [C II] /M dust varies strongly with metallicity (similarly to the L [C II] /L CO ratio), consistent with what is discussed in Section 3.1 (namely that [C II] is not simply a dust mass tracer).
In Figure 10 (right panel) we show the α [C II] dependence on metallicity. Although the scatter is quite large, the L [C II] /M mol ratio does not seem to depend on metallicity. When fitting the data with a linear function we obtain a slope of −0.2 ± 0.2, which is not significantly different from zero and consistent with a constant relation, and a standard deviation of 0.3 dex. This suggests that [C II] can be used as a "universal" molecular gas tracer and a particularly convenient tool to empirically estimate the gas mass of starbursts (whose metallicity is notoriously difficult to constrain due to their high dust extinction) and high-redshift low-metallicity galaxies.
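Given the metallicity invariance found here, a gas-mass estimate from [C II] reduces to a single conversion factor. A minimal sketch, using the median α [C II] = 31 M /L reported in the Conclusions and the 0.3 dex scatter quoted above (the helper names are ours):

```python
ALPHA_CII = 31.0       # Msun/Lsun, median conversion factor from the text
SCATTER_DEX = 0.3      # standard deviation of the relation, in dex

def mmol_from_cii(L_cii):
    """Molecular gas mass (Msun) from the [CII] luminosity (Lsun)."""
    return ALPHA_CII * L_cii

def mmol_range(L_cii, n_sigma=1.0):
    """Bracket the estimate with the scatter of the L_[CII]-M_mol relation."""
    m = mmol_from_cii(L_cii)
    return (m * 10 ** (-n_sigma * SCATTER_DEX),
            m * 10 ** (n_sigma * SCATTER_DEX))
```

Unlike the CO-based estimate, no metallicity argument is needed, which is precisely what makes the tracer convenient for dust-obscured starbursts.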
We note that the [C II] luminosity is expected to become fainter at very low metallicities, due to the simple fact that less carbon is present. However, this effect is negligible for the samples that we are considering and likely only becomes important at very low metallicities (12 + log(O/H) < 8.0).

Figure 11. [C II] dependence on the galaxies' total (UV + IR) and obscured (IR only) SFR. The sample is made of the low-metallicity sources by Cormier et al. (2015) and Madden et al. in prep. (2017). Left panel: dependence of the [C II] luminosity to total SFR ratio on the ratio between the total and obscured SFR. The fit of the data is reported (solid black line) together with the standard deviation of the data (dashed black line). Central panel: dependence of the [C II] luminosity to obscured SFR ratio on the ratio between the total and obscured SFR. The average ratio for our z ∼ 2 sample of main-sequence galaxies is reported (solid black line) together with its uncertainty (dashed black line). Right panel: dependence of the [C II] luminosity to total SFR ratio on the gas phase metallicity. The fit of the data is reported (solid black line) together with the standard deviation of the data (dashed black line).
Implications for surveys at z > 2
As shown in the previous Sections, [C II] correlates with the galaxies' molecular gas mass, and the [C II]-to-H 2 conversion factor is likely independent of the main-sequence or starburst nature of galaxies, as well as of their gas phase metallicity. In perspective, this is particularly useful for studies of high-redshift targets. At high redshift in fact, due to the galaxies' low metallicity, CO is expected not to trace the bulk of the H 2 anymore (e.g. Maloney & Black 1988, Madden et al. 1997, Wolfire et al. 2010, Bolatto et al. 2013). Thanks to its high luminosity even in the low metallicity regime, [C II] might become a very useful tool to study the ISM properties at these redshifts. However, some caution is needed when interpreting or predicting the [C II] luminosity at high redshift. Recent studies have shown that low-metallicity galaxies have low dust content, hence the UV obscuration is minimal and the IR emission is much lower than in high-metallicity sources (e.g. Galliano et al. 2005, Madden et al. 2006, Rémy-Ruyer et al. 2013, De Looze et al. 2014). This means that the obscured star formation rate (which can be computed from the IR luminosity through the calibration by Kennicutt 1998) can be up to 10 times lower than the unobscured one (e.g. computed through UV SED fitting). This can be seen also in Figure 11, where we report the sample of local low-metallicity galaxies from Cormier et al. (2015) and Madden et al. in prep. (2017), taking at face value the SFR estimates from the literature. The SFR IR /SFR TOT ratio clearly depends on the galaxies' metallicity, with the most metal-poor sources showing on average lower ratios. Furthermore, the ratio between the [C II] luminosity and the total SFR of these galaxies linearly depends on the SFR IR /SFR TOT ratio (Figure 11, left panel),
with a dispersion of 0.2 dex. On the contrary, the ratio between the [C II] luminosity and the obscured SFR is constant with the SFR IR /SFR TOT ratio (Figure 11, central panel). This suggests that the [C II] emission is related to dusty star-forming regions rather than to the whole SFR of the galaxy. At very high redshift (e.g. z > 4) measuring the IR luminosity is problematic, and therefore the total SFR obtained from UV-corrected estimates is often used to derive a measurement of L IR . However, this might lead to an overestimate of the IR luminosity and therefore bias the [C II]-to-IR luminosity ratio toward lower values. This would mean that the [C II] deficit observed at high redshift might be due to the approximate estimate of the IR luminosity and not only to a real evolution of the ISM properties. It could also explain the several cases of z > 5 galaxies with [C II] non-detections that have been recently reported (Combes et al. 2012, Ouchi et al. 2013, Maiolino et al. 2015, Schaerer et al. 2015, Watson et al. 2015): if the total SFR was used to estimate L IR and the typical L [C II] /L IR = 2 × 10 −3 ratio was used to predict the [C II] luminosity when proposing for observing time, L [C II] would have been overestimated and therefore the observations would not have been deep enough to detect the [C II] emission of the targets. Future direct measurements of the IR luminosity will be crucial to assess whether high-redshift observations were biased, or whether on the contrary the [C II] deficiency is due to an actual evolution of galaxies' properties from z ∼ 0 to z ∼ 5. In the latter case the reason for the deficiency might still not be clear, and an additional word of caution is needed: if the [C II] luminosity traces the molecular gas mass even at these high redshifts, these sources might be [C II] deficient due to a low molecular gas content and high SFE.
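The bias described here can be illustrated with a toy calculation. The ratio L [C II]/L IR = 2 × 10⁻³ is from the text; the Kennicutt (1998) scaling is approximated here as SFR ≈ 10⁻¹⁰ L IR (order-of-magnitude, IMF-dependent), and the 20% obscured fraction is an arbitrary illustrative value for a metal-poor galaxy.

```python
def predicted_lcii(sfr_total, obscured_fraction=1.0,
                   lcii_lir=2e-3, kennicutt=1e-10):
    """Predict L_[CII] (Lsun) from a SFR (Msun/yr). Only the obscured SFR
    contributes to L_IR; assuming the total SFR is fully obscured
    (obscured_fraction = 1) therefore overestimates L_[CII].
    The Kennicutt coefficient is order-of-magnitude and IMF-dependent."""
    L_ir = sfr_total * obscured_fraction / kennicutt
    return lcii_lir * L_ir

# A low-metallicity galaxy with SFR = 50 Msun/yr, only 20% of it obscured:
naive = predicted_lcii(50.0)                        # all SFR assumed obscured
realistic = predicted_lcii(50.0, obscured_fraction=0.2)
```

In this example the naive prediction is five times too bright, which is enough to turn an expected detection into a non-detection at fixed integration time.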
However, the different conditions of the ISM at these redshifts, the lower dust masses, and likely the much harder radiation fields might play an important role as well, potentially introducing systematics and limiting the use of [C II] as molecular gas tracer for very distant galaxies.
Caveats
Finally, we mention a few caveats that are important to consider when using the [C II] emission line to trace galaxies' molecular gas.
First, as discussed in Section 3.5, at redshift z ≳ 5 the ISM conditions are likely different with respect to lower redshift (e.g. lower dust masses, harder radiation fields). This might impact the [C II] luminosity, possibly introducing some biases and limiting the use of the [C II] emission line to estimate the molecular gas mass of galaxies at very high redshift.
Secondly, there are local studies indicating that [C II], mainly due to its low ionization potential, simultaneously traces the molecular, atomic, and ionized phases (e.g. Stacey et al. 1991, Sargsyan et al. 2012, Rigopoulou et al. 2014, Croxall et al. 2017). The total measured [C II] luminosity might therefore be higher than the one arising from the molecular gas only: this would lead to overestimated H 2 masses. However, it seems that 70% - 95% of the [C II] luminosity originates from PDRs, and in particular > 75% arises from the molecular phase (Pineda et al. 2013, Velusamy & Langer 2014, Vallini et al. 2015, Olsen et al. 2017, Accurso et al. 2017b).
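As a minimal sketch of this caveat, a measured [C II] luminosity could be corrected down to its molecular component before applying a conversion factor; the 75% default reflects the lower bound quoted above, and the function name is ours.

```python
def lcii_from_molecular_gas(L_cii_total, f_mol=0.75):
    """[CII] luminosity attributable to the molecular phase.
    f_mol = 0.75 is the lower bound quoted in the text (> 75% of the
    [CII] emission is estimated to arise from molecular gas)."""
    return f_mol * L_cii_total
```

Since f_mol itself may vary with sSFR (Accurso et al. 2017b), this is a first-order correction rather than a universal one.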
Lastly, as opposed to CO, [C II] is likely emitted only in regions where star formation is ongoing. Molecular clouds that are not illuminated by young stars would therefore not be detected (Beuther et al. 2014).
All in all, the limitations affecting [C II] seem to be different from the ones affecting the molecular gas tracers commonly used so far (CO, [C I], or dust measurements), making it an independent molecular gas proxy. Future works comparing the gas mass estimates obtained with different methods will help establish which tracer is best suited depending on the physical conditions of the target.
CONCLUSIONS
In this paper we discuss the analysis of a sample of 10 main-sequence galaxies at redshift z ∼ 2 in GOODS-S. We present new ALMA Band 7 850 µm observer frame continuum and Band 9 [C II] line observations, together with 450 µm observer frame continuum, complemented by a suite of ancillary data, including HST, Spitzer, Herschel, and VLA imaging, plus VLT and Keck longslit spectroscopy. The goal is to investigate whether z ∼ 2 main-sequence galaxies are [C II] deficient and to understand which physical parameters mainly affect the [C II] luminosity. We summarize the main conclusions in the following.
• The ratio between the [C II] and IR luminosity (L [C II] /L IR ) of z ∼ 2 main-sequence galaxies is ∼ 2 × 10 −3 , comparable to that of local main-sequence sources and a factor of ∼ 10 higher than local starbursts. This implies that there is not a unique correlation between L [C II] and L IR and therefore we should be careful when using the [C II] luminosity as a SFR indicator. Similarly, the [C II] luminosity does not uniquely correlate with galaxies' specific star formation rate, intensity of the radiation field, and dust mass.

• The [C II] emission is spatially extended, on average, on scales comparable to the stellar mass sizes (4 - 7 kpc), as inferred from HST imaging in the optical rest frame. This is in agreement with the results by Stacey et al. (2010), Hailey-Dunsheath et al. (2010), and Brisbin et al. (2015) who, for samples of z ∼ 1 - 2 galaxies, find similar [C II] extensions. This also suggests that our sample of main-sequence galaxies, with typical stellar masses and SFRs, is not made up of the ultra-compact (and more massive) sources selected and studied by Tadaki et al. (2015) and Barro et al. (2016).

• The [C II] luminosity linearly correlates with galaxies' molecular gas masses. By complementing our sample with those from the literature, we constrained the L [C II] -to-H 2 conversion factor: it has a median α [C II] = 31 M /L and a median absolute deviation of ∼ 0.2 dex. We find it mostly invariant with galaxies' redshift, depletion time, and gas phase metallicity. This makes [C II] a convenient emission line to estimate the gas mass of starbursts, a notoriously hard property to constrain by using the CO and dust emission due to the large uncertainties in the conversion factors to be adopted. Furthermore, the invariance of α [C II] with metallicity, together with the remarkable brightness of [C II], makes this emission line a useful tool to constrain gas masses at very high redshift, where galaxies' metallicity is expected to be low.
• Considering that [C II] traces the molecular gas and the IR luminosity is a proxy for SFR, the L [C II] /L IR ratio seems to mainly trace galaxies' gas depletion time. The L [C II] /L IR ratio for our sample of z ∼ 2 main-sequence galaxies is ∼ 1.5 times lower than that of local main-sequence samples, as expected from the evolution of the depletion time with redshift.

• The weak [C II] signal from z > 6 - 7 galaxies and the many non-detections in the recent literature might be evidence of high star formation efficiency, but might also be due to the fact that the expected signal is computed from the total UV star formation rate, while local dwarfs suggest that [C II] traces mainly the obscured star formation (Section 3.5).

Figure A1. Analysis of how the flux uncertainty (noise) changes when a source is resolved, with respect to the unresolved case. The noise obtained by fitting an emission line with a Gaussian model, normalized by that retrieved with a PSF fit, is reported as a function of the source's size normalized by that of the beam. The fit of the datapoints is reported (gray solid line). Normalized in this way, the trend is independent of the resolution of the observations and expected to hold quite generally for ALMA observations, at least at first order (to second order, it should depend on the exact baseline distribution of the observing configuration).

Walter F., et al., 2016, ApJ, 833, 67
Wang T., et al., in prep., 2017
Watson D., Christensen L., Knudsen K. K., Richard J., Gallazzi A., Michałowski M. J., 2015, Nature, 519, 327
Weiß A., et al., 2013, ApJ, 767, 88
Wolfire M. G., Hollenbach D., McKee C. F., 2010, ApJ, 716, 1191
Zahid H. J., et al., 2014, ApJ, 792, 75
Zhao Y., et al., 2013, ApJ, 765, L13
da Cunha E., et al., 2013

APPENDIX A: RESOLUTION

From the HST optical images we estimated that our sample galaxies have FWHM sizes of ∼ 0.7 - 1" (see Section 2.1). Since we wanted to measure total [C II] fluxes, we requested ALMA observations in configuration C32-1, to obtain a resolution of ∼ 1".
However, the data were taken with the configuration C32-3 instead, providing a ∼ 0.2" resolution, higher than needed, and a maximum recoverable scale of ∼ 3.5": our galaxies are thus spatially resolved. We tested the impact of the resolution on our flux and size estimates as follows. With the CASA task simobserve we simulated 2D Gaussians with increasing FWHM (in the range 0.1" - 2"), mimicking observations taken with ∼ 0.2" resolution. We then fitted these mock data in the uv plane with the task uvmodelfit. Both sizes and fluxes are very well recovered even for large galaxies, provided the data have large enough S/N. Very similar results are obtained when simulating sources with GILDAS instead of CASA.

Although fluxes and sizes are well estimated when fitting the emission lines in the uv plane almost independently of the adopted ALMA configuration, the S/N of the observations dramatically decreases when the sources are resolved. To quantify this, we considered the galaxy in our sample showing the highest-S/N [C II] emission line (ID9347). We fitted its velocity-integrated map multiple times with the GILDAS algorithm, first with a point source model and then adopting a Gaussian profile with increasing FWHM. In the following we call "noise" the uncertainty associated with the flux, as estimated by GILDAS during the fitting procedure. In Figure A1 we illustrate how the noise of an extended source changes when it is resolved out. We repeated the exercise for both the [C II] emission line (synthesised beam ∼ 0.2") and the 850 µm continuum (synthesised beam ∼ 0.7"). By fitting the datapoints with a polynomial curve we obtained the following relation:
y = 1.00 + 0.79 x + 0.14 x^2 + 0.01 x^3    (A1)
where y is the ratio of the source and PSF flux uncertainties (y = Noise source /Noise PSF ), and x is the ratio of their FWHM (x = FWHM source /FWHM PSF ). Figure A1 might be of particular interest when proposing for observations, since the ALMA calculator only provides sensitivity estimates assuming that the source is unresolved. Our plot allows one to rescale the sensitivity computed by the calculator on the basis of the actual FWHM of the target, and therefore to estimate the correct S/N to be expected in the observations. We notice however that these predictions assume that the correct position and FWHM of the source are known.
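Equation (A1) can be packaged as a small helper for proposal planning; the function name and the example numbers below are ours.

```python
def noise_ratio(fwhm_source, fwhm_psf):
    """Eq. (A1): flux-uncertainty penalty for a resolved source, relative
    to a point source, as a function of x = FWHM_source / FWHM_PSF."""
    x = fwhm_source / fwhm_psf
    return 1.00 + 0.79 * x + 0.14 * x ** 2 + 0.01 * x ** 3

# Rescale the S/N predicted by the ALMA sensitivity calculator (which
# assumes an unresolved source) for, e.g., a 0.7" target observed with a
# 0.2" synthesised beam:
snr_point = 10.0                                  # calculator estimate
snr_resolved = snr_point / noise_ratio(0.7, 0.2)
```

For the 0.7" source in the example the penalty is a factor of ∼ 6, turning a comfortable 10σ prediction into a marginal detection.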
APPENDIX B: ASTROMETRY
When comparing our optical data with the observations of the [C II] emission lines together with the 450 µm (Band 9) and 850 µm (Band 7) continuum, we found an astrometric offset between the HST and ALMA images. Considering only the galaxies with a line and/or continuum detection (S/N > 3), we estimated the average offsets needed to align the luminosity peaks of the HST and ALMA datasets. We measured a systematic shift of the HST centroid with respect to the ALMA data of ∼ −0.2" in declination and a non-significant, negligible offset of ∼ 0.06" in right ascension.
We acknowledge that the astrometry offsets between HST and ALMA datasets in GOODS-S are a known issue (Dunlop et al. 2017, Rujopakarn et al. 2016, Barro et al. 2016, Aravena et al. 2016b, Cibinel et al. 2017). Our estimate is consistent with the ones reported in the literature. A detailed map of the astrometry offset of the HST imaging in GOODS-S will be provided by Dickinson et al. in prep. (2017).
In our analysis we adopt the following coordinate shifts ∆RA = 0", ∆DEC = −0.2".
APPENDIX C: LITERATURE DATA

We briefly describe the literature samples that we used to complement our observations and the methods used to derive the parameters considered in our analysis (redshift, [C II] luminosity, IR luminosity, CO luminosity, molecular gas mass, specific star formation rate, and metallicity). To properly compare different samples we converted all metallicity estimates to the calibration by Pettini & Pagel (2004) using the parametrizations by Kewley & Ellison (2008). We also homogenized all the IR luminosities, reporting them over the 8 - 1000 µm range.
While the L [C II] /L IR ratio should not be particularly affected by differential magnification, the absolute [C II] and IR luminosities might instead be amplified.
• Redshift z = 2 lensed main-sequence galaxy (Schaerer et al. 2015). Single galaxy lensed by the foreground galaxy cluster MACS J0451+0006 observed with HST, Spitzer, Herschel, PdBI, and ALMA. The gas mass of this galaxy was determined from the CO luminosity considering a conversion factor that depends on metallicity (α CO ∼ Z −1.5 , see Section 3.4). Dessauges-Zavadsky et al. (2015) estimated its IR luminosity fitting the IR SED with Draine & Li (2007) models and derived its SFR and stellar mass from the best energy-conserving SED fits, obtained under the hypothesis of an extinction fixed at the observed IR-to-UV luminosity ratio following the prescriptions of Schaerer et al. (2013).

• Redshift z ∼ 1 - 2 main-sequence galaxies and starbursts (Stacey et al. 2010). Sample of galaxies at redshift 1 - 2 observed with CSO/ZEUS. The observed far-IR luminosity of these sources ranges between 3 × 10 12 L and 2.5 × 10 14 L , although two of them are lensed. In the following we report the observed luminosities since the magnification factors are generally very uncertain or unknown. Both AGN and star-forming galaxies are included. Measurements of molecular gas masses are not available, nor are estimates of the sources' stellar mass and metallicity. The IR luminosity was estimated from the 12 µm, 25 µm, 60 µm, and 100 µm fluxes as reported by Stacey et al. (2010).

• Redshift z = 4.44 main-sequence galaxy (Huynh et al. 2014). Single galaxy observed with ATCA, ALMA, Herschel, and HST. The gas mass of this galaxy was determined from the CO luminosity considering a conversion factor that depends on metallicity (α CO ∼ Z −1.5 , see Section 3.4). Huynh et al. (2014) estimated its IR luminosity fitting the IR SED with Chary & Elbaz (2001) models. Its SFR was derived from the IR luminosity following the calibration of Kennicutt (1998) and its stellar mass from the H-band magnitude together with an average mass-to-light ratio for a likely sub-millimeter galaxy star formation history (Swinbank et al. 2012).
• Redshift z ∼ 5.5 main-sequence galaxies (Capak et al. 2015). Sample of star-forming galaxies at redshift 5 - 6 observed with ALMA and Spitzer. In the following we only report the 4 galaxies with detected [C II] emission, together with the average [C II] luminosity obtained by stacking the 6 non-detections. Two galaxies were also serendipitously detected in [C II] and added to the sample. The SFRs range between 3 and 169 M yr −1 and the stellar masses 9.7 < log M /M < 10.8. CO observations are not available, so we estimated the molecular gas masses using the integrated Schmidt-Kennicutt relation for main-sequence galaxies reported by Sargent et al. (2014). The IR luminosity was estimated using the grey body models from Casey (2012). Capak et al. (2015) estimated the SFR of the sources by summing the UV and IR luminosity, and the stellar mass by fitting SED models to the UV-to-IR photometry. The metallicity of these galaxies is not available.

• Redshift z ∼ 2 - 6 lensed galaxies (Gullberg et al. 2015).
Sample of strongly lensed dusty star-forming galaxies in the redshift range 2.1 - 5.7, selected from the South Pole Telescope survey (Vieira et al. 2010, Carlstrom et al. 2011) on the basis of their 1.4 mm flux (S 1.4mm > 20 mJy) and followed up with ALMA and Herschel/SPIRE. Among them, 11 sources also have low-J CO detections from ATCA. In the following we report the de-magnified luminosities, where the magnification factors are taken from Spilker et al. (2016). The molecular gas mass has been computed considering the CO luminosity and an α CO conversion factor derived for each source on the basis of its dynamical mass (see Aravena et al. 2016a for more details). The adopted α CO factors have values in the range 0.7 - 12.3 M (K km s −1 pc 2 ) −1 . Their IR luminosity was estimated by fitting the IR SEDs of these sources with greybody models from Greve et al. (2012). The stellar mass and metallicity of these galaxies are not available.
Figure 4. Stacking of the four secure [C II] detections of our sample. Left panel: image obtained by aligning the four galaxies at their HST peak positions and stacking their visibilities. 3σ and 4σ contours are shown. Right panel: signal amplitude as a function of the uv distance (namely the baseline length). We fitted the data with an exponential model (black curve). A similar fit is obtained when fitting the data with a Gaussian model with FWHM ∼ 0.6".
Figure 5. Spectral energy distribution fits for our sample galaxies. Herschel and Spitzer measurements are reported as red filled circles and the ALMA ones as cyan filled circles. The black curve is the best-fit model and the yellow line indicates the best modified black body fit.
Figure 6. Ratio between the [C II] and IR (8 - 1000 µm) luminosity of our sample galaxies, as a function of the IR luminosity. Different symbols indicate distinct datasets: our [C II] detections (large red filled circles), our [C II] upper limits for the non-detections (small red filled circles), the average value of our sample (empty red circle), Stacey et al. (2010, grey filled stars indicate star-forming galaxies, grey empty stars indicate AGN or starbursts), Stacey et al. (1991, black empty stars), Gullberg et al. (2015, light grey filled circles), Capak et al. (2015, dark grey filled circles indicate their measurements, dark grey empty circles indicate the stack of their non-detections), Diaz-Santos et al. (2017, grey crosses), Cormier et al. (2015, black triangles), Brauher et al. (2008, grey diamond), Contursi et al. (2017, grey squares), Magdis et al. (2014, dark grey crosses), Huynh et al. (2014, black downward triangle), Schaerer et al. (2015, black filled star), Brisbin et al. (2015, black filled circles), Ferkinhoff et al. (2014, black empty circle), Hughes et al. (2017, grey crosses), Accurso et al. (2017a, grey asterisks).
… (2014). Diaz-Santos et al. (2017) and Gullberg et al. (2015) instead do not provide an estimate of U, but only report the sources' flux at 63 µm and 158 µm (Diaz-Santos et al. 2017; R_63−158) and the dust temperature …
Figure 7. Correlation between the [C II]-to-IR luminosity ratio and the intensity of the radiation field. The symbols are the same as reported in the Figure 6 caption, but we only show the samples with available U measurements (the method used to estimate U for the various samples is detailed in Section 3.1). The fit of the local sample from Diaz-Santos et al. (2017) is reported (black solid line) together with the standard deviation (black dashed lines).
The local galaxies of Diaz-Santos et al. (2017) indeed show a decreasing [C II]-to-IR luminosity ratio with increasing U, and the linear fit of this sample yields the following relation:

log(L_[C II]/L_IR) = −2.1(±0.1) − 0.7(±0.1) log(<U>)
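Numerically, the local relation is just a line in log space. The sketch below writes the slope as negative, consistent with the decreasing trend described in the text (the coefficients are the quoted best-fit values; their ∼0.1 uncertainties are omitted here).

```python
def log_cii_to_ir_ratio(log_mean_u, intercept=-2.1, slope=-0.7):
    """Best-fit local relation of Diaz-Santos et al. (2017):
    log(L_[CII]/L_IR) as a function of log(<U>)."""
    return intercept + slope * log_mean_u
```

For example, a tenfold increase in <U> (one dex) lowers the predicted L_[C II]/L_IR ratio by ∼0.7 dex, i.e. the well-known "[C II] deficit" at high radiation-field intensities.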
Figure 8. Correlation between the [C II] luminosity and the molecular gas mass. Top panel: L_[C II] – M_mol relation. The symbols are the same as reported in …
log(L_[C II]/SFR_TOT) = 6.2(±0.2) + 1.1(±0.3) (SFR_IR/SFR_TOT)    (5)

with a scatter of 0.2 dex, indicating that galaxies with lower metallicity (and lower obscured SFR) typically have lower L_[C II]/SFR_TOT ratios. This is clearly visible in Figure 11 (right panel): the dependence of the L_[C II]/SFR_TOT ratio on metallicity can be parametrized as follows:

log(L_[C II]/SFR_TOT) = −3.8(±2.8) + 1.3(±0.3) [12 + log(O/H)]    (6)
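Relations (5) and (6) can be evaluated directly; this sketch only restates the quoted best-fit coefficients (uncertainties omitted).

```python
def log_cii_per_sfr_from_ir_fraction(sfr_ir_over_sfr_tot):
    """Relation (5): log(L_[CII]/SFR_TOT) vs the obscured SFR fraction SFR_IR/SFR_TOT."""
    return 6.2 + 1.1 * sfr_ir_over_sfr_tot

def log_cii_per_sfr_from_metallicity(twelve_plus_log_oh):
    """Relation (6): log(L_[CII]/SFR_TOT) vs gas-phase metallicity 12 + log(O/H)."""
    return -3.8 + 1.3 * twelve_plus_log_oh
```

For a fully obscured galaxy (SFR_IR/SFR_TOT = 1) relation (5) gives log(L_[C II]/SFR_TOT) ≈ 7.3, while at the typical metallicity of our sample (12 + log(O/H) ≈ 8.6) relation (6) gives a consistent ≈ 7.4.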
Table 1. Log of the observations.

Table 2. Measurements for our sample galaxies.
Table 3. Physical properties of our sample galaxies.

ID       SFR [M☉ yr⁻¹]     log(M*)  log(M_dust)  log(M_mol^SK)  log(M_mol^dust)  log(M_mol^[CII])    sSFR/sSFR_MS  log<U>   R_e [arcsec]  Z
(1)      (2)               (3)      (4)          (5)            (6)              (7)                 (8)           (9)      (10)          (11)
9347     62.9 +7.9/−7.0    10.5     8.5±0.5      10.70          10.50±0.57       10.51 +0.13/−0.19   1.1           1.2±0.5  1.02          8.6
6515     47.7 +4.3/−3.9    10.9     8.5±0.4      10.58          10.40±0.42       10.62 +0.12/−0.16   0.4           1.2±0.4  0.77          8.7
10076    81.6 +6.0/−5.6    10.3     8.4±0.2      10.77          10.46±0.23       10.91 +0.13/−0.19   1.7           1.4±0.2  0.76          8.6
9834     98.9 +5.4/−5.1    10.7     8.2±0.3      10.84          10.20±0.16       10.60 +0.10/−0.12   1.2           1.7±0.3  0.43          8.7
9681     69.3 +5.9/−5.5    10.6     8.3±0.5      10.71          10.29±0.49       10.78 +0.16/−0.27   1.0           1.5±0.5  0.89          8.6
10049    39.7 +5.7/−5.0    10.7     8.7±0.2      10.52          10.70±0.29       < 10.37             0.4           0.8±0.2  0.29          8.7
2861     101.6 +6.5/−6.1   10.8     9.0±0.3      10.85          10.97±0.30       < 11.13             1.1           0.9±0.3  0.99          8.7
2910     57.4 +11.1/−9.3   10.4     8.1±0.5      10.64          10.18±0.55       < 10.59             1.3           1.5±0.5  0.58          8.6
7118     114.8 +2.9/−2.9   10.9     9.1±0.2      10.89          11.03±0.22       < 11.21             1.1           0.9±0.2  1.13          8.7
8490     34.4 +5.2/−4.5    10.0     7.8±0.4      10.46          9.98±0.45        10.38 +0.16/−0.26   1.2           1.6±0.4  0.44          8.5
Stack^a  64.6 +7.9/−7.0    10.6     8.3±0.1      10.69          10.26±0.34       10.62 +0.04/−0.05   1.1           1.4±0.1  0.70          8.6

Columns (2)–(7): SFR in M☉ yr⁻¹ and masses in log(M☉); column (10): R_e in arcsec.
Table 4. Estimates of the [C II]-to-H2 conversion factor.

Samples  Mean [M☉/L☉]  Standard deviation [dex]  Median [M☉/L☉]  M.A.D. [dex]
(1)      (2)           (3)                       (4)             (5)
All      31            0.3                       31              0.2
Local    30            0.3                       28              0.2
High-z   35            0.2                       38              0.1

Columns: (1) samples used to compute α_[C II]. For the local estimate we considered the …
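Applying the conversion factor of Table 4 is a one-line operation. The sketch below uses the mean α_[C II] ≈ 31 M☉/L☉ and the ∼0.3 dex scatter quoted above; quoting the ±1-scatter interval alongside the central value is an illustrative choice of this example.

```python
import math

ALPHA_CII = 31.0    # mean [CII]-to-H2 conversion factor, M_sun / L_sun (Table 4)
SCATTER_DEX = 0.3   # ~0.3 dex dispersion around the mean

def molecular_gas_mass(l_cii_lsun, alpha=ALPHA_CII):
    """M_mol = alpha_[CII] * L_[CII], both in solar units."""
    return alpha * l_cii_lsun

def log_mass_range(l_cii_lsun):
    """Central log10(M_mol) and the +/- 1-scatter interval in dex."""
    central = math.log10(molecular_gas_mass(l_cii_lsun))
    return central - SCATTER_DEX, central, central + SCATTER_DEX
```

For instance, a galaxy with L_[C II] = 10⁹ L☉ maps onto M_mol ≈ 3.1 × 10¹⁰ M☉, with a factor-of-two systematic uncertainty from the scatter.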
… [C II] only reflects the portion of SFR reprocessed by dust in the IR.
• Although some caveats are present (e.g. [C II] non-detections at very high redshift might also be due to the effects of a strong radiation field; [C II] might be tracing different gas phases simultaneously; it is only emitted when the gas is illuminated by young stars, so it only traces molecular gas with ongoing star formation), the limitations that affect [C II] are different with respect to those impacting more traditional gas tracers such as CO, [C I], and dust emission. This makes [C II] an independent proxy, particularly suitable to push our current knowledge of galaxies' ISM to the highest redshifts.

ACKNOWLEDGMENTS

We are grateful to the anonymous referee for their insightful comments. A.Z. thanks C. Cicone, G. Accurso, A. Saintonge, Q. Tan, M. Aravena, A. Pope, A. Ferrara, S. Gallerani, and A. Pallottini for useful discussions. T.M.H. acknowledges support from the Chinese Academy of Sciences (CAS) and the National Commission for Scientific and Technological Research of Chile (CONICYT) through a CAS-CONICYT Joint Postdoctoral Fellowship administered by the CAS South America Center for Astronomy (CASSACA) in Santiago, Chile. D.C. is supported by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 702622. M.T.S. was supported by a Royal Society Leverhulme Trust Senior Research Fellowship (LT150041). W.R. is supported by JSPS KAKENHI Grant Number JP15K17604 and the Thailand Research Fund/Office of the Higher Education Commission Grant Number MRG6080294. D.L. acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 694343).

This paper makes use of the following ALMA data: ADS/JAO.ALMA#2012.1.00775.S. ALMA is a partnership of the European Southern Observatory (ESO, representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
Table C1. Compilation of literature data used in this paper. The full table is available online.

Columns: ID, z, L_[C II], L_IR, L'_CO, M_mol, sSFR, 12 + log(O/H).
MNRAS 000, 1-??(2017)
We recall that all our IR luminosities are estimated considering the star-forming component only, and possible emission from dusty tori was subtracted.
For this sample we derived L_IR from the ALMA continuum using the main-sequence templates of Magdis et al. (2012) and an appropriate temperature for z = 5.5, following the evolution given in Béthermin et al. (2015) and Schreiber et al. (2017a). This is the reason why the values that we are plotting differ from those pub-
Abel N. P., Dudley C., Fischer J., Satyapal S., van Hoof P. A. M., 2009, ApJ, 701, 1147
Accurso G., et al., 2017a, preprint (arXiv:1702.03888)
Accurso G., Saintonge A., Bisbas T. G., Viti S., 2017b, MNRAS, 464, 3315
Amorín R., Muñoz-Tuñón C., Aguerri J. A. L., Planesas P., 2016, A&A, 588, A23
Aravena M., et al., 2016a, MNRAS, 457, 4406
Aravena M., et al., 2016b, ApJ, 833, 68
Armus L., Heckman T., Miley G., 1987, AJ, 94, 831
Armus L., et al., 2009, PASP, 121, 559
Barro G., et al., 2016, ApJ, 827, L32
Béthermin M., Dole H., Cousin M., Bavouzet N., 2010, A&A, 516, A43
Béthermin M., et al., 2015, A&A, 573, A113
Beuther H., et al., 2014, A&A, 571, A53
Bigiel F., Leroy A., Walter F., Brinks E., de Blok W. J. G., Madore B., Thornley M. D., 2008, AJ, 136, 2846
Bolatto A. D., Wolfire M., Leroy A. K., 2013, ARA&A, 51, 207
Bothwell M. S., Maiolino R., Peng Y., Cicone C., Griffith H., Wagg J., 2016, MNRAS, 455, 1156
Bournaud F., et al., 2017, in prep.
Bouwens R. J., et al., 2012, ApJ, 754, 83
Brauher J. R., Dale D. A., Helou G., 2008, ApJS, 178, 280
Brisbin D., Ferkinhoff C., Nikola T., Parshley S., Stacey G. J., Spoon H., Hailey-Dunsheath S., Verma A., 2015, ApJ, 799, 13
Brisbin D., et al., 2017, A&A, 608, A15
Bruzual G., Charlot S., 2003, MNRAS, 344, 1000
Bushouse H. A., et al., 2002, ApJS, 138, 1
Calzetti D., Armus L., Bohlin R. C., Kinney A. L., Koornneef J., Storchi-Bergmann T., 2000, ApJ, 533, 682
Capak P. L., et al., 2015, Nature, 522, 455
Carilli C. L., Walter F., 2013, ARA&A, 51, 105
Carlstrom J. E., et al., 2011, PASP, 123, 568
Casey C. M., 2012, MNRAS, 425, 3094
Chabrier G., 2003, PASP, 115, 763
Chary R., Elbaz D., 2001, ApJ, 556, 562
Cibinel A., et al., 2015, ApJ, 805, 181
Cibinel A., et al., 2017, MNRAS, 469, 4683
Cimatti A., et al., 2002, A&A, 392, 395
Combes F., et al., 2012, A&A, 538, L4
Combes F., García-Burillo S., Braine J., Schinnerer E., Walter F., Colina L., 2013, A&A, 550, A41
Contursi A., et al., 2017, preprint (arXiv:1706.04107)
Coogan R., et al., 2018, MNRAS, 479, 703
Cormier D., et al., 2014, A&A, 564, A121
Cormier D., et al., 2015, A&A, 578, A53
Croxall K. V., et al., 2017, ApJ, 845, 96
Daddi E., et al., 2007, ApJ, 670, 156
Daddi E., et al., 2010, ApJ, 713, 686
Daddi E., et al., 2015, A&A, 577, A46
Dale D. A., Helou G., 2002, ApJ, 576, 159
Davé R., Finlator K., Oppenheimer B. D., 2012, MNRAS, 421, 98
De Looze I., et al., 2010, A&A, 518, L54
De Looze I., et al., 2014, A&A, 568, A62
De Vis P., et al., 2017, MNRAS, 471, 1743
Dessauges-Zavadsky M., et al., 2015, A&A, 577, A50
Díaz-Santos T., et al., 2013, ApJ, 774, 68
Diaz-Santos T., et al., 2017, preprint (arXiv:1705.04326)
Dickinson M., et al., 2017, in prep.
Draine B. T., Li A., 2007, ApJ, 657, 810
Dunlop J. S., et al., 2017, MNRAS, 466, 861
Elbaz D., et al., 2007, A&A, 468, 33
Elbaz D., et al., 2011, A&A, 533, A119
Eskew M., Zaritsky D., Meidt S., 2012, AJ, 143, 139
Fahrion K., et al., 2017, A&A, 599, A9
Farrah D., et al., 2013, ApJ, 776, 38
Ferkinhoff C., et al., 2014, ApJ, 780, 142
Freeman K. C., 1970, ApJ, 160, 811
Galliano F., Madden S. C., Jones A. P., Wilson C. D., Bernard J.-P., 2005, A&A, 434, 867
Galliano F., et al., 2011, A&A, 536, A88
Genzel R., et al., 2010, MNRAS, 407, 2091
Genzel R., et al., 2012, ApJ, 746, 69
Genzel R., et al., 2015, ApJ, 800, 20
Giavalisco M., et al., 2004, ApJ, 600, L93
Glover S. C. O., Smith R. J., 2016, MNRAS, 462, 3011
Gobat R., et al., 2017a, preprint (arXiv:1703.02207)
Gobat R., et al., 2017b, A&A, 599, A95
Goldsmith P. F., Langer W. D., Pineda J. L., Velusamy T., 2012, ApJS, 203, 13
Graciá-Carpio J., et al., 2011, ApJ, 728, L7
Greve T. R., et al., 2012, ApJ, 756, 101
Grogin N. A., et al., 2011, ApJS, 197, 35
Guilloteau S., Lucas R., 2000, in Mangum J. G., Radford S. J. E., eds, Astronomical Society of the Pacific Conference Series Vol. 217, Imaging at Radio through Submillimeter Wavelengths. p. 299
Gullberg B., et al., 2015, MNRAS, 449, 2883
Hailey-Dunsheath S., Nikola T., Stacey G. J., Oberst T. E., Parshley S. C., Benford D. J., Staguhn J. G., Tucker C. E., 2010, ApJ, 714, L162
Herrera-Camus R., et al., 2015, ApJ, 800, 1
Howell J. H., et al., 2010, ApJ, 715, 572
Hughes T. M., et al., 2015, A&A, 575, A17
Hughes T. M., et al., 2017, A&A, 602, A49
Hung C.-L., et al., 2013, ApJ, 778, 129
Huynh M. T., et al., 2014, MNRAS, 443, L54
Ivison R. J., et al., 2010, A&A, 518, L35
Kennicutt Jr. R. C., 1998, ApJ, 498, 541
Kewley L. J., Ellison S. L., 2008, ApJ, 681, 1183
Koekemoer A. M., et al., 2011, ApJS, 197, 36
Kurk J., et al., 2013, A&A, 549, A63
Lee N., et al., 2013, ApJ, 778, 131
Liu D., et al., 2017, preprint (arXiv:1703.05281)
Madden S., et al., 2017, in prep.
Madden S. C., Geis N., Genzel R., Herrmann F., Jackson J., Poglitsch A., Stacey G. J., Townes C. H., 1993, ApJ, 407, 579
Madden S. C., Poglitsch A., Geis N., Stacey G. J., Townes C. H., 1997, ApJ, 483, 200
Madden S. C., Galliano F., Jones A. P., Sauvage M., 2006, A&A, 446, 877
Madden S. C., et al., 2013, PASP, 125, 600
Madden S. C., Cormier D., Rémy-Ruyer A., 2016, in Jablonka P., André P., van der Tak F., eds, IAU Symposium Vol. 315, From Interstellar Clouds to Star-Forming Galaxies: Universal Processes?. pp 191–198 (arXiv:1603.04674), doi:10.1017/S1743921316007493
Magdis G. E., Rigopoulou D., Huang J.-S., Fazio G. G., 2010, MNRAS, 401, 1521
Magdis G. E., et al., 2012, ApJ, 760, 6
Magdis G. E., et al., 2014, ApJ, 796, 63
Maiolino R., et al., 2015, MNRAS, 452, 54
Malhotra S., et al., 1997, ApJ, 491, L27
Malhotra S., et al., 2001, ApJ, 561, 766
Maloney P., Black J. H., 1988, ApJ, 325, 389
McMullin J. P., Waters B., Schiebel D., Young W., Golap K., 2007, in Shaw R. A., Hill F., Bell D. J., eds, Astronomical Society of the Pacific Conference Series Vol. 376, Astronomical Data Analysis Software and Systems XVI. p. 127
Mignoli M., et al., 2005, A&A, 437, 883
Narayanan D., Krumholz M. R., 2017, MNRAS, 467, 50
Noeske K. G., et al., 2007, ApJ, 660, L43
Nonino M., et al., 2009, ApJS, 183, 244
Nordon R., Sternberg A., 2016, MNRAS, 462, 2804
Nordon R., et al., 2012, ApJ, 745, 182
Olsen K. P., Greve T. R., Brinch C., Sommer-Larsen J., Rasmussen J., Toft S., Zirm A., 2016, MNRAS, 457, 3306
Olsen K., Greve T. R., Narayanan D., Thompson R., Davé R., Niebla Rios L., Stawinski S., 2017, ApJ, 846, 105
Ouchi M., et al., 2013, ApJ, 778, 102
Overzier R. A., et al., 2009, ApJ, 704, 548
Papadopoulos P. P., Greve T. R., 2004, ApJ, 615, L29
Peng Y.-j., et al., 2010, ApJ, 721, 193
Pettini M., Pagel B. E. J., 2004, MNRAS, 348, L59
Pineda J. L., Langer W. D., Velusamy T., Goldsmith P. F., 2013, A&A, 554, A103
Poglitsch A., Krabbe A., Madden S. C., Nikola T., Geis N., Johansson L. E. B., Stacey G. J., Sternberg A., 1995, ApJ, 454, 293
Pope A., et al., 2008, ApJ, 675, 1171
Popesso P., et al., 2009, A&A, 494, 443
Popping G., Pérez-Beaupuits J. P., Spaans M., Trager S. C., Somerville R. S., 2014, MNRAS, 444, 1301
Popping G., van Kampen E., Decarli R., Spaans M., Somerville R. S., Trager S. C., 2016, MNRAS, 461, 93
Popping G., et al., 2017, A&A, 602, A11
Psychogyios A., et al., 2016, A&A, 591, A1
Puglisi A., et al., 2017, ApJ, 838, L18
Rémy-Ruyer A., et al., 2013, A&A, 557, A95
Rémy-Ruyer A., et al., 2014, A&A, 563, A31
Riechers D. A., et al., 2014, ApJ, 796, 84
Rigopoulou D., et al., 2014, ApJ, 781, L15
Rigopoulou D., Pereira-Santaella M., Magdis G. E., Cooray A., Farrah D., Marques-Chaves R., Perez-Fournon I., Riechers D., 2018, MNRAS, 473, 20
Rodighiero G., et al., 2011, ApJ, 739, L40
Rodighiero G., et al., 2014, MNRAS, 443, 19
Roseboom I. G., et al., 2010, MNRAS, 409, 48
Rujopakarn W., et al., 2016, ApJ, 833, 12
Saintonge A., et al., 2016, MNRAS, 462, 1749
Saintonge A., et al., 2017, preprint (arXiv:1710.02157)
Salmon B., et al., 2015, ApJ, 799, 183
Sanders D. B., Mirabel I. F., 1996, ARA&A, 34, 749
Santini P., et al., 2014, A&A, 562, A30
Sargent M., et al., 2017, in prep.
Sargent M. T., Béthermin M., Daddi E., Elbaz D., 2012, ApJ, 747, L31
Sargent M. T., et al., 2014, ApJ, 793, 19
Sargsyan L., et al., 2012, ApJ, 755, 171
Schaerer D., de Barros S., Sklias P., 2013, A&A, 549, A4
Schaerer D., et al., 2015, A&A, 576, L2
Schirm M. R. P., et al., 2017, MNRAS, 470, 4989
Schreiber C., et al., 2015, A&A, 575, A74
Schreiber C., Elbaz D., Pannella M., Ciesla L., Wang T., Franco M., 2017a, preprint (arXiv:1710.10276)
Schreiber C., et al., 2017b, preprint (arXiv:1709.03505)
Schruba A., et al., 2012, AJ, 143, 138
Scoville N., Faisst A., Capak P., Kakazu Y., Li G., Steinhardt C., 2015, ApJ, 800, 108
Scoville N., et al., 2017, ApJ, 837, 150
Shapley A. E., Steidel C. C., Pettini M., Adelberger K. L., 2003, ApJ, 588, 65
Shi Y., Wang J., Zhang Z.-Y., Gao Y., Hao C.-N., Xia X.-Y., Gu Q., 2016, Nature Communications, 7, 13789
Siebenmorgen R., Krügel E., 2007, A&A, 461, 445
Smith J. D. T., et al., 2017, ApJ, 834, 5
Spilker J. S., et al., 2016, ApJ, 826, 112
Stacey G. J., Geis N., Genzel R., Lugten J. B., Poglitsch A., Sternberg A., Townes C. H., 1991, ApJ, 373, 423
Stacey G. J., Hailey-Dunsheath S., Ferkinhoff C., Nikola T., Parshley S. C., Benford D. J., Staguhn J. G., Fiolet N., 2010, ApJ, 724, 957
Stark D. P., Ellis R. S., Bunker A., Bundy K., Targett T., Benson A., Lacy M., 2009, ApJ, 697, 1493
Steinhardt C. L., et al., 2014, ApJ, 791, L25
Swinbank A. M., et al., 2012, MNRAS, 427, 1066
Tacconi L. J., et al., 2013, ApJ, 768, 74
Tadaki K.-i., et al., 2015, ApJ, 811, L3
Tan Q., et al., 2014, A&A, 569, A98
Tang Y., Giavalisco M., Guo Y., Kurk J., 2014, ApJ, 793, 92
Vallini L., Gallerani S., Ferrara A., Pallottini A., Yue B., 2015, ApJ, 813, 36
Vallini L., Gruppioni C., Pozzi F., Vignali C., Zamorani G., 2016, MNRAS, 456, L40
Velusamy T., Langer W. D., 2014, A&A, 572, A45
Vieira J. D., et al., 2010, ApJ, 719, 763
Walter F., Weiß A., Downes D., Decarli R., Henkel C., 2011, ApJ, 730, 18
• Local dwarf galaxies (Cormier et al. 2015, Madden et al. in prep. 2017). Sample of local dwarf galaxies observed with Herschel/PACS and SPIRE as part of the DGS survey (Madden et al. 2013). They have metallicity ranging from ∼ 1/40 Z☉ to near solar, SFR from ∼ 5 × 10⁻⁴ M☉ yr⁻¹ to … M☉ yr⁻¹, and they are all nearby (maximum distance ∼ … Mpc). In this work we only consider the galaxies that have been followed-up with ATNF Mopra 22-m, APEX, and IRAM 30-m telescopes and show a CO(1-0) emission line detection (Cormier et al. 2014, Madden et al. in prep. 2017, De Vis et al. 2017). We converted the CO luminosity of these sources (Cormier et al. 2014, Madden et al. in prep. 2017) into molecular gas mass by using a conversion factor that depends on metallicity (α_CO ∼ Z⁻¹·⁵, see Section 3.4). Their IR luminosity was estimated fitting the IR SEDs with semi-empirical models (Galliano et al. 2011). Rémy-Ruyer et al. (2014) estimated their SFR from the total infrared luminosity using the equation from Kennicutt (1998) and their stellar mass from the 3.6 and 4.5 µm flux densities using the formula of Eskew et al. (2012).
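The metallicity-dependent CO conversion used for samples such as the dwarfs (α_CO ∼ Z⁻¹·⁵, Section 3.4) can be sketched as follows. Normalizing to the Milky-Way value α_CO = 4.4 M☉ (K km s⁻¹ pc²)⁻¹ at solar metallicity is an assumption of this illustration, chosen for consistency with the Galactic factor adopted elsewhere in this appendix.

```python
def alpha_co(z_over_zsun, alpha_mw=4.4):
    """Metallicity-dependent CO-to-H2 factor: alpha_CO ~ Z^-1.5,
    normalized (by assumption) to the Milky-Way value at solar metallicity."""
    return alpha_mw * z_over_zsun ** -1.5

def molecular_mass_from_co(l_co_prime, z_over_zsun=1.0):
    """M_mol = alpha_CO(Z) * L'_CO  [M_sun, with L'_CO in K km/s pc^2]."""
    return alpha_co(z_over_zsun) * l_co_prime
```

The steep Z⁻¹·⁵ scaling means a galaxy at 1/10 solar metallicity has an α_CO about 30 times larger than the Galactic value, which is why low-metallicity dwarfs harbor large reservoirs of CO-faint molecular gas.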
• Local main-sequence galaxies (Accurso et al. 2017a). Sample of intermediate mass (9 < log M*/M☉ < 10), local galaxies from the xCOLD GASS survey (Saintonge et al. 2017) with metallicities in the range 0.4 < Z/Z☉ < 1.0. They have Herschel [C II] and IRAM CO(1-0) observations, together with auxiliary data from GALEX, WISE, and SDSS. Accurso et al. (2017a) computed the molecular gas masses from the CO luminosity, considering a conversion factor that depends on metallicity (α_CO ∼ Z⁻¹·⁵, see Section 3.4). Saintonge et al. (2016) measured the SFR of these sources from the combination of UV and IR photometry and their stellar mass from SDSS photometry.
• Local main-sequence galaxies (Stacey et al. 1991). Sample of local galaxies with KAO observations. We excluded those that were classified as starbursts (on the basis of their dust temperature: T_dust ≥ 40 K) and considered only the 6 "normal" star-forming ones. CO observations taken with a similar beam size to the [C II] ones are reported by Stacey et al. (1991). We estimate the molecular gas mass for these galaxies considering a Milky-Way like α_CO = 4.4 M☉ (K km s⁻¹ pc²)⁻¹ conversion factor. Measurements of stellar masses and metallicity are not available. In Figure 9 we report the average [C II]-to-IR luminosity ratio of these 6 sources considering that they are in main-sequence (sSFR/sSFR_MS = 1).
• Local main-sequence and starburst galaxies (Brauher et al. 2008). Sample of local galaxies observed with ISO/LWS including "normal" star-forming systems, starbursts, and AGNs. In this analysis we only considered the 74 sources with both [C II] and IR detection. The IR luminosity was estimated from the 25 µm, 60 µm, and 100 µm fluxes as reported by Brauher et al. (2008). Molecular gas mass, stellar mass, and metallicity measurements are not available.
• Local starbursts (Díaz-Santos et al. 2013, Diaz-Santos et al. 2017). Sample of local luminous infrared galaxies observed with Herschel/PACS as part of GOALS (Armus et al. 2009). They have far-infrared luminosities in the range 2 × 10⁹ L☉ – 2 × 10¹² L☉ and sSFR 5 × 10⁻¹² – 3 × 10⁻⁹ yr⁻¹. No measurements of their molecular gas mass are available from the literature. We therefore estimated M_mol considering the models by Sargent et al. (2014) and Scoville et al. (2017) that parametrize the dependence of galaxies' depletion time on their sSFR (see Section 3.3 for more details). The IR luminosity was estimated from the 60 µm and 100 µm fluxes as reported by Díaz-Santos et al. (2013). Their SFR is estimated from IR luminosity (Kennicutt 1998) and their stellar mass from the IRAC 3.6 µm and Two Micron All Sky Survey (2MASS) K-band photometry (Howell et al. 2010). The metallicity of these sources is not available.
• Redshift z ∼ 0.2 Lyman break analogs (Contursi et al. 2017). Sample of Lyman break analogs (namely, compact galaxies with UV luminosity L_UV > 2 × 10¹⁰ L☉ and UV surface brightness I_1530 > 10⁹ L☉ kpc⁻²) at redshift 0.1 – 0.3, with Herschel/PACS [C II] and IRAM CO(1-0) observations. Their IR luminosity was derived by fitting the IR SEDs of these sources with Draine & Li (2007) models. Their SFRs span the range 3 – 100 M☉ yr⁻¹ and their sSFR are comparable to those of z ∼ 2 main-sequence galaxies. We determined their molecular gas mass from the CO luminosity, using a conversion factor that depends on metallicity (α_CO ∼ Z⁻¹·⁵, see Section 3.4). Their SFR has been derived from the IR luminosity considering the equation from Kennicutt (1998) and their stellar masses from rest-frame optical photometry (Overzier et al. 2009).
• Redshift z ∼ 0.5 starbursts (Magdis et al. 2014). Sample of (ultra-)luminous infrared galaxies at redshift 0.21 – 0.88 observed with Herschel. They have an IR luminosity L_IR > 10¹¹·⁵ L☉. Among them, 5 are classified as AGN hosts, QSOs, or composite systems from optical or IRS data. The gas mass has been estimated from the CO luminosity considering a conversion factor that depends on metallicity (α_CO ∼ Z⁻¹·⁵, see Section 3.4). Their IR luminosity was estimated fitting the IR SEDs of these sources with Draine & Li (2007) models. The SFR of these sources is derived from the IR luminosity considering the equation from Kennicutt (1998).
Single galaxy lensed by the foreground galaxy observed with Herschel and CSO/ZEUS. The gas mass of this galaxy was determined from the CO luminosity considering a conversion factor α CO = 4.4 M (K km s − 1 pc 2 ) −1 . Its IR luminosity was estimated fitting the IR SED with Siebenmorgen & Krügel (2007) models. Ferkinhoff, Redshift z = 1.8 lensed galaxy. In the following we report the unlensed luminosities• Redshift z = 1.8 lensed galaxy (Ferkinhoff et al. 2014). Sin- gle galaxy lensed by the foreground galaxy observed with Herschel and CSO/ZEUS. The gas mass of this galaxy was determined from the CO luminosity considering a conver- sion factor α CO = 4.4 M (K km s − 1 pc 2 ) −1 . Its IR lumi- nosity was estimated fitting the IR SED with Siebenmor- gen & Krügel (2007) models. In the following we report the unlensed luminosities.
• Redshift z = 1.8 main-sequence galaxies. Brisbin, • Redshift z = 1.8 main-sequence galaxies (Brisbin et al.
Measurements of molecular gas masses are not available. The star formation rate has been estimated from the IR luminsity, considering the equation from Kennicutt. Magdis, Sample of galaxies at redshift z ∼ 1.8 observed with CSO/ZEUS. The observed IR luminosity of these sources ranges between 7 × 10 11 L -6 × 10 12 L and was estimated fitting the IR SED with the models by Dale & Helou. Metallicity measurements are not available. We note that some of the galaxies by Brisbin et al. (2015) might be lensed. WhileSample of galaxies at redshift z ∼ 1.8 observed with CSO/ZEUS. The observed IR luminosity of these sources ranges between 7 × 10 11 L -6 × 10 12 L and was esti- mated fitting the IR SED with the models by Dale & Helou (2002). Measurements of molecular gas masses are not available. The star formation rate has been estimated from the IR luminsity, considering the equation from Ken- nicutt (1998). The stellar mass has been estimated from the 2 µm IRAC flux (Magdis et al. 2010). Metallicity mea- surements are not available. We note that some of the galaxies by Brisbin et al. (2015) might be lensed. While
Cormier, Madden et al. in prep. Local dwarf galaxiesLocal dwarf galaxies (Cormier et al. 2015, Madden et al. in prep. 2017)
Columns (1) Galaxy ID; (2) Redshift; (3) [C II] luminosity; (4) Infrared luminosity; (5) CO luminosity; (6) Molecular gas mass. ) Gas-phase metallicity. ) Specific star formation rateColumns (1) Galaxy ID; (2) Redshift; (3) [C II] luminosity; (4) Infrared luminosity; (5) CO luminosity; (6) Molecular gas mass; (7) Specific star formation rate; (8) Gas-phase metallicity.
The difference between the two estimates consists in the model that we assumed: the first is derived using the mean depletion time obtained averaging the parametrization by. Díaz-Santos, Sargent et al.we report two molecular gas mass estimates. They have both been obtained considering the sSFR of each galaxy, their SFR, and the relation between depletion time and sSFR. whereas the second is estimated considering only the model by Scoville et al. (2017, see Section 3.3 for a more detailed discussionNotes. For the sample by Díaz-Santos et al. (2013), Diaz-Santos et al. (2017) we report two molecular gas mass estimates. They have both been obtained considering the sSFR of each galaxy, their SFR, and the relation between depletion time and sSFR (Sargent et al. 2014, Scoville et al. 2017). The difference between the two estimates consists in the model that we assumed: the first is derived using the mean depletion time obtained averaging the parametrization by Sargent et al. (2014) and Scoville et al. (2017), whereas the second is estimated considering only the model by Scoville et al. (2017, see Section 3.3 for a more detailed discussion).
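Several of the samples above derive molecular gas masses from a CO luminosity with a metallicity-dependent conversion factor, α CO ∼ Z −1.5 . A minimal numerical sketch of that conversion (the normalization of 4.4 M (K km s −1 pc 2 ) −1 at solar metallicity and the input luminosity are illustrative assumptions, not values taken from the samples above):

```python
def alpha_co(z_rel, alpha_norm=4.4, slope=-1.5):
    """CO-to-H2 conversion factor scaled with metallicity.

    z_rel      : gas-phase metallicity relative to solar, Z/Z_sun
    alpha_norm : assumed value at Z = Z_sun, in M_sun (K km/s pc^2)^-1
                 (hypothetical normalization, for illustration only)
    slope      : alpha_CO ~ Z^-1.5, as quoted in the text
    """
    return alpha_norm * z_rel ** slope


def molecular_gas_mass(l_co, z_rel):
    """M_mol = alpha_CO(Z) * L'_CO, with L'_CO in K km/s pc^2."""
    return alpha_co(z_rel) * l_co


# Illustrative galaxy with L'_CO = 1e9 K km/s pc^2
m_solar = molecular_gas_mass(1e9, 1.0)  # solar metallicity
m_half = molecular_gas_mass(1e9, 0.5)   # half-solar: larger gas mass
```

With this scaling, halving the metallicity at fixed CO luminosity raises the inferred gas mass by a factor 2^1.5 ≈ 2.8, which is why the low-metallicity dwarfs are treated separately.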
|
[] |
[
"Thermal effect on mixed state geometric phases for neutrino propagation in a magnetic field",
"Thermal effect on mixed state geometric phases for neutrino propagation in a magnetic field"
] |
[
"Da-Bao Yang ",
"Ku Meng ",
"Ji-Xuan Hou ",
"\nDepartment of fundamental physics\nSchool of Science\nDepartment of Physics\nTianjin Polytechnic University\n300387TianjinPeople's Republic of China\n",
"\nSoutheast University\n211189NanjingPeople's Republic of China\n"
] |
[
"Department of fundamental physics\nSchool of Science\nDepartment of Physics\nTianjin Polytechnic University\n300387TianjinPeople's Republic of China",
"Southeast University\n211189NanjingPeople's Republic of China"
] |
[] |
In astrophysical environments, neutrinos may propagate over a long distance in a magnetic field. In the presence of a rotating magnetic field, the neutrino spin can flip from left-handed to right-handed. Smirnov demonstrated that the pure state geometric phase due to the neutrino spin precession may cause resonant spin conversion inside the Sun. However, in general, the neutrinos may be in an ensemble of thermal states. In this article, the corresponding mixed state geometric phases will be formulated, including the off-diagonal case and the diagonal ones. The specific features with respect to temperature will be analyzed.
| null |
[
"https://arxiv.org/pdf/1512.08219v2.pdf"
] | 119,107,186 |
1512.08219
|
c1003aa8f7f48579a817e5d2c2e676c8f5e72a24
|
Thermal effect on mixed state geometric phases for neutrino propagation in a magnetic field
30 Dec 2015
Da-Bao Yang
Ku Meng
Ji-Xuan Hou
Department of fundamental physics
School of Science
Department of Physics
Tianjin Polytechnic University
300387TianjinPeople's Republic of China
Southeast University
211189NanjingPeople's Republic of China
Thermal effect on mixed state geometric phases for neutrino propagation in a magnetic field
30 Dec 2015 (Dated: December 31, 2015). PACS numbers: 03.65.Vf, 75.10.Pq, 31.15.ac. * Electronic address: [email protected]
In astrophysical environments, neutrinos may propagate over a long distance in a magnetic field. In the presence of a rotating magnetic field, the neutrino spin can flip from left-handed to right-handed. Smirnov demonstrated that the pure state geometric phase due to the neutrino spin precession may cause resonant spin conversion inside the Sun. However, in general, the neutrinos may be in an ensemble of thermal states. In this article, the corresponding mixed state geometric phases will be formulated, including the off-diagonal case and the diagonal ones. The specific features with respect to temperature will be analyzed.
I. INTRODUCTION
Geometric phase was discovered by Berry [2] in the circumstance of adiabatic evolution. It was then generalized by Wilczek and Zee [17], by Aharonov and Anandan [1,19], and by Samuel and Bhandari [10] in the context of pure states. Moreover, it was also extended to mixed state counterparts. An operationally well-defined notion was proposed by Sjöqvist et al. [13] based on interferometry. Subsequently, it was generalized to the degenerate case by Singh et al. [12] and to nonunitary evolution by Tong et al. [15] by use of a kinematic approach. In addition, when the final state is orthogonal to the initial state, the above geometric phase is meaningless. So a phase complementary to the usual geometric phases was put forward by Manini and Pistolesi [9]. The new phase is called the off-diagonal geometric phase, which was generalized to the non-abelian case by Kult et al. [8]. It was also extended to mixed states under unitary evolution by Filipp and Sjöqvist [5,6]. A further extension to the non-degenerate case was made by Tong et al. [16] by a kinematic approach. Finally, there are excellent review articles [18] and monographs [3,4,11] discussing its influence and applications in physics and other natural sciences.
As is well known, the neutrino plays an important role in particle physics and astronomy.
Smirnov investigated the effect of resonant spin conversion of solar neutrinos induced by the geometric phase [14]. Joshi and Jain worked out the geometric phase of a neutrino propagating in a rotating transverse magnetic field [7]. However, their discussions are confined to the pure state case. In this article, we will discuss the mixed state geometric phase of neutrinos, ranging from the off-diagonal phase to the diagonal one. This paper is organised as follows. In the next section, the off-diagonal geometric phase for mixed states is reviewed, as well as the usual mixed state geometric phase. Furthermore, the related equation for the propagation of the two helicity components of the neutrino is recalled. In Sec. III, both the off-diagonal and diagonal mixed state geometric phases for a neutrino in a thermal state are calculated. Finally, a conclusion is drawn in the last section.
II. REVIEW OF OFF-DIAGONAL PHASE
If a non-degenerate density matrix takes this form
ρ 1 = λ 1 |ψ 1 ψ 1 | + · · · + λ N |ψ N ψ N |.(1)
Moreover, a density operator that cannot interfere with ρ 1 is introduced [5], which is
ρ n = W n−1 ρ 1 (W † ) n−1 , n = 1, ..., N, where W = |ψ 1 ψ N | + |ψ N ψ N −1 | + · · · + |ψ 2 ψ 1 |.
In unitary evolution, besides the usual mixed state geometric phase, there exists a so-called mixed state off-diagonal phase, which reads [5] γ (l)
ρ j 1 ...ρ j l = Φ[T r( l a=1 U (τ ) l √ ρ ja )],(2)
where Φ[z] ≡ z/|z| for a nonzero complex number z, and [16]
U = U(t) N k=1 e −iδ k ,(3)
in which
δ k = −i ∫ t 0 ψ k |U † (t ′ )U̇ (t ′ )|ψ k dt ′(4)
and U(t) is the time evolution operator of this system. Moreover U satisfies the parallel transport condition, which is
ψ k |U † (t)U̇ (t)|ψ k = 0, k = 1, · · · , N.
In addition, the usual mixed state geometric phase factor [15] takes the following form
γ = Φ[ Σ N k=1 λ k ψ k |U(τ )|ψ k e −iδ k ](5)
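For a time-independent Hamiltonian the integrand of Eq. (4) is constant, since then U † U̇ |ψ k gives −i ψ k |H|ψ k , so δ k reduces exactly to −τ ψ k |H|ψ k and Eq. (5) can be evaluated directly. A numerical sketch for a generic two-level system (the Hamiltonian, weights, and evolution time below are arbitrary illustrative choices, not the neutrino problem treated later):

```python
import numpy as np

def phase_factor(z):
    """Phi[z] = z/|z| for a nonzero complex number z."""
    return z / abs(z)

def mixed_state_phase(H, basis, weights, tau):
    """Eq. (5) for a time-independent Hamiltonian H.

    basis   : eigenvectors |psi_k> of the initial density matrix
    weights : eigenvalues lambda_k
    For static H the dynamical phase of Eq. (4) is exactly
    delta_k = -tau * <psi_k|H|psi_k>.
    """
    evals, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * evals * tau)) @ V.conj().T
    total = 0j
    for lam, psi in zip(weights, basis):
        delta = -tau * (psi.conj() @ H @ psi).real
        total += lam * (psi.conj() @ U @ psi) * np.exp(-1j * delta)
    return phase_factor(total)

# Sanity check: if the |psi_k> are eigenstates of H, the dynamical
# phases cancel the evolution phases and the factor is exactly 1.
H = np.array([[1.0, 0.3], [0.3, -1.0]])
_, V = np.linalg.eigh(H)
gamma = mixed_state_phase(H, [V[:, 0], V[:, 1]], [0.7, 0.3], tau=2.0)
```

For a basis that is not the energy eigenbasis the returned factor is a generically nontrivial unit-modulus number.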
The propagation of the helicity components (ν R , ν L ) T of a neutrino in a magnetic field obeys the following equation [14]
i d/dt (ν R , ν L ) T = [ V/2 , µ ν B e −iωt ; µ ν B e iωt , −V/2 ] (ν R , ν L ) T ,(6)
where T denotes matrix transposition, B = B x + iB y = Be iωt , µ ν represents the magnetic moment of a massive Dirac neutrino, and V is a term due to the neutrino mass as well as its interaction with matter. The instantaneous eigenvalues and eigenvectors of the Hamiltonian take the following form [7]
E 1 = + [ (V/2) 2 + (µ ν B) 2 ] 1/2 , |ψ 1 = (1/N ) ( µ ν B , −e iωt (V/2 − E 1 ) ) T (7)
and
E 2 = − [ (V/2) 2 + (µ ν B) 2 ] 1/2 , |ψ 2 = (1/N ) ( e −iωt (V/2 − E 1 ) , µ ν B ) T ,(8)
where the normalization factor N = [ (V/2 − E 1 ) 2 + (µ ν B) 2 ] 1/2 .
If this system is in a thermal state, the density operator can be written as
ρ = λ 1 |1 1| + λ 2 |2 2|(9)
where λ 1 = e −βE 1 /(e −βE 1 + e −βE 2 ) and λ 2 = e −βE 2 /(e −βE 1 + e −βE 2 ).
In addition, β = 1/(kT ), where k is the Boltzmann constant and T represents the temperature. In the next section, the mixed state geometric phases, both off-diagonal and diagonal, will be calculated.
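The weights λ 1 and λ 2 are ordinary Boltzmann factors, so their temperature dependence is easy to exhibit numerically (energies and temperatures below are in arbitrary consistent units with k = 1, and the values are illustrative only):

```python
import math

def thermal_weights(e1, e2, kT):
    """Boltzmann weights (lambda_1, lambda_2) of Eq. (9), beta = 1/(kT)."""
    beta = 1.0 / kT
    z1, z2 = math.exp(-beta * e1), math.exp(-beta * e2)
    z = z1 + z2
    return z1 / z, z2 / z

# Cold limit: the lower level E_2 = -E_1 dominates, lambda_2 -> 1.
cold = thermal_weights(1.0, -1.0, 0.01)
# Hot limit: lambda_1 = lambda_2 -> 1/2, the maximally mixed case
# relevant to the T -> infinity discussion of the diagonal phase.
hot = thermal_weights(1.0, -1.0, 1.0e6)
```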
III. MIXED STATE GEOMETRIC PHASE
The differential equation Eq. (6) can be exactly solved by the following transformation
(ν R , ν L ) T = e −iσ z ωt/2 (a, b) T ,(10)
where σ z is the Pauli matrix along the z direction, σ z = diag(1, −1).
By substituting Eq. (10) into Eq. (6), one can obtain
i d/dt (a, b) T = H̃ (a, b) T ,(11) where H̃ = µ ν B σ x + (1/2)(V − ω) σ z .
Furthermore, it can be written in the form
H̃ = (Ω/2) [ (2µ ν B/Ω) σ x + 0 · σ y + ((V − ω)/Ω) σ z ] ,(12)
where Ω = [ (2µ ν B) 2 + (V − ω) 2 ] 1/2 . Because H̃ is independent of time, Eq. (11) can be exactly solved; its time evolution operator takes the form
U = e −iH̃t .
Associating with Eq. (10), the time evolution operator for Eq. (6) is
U = e −iH̃t e iσ z ωt/2 .(13)
By substituting Eq. (12) into Eq. (13), the above operator can be written in the explicit form
U = [ cos(Ωt/2) − i((V − ω)/Ω) sin(Ωt/2) , −i(2µ ν B/Ω) sin(Ωt/2) ; −i(2µ ν B/Ω) sin(Ωt/2) , cos(Ωt/2) + i((V − ω)/Ω) sin(Ωt/2) ] diag( e iωt/2 , e −iωt/2 )
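Since H̃ = (Ω/2) n̂ · σ is time independent, e −iH̃t = cos(Ωt/2) 1 − i sin(Ωt/2) n̂ · σ, which is exactly the first factor above; multiplying by the rotating-frame factor diag(e iωt/2 , e −iωt/2 ) gives U. A numpy sketch that cross-checks this closed form against a direct matrix exponential (all parameter values are arbitrary illustrative choices):

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def U_of_t(t, mu_b, v, w):
    """Evolution operator of Eq. (13): U = exp(-i H~ t) exp(+i sigma_z w t/2)."""
    omega = np.hypot(2.0 * mu_b, v - w)  # Omega = sqrt((2 mu B)^2 + (V-w)^2)
    n_sigma = (2.0 * mu_b * SX + (v - w) * SZ) / omega
    left = np.cos(omega * t / 2) * np.eye(2) - 1j * np.sin(omega * t / 2) * n_sigma
    right = np.diag([np.exp(1j * w * t / 2), np.exp(-1j * w * t / 2)])
    return left @ right

def expm_hermitian(H, t):
    """exp(-i H t) for Hermitian H via its eigendecomposition."""
    evals, vecs = np.linalg.eigh(H)
    return vecs @ np.diag(np.exp(-1j * evals * t)) @ vecs.conj().T
```

At t = τ = 2π/Ω the first factor collapses to −1, which underlies the simple forms that follow in Eqs. (14)-(15).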
In order to calculate off-diagonal phase (2), by use of Eq. (3), we can work out
U 11 ≡ ψ 1 |U (t)( e −iδ 1 |ψ 1 ψ 1 | + e −iδ 2 |ψ 2 ψ 2 | )|ψ 1 = U 11 e −iδ 1 ,
where U 11 = ψ 1 |U(t)|ψ 1 . In order to simplify the result, let us consider an easier case:
when t = τ = 2π/Ω,
U 11 = −(1/N 2 )[ µ 2 ν B 2 e iωτ/2 + (V/2 − E 1 ) 2 e −iωτ/2 ] = U * 22 ,(14)
where * denotes the complex conjugate operation. By similar calculations, one can obtain
U 12 = (2/N 2 ) µ ν B (V/2 − E 1 ) sin(ωτ/2) e −i(ωτ +π/2) = −U * 21(15)
Furthermore, δ 1 can be explicitly calculated by substituting Eq. (13) and Eq. (7) into Eq.
(4), which takes the form
δ 1 = 1 N 2 2µ 2 ν B 2 V 2 − E 1 + V 2 − E 1 2 V 2 − ω − µ 2 ν B 2 V 2 − ω τ.(16)
By similar calculation, one can get
δ 2 = −δ 1 .(17)
Hence Eq. (3) can be explicitly calculated out,
U 11 U 12 U 21 U 22 = U 11 U 12 U 21 U 22 e −iδ 1 0 0 e −iδ 2 .(18)
Now, let us calculate the mixed state off-diagonal phase
γ (2) ρ 1 ρ 2 = Φ T r 2 a=1 U (τ ) √ ρ a ,(19)
where ρ 1 = λ 1 |1 1| + λ 2 |2 2| and ρ 2 = λ 1 |2 2| + λ 2 |1 1|. Under the basis of |ψ 1 and |ψ 2 ,
T r 2 a=1 U (τ ) √ ρ a = 2 b=1 ψ b | 2 a=1 U (τ ) √ ρ a |ψ b = √ λ 1 λ 2 U 11 2 + U 22 2 + U 12 U 21 .(20)
By substituting Eq. (18) into Eq. (20), we can obtain a simpler result
T r 2 a=1 U (τ ) √ ρ a = λ 1 λ 2 U 11 e −iδ 1 2 + U 22 e −iδ 2 2 + U 12 U 21 e −i(δ 1 +δ 2 )
By substituting Eq. (14) and Eq. (15) into the above equation, the off-diagonal geometric phase (19) can be explicitly calculated,
γ (2) ρ 1 ρ 2 = Φ{ V 2 − E 1 2 µ 2 ν B 2 (cos ωτ − 1) + √ λ 1 λ 2 [ V 2 − E 1 4 cos (ωτ + 2δ 1 ) + µ 4 ν B 4 cos (ωτ + 2δ 1 ) +2µ 2 ν B 2 V 2 − E 1 2 cos 2δ 1 ]}(21)
Hence, the corresponding phase is either π or 0, depending on the temperature and the magnetic field. Since it takes only these two discrete values, the off-diagonal phase is insensitive to temperature.
γ = Φ{ λ 1 e i( ωτ 2 −δ 1 ) + λ 2 e −i( ωτ 2 −δ 1 ) µ 2 ν B 2 + λ 1 e −i( ωτ 2 +δ 1 ) + λ 2 e i( ωτ 2 +δ 1 ) V 2 − E 1 2 }.(22)
From the above result, we can draw the conclusion that if λ 1 = λ 2 , in other words T → ∞, the corresponding phase may be π or 0. In other circumstances, it may vary continuously in an interval. In contrast to the off-diagonal one, the diagonal phase is more sensitive to temperature.
IV. CONCLUSIONS AND ACKNOWLEDGEMENTS
In this article, the time evolution operator of the neutrino spin in the presence of a uniformly rotating magnetic field is obtained. Under this time evolution operator, a thermal state of the neutrinos evolves. There then exist off-diagonal geometric phases for the mixed state, as well as diagonal ones. They have been calculated respectively, and analytic forms are achieved. In addition, a conclusion is drawn that the diagonal phase is more sensitive to temperature than the off-diagonal one.
D.B.Y. is supported by the NSF (Natural Science Foundation) of China under Grant No. 11447196. J.X.H. is supported by the NSF of China under Grant 11304037, the NSF of Jiangsu Province, China under Grant BK20130604, as well as the Ph.D. Programs Foundation of the Ministry of Education of China under Grant 20130092120041. And K. M. is supported by the NSF of China under Grant No. 11447153.
By substituting Eq. (14), Eq. (16) and Eq. (17) into Eq. (5), the diagonal geometric phase for the mixed state reads
J. Anandan. Non-adiabatic non-abelian geometric phase. Physics Letters A, 133(4-5):171-175, 1988.
M. V. Berry. Quantal phase factors accompanying adiabatic changes. Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 392:45-57, 1984.
A. Bohm, A. Mostafazadeh, H. Koizumi, Q. Niu, and J. Zwanziger. The geometric phase in quantum systems. Springer-Verlag, 2003.
D. Chruscinski and A. Jamioikowski. Geometric phases in classical and quantum mechanics, volume 36. Birkhauser, 2004.
Stefan Filipp and Erik Sjoqvist. Off-diagonal generalization of the mixed-state geometric phase. Physical Review A, 68(4):042112, 2003.
Stefan Filipp and Erik Sjoqvist. Off-diagonal geometric phase for mixed states. Physical Review Letters, 90(5):050403, 2003.
Sandeep Joshi and Sudhir R. Jain. Geometric phase for neutrino propagation in a transverse magnetic field. 2015.
D. Kult. Non-abelian generalization of off-diagonal geometric phases. EPL (Europhysics Letters), 78:60004, 2007.
Nicola Manini and F. Pistolesi. Off-diagonal geometric phases. Physical Review Letters, 85(15):3067, 2000.
Joseph Samuel and Rajendra Bhandari. General setting for Berry's phase. Physical Review Letters, 60:2339, 1988.
A. Shapere and F. Wilczek. Geometric phases in physics. 1989.
K. Singh, D. M. Tong, K. Basu, J. L. Chen, and J. F. Du. Geometric phases for nondegenerate and degenerate mixed states. Physical Review A, 67(3):032106, 2003.
Erik Sjoqvist, Arun K. Pati, Artur Ekert, Jeeva S. Anandan, Marie Ericsson, Daniel K. L. Oi, and Vlatko Vedral. Geometric phases for mixed states in interferometry. Physical Review Letters, 85(14):2845, 2000.
A. Yu. Smirnov. The geometrical phase in neutrino spin precession and the solar neutrino problem. Physics Letters B, 260(1):161-164, 1991.
D. M. Tong, E. Sjoqvist, L. C. Kwek, and C. H. Oh. Kinematic approach to the mixed state geometric phase in nonunitary evolution. Physical Review Letters, 93(8):080405, 2004.
D. M. Tong, Erik Sjoqvist, Stefan Filipp, L. C. Kwek, and C. H. Oh. Kinematic approach to off-diagonal geometric phases of nondegenerate and degenerate mixed states. Physical Review A, 71(3):032106, 2005.
F. Wilczek and A. Zee. Appearance of gauge structure in simple dynamical systems. Physical Review Letters, 52(24):2111-2114, 1984.
Di Xiao, Ming-Che Chang, and Qian Niu. Berry phase effects on electronic properties. Reviews of Modern Physics, 82(3):1959-2007, 2010.
Y. Aharonov and J. Anandan. Phase change during a cyclic quantum evolution. Phys. Rev. Lett., 58:1593, 1987.
|
[] |
[
"STRUCTURE AND HISTORY OF DARK MATTER HALOS PROBED WITH GRAVITATIONAL LENSING",
"STRUCTURE AND HISTORY OF DARK MATTER HALOS PROBED WITH GRAVITATIONAL LENSING"
] |
[
"A Lapi ",
"A Cavaliere "
] |
[] |
[] |
We test with gravitational lensing data the dark matter (DM) halos embedding the luminous baryonic component of galaxy clusters; our benchmark is provided by their two-stage cosmogonical development that we compute with its variance, and by the related 'α-profiles' we derive. The latter solve the Jeans equation for the self-gravitating, anisotropic DM equilibria, and yield the radial runs of the density ρ(r) and the velocity dispersion σ 2 r (r) in terms of the DM 'entropy' K ≡ σ 2 r /ρ 2/3 ∝ r α highlighted by recent N-body simulations; the former constrains the slope to the narrow range α ≈ 1.25 − 1.3. These physically based α-profiles meet the overall requirements from gravitational lensing observations, being intrinsically flatter at the center and steeper in the outskirts relative to the empirical NFW formula. Specifically, we project them along the l.o.s. and compare with a recent extensive dataset from strong and weak lensing observations in and around the cluster A1689. We find an optimal fit at both small and large scales in terms of a halo constituted by an early body with α ≈ 1.25 and by recent extensive outskirts, that make up an overall mass 10 15 M ⊙ with a concentration parameter c ≈ 10 consistent with the variance we compute in the ΛCDM cosmogony. The resulting structure corresponds to a potential well shallow in the outskirts as that inferred from the X rays radiated from the hot electrons and baryons constituting the intracluster plasma.
|
10.1088/0004-637x/695/2/l125
|
[
"https://arxiv.org/pdf/0903.1589v1.pdf"
] | 18,737,002 |
0903.1589
|
d24a70907aba0bcccf877c41fc935f9db41587b1
|
STRUCTURE AND HISTORY OF DARK MATTER HALOS PROBED WITH GRAVITATIONAL LENSING
9 Mar 2009 March 9, 2009
A Lapi
A Cavaliere
STRUCTURE AND HISTORY OF DARK MATTER HALOS PROBED WITH GRAVITATIONAL LENSING
Accepted by ApJ Letters. Preprint typeset using LaTeX style emulateapj v. 03/07/07. Draft version March 9, 2009. Subject headings: Dark matter - galaxies: clusters: general - galaxies: clusters: individual (A1689) - gravitational lensing - X-rays: galaxies: clusters
We test with gravitational lensing data the dark matter (DM) halos embedding the luminous baryonic component of galaxy clusters; our benchmark is provided by their two-stage cosmogonical development that we compute with its variance, and by the related 'α-profiles' we derive. The latter solve the Jeans equation for the self-gravitating, anisotropic DM equilibria, and yield the radial runs of the density ρ(r) and the velocity dispersion σ 2 r (r) in terms of the DM 'entropy' K ≡ σ 2 r /ρ 2/3 ∝ r α highlighted by recent N-body simulations; the former constrains the slope to the narrow range α ≈ 1.25 − 1.3. These physically based α-profiles meet the overall requirements from gravitational lensing observations, being intrinsically flatter at the center and steeper in the outskirts relative to the empirical NFW formula. Specifically, we project them along the l.o.s. and compare with a recent extensive dataset from strong and weak lensing observations in and around the cluster A1689. We find an optimal fit at both small and large scales in terms of a halo constituted by an early body with α ≈ 1.25 and by recent extensive outskirts, that make up an overall mass 10 15 M ⊙ with a concentration parameter c ≈ 10 consistent with the variance we compute in the ΛCDM cosmogony. The resulting structure corresponds to a potential well shallow in the outskirts as that inferred from the X rays radiated from the hot electrons and baryons constituting the intracluster plasma.
INTRODUCTION
The collisionless, cold dark matter (DM) particles that constitute the gravitationally dominant component of galaxy clusters are distributed in a 'halo' embedding the electromagnetically active baryons. The halo development under self-gravity from small density perturbations has been focused by several recent N-body simulations, with three main outcomes.
First, the growth is recognized (Zhao et al. 2003;Hoffman et al. 2007;Diemand et al. 2007) to comprise two cosmogonic stages: an early fast collapse including a few violent major mergers building up the halo 'body'; a later, quasi-equilibrium stage where the outskirts develop from the inside-out by minor mergers and smooth accretion (see Salvador-Solé et al. 2007). The transition occurs at the redshift z t when a DM gravitational well attains its maximal depth, or the radial peak of the circular velocity v 2 c ≡ GM/R its maximal height (see Li et al. 2007). This sharp definition of halo 'formation' also marks the time for the early gravitational turmoil to subside.
Second, the ensuing quasi-equilibrium structure is effectively expressed in terms of the functional K ≡ σ 2 r /ρ 2/3 that combines the density ρ and the radial velocity dispersion σ 2 r in the form of a DM 'entropy' (or rather 'adiabat'; see Bertschinger 1985, Taylor & Navarro 2001, Hoffman et al. 2007, Vass et al. 2008). This mimics the behavior of a thermodynamic entropy in that it increases in the halo bodies during the fast collapses and stays put during the subsequent quiet accretion.
Third, the simple run K(r) ∝ r α is empirically found to hold in the settled halo bodies, with slopes around 1.25 (see Navarro et al. 2008). This apparently universal halo feature allows recasting the pressure ρ(r) σ 2 r (r) ∝ K(r) ρ 5/3 (r) in terms of the density, to balance self-gravity for the equilibrium.
As to the latter we have used the isotropic Jeans equation to derive the 'α-profiles' for DM quantities like ρ(r) and σ 2 r (r), having ascertained that α is to lie within the narrow range 1.25 − 1.3 from a state-of-the-art semianalytic study of the cosmogonic halo development, see Lapi & Cavaliere (2009, hereafter LC09).
In this Letter we first refine these profiles to conditions of developing outskirts and anisotropic velocity dispersion; then we test them against recent data that join observations of strong and weak gravitational lensing (GL).
HALO DEVELOPMENT AND EQUILIBRIUM
Our framework will be provided by the standard Λ cosmology, i.e., a flat Universe with normalized matter density Ω M = 0.27, dark energy density Ω Λ = 0.73, and Hubble constant H 0 = 72 km s −1 Mpc −1 . To bridge the matter- to the dark energy-dominated era across the cosmological crossover at z ≈ 0.5, we shall express the redshift-time relation (1 + z) ∝ t −q in terms of the parameter q growing from 2/3 to 4/5.
2.1. The entropy slope from cosmogonic development
The evolutions of the current bounding radius R, of the circular velocity v 2 c , and of the entropy K are obtained by LC09 in terms of the halo mass M and its growth rate Ṁ from the simple scaling laws R ∝ M/Ṁ 2/3 , v 2 c ∝ Ṁ 2/3 , and K ∝ R M 1/3 . Whence straightforward algebra leads to express the entropy slope α ≡ d log K/d log R as
α = 1 + 1/(1 + 2 ǫ/q) ,(1)
in terms of the inverse growth rate ǫ ≡ −d log (1 + z)/d log M = q M/(Ṁ t).

[Fig. 1 caption: ...v c , entropy slope α, and concentration c for a current overall mass of 10 15 M ⊙ . In all panels: red lines refer to the history of an average cluster; blue ones refer to the main progenitor to illustrate the variance (see § 2.1 for details). Lines are dotted during the fast collapse of the halo body, and dashed during the slow accretion of the outskirts; big dots locate the cosmogonic transition, when α = 1.29 and 1.25 are seen to hold (bottom left panel) for the average and the main progenitor history, respectively. During the subsequent slow accretion such values are retained in the halo body, while they slowly decrease at the outskirts' boundary following α(z).]

With ǫ ≈ 1 marking the cosmogonic transition from fast to slow accretion as gauged on the running Hubble timescale, it
is seen from Eq. (1) that the range α ≈ 1.25 − 1.3 will apply to average halos that began their slow accretion in the corresponding interval z t ≈ 1.5 − 0.2. The range of α is narrow as the evolution of ǫ(t)/q(t) is slower than for ǫ(t) and q(t) separately, which explains why closely similar values of α are found in the halo bodies from different simulations.
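The narrowness of the range follows at a glance from Eq. (1): with ǫ ≈ 1 at the transition, α only moves from 1.25 in the matter-dominated era (q = 2/3) to about 1.29 in the dark energy-dominated era (q = 4/5). A one-line numerical check (illustrative):

```python
def entropy_slope(eps, q):
    """Eq. (1): alpha = 1 + 1/(1 + 2*eps/q)."""
    return 1.0 + 1.0 / (1.0 + 2.0 * eps / q)

alpha_matter = entropy_slope(1.0, 2.0 / 3.0)  # q = 2/3 -> alpha = 1.25
alpha_lambda = entropy_slope(1.0, 4.0 / 5.0)  # q = 4/5 -> alpha ~ 1.286
```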
Such a behavior is checked and refined in terms of the detailed evolution of ǫ(t); its average evolutionary track is obtained from integrating for M(t) the differential equation
Ṁ(M, t) = ∫ M 0 dM ′ (M − M ′ ) d 2 P M ′ →M /dM ′ dt ,(2)
with the state-of-the-art kernel detailed in Appendix A of LC09. We illustrate as red lines in Fig. 1 our outcomes for an average cluster with current overall mass M ≈ 10 15 M ⊙ ; we plot the redshift evolution of the mass M(z), of the circular velocity v c (z), of the entropy slope α(z). Note that our approach, which includes ellipsoidal collapse of the body and outskirts growth controlled by Λ, renders the peaked behavior of v 2 c (t) in remarkable agreement with the detailed simulations. For M ≈ 10 15 M ⊙ the transition occurs at z t ≈ 0.2, the entropy slope at z t is around α ≈ 1.3, and the outskirts are currently rudimentary.
But considerable variance arises from the stochastic nature of the individual growth histories. As an example of a variant, early-biased track we focus on the one associated with the 'main progenitor' that constitutes the main branch of a merging tree (illustrated, e.g., in Cavaliere & Menci 2007, their Fig. 1). Such a history obtains from the same Eq. (2), with the lower integration limit replaced by M/2; the results for a current mass M ≈ 10 15 M ⊙ are shown as blue lines in Fig. 1. Relative to the average, this history features a higher transition redshift z t ≈ 1.5, a less massive body with M ≈ 2 × 10 14 M ⊙ , an entropy slope α ≈ 1.25, and currently extensive outskirts. We find the occurrence of such biased halos relative to the average to be 0.125 : 1 on integrating the kernel of Eq. (2) over the corresponding two histories.
An imprint of the transition redshift z t is provided by the concentration parameter c(z), that in overall terms scales as [M(z)/M(z t )] 1/3 ; in fact, Zhao et al. (2003) and Wechsler et al. (2006) describe its increase for z < z t with the approximation c(z) ≈ 4 (1 + z t )/(1 + z), adopted in the bottom right panel of our Fig. 1. It is seen that for the average history the present concentration reads c ≈ 4, while for the main progenitor it takes on values c ≈ 10.
From overall halo development we turn now to profiles for the equilibrium following the transition time.
2.2. α-profiles from Jeans equilibrium
Physical profiles are derived from the above values of α inserted into the radial Jeans equation, with pressure ρσ 2 r ∝ r α ρ 5/3 and anisotropy described in terms of the standard Binney (1978) parameter β ≡ 1 − σ 2 θ /σ 2 r . In terms of the density slope γ ≡ −d log ρ/d log r, the Jeans equation may be recast to read
γ = (3/5) (α + v 2 c /σ 2 r ) + (6/5) β .(3)
When supplemented with the mass definition M(< r) ≡ 4π ∫ r 0 dr ′ r ′2 ρ(r ′ ) entering v 2 c , this constitutes an integro-differential equation for ρ(r), that by double differentiation reduces to a handy 2 nd order differential equation for γ (Austin et al. 2005, Dehnen & McLaughlin 2005).
With α = const and β = 0 (meaning isotropy), LC09 found that physical solutions, which we named 'α-profiles', exist for α ≤ 35/27 ≈ 1.296; the corresponding density runs steepen monotonically outwards and satisfy physical central and outer boundary conditions, respectively: a round minimum of the potential along with a round maximum of the pressure; a finite (hence definite) overall mass. In Fig. 2 we report as dashed lines the α-profiles for various quantities: density ρ(r), mass M(< r), circular velocity v_c^2(r), and velocity dispersion σ_r^2(r). The behavior of the density run (top left panel of Fig. 2) is highlighted by the analytic expressions of the slopes
γ_a ≡ (3/5) α ,    γ_0 ≡ 6 − 3α ,    γ_b ≡ (3/2) (1 + α) .    (4)
These start with the central (r → 0) value γ_a ≈ 0.75−0.78, progressively steepen to γ_0 ≈ 2.25−2.1 at the point r_0 that marks the halo main body, and steepen further into the outskirts to the value γ_b ≈ 3.38−3.44. Monotonic behavior and physical boundary conditions for the α-profiles are seen (cf. LC09) to imply a maximal value κ_crit(α) = v_c^2/σ_r^2 ≈ 2.6−2.5 in the body, at the point r_p ≳ r_0 where v_c^2(r) peaks (see Fig. 2, bottom right panel); there the slope reads γ_p = 3 (α + κ_crit)/5 ≈ 2.32−2.28 after Eq. (3).
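The quoted slope ranges follow at once from Eq. (4) and from γ_p = 3 (α + κ_crit)/5 implied by Eq. (3); a quick numerical check (a sketch, not part of the paper) for the two α values used in Fig. 2:

```python
def slopes(alpha):
    gamma_a = 3.0 * alpha / 5.0      # central slope (r -> 0), Eq. (4)
    gamma_0 = 6.0 - 3.0 * alpha      # slope at the body point r_0
    gamma_b = 1.5 * (1.0 + alpha)    # outer slope
    return gamma_a, gamma_0, gamma_b

for alpha, kappa_crit in [(1.25, 2.6), (35.0 / 27.0, 2.5)]:
    gamma_a, gamma_0, gamma_b = slopes(alpha)
    gamma_p = 3.0 * (alpha + kappa_crit) / 5.0
    # alpha = 1.25  -> gamma_a = 0.75, gamma_0 = 2.25, gamma_b ~ 3.38, gamma_p ~ 2.31
    # alpha = 35/27 -> gamma_a ~ 0.78, gamma_0 ~ 2.11, gamma_b ~ 3.44, gamma_p ~ 2.28
```

The pairing of κ_crit with α above is the one quoted in the text (larger κ_crit at smaller α).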
Thus the inner slopes are considerably flatter, so as to yield a smooth central pressure, while the outer one is steeper, so as to yield a definite overall mass, compared with the empirical NFW formula (Navarro et al. 1997). The latter, in fact, implies an infinite mass, and angled central pressure and potential.
The concentration parameter for the α-profiles is given in detail by c ≡ R/r_{−2} in terms of the radius r_{−2} where γ = 2. This may be viewed as a measure of central condensation (small r_{−2}) and/or of the outskirts' extension (large R).
3. CLUSTER PROFILES TESTED WITH GL OBSERVATIONS
A significant observational test requires going beyond the limitations of α = const and isotropy; we tackle these issues in turn.
3.1. Refined α-profiles

As to the latter, here we include on the r.h.s. of Eq. (3) the anisotropy term. This clearly steepens the density run for positive β, meaning radial dominance, as expected in the outskirts from infalling cold matter. Tangential components must develop toward the center, as expected from the increasing importance of angular momentum effects (see LC09), and as supported by numerical simulations (Austin et al. 2005; Hansen & Moore 2006; Dehnen & McLaughlin 2005). In detail, the latter suggest the effective linear approximation
β(r) ≈ β(0) + β′ [γ(r) − γ_a]    (5)
with β(0) ≳ −0.1 and β′ ≈ 0.2, limited to β(r) ≲ 0.5. We find (Fig. 2) the corresponding ρ(r) to be slightly flattened at the center by a weakly negative β(0), and considerably steepened into the outskirts where β(r) grows substantially positive. Specifically, the following simple rules apply: the slope β′ in Eq. (5) is found to drop out from the derivatives of the Jeans equation (Dehnen & McLaughlin 2005); the upper bound to α now reads α = 35/27 − 4β(0)/27; γ_a is modified into γ_a = 3α/5 + 6β(0)/5, while γ_0 and γ_b retain their form.
As to the former, minor issue, we include the slowly decreasing run α(r) (cf. Fig. 1) enforced within outskirts developing by slow mass accretion, as the outer scale is stretched out while the body stays put. Clearly, this affects the inner profile of ρ(r) little, since the Jeans equation (with its inner boundary conditions) works from the inside out, but it tilts ρ(r) down appreciably into the outskirts.
In Fig. 2 we represent as solid lines the α-profiles refined so as to include both the declining α(r) and the anisotropies discussed above.
3.2. Testing the α-profiles and their development
Now we turn to testing these refined profiles against the recent, extensive GL observations of the cluster A1689, which join strong and weak lensing to cover scales from 10^{−2} to 2.1 Mpc, the latter being the virial radius R; see Broadhurst et al. (2005), Halkola et al. (2006), Limousin et al. (2007), Umetsu & Broadhurst (2008). We focus on the dataset presented by the latter authors and refined in Lemze et al. (2008).
FIG. 3.-Surface density runs for the cluster A1689. Filled symbols represent the data by Lemze et al. (2008; see also Umetsu & Broadhurst 2008) from joint strong and weak GL observations. Blue lines illustrate fits to the data from our α-profiles with no prior (dot-dashed), or with priors: α ≥ 1.25 and β(0) = 0 (dashed); α ≥ 1.25 and β(0) ≥ −0.1 (solid). For comparison, the black dotted line shows the NFW fit. We report in Table 1 the values of the fitting parameters α, β(0), and c and of the corresponding minimum χ²/dof.

These observations concern the surface density, for which our benchmark is constituted by the refined α-profiles integrated over the l.o.s. at a projected distance s from the center
Σ(s) = 2 ∫_s^R dr r ρ(r) / √(r^2 − s^2) .    (6)
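The projection integral in Eq. (6) has an integrable square-root singularity at r = s, which the substitution r = s cosh u removes. A numerical sketch (not the paper's fitting pipeline) is below; R → ∞ is emulated by a large u_max, and the toy density ρ = r^{−3} (not an α-profile) is used only because it has the analytic result Σ(s) = 2/s^2.

```python
import math

def surface_density(rho, s, u_max=20.0, n=4000):
    """Sigma(s) = 2 * int_s^R dr r rho(r) / sqrt(r^2 - s^2), via r = s*cosh(u).
    After the substitution the integrand is simply r*rho(r) du; trapezoidal rule."""
    du = u_max / n
    total = 0.0
    for i in range(n + 1):
        u = i * du
        r = s * math.cosh(u)
        f = r * rho(r)  # integrand in the u variable
        total += (0.5 if i in (0, n) else 1.0) * f
    return 2.0 * total * du

sigma_unit = surface_density(lambda r: r ** -3, 1.0)  # analytic value: 2/s^2 = 2
```

Any other spherically symmetric ρ(r) can be projected the same way, which is how a model density run is compared with the lensing surface-density data.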
In Fig. 3 we illustrate as blue lines the outcomes of fits to the data in terms of our refined α-profiles; for the anisotropy we have adopted Eq. (5), with β(0) as the only effective parameter, see § 3.1. The dashed line represents the best fit for isotropic profiles (β = 0) with the prior α ≥ 1.25 from the two-stage development (see § 2.1); the solid line refers to the best fit for anisotropic profiles subject to the priors α ≥ 1.25 and β(0) ≥ −0.1 from simulations (see § 3.1); the dot-dashed line refers to the best fit for anisotropic profiles with no prior. For comparison, the black dotted line illustrates the fit with the NFW formula.
In Table 1 we report the corresponding values of the fitting parameters α, β(0), and c (with their 68% uncertainties), and the corresponding minimum χ²/dof. The physical α-profiles generally provide fits of comparable or better quality than the empirical NFW formula; note that an optimal fit obtains with no priors on α and β, but at the cost of ignoring the information on the former as provided by the two-stage development, and on the latter as provided by the numerical simulations.
We find that balanced fits to the surface density in A1689 require c ≈ 10; lower values would cause wide overshooting of the outer points, while higher ones would allow approaching these (with little impact on χ 2 owing to the large uncertainties) at the cost of overshooting the precise inner points.
A similar balancing has been found by Broadhurst et al. (2008) from fits with the empirical NFW formula, yielding generally high concentrations. These authors suggest that large values of c may be understood in terms of formation redshifts earlier than expected from the standard ΛCDM cosmogony, as quantitatively focused by Sadeh & Rephaeli (2008). From the perspective of our α-profiles, concentrations c ≈ 10 are strictly related to transition redshifts z_t ≈ 1.5 based on the state-of-the-art evolution of biased ΛCDM halos, see Eq. (2) and Fig. 1; on the same ground, we compute their occurrence to be bounded by about 13% in blind sampling.
(Table 1 lists, for each fit, the reference in Fig. 3, α, β(0), c, and χ²/dof; the NFW row has no entries for α and β(0) and c ≈ 12.2.)
On the other hand, such early biased halos tend to be favored with GL data. This is because strong GL observations are favored by centrally flat profiles and steep outskirts producing conspicuously large Einstein rings (Broadhurst & Barkana 2008). Meanwhile, weak GL observations require extensive outskirts to affect numerous background galaxies.
For A1689 the latter data with their uncertainties do not yet sharply constrain the fit, but we expect convergence toward the physical profiles as the uncertainties are narrowed down by improved control over the redshift distribution of background galaxies (see Medezinski et al. 2007, Limousin et al. 2007, Umetsu & Broadhurst 2008). On the other hand, more and more clusters are being covered by GL observations, which often find evidence of centrally flat density runs (Bradač et al. 2008, Sand et al. 2008, Richard et al. 2009), consistently with our physical α-profiles.
4. DISCUSSION AND CONCLUSIONS
To sum up, the DM halo benchmark we test with GL observations is comprised of a time and a space behavior, that we strictly link in the framework of ΛCDM cosmogony. As to time, we find a narrow range of the DM entropy slope α as the outcome of a two-stage development comprising an early fast collapse of the body followed by a slow, inside-out growth of the outskirts.
As to space, the physical α-profiles we derive feature density runs ρ(r) intrinsically flatter toward the center, and intrinsically steeper toward the outskirts as to yield a definite mass, relative to the singular NFW rendition of early N-body data. We find these runs to be stable with, or even sharpened by anisotropy.
These physical α-profiles improve at both small and large scales the fits to the GL data, including the extensively probed case of A1689. Here the present analysis requires a halo biased toward a main progenitor lineage, with non-standard concentration c ≈ 10 marking a body collapsed early at z_t ≈ 1.5 and late, extensive outskirts; we find such halos to comprise some 10% of the clusters.
Alternative proposals to relieve the tension of the GL observations with the profiles expected for average clusters include density cores flattened by degenerate pressure (Nakajima & Morikawa 2007), or concentrations enhanced by some 30% owing to sharply prolate triaxialities (see Hennawi et al. 2007;Oguri & Blandford 2009;Corless et al. 2009) which on the other hand would produce steeper central slopes. The α-profiles and the two-stage development dispense with such contrasting interpretations, providing sharp and consistent shapes linked with early-biased histories. This view clearly invites blind sampling of more clusters in GL.
On the other hand, as we discuss below, X-ray data will provide an independent line of evidence concerning profiles and concentrations. This is based on the other major component of clusters, i.e., the intracluster plasma (ICP) which settles to its own equilibrium within the DM potential well, and emits strong X rays by thermal bremsstrahlung (see Sarazin 1988).
ICP information from spectroscopy (yielding the temperature T) and X-ray brightness (yielding the squared number density n^2) is best combined in the form of the thermodynamic ICP entropy (adiabat) k(r) ≡ k_B T/n^{2/3}; this modulates the ICP equilibrium within the DM potential well, and throughout the cluster body follows a power-law run k(r) ∝ r^a with a ≈ 1.1.
In fact, Lapi et al. (2005) and Cavaliere et al. (2009) compute the slope a to be expected in the cluster outskirts from accretion of external gas shocked at about the virial radius R (see also Tozzi & Norman 2001). They find a ≈ 2.37 − 0.71/Δφ(c) in terms of the potential drop v_c^2(R) Δφ(c) from the turnaround radius to R; meanwhile, the ICP density follows n(r) ∝ r^{−g} with g ≈ 1.42 + 0.47/Δφ(c). Values Δφ ≈ 0.56 are seen to apply for α-profiles with concentration c ≈ 5, to yield a ≈ 1.1 and g ≈ 2.2 as measured in many clusters. But in clusters with extensive outskirts and higher concentrations the outer potential is shallower and Δφ(c) smaller; when c ≈ 10 one finds Δφ ≈ 0.47, yielding an intrinsically flatter a ≈ 0.85 (and a steeper g ≈ 2.4 in the absence of large central energy discharges). This theoretical expectation finds gratifying support in the flat a ≈ 0.8 observed in A1689 by Lemze et al. (2008).
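The numbers quoted here follow directly from the two scalings; a one-line sketch (with the Δφ values taken from the text):

```python
def icp_slopes(dphi):
    # outer entropy slope a (k(r) ~ r^a) and density slope g (n(r) ~ r^-g)
    # as functions of the potential drop dphi = Delta phi(c)
    a = 2.37 - 0.71 / dphi
    g = 1.42 + 0.47 / dphi
    return a, g

a_std, g_std = icp_slopes(0.56)  # c ~ 5:  a ~ 1.10, g ~ 2.26
a_cc, g_cc = icp_slopes(0.47)    # c ~ 10: a ~ 0.86, g ~ 2.42
```

This makes explicit why a flat measured entropy slope a ≈ 0.8 points to a small Δφ, i.e. a highly concentrated halo with shallow outer potential.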
We note that the α-profiles provide a physical benchmark also useful in the context of probing cold DM annihilations [∝ ρ^2(r)] or decays [∝ ρ(r)] through diffuse γ-ray emissions expected from the Galaxy center (e.g., Bertone et al. 2008), and the positron excess recently detected by PAMELA (e.g., Adriani et al. 2008). We will investigate the issue elsewhere.
Finally, since a two-stage development also applies to hot DM cosmogonies (though with c decreasing in cosmic time, see Wang & White 2008), we comment upon the case for cold DM on the basis of the radial run σ_r^2(r) = K(r) ρ^{2/3}(r) ∝ r^{α − 2γ(r)/3}. We expect cold DM halos to be marked by σ_r(r) falling down to a few hundred km s^{−1} into the outskirts, cf. Fig. 2 with the dynamical observations by Lemze et al. (2009, their Fig. 7). Such a behavior will provide evidence for a truly cold nature of the DM.
FIG. 1.-Evolutionary track of a halo's mass M, circular velocity v_c^2.

FIG. 2.-Radial runs for the α-profiles: density ρ, mass M, velocity dispersion σ_r, and circular velocity v_c; the profiles are normalized to 1 at the point r_0 where γ = γ_0 holds (see Eq. 4), with the main body spanning the range r ≲ 2 r_0. Curves are for α = 1.25 (blue) and 1.29 (red), in the isotropic (dashed) and anisotropic (solid) equilibria; we adopt the anisotropy given in Eq. (5) with β(0) = −0.1 and β′ = 0.2, the latter parameter being actually irrelevant (see § 3.1).
TABLE 1. Results of the fits to the surface density of A1689; columns give the reference in Fig. 3, α, β(0), c, and χ²/dof.
Work partially supported by Agenzia Spaziale Italiana (ASI). We thank T. Broadhurst, R. Fusco-Femiano, E. Medezinski, P. Natoli, and K. Umetsu for informative and useful discussions. We acknowledge an anonymous referee for constructive and helpful comments.
Adriani, O., et al. 2008, Nature, submitted (preprint arXiv:0810.4995)
Austin, C. G., et al. 2005, ApJ, 634, 756
Bertschinger, E. 1985, ApJS, 58, 39
Bertone, G., et al. 2008, preprint (arXiv:0811.3744)
Binney, J. 1978, MNRAS, 183, 779
Bradač, M., et al. 2008, ApJ, 681, 187
Broadhurst, T., et al. 2005, ApJ, 619, L143
Broadhurst, T., et al. 2008, ApJ, 685, L9
Broadhurst, T., & Barkana, R. 2008, MNRAS, 390, 1647
Bullock, J., et al. 2001, MNRAS, 321, 559
Cavaliere, A., Lapi, A., & Fusco-Femiano, R. 2009, ApJ, submitted
Cavaliere, A., & Menci, N. 2007, ApJ, 664, 47
Corless, V. L., King, L. J., & Clowe, D. 2009, MNRAS, in press (preprint arXiv:0812.0632)
Dehnen, W., & McLaughlin, D. E. 2005, MNRAS, 363, 1057
Diemand, J., Kuhlen, M., & Madau, P. 2007, ApJ, 667, 859
Halkola, A., Seitz, S., & Pannella, M. 2006, MNRAS, 372, 1425
Hansen, S. H., & Moore, B. 2006, NewA, 11, 333
Hennawi, J. F., et al. 2007, ApJ, 654, 714
Hoffman, Y., et al. 2007, ApJ, 671, 1108
Lapi, A., & Cavaliere, A. 2009, ApJ, 692, 174 [LC09]
Lapi, A., Cavaliere, A., & Menci, N. 2005, ApJ, 619, 60
Lemze, D., et al. 2008, MNRAS, 386, 1092
Lemze, D., et al. 2009, MNRAS, submitted (preprint arXiv:0810.3129)
Li, Y., et al. 2007, MNRAS, 379, 689
Limousin, M., et al. 2007, ApJ, 668, 643
Medezinski, E., et al. 2007, ApJ, 663, 717
Nakajima, T., & Morikawa, M. 2007, ApJ, 655, 135
Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, ApJ, 490, 493
Navarro, J. F., et al. 2008, preprint (arXiv:0810.1522)
Oguri, M., & Blandford, R. D. 2009, MNRAS, 392, 930
Oguri, M., et al. 2009, preprint (arXiv:0901.4372)
Richard, J., et al. 2009, A&A, in press (preprint arXiv:0901.0427)
Sadeh, S., & Rephaeli, Y. 2008, MNRAS, 388, 1759
Salvador-Solé, E., et al. 2007, ApJ, 666, 181
Sand, D. J., et al. 2008, ApJ, 674, 711
Sarazin, C. L. 1988, X-ray Emission from Clusters of Galaxies (Cambridge: Cambridge Univ. Press)
Taylor, J. E., & Navarro, J. F. 2001, ApJ, 563, 483
Tozzi, P., & Norman, C. 2001, ApJ, 546, 63
Umetsu, K., & Broadhurst, T. 2008, ApJ, 684, 177
Vass, I., et al. 2008, MNRAS, submitted (preprint arXiv:0810.0277)
Wang, J., & White, S. D. M. 2008, MNRAS, submitted (preprint arXiv:0809.1322)
Wechsler, R. H., et al. 2006, ApJ, 652, 71
Zhao, D. H., et al. 2003, MNRAS, 339, 12
THE FIBRES OF THE PRYM MAP OF ETALE CYCLIC COVERINGS OF DEGREE 7

Herbert Lange, Angela Ortega

6 Apr 2016 (arXiv:1604.01700v1 [math.AG]; DOI: 10.1090/conm/709/14293)

We study the Prym varieties arising from étale cyclic coverings of degree 7 over a curve of genus 2. We prove that these Prym varieties are products of Jacobians JY × JY of genus 3 curves Y with polarization type D = (1, 1, 1, 1, 1, 7). We describe the fibers of the Prym map between the moduli space of such coverings and the moduli space of abelian sixfolds with polarization type D, admitting an automorphism of order 7.
1. Introduction
Given a finite covering f : C̃ → C between smooth projective curves, one can associate to f an abelian subvariety of the Jacobian J C̃ by taking the connected component containing 0 of the kernel of the norm map Nm_f : J C̃ → JC; this subvariety P(f) is called the Prym variety of f. The restriction of the theta divisor of J C̃ to P(f) defines a polarization on the Prym variety, which is known to be twice a principal polarization when f is an étale double covering (see [9]). The assignment [f : C̃ → C] ↦ P(f) yields a Prym map between the moduli space of the corresponding coverings and the moduli space of abelian varieties of suitable dimension and polarization (not necessarily principally polarized). In very few cases the Prym maps are generically finite over their image (see [1], [5], [6], [7] and [8]), and often the structure of the fibers can be understood in geometrical terms ([4], [6]). We study the Prym varieties associated to a non-trivial cyclic étale covering f : C̃ → C of degree 7 over a curve C of genus 2. The corresponding Prym variety P(f) is a 6-dimensional abelian variety with polarization of type D = (1, 1, 1, 1, 1, 7). Let R_{2,7} denote the moduli space of these coverings and B_D the moduli space of abelian varieties of dimension 6 with a polarization of type D and an automorphism of order 7 compatible with the polarization. In [8] we proved that the Prym map Pr_{2,7} : R_{2,7} → B_D is dominant and, since both moduli spaces are 3-dimensional (see [8] for a proof), the map Pr_{2,7} is generically finite and in fact of degree 10. In this article we give a geometric description of the fibers of the Prym map Pr_{2,7}.
First we prove in Section 2 that for all [f : C̃ → C] ∈ R_{2,7} the associated Prym variety satisfies P(f) = JY × JY, where Y is a curve of genus 3 whose Jacobian admits multiplication by the totally real cubic subfield Q(ζ_7)_0 of the cyclotomic field Q(ζ_7) (Proposition 2.4); moreover, Y is uniquely determined by the Prym variety. In Section 3 we show that the curve Y admits a degree 7 covering Y → P^1 with ramification type (2, 2, 2, 1), that is, the covering is simply ramified at 3 points and unramified at 1 point over each branch point. Let C_3 denote the locus in M_3 of curves Y admitting a covering of this type. The main result is the following.
Theorem 1.1. The elements of a generic fiber of the Prym map Pr_{2,7} : R_{2,7} → B_D are in bijection with the set of degree 7 coverings f : Y → P^1 with ramification type (2, 2, 2, 1) over 6 general points that a curve Y ∈ C_3 admits.

Remark 1.2. In [8] we considered a partial compactification of Pr_{2,7} which is proper and generically finite of degree 10. So a finite fibre consists of 10 elements counted with multiplicities. This fact together with Theorem 1.1 shows that the number of degree 7 coverings f : Y → P^1 with ramification type (2, 2, 2, 1) over each of the 6 branch points is ≤ 10.
Acknowledgements. We would like to thank Alice Silverberg and Karl Rubin for their help with the proof of Lemma 2.7.
2. Description of the Prym variety as a product of Jacobians
Let C be a genus 2 curve and f : C̃ → C a non-trivial étale cyclic covering of degree 7. We shall give an explicit description of the associated Prym variety P(f). Part of the results in this section is contained in [10]. It is known that the hyperelliptic involution ι of C lifts, though non-canonically, to an involution j : C̃ → C̃. If we denote by σ the automorphism of order 7 on the cover C̃, then j and σ generate the dihedral group
D_7 = ⟨ j, σ | j^2 = σ^7 = 1, jσj = σ^{−1} ⟩.
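These relations can be verified concretely in a standard permutation model of D_7 acting on Z/7Z, with σ : x ↦ x + 1 and j : x ↦ −x (this model is our illustration, not part of the argument):

```python
MOD = 7
identity = tuple(range(MOD))

def compose(p, q):
    # (p o q)(x) = p(q(x)); permutations stored as length-7 tuples
    return tuple(p[q[x]] for x in range(MOD))

def power(p, n):
    out = identity
    for _ in range(n):
        out = compose(out, p)
    return out

def closure(gens):
    # subgroup generated by gens, by closing under left multiplication
    elems, frontier = {identity}, [identity]
    while frontier:
        p = frontier.pop()
        for g in gens:
            q = compose(g, p)
            if q not in elems:
                elems.add(q)
                frontier.append(q)
    return elems

sigma = tuple((x + 1) % MOD for x in range(MOD))  # x -> x + 1
j = tuple((-x) % MOD for x in range(MOD))         # x -> -x
```

One checks j^2 = σ^7 = 1, jσj = σ^{−1}, and that ⟨σ, j⟩ has order 14.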
Moreover, the composed map C̃ → P^1 is Galois with group D_7 ([3, Proposition 2.1]). Using the commutativity of the diagram
(2.1)
      C̃ --j--> C̃
      |f        |f
      v         v
      C --ι--> C
one checks that the images of the fixed points of j are precisely the fixed points of ι and, since f is étale, there is only one fixed point on each fiber over a fixed point of ι. Let g : C̃ → Y := C̃/⟨j⟩ be the double covering associated to the involution j. Then g is ramified exactly at the 6 fixed points of j, so by the Hurwitz formula Y is of genus 3. Notice that since g is a ramified covering, the map g^* : JY → J C̃ is injective, so we can regard JY as a subvariety of J C̃. As before, let P = P(f) be the Prym variety associated to f.
Proposition 2.1. The map ψ : JY × JY → P, (x, y) ↦ x + σ(y), is an isomorphism of abelian varieties.
Certainly ψ does not respect the canonical polarizations, since JY × JY is canonically principally polarized whereas the induced polarization Ξ on P is of type (1, . . . , 1, 7).
Proof. First we check that JY and σ(JY) are contained in P. Recall that P = Ker(1 + σ + ⋯ + σ^6)^0 and g^*(JY) = Im(1 + j). On J C̃ we have (1 + ⋯ + σ^6)(1 + j) = 1 + ⋯ + σ^6 + j + jσ + ⋯ + jσ^6 = 0, since this map is the norm map of C̃ → P^1. This shows that JY ⊂ P and, as σ acts on P, we also have σ(JY) ⊂ P.
Note that dim(JY × JY) = dim P, so in order to prove the proposition it suffices to show that ψ is injective. Recall that, since Y = C̃/⟨j⟩, the elements of JY ≃ g^*JY are fixed by j. Let (x, y) ∈ JY × JY be such that x + σ(y) = 0. Then x = j(x) = −jσ(y) = −σ^6 j(y) = −σ^6(y), and hence σ^2(x) = −σ(y) = x. Since σ has order 7, this shows that x is fixed by both j and σ. In particular x ∈ Fix(σ) ∩ P ⊂ J C̃[7]. Since σ(x) = x and f is étale, there exists an element z ∈ JC such that x = f^*(z). Further, j(x) = x and the commutativity of diagram (2.1) imply that x = j(x) = jf^*(z) = f^*ι(z) = −f^*(z) = −x, since on JC we have ι(z) = −z; that is, x ∈ J C̃[2]. In conclusion x ∈ J C̃[2] ∩ J C̃[7] = {0}, and therefore ψ is injective.
In order to describe the polarization on the product, let Ξ denote the polarization of type (1, 1, 1, 1, 1, 7) on P. Hence its pullback ψ^*Ξ is also of type (1, 1, 1, 1, 1, 7). Now Ξ is the restriction of the canonical polarization of J C̃ to P. So we may consider the polarization ψ^*Ξ as the pullback of the canonical polarization of J C̃ to JY × JY.
Consider the compositions

θ_i : JY → JY × JY --ψ--> P ⊂ J C̃,

where the first map is the natural embedding into the i-th factor. One checks that

θ_1 = g^* : JY → J C̃ and θ_2 = σ ∘ g^* : JY → J C̃.

Since the dual of g^* is the norm map

N_g = 1 + j : J C̃ → JY,

this implies that the dual maps are

θ̂_1 = ĝ^* = N_g = 1 + j : J C̃ → JY and θ̂_2 = ĝ^* ∘ σ̂ = N_g ∘ σ^{−1} = σ^6 + σj : J C̃ → JY,

since σ̂ = σ^{−1}. Now note that θ̂_1 θ_1 = N_g g^* = 2_JY and θ̂_2 θ_2 = N_g σ^{−1} σ g^* = 2_JY. Hence the matrix of φ_{ψ^*Ξ} : JY × JY → JY × JY is

φ_{ψ^*Ξ} = ( θ̂_1 θ_1   θ̂_1 θ_2 )  =  ( 2_JY            N_g σ g^* )
           ( θ̂_2 θ_1   θ̂_2 θ_2 )     ( N_g σ^{−1} g^*  2_JY      ).
So we have,
Proposition 2.2. Let g : C̃ → Y = C̃/⟨j⟩ be the natural map and identify J C̃ and JY with their duals via the canonical principal polarizations. If we denote by ϕ the isogeny ϕ = N_g σ g^* : JY → JY, then the polarization ψ^*Ξ on JY × JY is given by the matrix

φ_{ψ^*Ξ} = ( 2_JY   ϕ    )
           ( ϕ̂      2_JY ),

where ϕ̂ = N_g σ^{−1} g^* is the dual isogeny.
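The group-algebra identity N_g σ^{−1} = (1 + j)σ^{−1} = σ^6 + σj entering these formulas (it rests on jσ^{−1} = σj in D_7) can be checked mechanically in the permutation model σ : x ↦ x + 1, j : x ↦ −x on Z/7Z; formal sums of group elements are represented as Counters (an ad hoc sketch, not notation from the paper):

```python
from collections import Counter

MOD = 7
identity = tuple(range(MOD))

def compose(p, q):
    return tuple(p[q[x]] for x in range(MOD))

def power(p, n):
    out = identity
    for _ in range(n):
        out = compose(out, p)
    return out

def ring_mul(A, B):
    # multiply formal Z-linear combinations of group elements
    out = Counter()
    for p, a in A.items():
        for q, b in B.items():
            out[compose(p, q)] += a * b
    return out

sigma = tuple((x + 1) % MOD for x in range(MOD))
j = tuple((-x) % MOD for x in range(MOD))
sigma_inv = power(sigma, 6)

lhs = ring_mul(Counter([identity, j]), Counter([sigma_inv]))  # (1 + j) sigma^{-1}
rhs = Counter([sigma_inv, compose(sigma, j)])                 # sigma^6 + sigma*j
```

Multiplying lhs by σ on the right recovers 1 + j, matching θ̂_2 θ_2 = N_g σ^{−1} σ g^* = N_g g^*.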
In order to study the polarizations of type (1, . . . , 1, 7) on JY × JY, note that the endomorphisms of JY × JY are given by square matrices of degree 2 with entries in End(JY). The symmetry is with respect to the Rosati involution. Since the Rosati involution with respect to the canonical polarization of JY × JY is just transposition of the matrices, we are looking for the symmetric positive definite matrices

A := ( ρ_a(α)     ρ_a(β) )
     ( ρ_a(β)^t   ρ_a(δ) )

with determinant 7, where α, β, δ ∈ End(JY) and ρ_a denotes the analytic representation.
Proposition 2.3. If P admits a polarization of degree 7, then End(JY) ⊋ Z.
Proof. Suppose End(JY) = Z. The Rosati involution on JY × JY is transposition composed with the Rosati involution on the factors JY; for Y with End(JY) = Z the Rosati involution on JY is the identity. Let A be an endomorphism of degree 7, given by the matrix A above. Then α, β and δ are integers, ρ_a(α) = α · 1_{C^3} = diag(α, α, α), etc., and we have

7 = det A = det ( diag(α, α, α)   diag(β, β, β) )
                ( diag(β, β, β)   diag(δ, δ, δ) )  = (αδ − β^2)^3.

Since this equation does not admit an integer solution, we get a contradiction.
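The final step amounts to the fact that 7 is not a perfect cube, so αδ − β^2 would have to be an integer cube root of 7; a brute-force confirmation over a finite window (a sketch, purely illustrative):

```python
def has_integer_solution(target=7, window=20):
    # search for integers alpha, beta, delta with (alpha*delta - beta^2)^3 == target
    return any(
        (a * d - b * b) ** 3 == target
        for a in range(-window, window + 1)
        for b in range(-window, window + 1)
        for d in range(-window, window + 1)
    )
```

For contrast, target = 8 is solvable (e.g. α = 2, β = 0, δ = 1 gives αδ − β^2 = 2).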
Recall that R_{2,7} is an irreducible 3-dimensional variety and the Prym map is generically finite onto B_D. So B_D is also of dimension 3. Since every element of B_D is isomorphic to JY × JY for some curve Y of genus 3, and since the number of decompositions P = JY × JY is at most countable, we get a 3-dimensional algebraic set, say V, of curves Y such that JY × JY admits a polarization of degree 7. In fact, as a consequence of the results in this paper, V is even a variety.
Proposition 2.4. The Jacobians JY of all curves Y ∈ V admit real multiplication by the totally real cubic number subfield Q(ζ_7)_0 of the cyclotomic field Q(ζ_7).
Proof. Since by Proposition 2.3 End_Q(JY) ⊋ Q, the Jacobian admits either real, quaternion or complex multiplication (in the more general sense of [2, Section 9.6]). But an abelian variety with quaternion multiplication is even-dimensional ([2, 9.4 and 9.5]). If JY admitted complex multiplication by a skew field of degree d^2 over a totally complex quadratic extension of a totally real number field of degree e_0, we would have (see [2, 9.6]) 3 = d^2 e_0 m for some integer m ≥ 1. So d = 1 and e_0 = 3 (by Proposition 2.3). Then JY would admit complex multiplication by a number field. Since there are only countably many such abelian varieties, we conclude that JY admits multiplication by a totally real number field F of degree, say, e.
Then we have 3 = em for some positive integer m [2, § 9.2], and Proposition 2.3 implies e = 3. So JY × JY admits multiplication by M_2(F), with F a totally real cubic number field. On the other hand, P admits multiplication by the cyclotomic field Q(ζ_7), and this is a subfield of M_2(F) only if F is the totally real cubic subfield Q(ζ_7)_0 of Q(ζ_7). Therefore F = Q(ζ_7)_0.
As a consequence of the description of the polarization in Proposition 2.1 we have
P (f ) ≃ A × B.
We have to show that there is an automorphism α of P (f ) with α(JY ) = A.
Recall that an element ǫ ∈ End(P(f)) is idempotent if ǫ^2 = ǫ. Now the direct factors of P(f) correspond bijectively to the non-trivial idempotents of the endomorphism ring of P(f), i.e. of M_2(O). Namely, if A is a direct factor, then the composition ǫ of the maps

P(f) ≃ A × B --p_1--> A --i_1--> A × B ≃ P(f)
is an idempotent of End(P (f )). Here p 1 and i 1 are the natural projection and inclusion. Conversely, if ǫ is an idempotent, the factor A is given by the kernel of the endomorphism 1 − ǫ. Moreover, if ǫ ′ = αǫα −1 with an automorphism α of P (f ), then
α(1 − ǫ)α −1 = 1 − ǫ ′ .
Hence ǫ and ǫ ′ correspond to isomorphic direct factors. It suffices to show that all nontrivial idempotents (i.e. different from 0 and 1) are conjugate to each other, which is the content of the following lemma.
3. The number of isomorphism classes of coverings f
Let f : C̃ → C be an étale cyclic covering of degree 7 of a smooth curve of genus 2. As we saw in Section 2, the lifting j of the hyperelliptic involution ι of C is not unique; in fact there are exactly 7 liftings j_0, . . . , j_6, which are the 7 involutions of D_7. If g_i : C̃ → Y_i := C̃/⟨j_i⟩ denotes the quotient map given by the involution j_i := jσ^i (so g_0 = g), we have the following cartesian diagram.
(3.1)
          C̃
       f ↙   ↘ g_i
       C        Y_i
       h ↘   ↙ f_i
          P^1
In the previous section we saw that Y_i is of genus 3 and g_i is ramified at exactly one point over each Weierstrass point of C. Since f is étale, the commutativity of (3.1) implies that the branch points of f_i : Y_i → P^1 coincide with the branch points of h, and each branch point is of ramification type (2, 2, 2, 1), i.e. f_i is simply ramified at 3 points and unramified at 1 point over the branch point. So the branch points of f_i coincide for all i. If p_1, . . . , p_6 ∈ P^1 are the branch points, then the permutations corresponding to the fibers (f_i)^{−1}(p_j) are pairwise conjugate within D_7 for all i = 0, . . . , 6 and fixed j, with the same conjugation for j = 1, . . . , 6, and the non-ramified points of f_i over each p_j are pairwise different for i = 0, . . . , 6.
Note that the coverings f i correspond to conjugate subgroups of D 7 . Hence the f i : Y i → P 1 are isomorphic coverings.
Lemma 3.1. For any general smooth curve C of genus 2 with double covering h : C → P^1, there is a canonical bijection between the sets of (a) isomorphism classes of étale cyclic coverings f : C̃ → C of degree 7, and (b) isomorphism classes of degree 7 coverings f : Y → P^1 of ramification type (2, 2, 2, 1) over each of the 6 branch points.
Proof. We saw above that any covering f in (a) gives 7 coverings in (b), which are all isomorphic to each other. Conversely, let f : Y → P^1 be one of the coverings in (b). Since C is general, the monodromy group G of f coincides with the Galois group of the Galois closure of Y → P^1. Define C̃ := C ×_{P^1} Y. According to the Lemma of Abhyankar [11, Lemma 2.14] the projection C̃ → C is étale. So C̃ is smooth, and it is easy to see that it is Galois over P^1 with Galois group ≃ D_7. Hence C̃ is the Galois closure of f. In particular C̃ → C is cyclic étale of degree 7. Clearly both constructions are inverse to each other, and isomorphic coverings in (a) correspond to isomorphic coverings in (b).
Given a covering f : C̃ → C corresponding to a general element in R_{2,7}, by Lemma 3.1 there is a corresponding degree 7 map Y → P^1 of ramification type (2, 2, 2, 1) and, according to Proposition 2.1, the Prym variety P(f) decomposes as

P(f) = JY × JY.
Now there are 7^4 − 1 = 2400 non-trivial 7-torsion points in JC. Since a cyclic étale covering C̃ → C of degree 7 is given by a cyclic subgroup of order 7 of JC[7], there are exactly 2400/6 = 400 isomorphism classes of cyclic étale coverings C̃ → C. Let p_1, . . . , p_6 be the branch points of the hyperelliptic covering h : C → P^1. Let N be the number of isomorphism classes of coverings f : Y → P^1 branched exactly over p_1, . . . , p_6 with ramification type (2, 2, 2, 1). If we denote by β : R_{2,7} → M_2 the forgetful map onto the moduli space of smooth curves of genus 2, and if the intersection Pr_{2,7}^{−1}(P(f)) ∩ β^{−1}([C]) consists of d elements, then Lemma 3.1 implies that
(3.2) 400 = d · N.
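The two counts feeding into (3.2) can be checked by brute force. The following is a small sketch of ours (not from the paper), identifying the 7-torsion group JC[7] of the genus-2 Jacobian with the vector space (Z/7)^4:

```python
# Brute-force check of the counts above: identify JC[7] with (Z/7)^4;
# then 7^4 - 1 = 2400 nontrivial 7-torsion points, and each cyclic
# subgroup of order 7 contains exactly 6 of them.
from itertools import product

points = [v for v in product(range(7), repeat=4) if any(v)]
assert len(points) == 7**4 - 1  # 2400

# Pick one canonical generator per subgroup: scale each vector so that
# its first nonzero coordinate becomes 1.
subgroups = set()
for v in points:
    i = next(j for j, x in enumerate(v) if x)
    inv = pow(v[i], 5, 7)  # inverse mod 7, since x^6 = 1 for x != 0
    subgroups.add(tuple((x * inv) % 7 for x in v))
assert len(subgroups) == 400  # 400 cyclic etale covers of C
```

The normalization step uses that two nonzero vectors span the same line exactly when they differ by a scalar in (Z/7)*.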
Let p 1 , . . . , p 6 denote 6 points of P 1 in general position. We want to compute the number N of isomorphism classes of degree-7 coverings with ramification type (2, 2, 2, 1) exactly over the points p 1 , . . . , p 6 .
Let π 1 denote the monodromy group of f with base point p ≠ p i for all i = 1, . . . , 6. We consider π 1 as a subgroup of the symmetric group S 7 of degree 7 acting on the set {1, . . . , 7}. We need the following 2 trivial lemmas (which can of course be formulated for any odd prime p instead of 7). Lemma 3.2. Let a and b be involutions of S 7 such that s := ab is a cycle of order 7; then the group generated by a and b is isomorphic to D 7 . Conversely, any subgroup of S 7 isomorphic to D 7 is of this type.
Proof. For the first assertion we have to show that asa = s −1 . But
as = aab = b = b −1 = b −1 a −1 a = s −1 a.
The second assertion is obvious. Proof (of Lemma 3.3). It is easy to check that if a or b consists of fewer than 3 disjoint transpositions, or if one transposition occurring in a also occurs in b, then s = ab is not of order 7, which contradicts Lemma 3.2. For the last assertion note that if 2 involutions fixed the same number i, then all involutions, and hence all elements of the group, would fix i.
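The statements of Lemmas 3.2 and 3.3 are small enough to verify exhaustively by machine. A sketch of ours (not from the paper, written 0-indexed with ordinary composition, which does not affect the statements):

```python
# Exhaustive check of Lemmas 3.2 and 3.3 inside S_7: whenever a, b are
# involutions with ab of order 7, each fixes exactly one point, the
# fixed points are distinct, and <a, b> has exactly 14 elements.
from itertools import permutations

ident = tuple(range(7))

def mult(p, q):  # composition (p after q) of permutations of {0,...,6}
    return tuple(p[q[i]] for i in range(7))

def order(p):
    q, n = p, 1
    while q != ident:
        q, n = mult(q, p), n + 1
    return n

involutions = [p for p in permutations(range(7))
               if p != ident and mult(p, p) == ident]

n_pairs = 0
for a in involutions:
    for b in involutions:
        s = mult(a, b)
        if order(s) != 7:
            continue
        n_pairs += 1
        # Lemma 3.3: each involution fixes exactly one point, distinct ones
        fa = [i for i in range(7) if a[i] == i]
        fb = [i for i in range(7) if b[i] == i]
        assert len(fa) == 1 and len(fb) == 1 and fa != fb
        # Lemma 3.2: <a, b> = {s^k} united with {a s^k}, dihedral of order 14
        elems, p = set(), ident
        for _ in range(7):
            elems.add(p); elems.add(mult(a, p)); p = mult(p, s)
        assert len(elems) == 14 and b in elems

# 120 dihedral subgroups, each with 7 * 6 ordered pairs of distinct reflections
assert n_pairs == 120 * 42
```

Note that the filter "order(ab) = 7" is exactly the hypothesis of Lemma 3.2, so the loop body runs over all generating pairs of all dihedral subgroups of order 14.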
We give an example; it is not necessary for the sequel, but perhaps makes the following proposition clearer. We may still conjugate the subgroups with the transpositions (12), (34) and (56). One checks that (12)s 1 (12) = s 5 ^6 ∈ G 5 , (34)s 1 (34) = s 4 ∈ G 4 , (56)s 1 (56) = s 2 ∈ G 2 , (12)s 8 (12) = s 12 ^6 ∈ G 12 , (34)s 8 (34) = s 9 ∈ G 9 , (56)s 8 (56) = s 11 ∈ G 11 . This implies that G 1 , G 2 , G 4 and G 5 as well as G 8 , G 9 , G 11 and G 12 are pairwise conjugate. Hence it suffices to show that G 1 is conjugate to G 8 . But one easily checks that with the permutation p := (35)(46) we have pap −1 = a and pb 1 p −1 = b 8 ,
which gives the assertion. Proposition 3.6. Given 6 points of P 1 in general position, there are exactly 400 isomorphism classes of degree-7 coverings f : Y → P 1 with monodromy group D 7 , ramified of type (2, 2, 2, 1) over each of the 6 points.
Proof. Let p 1 , . . . , p 6 ∈ P 1 be 6 points in general position. According to Proposition 3.5 all subgroups isomorphic to D 7 are conjugate. For example, we may take the monodromy subgroup of the covering to be π 1 = ⟨a, b 1 ⟩ = {1, s 1 , . . . , s 1 ^6 , a, as 1 , . . . , as 1 ^6 }. One has to compute the number of conjugacy classes of 6-tuples of involutions c i of type (2,2,2,1) (associating c i to the point p i ) such that (3.3) holds: c 1 c 2 c 3 c 4 c 5 c 6 = 1. The elements of D 7 are either an involution (an odd permutation) or an element of order 7 (an even permutation). Hence the product of 5 involutions in D 7 is an odd permutation, so it is an involution. Thus, for any involutions c 1 , . . . , c 5 of π 1 , c 6 := c 1 · · · c 5 is an involution satisfying (3.3). Since any 2 involutions of π 1 generate the group, only 7 of the corresponding coverings are not connected, namely those with c 1 = · · · = c 6 . This gives 7^5 − 7 coverings. Two 6-tuples of involutions give isomorphic coverings if and only if they are conjugate.
In order to compute the set of conjugate classes of 6-tuples of involutions, recall that the only transitive subgroups of S 7 which are not contained in A 7 are subgroups isomorphic to
• the dihedral group D 7 ,
• the group L 7 = AGL 1 (F 7 ) of affine transformations of the line with 7 points, • S 7 itself. (For the convenience of the reader we give a proof of this statement: Let G be a proper subgroup of S 7 of this type. Clearly G is a soluble group, since it is not contained in A 7 . It is primitive, being a transitive permutation group of prime degree 7. Hence, according to [12, 7.2.7], it is conjugate to a subgroup of L 7 . Since D 7 is the only proper transitive subgroup of L 7 not contained in A 7 , this gives the statement.) The group D 7 is not normal in S 7 , but it is normal in L 7 . Since on the other hand L 7 is isomorphic to the automorphism group of D 7 , two 6-tuples of involutions give isomorphic coverings if and only if they are conjugate under an element of L 7 . Since L 7 is of order 42, we finally get (7^5 − 7)/42 = 400
isomorphism classes of degree 7 coverings Y → P 1 of ramification type (2, 2, 2, 1).
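The count in Proposition 3.6 can be replicated by machine. A verification sketch of ours (not from the paper): build one copy of D 7 inside S 7 , compute its normalizer (a copy of L 7 , of order 42), and count orbits of the admissible 6-tuples of involutions under conjugation:

```python
# Machine check of Proposition 3.6: the admissible 6-tuples of
# involutions of D_7 with product 1 fall into (7^5 - 7)/42 = 400
# orbits under conjugation by the normalizer of D_7.
from itertools import permutations, product
from functools import reduce

ident = tuple(range(7))

def mult(p, q):
    return tuple(p[q[i]] for i in range(7))

def inverse(p):
    q = [0] * 7
    for i, x in enumerate(p):
        q[x] = i
    return tuple(q)

# one copy of D_7: a = (12)(34)(56), b = (23)(45)(67), written 0-indexed
a = (1, 0, 3, 2, 5, 4, 6)
b = (0, 2, 1, 4, 3, 6, 5)
s = mult(a, b)
D7, p = set(), ident
for _ in range(7):
    D7.add(p); D7.add(mult(a, p)); p = mult(p, s)
invs = sorted(g for g in D7 if g != ident and mult(g, g) == ident)
assert len(D7) == 14 and len(invs) == 7

norm = [n for n in permutations(range(7))
        if {mult(mult(n, g), inverse(n)) for g in D7} == D7]
assert len(norm) == 42  # the normalizer is isomorphic to L_7
conj = [{c: mult(mult(n, c), inverse(n)) for c in invs} for n in norm]

orbits, count = set(), 0
for t5 in product(invs, repeat=5):
    if len(set(t5)) == 1:
        continue  # the 7 tuples with c_1 = ... = c_6 give disconnected covers
    t = t5 + (reduce(mult, t5),)  # c_6 = c_1 ... c_5 satisfies (3.3)
    count += 1
    # canonical orbit representative: minimum over all 42 conjugates
    orbits.add(min(tuple(m[c] for c in t) for m in conj))
assert count == 7**5 - 7   # 16800 connected coverings
assert len(orbits) == 400  # 400 isomorphism classes
```

Since any admissible tuple contains two distinct involutions, which generate D 7 , the normalizer acts freely on the tuples, so the orbit count agrees with the division by 42.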
Together with (3.2) we get Corollary 3.7. Let f : Y → P 1 be a degree 7 covering with monodromy group D 7 ramified of type (2, 2, 2, 1) over 6 general points. There is exactly one cyclic étale cover f̃ : C̃ → C such that the diagram (3.1) is commutative. In particular, P (f̃) ≃ JY × JY as polarized abelian varieties of type (1, . . . , 1, 7).
Proof of the main theorem
Let C 3 denote the locus in M 3 of genus 3 curves admitting a degree 7 covering f : Y → P 1 with ramification type (2, 2, 2, 1) over 6 general points in P 1 . It is an irreducible variety of dimension 3 since it is an image of the moduli space of pairs (Y, f ), which according to Lemma 3.1 are in bijection with the elements in R 2,7 .
Theorem 4.1. The elements of a generic fiber of the Prym map Pr 2,7 : R 2,7 → B D are in bijection with the set of degree 7 coverings f : Y → P 1 with ramification type (2, 2, 2, 1) over 6 general points that a curve Y ∈ C 3 admits.
Proof. By Corollary 3.7 the elements in the fiber of Pr 2,7 over P (f̃) ∈ B D are necessarily in different fibers of the forgetful map β : R 2,7 → M 2 . Together with the uniqueness of the decomposition P (f̃) ≃ JY × JY as polarized abelian varieties (see Theorem 2.6) this implies that the covering f̃ : C̃ → C is completely determined by the pair (Y, f ), since the branch points of f determine the genus 2 curve C. So the elements in R 2,7 with Prym variety isomorphic to JY × JY are in bijection with the coverings f : Y → P 1 with ramification type (2, 2, 2, 1) for a fixed curve Y ∈ C 3 .
It has been shown in [8, Theorem 1.1] that the degree of the Prym map Pr 2,7 is 10, so we get as an immediate corollary Corollary 4.2. There are at most 10 coverings f : Y → P 1 of degree 7 with ramification type (2, 2, 2, 1) for a curve Y ∈ C 3 .
. . . the connected component containing the zero element of the kernel of the norm map Nm f̃ : J C̃ → JC, [D] ↦ [f̃ ∗ (D)]. The resulting variety P (f̃) := (Ker Nm f̃ )^0 ⊂ J C̃ is the Prym variety of the covering f̃.
We identify J C̃ = (J C̃)^∧ and JY = (JY)^∧ via the canonical polarizations. The induced split polarization on JY × JY gives an identification (JY × JY )^∧ = JY × JY . Let θ : JY × JY → J C̃ be the embedding given by ψ followed by the inclusion P ↪ J C̃. For i = 1, 2 denote by θ i the composition
. . . of type (1, . . . , 1, 7) on the product P := JY × JY , we recall from [2, Section 5.2] the description of the set of polarizations of degree 7 on P . According to [2, Theorem 5.2.4] the canonical principal polarization on JY × JY induces a bijection between the sets of (a) polarizations of degree 7 on JY × JY and (b) totally positive symmetric endomorphisms of JY × JY with analytic norm 7.
Proposition 2.5. Let O denote the maximal order of the totally real cubic field End Q (JY ) and ϕ : JY → JY the isogeny of Proposition 2.2. The following equation admits a solution in O:
7 = det(4 · 1 − ρ a (ϕ ϕ)).
Theorem 2.6. For a general covering f̃ the decomposition P (f̃) ≃ JY × JY is unique up to automorphisms.
Proof. We may assume that the ring of endomorphisms of JY is the maximal order O of the field F , since both families, the family of Prym varieties P (f̃) and the family of Jacobians with multiplication in F , are irreducible of the same dimension 3. So End(P (f̃)) ≃ M 2 (O), the ring of matrices of degree 2 with entries in O. Let A be any direct factor of P (f̃). Then there is an abelian subvariety B of P (f̃) (necessarily isogenous to A) with
Lemma 2.7. Any 2 nontrivial idempotents of M 2 (O) are conjugate to each other.
Proof. Let ǫ ∈ M 2 (O) be an idempotent. Its minimal polynomial p(x) is different from x and x − 1, since ǫ is nontrivial. Let M := O 2 with ǫ acting by multiplication in the natural way. Let N 0 be its 0-eigenspace, i.e. the kernel of ǫ, and N 1 its 1-eigenspace, i.e. the kernel of ǫ − 1. We have ǫM ⊂ N 1 and (ǫ − 1)M ⊂ N 0 , since ǫ 2 − ǫ annihilates M. Moreover,
(2.2) N 0 ∩ N 1 = 0 and (2.3) N 0 + N 1 = M.
The first equation is clear. For the second equation note that every m ∈ M can be expressed as m = ǫm − (ǫ − 1)m. Neither N 0 nor N 1 can equal M, since otherwise we would have p(x) = x or x − 1. Hence by (2.3) neither can be zero, so N 0 and N 1 are both O-modules of rank 1. Since O is a principal ideal domain, there are m 0 , m 1 ∈ M such that N 0 = Om 0 and N 1 = Om 1 . It follows from (2.2) and (2.3) that {m 0 , m 1 } is an O-basis of M. With respect to this basis ǫ has the diagonal form diag(0, 1), which gives the assertion.
Remark 2.8. The proof works more generally for M 2 (O) with O any principal ideal domain.
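Remark 2.8 notes that the argument only uses that O is a principal ideal domain. A sketch of ours (not from the paper) replays the construction over the PID Z for all nontrivial idempotents with small entries:

```python
# Replay of the proof over O = Z: every nontrivial idempotent eps in
# M_2(Z) has trace 1 and determinant 0, and the basis {m_1, m_0} built
# from ker(eps - 1) and ker(eps) diagonalizes it to diag(1, 0).
from math import gcd

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def kernel_vec(M):
    # primitive integer kernel vector of a singular 2x2 integer matrix
    (p, q), (r, s) = M
    for v in ((q, -p), (s, -r)):
        if v != (0, 0):
            g = gcd(v[0], v[1])
            return (v[0] // g, v[1] // g)

checked = 0
for t in range(-3, 4):
    for u in range(-3, 4):
        for v in range(-3, 4):
            if u * v != t - t * t:
                continue  # idempotent: trace 1, det 0, i.e. t(1 - t) = uv
            eps = [[t, u], [v, 1 - t]]
            assert mul(eps, eps) == eps
            m0 = kernel_vec(eps)                    # generator of N_0
            m1 = kernel_vec([[t - 1, u], [v, -t]])  # generator of N_1
            P = [[m1[0], m0[0]], [m1[1], m0[1]]]    # columns m_1, m_0
            d = P[0][0] * P[1][1] - P[0][1] * P[1][0]
            assert d in (1, -1)  # {m_0, m_1} is indeed a Z-basis of Z^2
            Pinv = [[d * P[1][1], -d * P[0][1]],
                    [-d * P[1][0], d * P[0][0]]]
            assert mul(mul(Pinv, eps), P) == [[1, 0], [0, 0]]
            checked += 1
assert checked == 42  # all idempotents with entries in [-3, 3]
```

The assertion that det P = ±1 is precisely the basis claim in the proof of Lemma 2.7; kernels of integer matrices are saturated sublattices, so the primitive kernel vectors generate them.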
Lemma 3.3. Let a and b be involutions of S 7 such that ⟨a, b⟩ ≃ D 7 . Then a and b are products of 3 disjoint transpositions such that no transposition in a occurs in b. Equivalently a, respectively b, fixes exactly one number i, respectively j, and i ≠ j. Each of the numbers 1, . . . , 7 is fixed by exactly one of the 7 involutions of ⟨a, b⟩.
Example 3.4. Consider the following involutions of S 7 (with right action, i.e. the product is read from left to right): a = (12)(34)(56) and b = (23)(45)(67), with product s := ab = (1357642). According to Lemma 3.2 the group ⟨a, b⟩ is isomorphic to D 7 .
Proposition 3.5. All dihedral subgroups D 7 of S 7 are conjugate to each other.
Proof. Different labelings of the set on which S 7 acts give conjugate subgroups of S 7 . We label this set in such a way that a = (12)(34)(56). The labelling is unique up to the transpositions (12), (34) and (56). Moreover, according to Lemma 3.3 we may choose b such that it fixes 1. The involution b cannot contain the transposition (27), since otherwise s = ab would contain the cycle (172). So we remain with the following possibilities b i for b: b 1 = (23)(45)(67), b 2 = (23)(46)(57), b 3 = (23)(47)(56), b 4 = (24)(35)(67), b 5 = (24)(36)(57), b 6 = (24)(37)(56), b 7 = (25)(34)(67), b 8 = (25)(36)(47), b 9 = (25)(37)(46), b 10 = (26)(34)(57), b 11 = (26)(35)(47), b 12 = (26)(37)(45). We compute s 1 = ab 1 = (1357642), s 2 = ab 2 = (1367542), s 3 = ab 3 = (13742), s 4 = ab 4 = (1457632), s 5 = ab 5 = (1467532), s 6 = ab 6 = (14732), s 7 = ab 7 = (15762), s 8 = ab 8 = (1537462), s 9 = ab 9 = (1547362), s 10 = ab 10 = (16752), s 11 = ab 11 = (1637452), s 12 = ab 12 = (1647352). So exactly the groups G i := ⟨a, b i ⟩ with i = 1, 2, 4, 5, 8, 9, 11 and 12 are isomorphic to D 7 .
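The case analysis above is small enough to confirm exhaustively by machine. A sketch of ours (not from the paper) that enumerates every dihedral subgroup of order 14 in S 7 and checks that they form a single conjugacy class:

```python
# Exhaustive check of Proposition 3.5: collect every subgroup of S_7
# generated by two involutions whose product has order 7; there are
# 7!/42 = 120 such dihedral subgroups and they are all conjugate.
from itertools import permutations

ident = tuple(range(7))

def mult(p, q):
    return tuple(p[q[i]] for i in range(7))

def inverse(p):
    q = [0] * 7
    for i, x in enumerate(p):
        q[x] = i
    return tuple(q)

def order(p):
    q, n = p, 1
    while q != ident:
        q, n = mult(q, p), n + 1
    return n

perms = list(permutations(range(7)))
involutions = [p for p in perms if p != ident and mult(p, p) == ident]

dihedrals = set()
for a in involutions:
    for b in involutions:
        s = mult(a, b)
        if order(s) != 7:
            continue
        G, p = set(), ident
        for _ in range(7):
            G.add(p); G.add(mult(a, p)); p = mult(p, s)
        dihedrals.add(frozenset(G))
assert all(len(G) == 14 for G in dihedrals)
assert len(dihedrals) == 120  # = 5040/42

# every dihedral subgroup is a conjugate of any fixed one
G0 = next(iter(dihedrals))
conjugates = {frozenset(mult(mult(c, g), inverse(c)) for g in G0)
              for c in perms}
assert conjugates == dihedrals
```

The count 120 = 5040/42 also reconfirms that the normalizer of D 7 in S 7 has order 42, as used in the proof of Proposition 3.6.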
(3.3) c 1 c 2 c 3 c 4 c 5 c 6 = 1.
[1] A. Beauville: Prym varieties and Schottky problem. Invent. Math. 41 (1977), 146-196.
[2] Ch. Birkenhake, H. Lange: Complex Abelian Varieties. Second edition, Grundlehren der Math. Wiss. 302, Springer-Verlag (2004).
[3] Ch. Birkenhake, H. Lange: Moduli space of abelian surfaces with isogeny. Geometry and analysis (Bombay, 1992), 225-243, Tata Inst. Fund. Res., Bombay, 1995.
[4] R. Donagi: The fibers of the Prym map. Curves, Jacobians, and abelian varieties (Amherst, MA, 1990), 55-125, Contemp. Math. 136, Amer. Math. Soc., Providence, RI, 1992.
[5] R. Donagi, R. Smith: The structure of the Prym map. Acta Math. 146 (1981), 25-185.
[6] C. Faber: Prym Varieties of Triple Cyclic Covers. Math. Zeitschrift 199 (1988), 61-97.
[7] H. Lange, A. Ortega: Prym varieties of triple coverings. Int. Math. Res. Notices 22 (2011), 5045-5075. doi: 10.1093/imrn/rnq287.
[8] H. Lange, A. Ortega: The Prym map of degree-7 cyclic coverings. ArXiv:math.AG/1501.07511.
[9] D. Mumford: Prym varieties I. In L.V. Ahlfors, I. Kra, B. Maskit, and L. Nirenberg, editors, Contributions to Analysis. Academic Press (1974), 325-350.
[10] A. Ortega: Variétés de Prym associées aux revêtements n-cycliques d'une courbe hyperelliptique. Math. Z. 245 (2003), 97-103.
[11] H. Popp: Fundamentalgruppen algebraischer Mannigfaltigkeiten. Springer LNM 176 (1970).
[12] D.J.S. Robinson: A Course in the Theory of Groups. Second edition, Springer Grad. Texts in Math. 80 (1996).
doi: 10.2140/agt.2010.10.1317
https://arxiv.org/pdf/0811.4615v3.pdf
arXiv: 0811.4615
ON THE KONTSEVICH INTEGRAL FOR KNOTTED TRIVALENT GRAPHS
3 Dec 2008
Zsuzsanna Dancso
In this paper we construct an extension of the Kontsevich integral of knots to knotted trivalent graphs, which commutes with orientation switches, edge deletions, edge unzips, and connected sums. In 1997 Murakami and Ohtsuki [MO] first constructed such an extension, building on Drinfel'd's theory of associators. We construct a step by step definition, using elementary Kontsevich integral methods, to get a one-parameter family of corrections that all yield invariants well behaved under the graph operations above.
Introduction
The goal of this paper is to construct an extension of the Kontsevich integral Z of knots to knotted trivalent graphs. The extension is a universal finite type invariant of knotted trivalent graphs, which commutes with natural operations that are defined on the space of graphs, as well as on the target space of Z. These operations are changing the orientation of an edge; deleting an edge; unzipping an edge (an analogue of cabling of knots); and connected sum.
One reason this is interesting is that several knot properties (such as genus, unknotting number and ribbon property, for example) are definable with short formulas involving knotted trivalent graphs and the above operations. Therefore, such an operation-respecting invariant yields algebraic necessary conditions for these properties, i.e. equations in the target space of the invariant. This idea is due to Dror Bar-Natan, and is described in more detail in [BN2]. The extension of Z is the first example for such an invariant. Unfortunately, the target space of Z is too complicated for it to be useful in a computational sense. However, we hope that by finding sufficient quotients of the target space more computable invariants could be born.
The construction also provides an algebraic description of the Kontsevich integral (of knots and graphs), due to the fact that knotted trivalent graphs are finitely generated, i.e. there's a finite (small) set of graphs such that any knotted trivalent graph can be obtained from these using the above operations. This is described in more detail in [T]. Since the extension commutes with the operations, it is enough to compute it for the graphs in the generating set. As knots are special cases of knotted trivalent graphs, this also yields an algebraic description of the Kontsevich integral of knots.
Z was first extended to knotted trivalent graphs by Murakami and Ohtsuki in [MO]. When one tries to extend Z naively, replacing knots by knotted graphs in the definition, the result is neither convergent nor an isotopy invariant. Thus, one needs to apply renormalizations to make it converge, and corrections to make it invariant. Murakami and Ohtsuki use the language of q-tangles (parenthesized tangles), building on a significant body of knowledge about Drinfel'd's associators to prove that the extension is a well-defined invariant.
Another extension was constructed in 2007 by D. Cheptea and T. Q. T. Le in [CL], using even associators. They prove that their extension is an isotopy invariant (in part building on [MO]), that it commutes with orientation switches and edge deletions, and that it is unique if certain local properties are required; in this sense it is the strongest. They also conjecture that it coincides with Murakami and Ohtsuki's construction.
The main purpose of this paper is to eliminate the black box quality of the extended invariants, which results partly from the depth of the ingredients that go into them, and partly from the fact that the proofs needed for the constructions are spread over several papers ([MO], [LMMO], [LM] or [CL], [MO]).
Our construction differs from previous ones in that we build the corrected extension step by step on the naive one. After renormalizations to make the extension convergent, non-invariance errors arise. We fix some of these by introducing counter terms (corrections) that are precisely the inverses of the errors, and we show that thanks to some "syzygies", i.e. dependencies between the errors, all the other errors get corrected automatically. The proofs involve mainly elementary Kontsevich integral methods and combinatorial considerations.
As a result, we get a series of different corrections that all yield knotted graph invariants. One of these is the Murakami-Ohtsuki invariant. We note that as a special case, our construction also produces an associator.
Since the construction has many details and there's a risk of the main ideas getting lost among them, Section 2 is an "Executive summary" of the key points and important steps. All the details are omitted here, and follow later in the paper. The purpose of this section is to emphasize what is important and to provide an express lane to those familiar with the topic.
In an effort to make the construction and the inner workings of Z as transparent as possible, Section 3 is dedicated to reviewing the relevant results for the Kontsevich integral of knots. We state all the results that we need, mostly with proofs. The reason for reproducing the proofs is that we need to modify some of them for the results to carry over to graphs; therefore, understanding the knot case is crucial for the readability of the paper. Our main reference is a nice exposition by Chmutov and Duzhin, [CD]. This is the source of the theorems and proofs in Section 3, unless otherwise stated. We also use results from Bar-Natan's paper [BN1], which is another good reference for the Kontsevich integral of knots in general. Other references include Kontsevich's original paper [K].
Section 4 is dedicated to defining the necessary framework for the extension and trying (and failing) to naively extend Z to graphs.
In Section 5 we use a baby version of a renormalization technique from quantum field theory to eliminate the divergence that occurs in the case of the naive extension, and prove that the resulting not-quite-invariant has some promising properties.
The bulk of the difficulty lies in Section 6, where we need to find the appropriate correction factors to make the extension an isotopy invariant. This involves some computations and a series of combinatorial considerations, but is done in a fairly elementary way overall. It will turn out that the result will almost, but not quite commute with the unzip operation (and this will happen for any invariant that commutes with edge deletion), so we need to renormalize the unzip operation to get a fully well behaved invariant.
1.1. Acknowledgments. I am greatly indebted to my advisor, Dror Bar-Natan, for suggesting this project to me, weekly helpful discussions, and proofreading this paper several times.
Executive summary
This section contains the main points of the construction, but no details. All the details will follow later in the paper.
Trivalent graphs are graphs with three edges meeting at each vertex. We allow multiple edges, loops, and circles with no vertices. Knotted trivalent graphs are embeddings of trivalent graphs into R 3 , modulo isotopy. We require all edges to be oriented and framed.
There are four operations on knotted trivalent graphs: orientation switch, edge deletion, edge unzip, and connected sum. When deleting an edge, two vertices cease to exist. Unzip is an analogue of cabling, as shown: (figure: the unzip of an edge, replacing it by two parallel strands)
Connected sum depends on choices of edges to connect, and produces two new vertices:
(figure: the connected sum Γ # e,f Γ ′ , joining an edge e of Γ to an edge f of Γ ′ )
The Kontsevich integral of knots is defined by the following integral formula:
(figure: a Morse-embedded knot with levels t 1 < t 2 < t 3 < t 4 , a pairing point (z 1 , z 1 ′ ), and the resulting chord diagram D P )

$$Z(K)=\sum_{m=0}^{\infty}\ \int\limits_{\substack{t_{\min}<t_1<\cdots<t_m<t_{\max}\\ t_i\ \text{non-critical}}}\ \sum_{P=\{(z_i,z_i')\}} \frac{(-1)^{P_{\downarrow}}}{(2\pi i)^m}\,D_P \bigwedge_{i=1}^{m}\frac{dz_i-dz_i'}{z_i-z_i'}$$
We naively generalize this to knotted trivalent graphs (or trivalent tangles), by simply putting graphs in the picture and applying the same formula.
The integral now takes its values in chord diagrams on trivalent graph "skeletons". These are, just like ordinary chord diagrams, factored out by the 4T relation, and one additional relation, called vertex invariance, or VI:
(−1)^→ D 1 + (−1)^→ D 2 + (−1)^→ D 3 = 0,
where D 1 , D 2 , D 3 are the three diagrams obtained by sliding the chord ending onto each of the three edges meeting at the vertex, and the sign (−1)^→ is −1 if the edge the chord is ending on is oriented to be outgoing from the vertex, and +1 if it is incoming.
We do not mod out by the one term relation, since we don't want the invariant to be framing-independent.
The naively defined extension has promising properties: it preserves the factorization property (multiplicativity) of the ordinary Kontsevich integral, and it is invariant under horizontal deformations that leave the critical points and vertices fixed. It holds promise to be a universal finite type invariant of knotted trivalent graphs. But, unfortunately, it is divergent. The divergence is caused by "short chords" near the critical points (since we didn't factor out by the one term relation, which saved us in the knot case), and short chords near the vertices (which cause a divergence the same way, but we don't have any reason to factor them out).
We fix this by a technique borrowed from quantum field theory: we know exactly what the divergence is, so we "multiply by its inverse".
We choose a fixed scale µ and open up the strands at each vertex and critical point to width µ, at a small distance ε from the vertex, as shown for a vertex of a λ shape:
(figure: the two strands at a λ-shaped vertex opened up to width µ at distance ε below the vertex)
We compute the integral using this picture instead of the original one. We only allow chords between the dotted parts, but not chords coming from far away ending on them. The renormalized integral is the limit of this quantity as ε tends to zero. We do the same for vertices of a Y shape, and for critical points. We will have to insist that we embed the graph in a way that only contains vertices of either λ or Y shapes (i.e., either one edge is locally above the vertex and two are below, creating a λ-shape, or two are above and one below, creating a Y-shape; we do not allow a vertex that is a local minimum or maximum at the same time). There is of course such an embedding in any isotopy class; however, this will cause some invariance issues later.
The renormalized version is convergent (this is easy to prove), it retains the good properties of the naive extension, and it is invariant under rigid motions of critical points and vertices (i.e. deformations that do not change the number of critical points or the λ/Y type of the vertices).
There are natural orientation switch, edge delete, edge unzip, and connected sum operations defined on the chord diagrams on graph skeletons, and the renormalized integral commutes with all these operations.
Furthermore, it has sensible behavior under changing the scale µ: an easy computation shows that the value of Z will get multiplied by a simple exponential factor (a chord diagram on two strands) at all vertices and critical points.
The problem with the renormalized integral is that it is far from being an isotopy invariant: it does not tolerate deformations changing the number of critical points ("straightening humps"; the same problem arises in the case of the ordinary Kontsevich integral), or changing the shape of the vertices.
Similar to the knot case, but more complicated, we will have to introduce corrections to make Z an invariant. There are four "correctors" available: we can put prescribed little chord diagrams on each minimum, maximum, λ-vertex, and Y -vertex. We will call these u, n, λ and Y .
There are eight moves needed for the isotopy invariance of Z, and there are syzygies relating them. (figure: the eight moves) For example, the first syzygy tells us that once Z is invariant under move 1, it is automatically invariant under move 2. The second says that moves 1 and 3 imply 5, etc. In the end, we have reduced the problem to making Z invariant under moves 1, 3 and 4. The moves translate to equations between the correctors u, n, λ and Y.
We know how to solve the equation corresponding to move 1, as this was done even in the knot case.
The most difficult step is solving equations 3 and 4. These are obviously not independent, since the leftmost and rightmost sides are mirror images, and the Kontsevich integral of a mirror image is the mirror image. However, it is not true that any set of corrections that fix move 3 will fix move 4 automatically (i.e. there's no missing syzygy). What we need to do is to solve the two equations simultaneously.
We achieve this by showing that moves 3 and 4 are equivalent to the following equations of chord diagrams:
a · u · λ = Y = u · λ · a ,
where we "compute" a explicitly, i.e. we express it as the renormalized Kontsevich integral of the simple tangle . This is a fairly elementary, but tricky computation. All we use about a for the invariance though is that it is mirror symmetric, which is obvious from its definition. (We use more of its properties later.)
Looking at the above equations there is an obvious set of corrections that will make Z an isotopy invariant: λ = a^{−1}, Y = 1, u = 1, n = ν^{−1}, where n is determined by u the same way it is in the knot case.
Some attention needs to be paid to edge orientations, to make sure Z commutes with orientation switches.
To show that the resulting invariant commutes with edge delete and connected sum, we use another property of a that is almost obvious from its expression as a value of Z.
By rearranging the equations, we produce a one-parameter family of corrections all yielding isotopy invariants, one of these is the set of corrections used by Murakami and Ohtsuki. Finally, we show that unfortunately, no invariant will commute at the same time with the edge delete and edge unzip operations, as they are defined, so we renormalize the unzip operation to make Z fully wellbehaved. This amounts to observing what the error is and multiplying by its inverse.
A quick overview of the Kontsevich integral for knots
The reference for everything in this section, unless otherwise stated, is [CD].
3.1. Finite type invariants and the algebra A. The theory of finite type (or Vassiliev) invariants grew out of the idea of V. Vassiliev to extend knot invariants to the class of singular knots. By a singular knot we mean a knot with a finite number of simple (transverse) double points. The extension of a knot invariant f follows the rule

f( double point ) = f( overcrossing ) − f( undercrossing ).

A finite type (Vassiliev) invariant is a knot invariant whose extension vanishes on all knots with more than n double points, for some n ∈ N. The smallest such n is called the order, or type, of the invariant.
The set of all Vassiliev invariants forms a vector space V, which is filtered by the subspaces V n , the Vassiliev invariants of order at most n:
V 0 ⊆ V 1 ⊆ V 2 ⊆ ... ⊆ V n ⊆ ... ⊂ V
This filtration allows us to study the simpler associated graded space:
grV = V 0 ⊕ V 1 /V 0 ⊕ V 2 /V 1 ⊕ ... ⊕ V n /V n−1 ⊕ ...
The components V n /V n−1 are best understood in terms of chord diagrams.
A chord diagram of order n is a circle with a set of n chords all of whose endpoints are distinct (see the figure below). The actual shape of the chords and the exact position of endpoints are irrelevant, we are only interested in the pairing they define on the 2n cyclically ordered points:
The chord diagram of a singular knot S 1 → S 3 is the oriented circle S 1 with the pre-images of each double point connected by a chord.
Let C n be the vector space spanned by all chord diagrams of order n.
Let F n be the vector space of all C-valued linear functions on C n .
A Vassiliev invariant f ∈ V n determines a function [f ] ∈ F n defined by [f ](D) = f (K), where K is any singular knot whose chord diagram is D.
The fact that [f ] is well defined (does not depend on the choice of K) can be seen as follows: If K 1 and K 2 are two singular knots with the same chord diagram, then they can be projected on the plane in such a way that their knot diagrams coincide except possibly at a finite number of crossings, where K 1 may have an over-crossing and K 2 an under-crossing or vice versa. But since f is an invariant of order n and K 1 has n double points, a crossing flip does not change the value of f (since the difference would equal the value of f on an (n+1)-singular knot, i.e. 0).
The kernel of the map V n → F n is, by definition, V n−1 . Thus, what we have defined is an inclusion i n : V n /V n−1 → F n . The image of this inclusion (i.e. the set of linear maps on chord diagrams that come from Vassiliev invariants) is described by two relations:
4T, the four-term relation:
f ( ) − f ( ) + f ( ) − f ( ) = 0,
for an arbitrary fixed position of (n − 2) chords (not drawn here) and the two additional chords as shown.
This follows from the following fact about singular knots:
f ( ) + f ( ) + f ( ) + f ( ) = 0,
which is easy to show using the following isotopy: (figure omitted). The second relation is FI, the framing independence (or one-term) relation: f (D) = 0 for any diagram D containing an isolated chord, i.e. a chord that intersects no other chord. To explain the name of this relation, let us say a word about framed knots. A framing on a curve is a smooth choice of a normal vector at each point of the curve, up to isotopy. This is equivalent to "thickening" the curve into a band, where the band is always orthogonal to the chosen normal vector. A knot projection (knot diagram) defines a framing (the "blackboard framing"), where we always choose the normal vector that is normal to the plane we project to. Every framing (up to isotopy) can be represented as a blackboard framing, for some projection.
If we were to study framed knots, these are in one-to-one correspondence to knot diagrams modulo Reidemeister moves R2 and R3, as R1 adds or eliminates a twist, i.e. it changes the framing.
The framing independence relation arises from R1, meaning that the knot invariants we work with are independent of any framing on the knot. This will change later in the paper as we turn to graphs.
We define the algebra A as the direct sum of the vector spaces A n generated by chord diagrams of order n, considered modulo the FI and 4T relations. The multiplication on A is defined by the connected sum of chord diagrams, which is well defined thanks to the 4T relations (for details, see for example [BN1]).
The C-valued linear functions on A n are called weight systems of order n. The above construction shows that every Vassiliev invariant defines a weight system of the same order.
The famous theorem of Kontsevich, also known as the Fundamental Theorem of Finite Type Invariants, asserts that every weight system arises as the weight system of a finite type invariant. The proof relies on the construction of a universal finite type invariant, called the Kontsevich Integral Z, which takes its values in the graded completion of A. Given a weight system, one gets the appropriate finite type invariant by pre-composing with Z.
3.2. The definition of the Kontsevich Integral. Let us represent R 3 as a direct product of a complex plane C with coordinate z and a real line with coordinate t. Let the knot K be embedded in C × R in such a way that the coordinate t is a Morse function on K.
The Kontsevich integral of K is an element in the graded completion of A, defined by the following formula:
$$Z(K) = \sum_{m=0}^{\infty} \frac{1}{(2\pi i)^m} \int\limits_{\substack{t_{\min} < t_1 < \dots < t_m < t_{\max} \\ t_i \text{ non-critical}}} \;\sum_{P = \{(z_i, z'_i)\}} (-1)^{P_\downarrow}\, D_P \bigwedge_{i=1}^{m} \frac{dz_i - dz'_i}{z_i - z'_i}$$
In the formula, t min and t max are the minimum and maximum of the function t on K.
The integration domain is the m-dimensional simplex t min < t 1 < ... < t m < t max divided by the critical values into a number of connected components. The number of summands in the integral is constant in each of the connected components.
In each plane {t = t j } choose an unordered pair of distinct points (z j , t j ) and (z ′ j , t j ) on K, so that z j (t j ) and z ′ j (t j ) are continuous functions. P denotes the set of such pairs for each j. The integrand is the sum over all choices of P .
For a pairing P, P_↓ denotes the number of points (z_j, t_j) or (z'_j, t_j) in P where t decreases along the orientation of K. D_P denotes the chord diagram obtained by joining each pair of points in P by a chord, as the figure shows.
Over each connected component, z_j and z'_j are smooth functions. By $\bigwedge_{i=1}^{m} \frac{dz_i - dz'_i}{z_i - z'_i}$ we mean the pullback of this form to the simplex. The term of the Kontsevich integral corresponding to m = 0 is, by convention, the only chord diagram with no chords, with coefficient one, i.e. the unit of the algebra A.
3.3. Convergence. Let us review the proof of the fact that each integral in the above formula is convergent. Looking at the definition, one observes that the only way the integral may fail to be finite is for the (z_i − z'_i) in the denominator to get arbitrarily small near the critical points (the boundaries of the connected components of the integration domain). This only happens near a cup or cap of the knot; otherwise the minimum distance between strands is a lower bound for the denominator.
If a chord c k is separated from the critical value by another "long" chord c k+1 ending closer to the critical value, as shown below, then the smallness in the denominator corresponding to chord c k will be canceled by the smallness of the integration domain for c k+1 , hence the integral converges:
The integral for the long chord can be estimated as follows (using the notation of the picture):
$$\left|\int_{t_k}^{t_{crit}} \frac{dz_{k+1} - dz'_{k+1}}{z_{k+1} - z'_{k+1}}\right| \le C \left|\int_{t_k}^{t_{crit}} d(z_{k+1} - z'_{k+1})\right| = C\,\bigl|(z_{crit} - z_{k+1}(t_k)) - (z_{crit} - z'_{k+1}(t_k))\bigr| \le C'\,|z_k - z'_k|$$
for some constants C and C'. So the integral for the long chord is as small as the denominator for the short chord, and therefore the integral converges.
Thus, the only way a divergence can occur is the case of an isolated chord, i.e. a chord near a critical point that is not separated from it by any other chord ending. But, by the one term relation, chord diagrams containing an isolated chord are declared to be zero, which makes the divergence of the corresponding integral a non-issue.
3.4. Invariance. Since horizontal planes cut the knot into tangles, we will use tangles and their properties to prove the invariance of the Kontsevich integral in the class of Morse knots.
A tangle chord diagram is a tangle supplied with a set of horizontal chords considered up to a diffeomorphism of the tangle that preserves the horizontal fibration.
Multiplication of tangles induces a multiplication of tangle chord diagrams.
If T is a tangle, the space A T is a vector space generated by all chord diagrams on T , modulo the set of tangle one-and four-term relations:
The tangle one-term relation (or framing independence): a tangle chord diagram with an isolated chord is equal to zero in A_T.
The tangle 4T relation: Let T be a tangle consisting of n parallel vertical strands. Denote by t ij the chord diagram with a single horizontal chord connecting the i-th and j-th strands, multiplied by (−1) ↓ , where ↓ stands for the number of endpoints of the chord lying on downward-oriented strands.
The tangle 4T relation can be expressed as a commutator in terms of the t_{ij}:
$$[t_{ij} + t_{ik},\; t_{jk}] = 0.$$
One can check that by closing the three vertical strands into a circle respecting their orientations, the tangle 4T relation carries over into the ordinary 4T relation.
We take the opportunity here to mention a useful lemma, a slightly different version of which appears in D. Bar-Natan's paper [BN1], and a special case is stated in Murakami and Ohtsuki, [MO]. This is a direct consequence of the tangle 4T relations:
Lemma 3.1 (Locality). Let T be the tangle consisting of n parallel vertical strands, and let D be any chord diagram such that no chords end on the n-th strand. Let S be the sum $\sum_{i=1}^{n-1} t_{in}$ in A_T. Then S commutes with D in A_T.
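As a sanity check (our own illustration, not from the paper), one can send each t_{ij} to the permutation matrix of the transposition of strands i and j. This toy representation satisfies both the commutator form of the tangle 4T relation and the Locality Lemma, and can be verified numerically; the variable names and the representation itself are assumptions for illustration only.

```python
import numpy as np

def transposition(n, i, j):
    """Permutation matrix of the transposition (i j) on n strands (0-indexed)."""
    P = np.eye(n)
    P[[i, j]] = P[[j, i]]
    return P

n = 3
t12, t13, t23 = (transposition(n, i, j) for (i, j) in [(0, 1), (0, 2), (1, 2)])

def comm(A, B):
    """Matrix commutator [A, B] = AB - BA."""
    return A @ B - B @ A

# Tangle 4T relation in commutator form: [t12 + t13, t23] = 0.
assert np.allclose(comm(t12 + t13, t23), 0)

# Locality (Lemma 3.1) with n = 3: S = t13 + t23 collects all chords hitting
# strand 3, and commutes with any diagram supported away from it, e.g. t12.
S = t13 + t23
assert np.allclose(comm(t12, S), 0)
```

All orientations are taken upward here, so every sign (−1)^↓ is +1.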
The Kontsevich integral is defined for tangles the same way it is defined for knots.
By Fubini's theorem, it is multiplicative:
Z(T 1 )Z(T 2 ) = Z(T 1 T 2 ),
whenever the product T_1 T_2 is defined. This implies the important fact that the Kontsevich integral of a vertical connected sum of knots is the product (in the algebra A of chord diagrams) of the Kontsevich integrals of the summands; by the invariance results below, this generalizes to any connected sum. We will sometimes refer to this as the factorization property, or multiplicativity.
Proposition 3.2. The Kontsevich integral is invariant under horizontal deformations (deformations preserving the t coordinate) of the knot that leave the critical points fixed.
Proof. Let us decompose the knot into a product of tangles without critical points, and other ("thin") tangles containing one unique critical point.
The following lemma addresses the case of tangles without critical points. The proposition then follows from the lemma by taking a limit.
Lemma 3.3. Let T 0 be a tangle without critical points and T λ a horizontal deformation of T 0 into T 1 , such that T λ fixes the top and the bottom of the tangle. Then Z(T 0 ) = Z(T 1 ).
We will use this lemma as an ingredient without any modification, so we omit the proof, which uses Stokes' theorem and the fact that the differential form inside the integral is exact. Details can be found in [CD], for example.
The next lemma is the only one where we go into more detail than Chmutov and Duzhin in [CD]. We will need a modification of this proof in the graph case, so we felt it was important to use rigorous notation and touch on the fine points.
Proposition 3.4 (Moving critical points). Suppose T_0 and T_1 are two tangles that differ only in a thin needle (possibly twisted), as in the figure, such that each level {t = c} intersects the needle in at most two points and the distance between these is at most ε. Then Z(T_0) = Z(T_1).
Proof. Z(T 0 ) and Z(T 1 ) can only differ in terms with a chord ending on the needle. If the chord closest to the end of the needle connects the two sides of the needle (isolated chord), then the corresponding diagram is zero by the FI (1T) relation.
So we can assume that the chord closest to the needle's end is a "long chord"; suppose its endpoint belonging to the needle is (z_k, t_k). Then there is another choice for the k-th chord which touches the needle at the opposite point (z''_k, t_k), as the figure shows, and D_P is the same for these two choices.
The corresponding two terms appear in Z(T 1 ) with opposite signs due to (−1) P ↓ , and the difference of the integrals can be estimated as follows:
$$\left|\int_{t_{k-1}}^{t_c} d\ln(z_k - z'_k) - \int_{t_{k-1}}^{t_c} d\ln(z''_k - z'_k)\right| = \left|\ln \frac{z''_k(t_{k-1}) - z'_k(t_{k-1})}{z_k(t_{k-1}) - z'_k(t_{k-1})}\right| = \left|\ln\Bigl(1 + \frac{z''_k(t_{k-1}) - z_k(t_{k-1})}{z_k(t_{k-1}) - z'_k(t_{k-1})}\Bigr)\right| \le C\,|z''_k(t_{k-1}) - z_k(t_{k-1})| \le C\varepsilon,$$
where t c is the value of t at the tip of the needle, and C is a constant depending on the minimal distance of the needle to the rest of the knot.
If the next, (k − 1)-th chord is long, then the double integral corresponding to the k-th and (k − 1)-th chords is at most:
$$\left|\int_{t_{k-2}}^{t_c} \left(\int_{t_{k-1}}^{t_c} d\ln(z_k - z'_k) - \int_{t_{k-1}}^{t_c} d\ln(z''_k - z'_k)\right) d\ln(z_{k-1} - z'_{k-1})\right| \le C\varepsilon \left|\int_{t_{k-2}}^{t_c} d\ln(z_{k-1} - z'_{k-1})\right| = C\varepsilon \left|\ln \frac{z_{k-1}(t_c) - z'_{k-1}(t_c)}{z_{k-1}(t_{k-2}) - z'_{k-1}(t_{k-2})}\right| \le C C' \varepsilon,$$
where C' is another constant depending on the ratio of the biggest and smallest horizontal distance from the needle to the rest of the knot.
If the (k − 1)-th chord is short, i.e. it connects z k−1 and z ′ k−1 that are both on the needle, then we can estimate the double integral corresponding to the k-th and (k − 1)-th chords:
$$\left|\int_{t_{k-2}}^{t_c} \left(\int_{t_{k-1}}^{t_c} d\ln(z_k - z'_k) - \int_{t_{k-1}}^{t_c} d\ln(z''_k - z'_k)\right) \frac{dz'_{k-1} - dz_{k-1}}{z'_{k-1} - z_{k-1}}\right| \le C \left|\int_{t_{k-2}}^{t_c} \bigl(z''_k(t_{k-1}) - z_k(t_{k-1})\bigr) \frac{dz'_{k-1} - dz_{k-1}}{|z'_{k-1} - z_{k-1}|}\right| = C \left|\int_{t_{k-2}}^{t_c} d(z'_{k-1} - z_{k-1})\right| = C\,|z'_{k-1}(t_{k-2}) - z_{k-1}(t_{k-2})| \le C\varepsilon.$$
Continuing to go down the needle, we see that the difference between Z(T 0 ) and Z(T 1 ) in degree n is proportional to (C ′′ ) n ε, for a constant C ′′ = max{C, C ′ }, and by horizontal deformations we can make ε tend to zero, therefore the difference tends to zero. This proves the proposition.
This proves the invariance of the Kontsevich integral in the class of Morse knots: To move critical points, one can form a sharp needle using horizontal deformations only, then shorten or lengthen the needles arbitrarily, then deform the knot as desired by horizontal deformations.
However, Z is not invariant under "straightening humps", i.e. deformations that change the number of critical points, as shown below. (We note that straightening the mirror image of the hump shown is equivalent to this one; see Section 6 for details.)
To fix this problem, we apply a correction, using the following proposition:
Proposition 3.5. Let K and K ′ be two knots differing only in a small hump in K that is straightened in K ′ (as in the figure). Then
Z(K ′ ) = Z(K)Z( ).
This proposition is a consequence of the following lemma:
Lemma 3.6 (Faraway strands don't interact). Let K be a Morse knot with a distinguished tangle T, with t_bot and t_top being the minimal and maximal values of t on T. Then, in the formula for the Kontsevich integral, for those components whose projection on the t_j axis is contained in [t_bot, t_top], it is enough to consider pairings where either both points (z_j, t_j) and (z'_j, t'_j) belong to T, or neither does.

Proof. We can shrink the tangle T into a narrow box of width ε, and do the same for the rest of the knot between heights t_bot and t_top. It is easy to see that the value of the integral corresponding to "long" chords (connecting the tangle to the rest of the knot) then tends to zero.
Proof. (Of the Proposition.)
The proposition follows by choosing T to include just the hump, i.e. there will be no long chords connecting the hump to the rest of the knot in K ′ or in . Also, there cannot be any chords above or below the hump, since the highest (resp. lowest) of those would be an isolated chord.
Since the constant term of Z( ) is 1, it has a reciprocal in the graded completion of A (i.e. among formal infinite series of chord diagrams). Using this we can now define an honest knot invariant Z' by setting
$$Z'(K) = \frac{Z(K)}{Z(\ )^{c/2}},$$
where c is the number of critical points in the Morse embedding of K that we use to compute Z.
3.5. Universality. Here we state Kontsevich's theorem and the main idea of the proof, which will apply word for word in the case of the extension to graphs. A complete, detailed proof can be found in [CD] or [BN1], and in Kontsevich's paper [K].
Theorem 3.7. Let w be a weight system of order n. Then there exists a Vassiliev invariant of order ≤ n whose weight system is w, given by the formula
K → w(Z ′ (K)).
This property of Z ′ is referred to as being a universal finite type invariant.
Proof. (Sketch.) Let D be a chord diagram of order n, and K D a singular knot with chord diagram D. The theorem follows from the fact that Z ′ (K D ) = D + {higher order terms}.
Since the denominator of Z ′ always begins with 1 (the unit of A), it is enough to prove that
Z(K D ) = D + {higher order terms}.
Because of the factorization property and the fact that faraway strands don't interact (Lemma 3.6), we can think locally. Around a single double point, we need to compute the difference of Z on an overcrossing and an undercrossing. These can be deformed as follows:
Z( ) − Z( ) = Z − Z .
Since the crossings on the bottom are now identical, by the factorization property, it's enough to consider Z − Z .
Z equals 1 (the unit of A), as both z_i(t) and z'_i(t) are constant. In Z , the first term is 1, as always, so this cancels out in the difference. The next term is the chord diagram with one single chord, and this has coefficient $\frac{1}{2\pi i}\int_{t_{min}}^{t_{max}} \frac{dz - dz'}{z - z'} = 1$ by Cauchy's theorem. So the lowest degree term of the difference is a single chord with coefficient one. Now, putting K_D together, the lowest degree term in Z(K_D) will be a chord diagram that has a single chord for each double point, which is exactly D.
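The Cauchy's-theorem coefficient above can be checked numerically. Under a hypothetical parametrization (ours, not the paper's) in which the difference w(t) = z(t) − z'(t) winds once around 0 as t runs from 0 to 1, the discretized integral comes out to 1:

```python
import numpy as np

# Hypothetical parametrization: one strand makes a full turn around the other,
# so w(t) = z(t) - z'(t) winds once around the origin.
t = np.linspace(0.0, 1.0, 20001)
w = np.exp(2j * np.pi * t)

# Discretization of (1 / 2*pi*i) * integral of (dz - dz') / (z - z').
coeff = np.sum(np.diff(w) / w[:-1]) / (2j * np.pi)
assert abs(coeff - 1.0) < 1e-3  # equals 1 by Cauchy's theorem
```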
4. The naive extension
4.1. Knotted trivalent graphs. Let us first state the necessary definitions and basic properties.
A trivalent graph is a graph which has three edges meeting at each vertex. We will require that all edges be oriented. We allow multiple edges; loops (i.e. edges that begin and end at the same vertex); and circles (i.e. edges without a vertex).
A knotted trivalent graph is an isotopy class of embeddings of a fixed such graph in R 3 . (So in particular, knots and links are knotted trivalent graphs.) We will also require edges to be equipped with a framing, i. e. a choice of a normal vector field, which is smooth along the edges and is chosen so that the three normal vectors agree at the vertices. We can imagine the graph being a "band graph", with thickened edges, as shown in the picture.
Isotopy classes of knotted, framed graphs are in one to one correspondence with graph diagrams (projections onto a plane with only transverse double points preserving the over-and under-strand information at the crossings), modulo the Reidemeister moves R2, R3 and R4 (first defined for graphs in [Y]). R1 is omitted because we're working with framed graphs, R2 and R3 are the same as in the knot case. R4 involves moving a strand in front of or behind a vertex:
R4a :
R4b :
There are four operations defined on knotted, oriented, framed trivalent graphs:
Given a trivalent graph (knotted or not) γ and an edge e of γ, we can switch the orientation of e.
We can also delete the edge e, which means the two vertices at the ends of e also cease to exist to preserve the trivalence. To do this, it is required that the orientations of the two edges connecting to e at either end match.
Unzipping the edge e (see the figure below) means replacing it by two edges that are very close to each other-to do this we use the framing, and thus unzip is only well defined on a knotted framed graph. The two vertices at the ends of e will disappear. (It can be imagined as cutting the band of e in half lengthwise.) Again, the orientations have to match, i.e. the edges at the vertex where e begins have to both be incoming, while the edges at the vertex where e ends must both be outgoing.
Given two graphs with selected edges (Γ, e) and (Γ ′ , f ), the connected sum of these graphs along the two chosen edges is obtained by joining e and f by a new edge. For this to be well-defined, we also need to specify the direction of the new edge, the framing on it, and, thinking of e and f as bands, which side of the bands the new edge is attached to. To compress notation, let us declare that the new edge be oriented from Γ towards Γ ′ , have no twists, and, using the blackboard framing, be attached to the right side of e and f , as shown:
4.2. The algebra A(Γ). The extended integral will take its values in the algebra A(Γ), which consists of chord diagrams on the skeleton Γ, the trivalent graph (as a combinatorial object, not as embedded in R^3), and is again factored out by the same four-term relations, along with one more class of relations, called the vertex invariance (VI) relations:
(−1)^→ + (−1)^→ + (−1)^→ = 0
Here, the sign (−1) → is −1 if the edge the chord is ending on is oriented to be outgoing from the vertex, and +1 if it's incoming.
The vertex invariance relation arises, topologically, the same way as the 4T relation, i.e. any weight system of a finite type invariant of knotted trivalent graphs (defined the same way as for knots) will be zero on the above sum of chord diagrams. The proof is the same as for the 4T relation, and one can use a similar trick.
We are going to study invariants of framed graphs, hence we will not factor out by the one-term (or framing independence) relation.
There are several reasons to do this, one being that the unzip operation uses the framing, so it's natural to want invariants to be perceptive of it as well. Also, while up to isotopy, all the information in a framing of a knot can be described by an integer (the number of twists), this is no longer true for graphs. A "local" twist of an edge can, through an isotopy, become a "global" twist of two other edges connecting to it at a vertex. Therefore, one can argue that framing is a more interesting property of graphs than of knots. Now let us define operations on the spaces A(Γ):
Given a graph Γ and an edge e, the orientation switch operation is a linear map A(Γ) → A(s_e(Γ)) that multiplies a chord diagram D by (−1)^k, where k is the number of chords in D ending on e.
Edge delete is a linear map A(Γ) → A(d e (Γ)), defined as follows: when the edge e is deleted, all diagrams that had a chord ending on e will become zero, with all other chords unchanged.
There is an operation on A(O) corresponding to the cabling of knots; one reference is Bar-Natan's paper [BN1]. The graph unzip operation is the graph analogue of cabling, so the corresponding map is analogous as well:
Unzip is a linear map A(Γ) → A(u e (Γ)). When e is unzipped, each chord that ends on it will be replaced by a sum of two chords, one ending on each new edge (i.e., if k chords end on e, then u e will send this particular chord diagram to a sum of 2 k chord diagrams). (Note that the u e (Γ) combinatorially can be two different graphs depending on the framing of e in the embedding of Γ.)
For graphs Γ and Γ′ with edges e and e′, the connected sum #_{e,e′}: A(Γ) × A(Γ′) → A(Γ #_{e,e′} Γ′) is defined in the obvious way, by performing the connected sum operation on the skeletons and leaving the chords unchanged. This is well defined thanks to the 4T and VI relations.
(What needs to be proved is that a chord ending can be moved over the attaching point of the new edge; this is done in the same spirit as the proof of Lemma 3.1 in [BN1], using "hooks".)
It is easy to check that all the operations are well-defined (agree with the 4T and VI relations).
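To make the 2^k bookkeeping in the unzip map concrete, here is a small combinatorial sketch (the data representation and edge labels are our own illustrative assumptions): a chord diagram is a list of chords, each chord a pair of edge labels, and unzipping e replaces every chord endpoint lying on e by a choice of one of the two new edges, so a diagram with k endpoints on e maps to a formal sum of 2^k diagrams.

```python
from itertools import product

def unzip(diagram, e, e1, e2):
    """Unzip edge e into e1, e2: each chord endpoint on e becomes a choice
    of endpoint on e1 or e2; returns the resulting formal sum as a list."""
    slots = []
    for (a, b) in diagram:
        choices_a = [e1, e2] if a == e else [a]
        choices_b = [e1, e2] if b == e else [b]
        slots.append([(x, y) for x in choices_a for y in choices_b])
    return [list(d) for d in product(*slots)]

# Two chords each have one endpoint on 'e' (edge labels are hypothetical),
# so unzipping 'e' yields 2**2 = 4 diagrams.
D = [('e', 'f'), ('e', 'g')]
result = unzip(D, 'e', 'e1', 'e2')
assert len(result) == 4
```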
4.3. The naive Kontsevich integral of a graph. We can try to extend the definition to knotted trivalent graphs (and trivalent tangles) in the natural way: consider a Morse embedding of the graph (or tangle) in R^3, and define the integral by the same formula, requiring that t_1, ..., t_n are non-critical and also not the heights of vertices. (We do not do any correction or renormalization yet.)

4.4. The good properties.
• Factorization property.
The extension of the integral would obviously preserve the factorization property of the Kontsevich integral, meaning that it would be multiplicative with respect to stacking trivalent tangles and the (vertical) connected sum of graphs (i.e. it would commute with the connect sum operation).
• Nice behavior under orientation switches.
Z will commute with the orientation switch operation due to the signs (−1) ↓ in the formula that defines Z. (In other words, when we switch the orientation of an edge, the coefficients of each chord diagram in the result of the integral will be multiplied by (−1) k , where k is the number of chords ending on the edge we switched the orientation of).
• Nice behavior under the vertical edge delete operation.
Let us (wrongly) assume that our extension is a convergent knotted graph invariant. Consider an embedding of the graph in which the edge e to delete is a straight vertical line, with the top vertex forming a Y , the bottom vertex a λ. (Obviously such an embedding exists within each isotopy class.)
Now, if we delete the edge e, then in the result of the integral, every chord diagram in which a chord ended on e would disappear (declare these to be zero), and the coefficient of any other chord diagram stays unchanged (as the integral used to compute it is unchanged). In other words, the extended Kontsevich integral commutes with the edge delete operation.
• Nice behavior under vertical unzip.
Let the embedding of the graph be as above. When we unzip the vertical edge e, we do it so that the two new edges are parallel and very close to each other.
In the result of the integral, the chord diagrams that contained k chords ending on e will be replaced by a sum of 2 k chord diagrams, as each chord is replaced by "the sum of two chords", one of them ending on the first new edge, the other ending on the second. (Since for each choice of z i on e we will now have two choices.) The coefficient for the sum of these new diagrams will be the same as the coefficient of their "parent", (since the two new edges are arbitrarily close to each other).
If we were to choose a chord to have both ends on the two new parallel edges, the resulting integral will be zero, as z i − z ′ i will be a constant function.
Again, the coefficients of the diagrams that don't involve chords ending on e are unchanged. Therefore, the extended Kontsevich integral, assuming it exists and is an invariant, will commute with the unzip operation.
4.5. The problem. The problem with the extension is that the integral, as defined above, is divergent. Causing this are the chords that are near a vertex (not separated from a vertex by another chord). These are just like the isolated chords in the knot case, but, contrary to the knot case, we have no reason to factor out by all the chord diagrams containing such chords.
Also, if we want to drop the 1T relation for the sake of working with framed graphs, we have to fix the divergence coming from the isolated chords near critical points as well.
5. Eliminating the divergence

To eliminate the divergence we have to renormalize at the vertices and critical points. We do this using a simple version of a renormalization technique from quantum field theory: we know the exact type of divergence, and thus we "divide by it" to get a convergent integral.

5.1. The renormalized integral Z_2. First let us restrict our attention to a vertex of a "λ" shape. Fix a scale μ and choose a small ε. We change the integral at the vertex by "opening up" the two lower strands at a distance ε from the vertex, to a width μ at the height of the vertex. The old strands (solid lines on the picture), up to distance ε from the vertex and above the vertex, will be "globally active", meaning that we allow any chords (long or short) to end on them. The opening strands (dashed lines on the picture) are "locally active", meaning that we allow chords between them, but chords from outside are not allowed to end on them. We define the value of Z_2 as the limit of this new integral as ε tends to zero.
We will do the same to a vertex of a "Y "-shape, however, we will have to restrict our attention to these two types of vertices. (I.e. we do not allow vertices to be local minima or maxima.) Of course, any graph can be embedded in R 3 in such a way that all vertices are of one of these two types, but this will cause a problem with the invariance of Z 2 , which will need to be fixed.
To get an invariant of framed graphs, we use the same method to renormalize at the critical points and thereby make isolated chords cause no divergence, this is why we can drop the one term (framing independence) relation.
Proposition 5.1. The renormalized integral Z 2 is convergent.
Proof. The globally active part corresponding to the highest short chord c i can be computed as follows:
$$\int_{t_{i-1}}^{t_v - \varepsilon} \frac{dz_i - dz'_i}{z_i - z'_i} = \int_{t_{i-1}}^{t_v - \varepsilon} d\ln(z_i - z'_i) = \ln \frac{z_i(t_v - \varepsilon) - z'_i(t_v - \varepsilon)}{z_i(t_{i-1}) - z'_i(t_{i-1})}.$$
The locally active part on the other hand:
$$\int_{t_v - \varepsilon}^{t_v} d\ln(z_i - z'_i) = \ln \frac{\mu}{z_i(t_v - \varepsilon) - z'_i(t_v - \varepsilon)}.$$
The integral for this highest chord is the sum of the above, and is therefore bounded by
$$\left|\ln \frac{\mu}{z_i(t_{i-1}) - z'_i(t_{i-1})}\right| \le C\,|\ln \mu|$$
for some constant C. Repeating for as many short chords as there are, we see that the integral is convergent.
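The cancellation behind this bound can be seen numerically. For a model vertex where the strand distance shrinks linearly to zero (a hypothetical profile; all names below are ours), the globally active and locally active logarithms sum to ln(μ/(z_i − z'_i)(t_{i−1})), independent of ε:

```python
import numpy as np

# Hypothetical strands meeting at a lambda-vertex at height t_v = 1:
# z(t) - z'(t) = (1 - t) * w0 shrinks linearly to zero at the vertex.
w0 = 0.3 + 0.1j
d = lambda t: (1.0 - t) * w0        # z(t) - z'(t)
mu, t_prev = 1.0, 0.2               # renormalization scale, lower chord end

def chord_integral(eps):
    # Globally active part: up to height t_v - eps.
    glob = np.log(d(1.0 - eps) / d(t_prev))
    # Locally active part: strands opened up to width mu at the vertex.
    loc = np.log(mu / d(1.0 - eps))
    return glob + loc

# The divergent ln(eps) pieces cancel; the sum is eps-independent.
vals = [chord_integral(e) for e in (1e-2, 1e-5, 1e-9)]
assert all(abs(v - np.log(mu / d(t_prev))) < 1e-9 for v in vals)
```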
5.2. The good properties.
Theorem 5.2. Z 2 is invariant under horizontal deformations that leave the critical points and vertices fixed, and rigid motions of the critical points and vertices. Z 2 has the factorization property, and commutes with orientation switch, vertical edge delete, edge unzip and connected sum. Moreover, it has good behavior under changing the renormalization scale µ.
By rigid motions of critical points we mean shrinking or extending a sharp needle, like in the case of the standard Kontsevich integral (Proposition 3.4), with the difference that we do not allow twists on the needle, but require the two sides of the needle to be parallel straight lines. This difference is due to dropping the framing independence relation, as adding or eliminating twists would change the framing.
For vertices, a rigid motion is moving the vertex down two very close edges without twists, as shown in the figure:
To prove that the integral commutes with the vertical edge unzip operation and to investigate the behavior under changing the scale µ, we will use the following lemma:
Lemma 5.3. Let w_1, w_2 be distinct complex numbers and let β be another complex number. Let B be the 2-strand "rescaling braid" defined by the map [τ, T] → [τ, T] × C^2, t ↦ (t, e^{βt}w_1, e^{βt}w_2). Then
$$Z_2(B) = \exp \frac{\beta\, t_{12}\, (T - \tau)}{2\pi i} \in A(\uparrow_2),$$
where A(↑ 2 ) is the space of chord diagrams on two upward oriented vertical strands, and t 12 is the chord diagram with one chord between the two strands.
Proof. The m-th term of the sum is
$$\frac{1}{(2\pi i)^m}\, t_{12}^m \int_{\tau}^{T}\!\int_{t_1}^{T}\!\cdots\int_{t_{m-1}}^{T} d\ln(e^{\beta t_m}w_1 - e^{\beta t_m}w_2) \cdots d\ln(e^{\beta t_1}w_1 - e^{\beta t_1}w_2) = \frac{1}{(2\pi i)^m}\, t_{12}^m\, \beta^m \int_{\tau}^{T}\!\int_{t_1}^{T}\!\cdots\int_{t_{m-1}}^{T} dt_m\, dt_{m-1} \cdots dt_1 = \frac{(\beta\, t_{12}\, (T - \tau))^m}{(2\pi i)^m\, m!},$$
which proves the claim.
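The m! in the answer is just the volume of the ordered simplex τ < t_1 < ... < t_m < T over which the constant form β^m dt_m ... dt_1 is integrated. A quick Monte Carlo check of this volume (our own illustration):

```python
import math
import random

# Volume of the ordered simplex {tau < t_1 < ... < t_m < T} is (T - tau)**m / m!:
# estimate it as the fraction of uniform m-tuples that come out sorted.
random.seed(0)
tau, T, m, N = 0.0, 2.0, 3, 200000
hits = 0
for _ in range(N):
    xs = [random.uniform(tau, T) for _ in range(m)]
    if xs == sorted(xs):
        hits += 1
vol = hits / N * (T - tau) ** m
assert abs(vol - (T - tau) ** m / math.factorial(m)) < 0.05
```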
We note that Lemma 5.3 is easily extended to the case of the n-strand rescaling braid, defined the same way, where in the answer t_{12} would be replaced by t_{ij}. We also state the following reformulation, which follows from the lemma by elementary algebra:

Lemma 5.4. Let B be a 2-strand rescaling braid along which the horizontal distance between the strands changes from ε to μ. Then Z_2(B) = exp(ln(μ/ε) t_{12} / (2πi)) ∈ A(↑_2).

Now we proceed to prove the theorem:
Proof. Factorization property. The factorization property for tangles is untouched by the renormalization, as the height at which tangles are glued together must be non-critical and must not contain any vertices.

For the vertical connected sum of knotted graphs γ_1 and γ_2, if we connect the maximum point of γ_1 with the minimum of γ_2, the minimum and maximum renormalizations become vertex renormalizations when computing the Kontsevich integral of γ_1 # γ_2.
Invariance.
To prove invariance under horizontal deformations that leave the critical points and vertices fixed, we use the same proof as in the case of the standard integral (Proposition 3.2), i.e. cut the graph into tangles with no critical points or vertices, and thin tangles containing the vertices and critical points, apply Lemma 3.3 to the former kind, then take a limit.
For invariance under rigid motions of critical points, since we have proven the invariance under horizontal deformations and the needle is not twisted, we can assume that the sides of the needle are two parallel lines and ε is the horizontal distance between them.
By the factorization property, the value of Z 2 for the needle extended can be written as a product of the values for the part under the needle, the two parallel strands, and the renormalization for the critical point that is the tip of the needle:
The value of Z_2 for the needle retracted is the product of the value for the part under the needle and the renormalization part. What we have to show, therefore, is that in the first case (needle extended) the coefficient of any diagram that contains chords on the parallel strands tends to zero as the width of the needle tends to zero. This is indeed the case: the integral is 0 for any diagram on two parallel strands that contains any short chord, since d(z_k − z'_k) = 0. For long chords, the highest long chord can be paired up with the one ending on the other strand, as in the proof of Proposition 3.4; this pairing is legitimate because the difference of the pair commutes with any short chords occurring in the renormalization part, by the Locality Lemma 3.1. Now we can use the same estimates as in Proposition 3.4 to finish the proof.
To prove invariance under rigid motions of vertices, let us assume that all edges are outgoing. All other cases are proven the same way after inserting the appropriate sign changes. Similarly to the needle case, we can assume that the part we shrink consists of two parallel strands at horizontal distance ε. We need to prove that the difference of the values of Z 2 for the two pictures shown below tends to zero as ε tends to zero.
For the value corresponding to the left picture, just like in the needle case, we can assume that there are no short chords connecting the two parallel strands. The long chords ending on the parallel strands come in pairs, with the same sign, and their coefficients are the same in the limit.
These pairs commute with any short chords in the renormalization part by 3.1.
Also, by the vertex invariance relation, each sum of a pair of such chords equals one chord ending on the top vertical edge, which, from the right side integral, will have the same coefficient as the former sum, as ε → 0. This concludes the proof.
Good behavior under orientation switch. The renormalization does not change anything about the signs that correspond to the orientations of the edges, so Z_2 still commutes with orientation switches.
Good behavior under edge delete.
When deleting a vertical edge, the renormalization that was originally inserted for the two vertices at either end of the edge, becomes exactly the renormalization we need for the two critical points that replace the vertices, as on the figure.
Good behavior under edge unzip. It is slightly harder to see that Z 2 commutes with unzipping an edge. If we unzip a vertical edge e and then compute the value of Z 2 , the vertices on either end of e disappear, so no renormalization will occur.
However, if we first compute the integral and then perform unzip on the result in A(Γ), then the coefficients for each resulting chord diagram will be as if we had computed them using renormalizations as in the picture below.
What we need to show is that the contribution from the "upper" renormalization will cancel the contribution from the "lower".
Let T denote the tangled graph from the lower renormalization to the upper renormalization (including any "faraway part" that is not on the picture). Let us divide T into three tangles: let T 1 denote the lower renormalized area (short chords only), T 2 the unzipped edge and any faraway part of the graph at this height, and T 3 the upper renormalized area.
By the factorization property,
$$Z(T) = Z(T_1)\, Z(T_2)\, Z(T_3).$$
In the integral's result, the short chords occurring at either renormalized part can slide up and down the unzipped edge by Lemma 3.1, as they commute with the pairs of incoming long chords.
In other words, Z(T 1 ) commutes with Z(T 2 ), and therefore Z(T ) = Z(T 2 )Z(T 1 )Z(T 3 ). Now
Z(T 1 )Z(T 3 ) = exp(ln(µ/ε) t 12 /2πi) exp(ln(ε/µ) t 12 /2πi) = 1 ∈ A(T )
by the reformulated rescaling Lemma 5.4, so the renormalizations cancel each other.
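In the cancellation above, t 12 is central by the Locality Lemma 3.1, so as a quick numerical sanity check one can model t 12 as a single commuting formal variable and truncate at some chord degree. The sketch below is purely illustrative (the sample values for µ and ε are arbitrary, and the truncation degree is an assumption of this example, not part of the paper's formalism); it verifies that the upper and lower renormalization factors multiply to the unit series.

```python
import cmath

DEG = 8  # truncate the chord-degree grading at t^8 (illustrative choice)

def exp_series(c, deg=DEG):
    """Truncated power series of exp(c*t): coefficients of t^0 .. t^deg."""
    coeffs, term = [1 + 0j], 1 + 0j
    for k in range(1, deg + 1):
        term *= c / k
        coeffs.append(term)
    return coeffs

def mul(p, q, deg=DEG):
    """Truncated product of two power series in the central variable t."""
    r = [0j] * (deg + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= deg:
                r[i + j] += a * b
    return r

mu, eps = 1.0, 1e-3  # sample scale and cutoff
c = cmath.log(mu / eps) / (2j * cmath.pi)
# upper renormalization exp(ln(mu/eps) t/2pi i) times the lower one with eps and mu swapped
prod = mul(exp_series(c), exp_series(-c))
# the product is the unit of the truncated algebra: 1 + 0*t + 0*t^2 + ...
assert abs(prod[0] - 1) < 1e-9 and all(abs(x) < 1e-9 for x in prod[1:])
```

Since the two exponents are exact negatives of one another and commute, the cancellation holds degree by degree, which is what the truncated computation confirms.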
Good behavior under changing the scale µ.
Changing the scale µ to some other µ ′ amounts to adding a small, two strand "rescaling braid" at each vertex and critical point:
By the reformulated Lemma 5.4, this means that when changing the scale from µ to µ ′ , the element exp(ln(µ ′ /µ) t 12 /2πi) ∈ A(↑ 2 ) is placed at each λ-shaped vertex and maximum point, and exp(ln(µ/µ ′ ) t 12 /2πi) = exp(−ln(µ ′ /µ) t 12 /2πi) ∈ A(↑ 2 ) is placed at each Y -vertex and minimum.
6. Corrections: constructing a knotted graph invariant. 6.1. Missing moves. As in the case of knots, the Kontsevich integral is not invariant under certain deformations that do not change the isotopy class of the framed graph. In the case of knots, the only such deformation was "straightening a hump", and we fixed this by multiplying with Z( ) −c/2 ; in other words, this can be done by placing the element Z( ) −1 ∈ A(↑ 1 ) on each cup. (This works since the number of minima is the same as the number of maxima, and an element of A(↑ 1 ) commutes with all other chords by Lemma 3.1. As elements of A 1 , Z( ) = Z( ) by the fact that faraway strands don't interact (Lemma 3.6) and since the top and bottom renormalizations on the circle cancel each other out by Lemma 5.4.)
The situation now is more complex than in the case of knots. Z 2 is invariant under deformations that do not change the number of critical points or the shape of a vertex. To transform it into a knotted framed graph invariant Z 3 , we need to make corrections to create invariance under the following eight moves: Moves 1 and 2 will fix the problem of straightening humps, just like in the case of knots.
Moves 3, 4, 5 and 6 guarantee that we can switch a vertex freely from a λ shape to a Y shape and vice versa.
Moves 7 and 8 are needed because we excluded vertices that are critical points at the same time. Any such vertex can be perturbed into a λ or a Y shape by an arbitrarily small deformation, but we need the value of the invariant to be the same whether we push the middle edge over to the left or to the right.
To implement these corrections, we have four "correctors" available: we can put one each on cups, caps, Y -vertices and λ-vertices. The ones on cups and caps are elements of A(↑ 1 ), the ones on vertices we can think of as elements of A(↑ 2 ), as one of the edges can be swept free of chords using the vertex invariance relation.
The eight moves above define eight equations (1 and 2 are equations in A 1 , the rest of them in A 2 ), the unknowns being the four correctors. The question is whether this system of equations can be solved.
6.2. Syzygies. To reduce the number of equations that the correctors need to satisfy, we use the five syzygies below.
The first syzygy implies that once Z 3 is invariant under move 1, it is automatically invariant under move 2 (we used this fact already in the knot case).
The second says that invariance under moves 1 and 3 implies invariance under move 5.
By the third, invariance under moves 2 and 4 implies invariance under 6.
The fourth tells us that fixing move 1, 3 and 4 fixes 7.
Finally, the fifth syzygy shows that fixing moves 2, 5 and 6 fixes move 8 as well.
Therefore, it's enough to make Z 3 invariant under moves 1, 3 and 4, which together imply invariance under everything else.
6.3. Translating to equations. We know already how to make Z 3 invariant under move 1, as this was done in the knot case. We need to place a correction u ∈ A 1 at each minimum, and a correction n ∈ A 1 at each maximum, the only equation u and n need to satisfy being un = Z 2 ( ) −1 ∈ A(↑ 1 ). This will settle move 1 the same way it did for knots. (The tool to prove this for knots was the lemma of a distinguished tangle, Lemma 3.6, which said that faraway parts of a knot don't interact. The proof of this applies in the graph context word by word.)
We will sometimes denote Z 2 ( ) by ν. Now let us translate moves 3 and 4 to equations on correctors Y and λ, both cyclically symmetric on two strands.
For simplicity, let us assume that in the picture for moves 3 and 4, the width of the opening at the top is 1. (This can be achieved by horizontal deformations when applying these moves on any graph.) Let us also assume for now that the fixed scale µ is 1. (We can correct this later, as we know exactly how changing µ affects the value of Z 2 .)
Finally, we choose one convenient set of orientations for the edges; all other cases will follow from this one by performing the orientation change operation on the appropriate edges.
To compute the values of Z 2 on the left side of move 3 and the right side of move 4, we will make use of the following lemma:
Lemma 6.1. Assuming that the width at the opening at the top of each of the pictures below is 1, and µ = 1,
Z 2 = , Z 2 = ,
i.e. the value of Z 2 on these graphs has no chords at all.
Proof. There are two types of chords appearing in Z 2 :
• Horizontal chords connecting the left vertical strand to the diagonal edge
• Horizontal chords connecting the diagonal edge to the right vertical strand

Note that there are no chords on the bottom part: the opening of the strands is width 1, and so is the renormalization scale for the minimum, so when computing Z 2 we get exp(ln(1) t 12 /2πi), as we showed in Lemma 5.4, which equals 1 ∈ A 2 (since ln(1) = 0), meaning no chords at all.
Also, there are no chords connecting the left and right vertical strands, as these are parallel, so in the definition of Z, d(z i − z ′ i ) = 0.
The key observation we will use is that the two types of chords commute. This is an elementary computation making repeated use of the Locality Lemma 3.1 and the VI relation. What we will prove is that any chord diagram with chords as above is equivalent to one where all the horizontal chords on the right are at the top, followed by all the horizontal chords on the left at the bottom. To prove this, let us first establish a few basic equalities.
For the bottom chord on the right, we have the following:
= + This is true since we can slide the right end of the chord down, then use the VI relation. We will use the following shorthand notation:
+ = ,
so the sum above will be denoted as .
Next, since a chord on the left commutes with both chords in this sum (obviously with the horizontal one, and by the Locality Lemma 3.1 with the short one), it commutes with the whole sum: =
We can also pull only the box part of a sum over a left chord: =
This, when expanded, is just a 4T relation (two terms on each side).
And lastly we claim that we can separate nested sums: = , as we can slide the lower box over the sum in the middle, since it commutes with both parts by the equation above and the Locality Lemma 3.1. Therefore, we can pull all the chords on the right to the top like we wanted to, by the following algorithm:
First, we pull all the right ends of the chords over the vertices, creating a number of boxes (i.e. a linear combination of chord diagrams). Then we pull the box end of what was the lowest chord (which has the highest box now) up to its "root", over left horizontal chords if needed, which we can do by the second equality. We continue by pulling the next box up to its "root", over horizontal left chords and the entire lowest sum, both of which are legal steps from above. We continue doing this until all the boxes are united with their roots. Then we can slide all the sums to the top, as they commute with the horizontal left chords, as shown in our first equality. Now we can re-nest them all, by the last equality, and one by one pull the box ends over all the horizontal left chords and the vertices to get back the horizontal chords on the right, all at the top now. This concludes the proof of the observation.

Now it's easy to show that all these chords indeed cancel out in Z 2 : since the left ones all commute with the right ones, the value doesn't change if we compute Z 2 separately on the left and on the right. As the opening of strands is 1, as well as the renormalization scale, we have exp(ln(1) t 12 /2πi) = 1 on both sides, by Lemma 5.4. This concludes the proof. (The proof for the mirror image is the same.)
An easy corollary is the following:
Lemma 6.2. Again assuming that the width at the opening at the top of each of the pictures below is 1, and µ = 1, the following is true:
Z 2 = a = Z 2 ,
where a = Z 2 ( ).
Let us say a word about our slight abuse of notation in stating this lemma:
For the statement, we mean that the edges of the answer are oriented according to the orientations of the edges of the picture we're computing Z 2 of, so the leftmost and rightmost sides are not really equal.
For the definition of a ∈ A 2 , due to the symmetry of the picture, it doesn't matter which way the two parallel edges go, as long as they are oriented the same way. In other words, S 1 S 2 (a) = a.
Proof. The proof is essentially identical to the previous one. For the first step of the key observation, we need to slide the right end of the lowest right chord over the bottom part of the picture. This is done by using one more VI relation. The resulting sum of two chords commutes with a by the Locality Lemma 3.1. Thus we can pull it over a and use the VI relation once more to get a single chord again.
A few edge orientations differ from the ones in Lemma 6.1. (We chose orientations there to avoid negative signs, here we're switching to the ones we will use later.) We can repeat the proof of Lemma 6.1 with the same edge orientations, then switch the ones we need to switch at the end. This causes no problems, as Z 2 commutes with the orientation switch operation. Since no chords end on the edges we're reorienting, the result doesn't change sign.
A similar lemma, which is a crucial ingredient in the Murakami-Ohtsuki construction [MO], is proved by Le, Murakami, Murakami and Ohtsuki in [LMMO]; the proof involves the computation of the Z-value of an associator in terms of values of the multiple zeta function.
We can now use Lemma 6.2 to easily compute the value of Z 2 on the trivalent tangles that appear in moves 3 and 4. This is done in the following two corollaries:

Corollary 6.3. Still assuming that µ = 1, Z 2 = a , and Z 2 = a .

Note that we are abusing notation again: the way we defined a, it was an element of A 2 where the strands were horizontal. Due to the symmetry of that picture, though, we can slide a either up the curve on the left, or up the curve on the right, to a vertical position; we will get the same result. This is what we call a in the corollary above.
Proof. Unzipping the vertical edge on the right in , we get .
Since Z 2 commutes with vertical unzip, and in Z 2 no chords end on the edge we're unzipping, we can deduce that
Z 2 = a .
By the factorization property,
Z 2 = Z 2 Z 2 .
However, the second factor is 1 (i.e. has no chords) by Lemma 5.4, since the opening of the strands at the top is 1, and so is the width of the renormalization at the bottom.
Therefore, we deduce that Z 2 = a .
Also, we can forget about the vertical strand on the right, as we know from Lemma 3.6 (which generalizes to graphs word by word) that faraway strands don't interact. This proves the first part of the corollary. The proof of the second part is the same, with all the pictures mirrored.
Corollary 6.4. Z 2 = a , and Z 2 = a .
Proof. By the factorization property,
Z 2 = Z 2 Z 2 .
Again, the second factor is 1 (no chords at all), by Lemma 5.4, if we assume that the width of the opening where we cut is 1 (which we are free to do of course, by horizontal deformations), and the width we use for the vertex renormalization is also 1. The first part of the corollary then follows, as does the second part by mirroring the pictures.
We are now ready to form the equations corresponding to moves 3 and 4 (still assuming that µ = 1). We have proven the following Proposition:
Proposition 6.5. With corrections λ ∈ A 2 , Y ∈ A 2 , and u ∈ A 1 , such that a u λ = Y = u λ a ,
and n = u −1 ν −1 , Z 3 is invariant under moves 1-8, and therefore it is an isotopy invariant of knotted trivalent graphs.
Note that we used above that Z 2 of a Y -vertex with an opening of width 1 at the top is 1, since µ = 1.

6.4. The resulting invariants. There is an obvious set of corrections satisfying the above equations: λ = a −1 , Y = 1 and u = 1. This forces n = (Z 2 ( )) −1 .
Note that this λ and Y value works only with the orientations of the edges chosen as in the pictures above. Correct notation would be to say that λ ↓↑↑ = a −1 , where in the subscript the first arrow shows the orientation of the top strand of the vertex, the second stands for the lower left, and the last stands for the lower right strand.
It's now easy to come up with a complete set of corrections for all orientations: if we change the orientation of one of the lower strands, we apply the corresponding orientation switch operation to the correction: λ ↓↓↑ = S 1 (a −1 ), λ ↓↑↓ = S 2 (a −1 ), λ ↓↓↓ = S 1 S 2 (a −1 ) = a −1 . These satisfy equations 3 and 4, since Z 2 commutes with the orientation switch operation.
If we switch the orientation of all the edges, the proof remains unchanged (due to the fact that S 1 S 2 (a) = a), so we have λ ↑↓↓ = a −1 , and then by the above reasoning, the rest follows: λ ↑↑↓ = S 1 (a −1 ), λ ↑↓↑ = S 2 (a −1 ), and λ ↑↑↑ = S 1 S 2 (a −1 ) = a −1 .
It is worth noting that S 1 (a) = S 1 (S 1 S 2 (a)) = S 2 (a). Since Y is trivial, orientation switches don't affect Y . For n and u the orientation of the strand doesn't matter, since these are elements in A 1 , so each chord has two endings on the one strand, thus orientation change operation will not change any signs.
Proposition 6.6. With the complete set of corrections described above, Z 3 commutes with the orientation switch, edge delete and connected sum operation.
Proof. It is obvious that Z 3 commutes with the orientation switch, as Z 2 has the same property and we designed the corrections (λ especially) to keep it true.
We only need to deal with vertical edge delete: since we now have an isotopy invariant, we can first deform the edge to be deleted into a straight vertical line with a Y vertex on top and λ vertex on bottom. Thus, it's enough to check that the following equalities are true:
Y = u , λ = n ,
and also for the opposite orientations of the strands.
Each time λ appears above, we need to check the equations for all appropriate choices of λ depending on the orientations of the strands, for example λ ↓↑↓ and λ ↑↑↓ in the second equation.
Indeed, λ ↓↑↓ = S 2 (a −1 ) = S 2 (S 1 S 2 (a −1 )) = S 1 (a −1 ) = λ ↑↑↓ , and
= Z 2 ( ) = Z 2 ( ) = Z 2 ( ),
where all equalities are understood in A 1 . The first equality is true by definition of a and the fact that Z 2 commutes with orientation switches. The second is just horizontal deformations and moving critical points, while the third holds because faraway strands don't interact (Lemma 3.6). Now the equality follows by taking inverses on both sides.

There's nothing to check for Y and u, as they're both trivial. Connected sum is the "reverse" of edge delete; the fact that Z 3 commutes with it is proved by backtracking the above proof, using that the connected sum is well-defined.
Before we deal with the unzip operation, let us produce a one parameter family of corrections that all yield knotted graph invariants well behaved with respect to the orientation switch, edge delete, and connected sum operations. The reader who is satisfied with one invariant is welcome to skip ahead to the paragraph following Remark 1.
The statement of the following lemma appears in [MO], the proof there is phrased in terms of q-tangles, but is based on the same trick.
Lemma 6.7. a x ν −x = b −x , where we define b ∈ A 2 to be b = Z 2 ( ), an "upside down a".
Proof. Throughout the proof we will assume that all strand openings are of width 1, as well as µ = 1; however, the statement itself is just an equality in A(Y ), and thus independent of the choice of µ. Then, the same way we did from Lemma 6.1 to Corollary 6.4, we can compute
Z 2 = b .
By multiplicativity, the fact that faraway strands don't interact (Lemma 3.6), and our previous computations,
Z 2 = a b .
By an unzip, it follows that
Z 2 = b a .
We know that "adding a hump" amounts to multiplying by a factor of ν = Z 2 ( ), therefore
Z 2 = Z 2 ν,
where multiplication by ν is on the left strand.
But Z 2 = 1, due to µ = 1, as seen before. So we have
ν −1 a b = 1.
Using that ν −1 commutes with everything due to the Locality Lemma 3.1, and multiplying by b −1 , we get
a ν −1 = b −1 .
The lemma then follows for all x ∈ N, and by Taylor expansions for all x.
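The extension step can be spelled out; the display below is our paraphrase of the computation, using only that ν is central (the Locality Lemma 3.1):

```latex
% From \nu^{-1} a b = 1, multiply on the right by b^{-1};
% since \nu^{-1} is central, this gives
a\,\nu^{-1} = b^{-1}.
% Because a and \nu commute, for every natural number x
b^{-x} = \bigl(a\,\nu^{-1}\bigr)^{x} = a^{x}\,\nu^{-x},
% and writing the powers as exponentials of logarithmic series
% extends the identity from x \in \mathbb{N} to arbitrary x.
```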
Proposition 6.8. The corrections
λ ↓↑↑ = a x−1 , u = ν −x , Y ↓↓↑ = b −x , n = ν x−1
make Z 3 a universal finite type invariant of knotted trivalent graphs that commutes with the orientation change, edge delete and connected sum operations.
Proof. We need to check that equations 1, 3 and 4 are satisfied to prove the invariance part: Equation 1 says nu = ν −1 , which holds for the n and u above.
Equation 3 and 4 translated to
a u λ = Y = u λ a .
This is satisfied since
a u λ = a x ν −x = b −x = Y ,
and the mirror image done the same way. The complete set of corrections follows from these by inserting the appropriate orientation switches. This will ensure that Z 3 commutes with the orientation switch operation.
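For concreteness, the substitution check for both equations can be written out (our paraphrase; ν is central by the Locality Lemma 3.1, and the last step is Lemma 6.7):

```latex
n\,u \;=\; \nu^{\,x-1}\,\nu^{-x} \;=\; \nu^{-1},
\qquad
a\,u\,\lambda \;=\; a\,\nu^{-x}\,a^{\,x-1} \;=\; a^{x}\,\nu^{-x} \;=\; b^{-x} \;=\; Y .
```

Note that x = 0 recovers the first set of corrections (λ = a −1 , Y = 1, u = 1, n = ν −1 ).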
The fact that Z 3 is a universal finite type invariant is true by the exact same proof that applies to knots.
For edge delete and connected sum, the proof is the same as for our first set of corrections.
Remark 1. Murakami and Ohtsuki used the symmetric corrections we get for x = 1/2 in [MO].
Throughout the above calculations and proofs, we had always assumed that the renormalization scale µ was chosen to be 1. However, when we change the scale, all that happens is that factors of exp(ln(µ ′ /µ) t 12 /2πi) are placed at λ-vertices and maxima, while the reciprocal is placed at Y -vertices and minima. Therefore, if the scale is chosen differently, we can account for this by multiplying the correction terms by the inverses of these factors.

6.5. Renormalizing unzip. There is one shortcoming of Z 3 still: it fails to commute with the unzip operation. (As before, it is enough to restrict our attention to vertical unzip and delete.)
Commuting with unzip would require that Y λ = 1, in other words, after we unzip a vertical edge, we would want the top and the bottom corrections to cancel each other out. (As we have discussed before, the chords ending on the unzipped edge come in sums of pairs, and λ and Y commute with these by the Locality Lemma 3.1, so they could indeed cancel each other out.)
However, any set of corrections that makes Z 3 a knotted graph invariant can either let it commute with the edge delete operation or with the unzip, but not both. This is easy to show: As said above, for Z 3 to commute with unzip, we need Y λ = 1.
Since Y has an inverse in A 2 (otherwise it couldn't be a correction), this inverse has to be λ.
Therefore, we have
1 = λ Y = n u = ν −1 , a contradiction. (The second equality is due to the assumption that Z 3 commutes with the edge delete operation.)

In the argument above, we're ignoring some orientation issues for simplicity. The edge orientations compatible with unzip are not the same as the ones compatible with edge deletion, so, strictly speaking, we're not talking about the same λ and Y . However, they only differ by orientation switches, and orientation switches commute with taking inverses, so this doesn't interfere with the proof.

Now that we're convinced that this issue is unavoidable, we fix it by renormalizing the unzip operation on A; in other words, we modify the algebra A slightly, changing only the unzip operation.
The new unzip ū : A(γ) → A(u(γ)) has to satisfy ū(Z 3 (γ)) = Z 3 (u(γ)). The obvious way to achieve this is to introduce a correction that cancels out the error Y λ.
Take, for example, our first set of corrections. For (vertical) unzip to be defined, either all edges need to be oriented upwards, or all downwards. In both cases λ ↓↓↓ = λ ↑↑↑ = a −1 , and Y = 1.
Let ū = i a ◦ u, where by i a we mean "inject a copy of a on the unzipped edge". (This will also commute with the pairs of chords ending on the edge, so it doesn't matter where we place it.) Let us sketch a simple proof using properties of Z 2 and Z 3 :
ON THE KONTSEVICH INTEGRAL FOR KNOTTED TRIVALENT GRAPHS 43
By multiplying the tangles used to define a and b, computing Z 2 and unzipping a vertical edge with no chords, we get: b a = Z 2 .
We can produce this graph by an unzip: = u , but since this is not a vertical unzip, it does not commute with Z 2 , so let us use Z 3 with our first set of corrections:
Z 3 = uZ 3 = u a −1 a −1 ν −1 = a −1 u(ν −1 ).
For the second equality above, we use the fact that Z 2 of this graph is trivial (everything cancels out), so in Z 3 all we have is the corrections. Now we can get Z 2 back from Z 3 by undoing the corrections: b a = Z 2 = ν ν u(ν −1 ), which completes the proof.

Since the element on the right side of the equality is central (by the Locality Lemma 3.1), this implies that a and b commute, and hence a x b x = (ab) x . So the renormalization in this case (i.e. injecting a 1/2 b 1/2 on the unzipped edge) has an additional, different description: we achieve the same by first injecting ν −1/2 on the edge, then unzipping, and then injecting ν 1/2 on both new edges.
Taking any member of the one-parameter family of corrections, we can renormalize unzip by planting λ −1 Y −1 on the pair of edges resulting from the unzip. This way, in all cases, Z 3 becomes an invariant that commutes with all four operations on the knotted trivalent graphs and A, with this renormalized version of u.
6.6. Parenthesized tangles and Drinfeld's associators. Let us end with a sketch of how this construction relates to parenthesized tangles (a.k.a. q-tangles) and Drinfeld's associators. There is an easy map α from parenthesized tangles to trivalent tangles with one free end on top and one free end on bottom. We join ends by vertices according to the parenthesization, as shown:
The parenthesization of the bottom of the tangle shown is ( * ( * ( * * ))), while on the top it is (( * * )( * * )). The ends are joined by vertices accordingly.
Composition of parenthesized tangles translates to joining the bottom end of the first trivalent tangle with the top end of the second, and then unzipping the middle edges until no more unzips are possible. As an example, let's multiply the above tangle by its inverse. Looking at the picture, we can convince ourselves that we get the same result by first applying α (as seen on the previous figure), joining the resulting trivalent tangles, and unzipping the middle, or by first multiplying the parenthesized tangles, and then applying α.
The inverse of α amounts to unzipping all edges starting from the top and from the bottom until there are no trivalent vertices left.
Since Z 3 commutes with the (renormalized) unzip operation, we see that in particular, Z 3 will satisfy the pentagon and hexagon equations, hence it is a construction of an associator.
(Displaced figure caption: ... by resolving both double points. FI, the framing independence relation (a.k.a. one-term relation), comes from the Reidemeister one move of knot diagrams. The dotted arcs here and on the following pictures mean that there might be further chords attached to the circle, the positions of which are fixed throughout the relation.)
Lemma 5.4. For the two-strand rescaling braid B where the bottom distance between the strands is l and the top distance is L, Z(B) = exp(ln(L/l) t 12 /2πi), independently of T and τ .
Remark 2. In the case of Murakami and Ohtsuki's choice of corrections, we have Y λ = b −1/2 a −1/2 . The following fact is proved by Le and Murakami in [LM]:
It's not hard to check that these are enough. To simplify, we use five syzygies that reduce the number of moves.
References
[BN1] D. Bar-Natan: On the Vassiliev knot invariants, Topology 34 (1995), 423-472.
[BN2] D. Bar-Natan: Algebraic knot theory-a call for action, http://www.math.toronto.edu/drorbn/papers/AKT-CFA.html
[CL] D. Cheptea, T. Q. T. Le: A TQFT associated to the LMO invariant of three-dimensional manifolds, Commun. Math. Physics 272 (2007), 601-634.
[CD] S. V. Chmutov, S. Duzhin: The Kontsevich integral, Acta Applicandae Math. 66 (2) (April 2001), 155-190.
[K] M. Kontsevich: Vassiliev's knot invariants, Adv. in Soviet Math. 16 (2) (1993), 137-150.
[LM] T. Q. T. Le, J. Murakami: Parallel version of the universal Vassiliev-Kontsevich invariant, J. Pure Appl. Algebra 121 (1997), 271-291.
[LMMO] T. Q. T. Le, H. Murakami, J. Murakami, T. Ohtsuki: A three-manifold invariant via the Kontsevich integral, Osaka J. Math. 36 (1999), 365-395.
[MO] J. Murakami, T. Ohtsuki: Topological quantum field theory for the universal quantum invariant, Communications in Mathematical Physics 188 (3) (1997), 501-520.
[Th] D. P. Thurston: The algebra of knotted trivalent graphs and Turaev's shadow world, Geom. Topol. Monographs 4 (2002), 337-362.
[Y] S. Yamada: An invariant of spatial graphs, J. Graph Theory 13 (1989), 531-551.
Single and Double Pion Photoproduction off the Deuteron

Manuel Dieterle (for the A2 Collaboration)
Department of Physics, University of Basel, CH-4056 Basel, Switzerland

31 Aug 2011, arXiv:1108.6241

Abstract. There is evidence that the photoproduction of single and double pions off bound nucleons inside a nucleus is not only affected by Fermi motion but also by other nuclear effects, such as final state interactions or meson rescattering. We present preliminary results of a high-statistics measurement of single and double pion photoproduction of quasi-free protons and neutrons off the deuteron, carried out at the Mainz Microtron.
Introduction
The following paragraphs present preliminary results on single and double π 0 photoproduction off the deuteron, both originating from the same experiment carried out in December 2007 at the Mainz Microtron (MAMI) in Mainz, Germany. The MAMI electron accelerator facility provides a continuous photon beam with energies up to 1.5 GeV. The photon beam was circularly polarized, and the main detectors used in this experiment, providing nearly full angular coverage, are the Crystal Ball calorimeter (CB) surrounding the target and the TAPS detector, which is placed as a forward wall. The separation of neutral and charged particles is done with plastic scintillators, either as bars arranged in a cylindrical setup surrounding the target (CB) or as hexagonally shaped vetoes (TAPS). Furthermore, a χ 2 -test is used to identify the photons stemming from the meson decay and to isolate the recoil neutron. The single and double π 0 cross sections were measured throughout the second and third resonance region in coincidence with recoil protons (quasi-free exclusive reaction on the proton), in coincidence with recoil neutrons (quasi-free exclusive reaction on the neutron), and without a condition on the detection of recoil nucleons (quasi-free inclusive reaction). Both quasi-free exclusive reactions sum up to the quasi-free inclusive channel, since the contribution of the coherent process is negligible in the energy region of interest.
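The idea behind such a χ 2 selection of decay photons can be illustrated with a minimal sketch: among all candidate photon pairings, keep the one whose two-photon invariant mass lies closest to the π 0 mass in units of the resolution. All names, the assumed mass resolution and the simple pairing strategy below are assumptions for this illustration, not the collaboration's actual analysis code.

```python
import itertools
import math

M_PI0 = 134.977   # neutral pion mass in MeV
SIGMA = 10.0      # assumed invariant-mass resolution in MeV (illustrative)

def inv_mass(p1, p2):
    """Invariant mass of two massless photons given as four-vectors (E, px, py, pz)."""
    e = p1[0] + p2[0]
    px, py, pz = p1[1] + p2[1], p1[2] + p2[2], p1[3] + p2[3]
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def best_pi0_pair(photons):
    """Among all photon pairs, pick the one minimizing chi^2 w.r.t. the pi0 mass."""
    return min(itertools.combinations(photons, 2),
               key=lambda pair: ((inv_mass(*pair) - M_PI0) / SIGMA) ** 2)
```

For the 2π 0 channel the same idea extends to choosing the best grouping of four photons into two pairs, with the χ 2 summed over both pairs.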
Results
The left-hand side of figure 1 shows the preliminary single π 0 total cross section of the quasi-free exclusive reaction on the proton γ + d → π 0 + p(n) (filled blue circles) together with the quasi-free exclusive reaction on the neutron γ + d → π 0 + n(p) (filled red circles). The right-hand side of figure 1 shows the preliminary single π 0 total cross section of the quasi-free inclusive reaction γ + d → π 0 + (N) (filled black circles) together with the sum of the two quasi-free exclusive reactions (open magenta circles). As well shown are previous results [1] for the inclusive reaction (open green circles) and the predicted cross sections from the theoretical models MAID [2] (dashed lines) and SAID [3] (full lines) folded with Fermi motion. It can be seen in figure 1 that the shapes of the measured cross sections (full red, blue and black circles) are in nice agreement with the theoretical MAID [2] and SAID [3] models but there is a disagreement in magnitude. Furthermore, the sum of the quasi-free exclusive cross sections (open magenta circles) add up perfectly to the measured quasi-free inclusive cross section (full black circles). This indicates a very clean identification of the reaction channels. In addition, the measured quasi-free inclusive cross section (full black circles) is in good agreement with earlier results from MAMI B [1].
The left-hand side of figure 2 shows preliminary results for the 2π 0 total cross sections of the quasi-free exclusive reaction on the proton γ + d → π 0 π 0 + p(n) (filled blue triangles) together with the quasi-free exclusive reaction on the neutron γ + d → π 0 π 0 + n(p) (filled red triangles) and the quasi-free inclusive reaction γ + d → π 0 π 0 + (N) (filled black triangles).
The right-hand side of figure 2 illustrates the measured beam helicity asymmetries I ⊙ (Φ) = (1/P γ )(dσ + − dσ − )/(dσ + + dσ − ) = (1/P γ )(N + − N − )/(N + + N − ) [4] of the quasi-free exclusive reaction on the proton (full blue triangles) and on the neutron (full red triangles) for different regions of beam energy. The dashed lines correspond to a fit to the data.
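The count-based form of the asymmetry is simple enough to evaluate directly; the sketch below uses hypothetical yields and a hypothetical beam polarization purely for illustration, not measured values from this experiment.

```python
def beam_helicity_asymmetry(n_plus, n_minus, p_gamma):
    """I(Phi) = (1/P_gamma) * (N+ - N-) / (N+ + N-), per Phi bin."""
    return (n_plus - n_minus) / ((n_plus + n_minus) * p_gamma)

# hypothetical yields in one Phi bin, with 70% circular beam polarization
print(round(beam_helicity_asymmetry(1200, 1000, 0.70), 4))  # -> 0.1299
```

Dividing by the beam polarization P γ corrects the raw count asymmetry for the fact that the beam is not fully polarized.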
Interpretation
The identical shape of the measured single π 0 total cross sections and the theoretical MAID [2] and SAID [3] models (see figure 1) demonstrates a correct understanding of the resonance contributions to the different reactions, i.e. mainly D 13 (1520) and F 15 (1680) for single π 0 photoproduction on the proton and D 13 (1520) and D 15 (1675) on the neutron. The discrepancy in absolute height between the measured single π 0 total cross sections and the theoretical models (see figure 1) cannot be explained by nuclear Fermi motion. The fact that folding the theoretical models with Fermi motion does not overcome this problem reveals the importance of other nuclear effects, such as final state interactions or meson rescattering. This discrepancy was already observed earlier by B. Krusche et al. [1] and H. Shimizu [5]. H. Shimizu reported that the models lie at ∼125% of the data, as shown on the left-hand side of figure 3. The same level of overestimation was observed in this work, as depicted on the right-hand side of figure 3, where the models are scaled down by a factor of 0.8.
Double π 0 photoproduction is mainly used to study the properties of sequential decays since this is the dominant decay mechanism. Even though the MAID model [2] predicts a nearly identical total cross section for the production on the proton and on the neutron, the resonance contributions to the two reactions are rather different. For example, the electromagnetic excitation of the F 15 (1680) (D 15 (1675)) is predicted to be much stronger on the proton (neutron) than on the neutron (proton). For this reason one would expect that sensitive quantities such as the beam helicity asymmetry would depend on such a coupling and hence would not be the same on the proton and on the neutron. The measured 2π 0 total cross sections on the proton and neutron (left-hand side of figure 2) show no big difference and hence confirm the predictions. However, it is rather astonishing that the measured beam helicity asymmetries for the proton and neutron are likewise identical within the error bars. Additionally, the results on the quasi-free proton were unfolded from Fermi motion and compared to published data on the free proton, and are in good agreement with each other (not shown here), which indicates a correct reconstruction of the reaction. Yet, earlier works [4] also contradicted many model predictions concerning the asymmetries, therefore further input is needed in order to understand this behavior.
Figure 1: Preliminary single π 0 total cross sections of the quasi-free exclusive (left-hand side) and quasi-free inclusive (right-hand side) reactions. The labelling is explained in the text.
Figure 2: Very preliminary double π 0 total cross sections (left-hand side) and beam helicity asymmetries (right-hand side). The labelling is explained in the text.
Figure 3: Quasi-free exclusive total cross sections compared to the MAID [2] and SAID [3] models scaled down by a factor 0.8. Left-hand side: H. Shimizu [5]; right-hand side: this work. Note the reversed color coding between red and blue on the left figure.
[email protected]
Acknowledgements

This work is supported by Schweizerischer Nationalfonds (SNF), Deutsche Forschungsgemeinschaft (DFG) and EU/FP6. All the work on the double π 0 channel is done by M. Oberle et al.
[1] B. Krusche et al., Eur. Phys. J. A 6 (1999) 309
[2] D. Drechsel, O. Hanstein, S.S. Kamalov and L. Tiator, Nucl. Phys. A 645 (1999) 145
[3] R. Arndt et al., VPI and SU Scattering Analysis Interactive Dial-in
[4] D. Krambrich, F. Zehr, A. Fix, L. Roca et al., Phys. Rev. Lett. 103 (2009) 052002
[5] H. Shimizu, slides presented at the NNR workshop 2009, June 8-10, 2009, Edinburgh
Chaos in Hamiltonians with a Cosmological Time Dependence

Henry E. Kandrup
Department of Astronomy, Department of Physics, and Institute for Fundamental Theory, University of Florida, Gainesville, Florida 32611

arXiv:astro-ph/9903434v1, 29 Mar 1999

This paper summarises a numerical investigation of how the usual manifestations of chaos and regularity for flows in time-independent Hamiltonians can be altered by a systematic time-dependence of the form arising naturally in an expanding Universe. If the time-dependence is not too strong, the observed behaviour can be understood in an adiabatic approximation. One still infers sharp distinctions between regular and chaotic behaviour, even though "regular" does not mean "periodic" and "chaotic" will not in general imply strictly exponential sensitivity towards small changes in initial conditions. However, these distinctions are no longer absolute, as it is possible for a single orbit to change from regular to chaotic and/or vice versa. If the time-dependence becomes too strong, the distinction between regular and chaotic can disappear so that no sensitive dependence on initial conditions is manifest.

PACS number(s): 98.80.-k, 05.45.Ac, 05.45.Pq

I. MOTIVATION
In time-independent Hamiltonian systems sharp, qualitative distinctions can be made between two different types of behaviour, namely regular and chaotic. Regular orbits are multiply periodic; chaotic orbits are aperiodic. Chaotic orbits exhibit an exponentially sensitive dependence on initial conditions, which is manifested by the existence of at least one positive Lyapunov exponent; regular orbits exhibit at most a power law dependence on initial conditions [1]. This distinction is, moreover, absolute in the sense that it holds for all times: an orbit that starts chaotic will remain chaotic forever; a regular orbit remains regular. This distinction is important physically because it implies very different sorts of behaviour (although topological obstructions like cantori [2] can make a chaotic orbit "nearly regular" for very long times).
An obvious question then is to what extent this distinction persists in cosmology where, oftentimes, one is confronted with a Hamiltonian manifesting a systematic time-dependence reflecting the expansion of the Universe. If the Hamiltonian acquires an explicit secular time-dependence, one would expect that periodic orbits no longer exist; and one might anticipate further that "chaotic" need not imply a strictly exponential dependence on initial conditions. Given, moreover, that the form of the Hamiltonian could change significantly over the course of time, one might anticipate the possibility that a single orbit could shift in behaviour from "chaotic" to "regular" and/or vice versa. In other words, the distinction between regular and chaotic need not be absolute.
Of especial interest is what happens to the gravitational N -body problem for a collection of particles of comparable mass in the context of an expanding Universe. It is by now well known [3] that, when formulated for a system of compact support which exhibits no systematic expansion, the N -body problem is chaotic in the sense that small changes in initial conditions tend to grow exponentially. Largely independent of the details (at least for N ≫ 1), a small initial perturbation typically grows exponentially on a time scale t * ∼ R/v comparable to the natural crossing time for the system. Does this exponential instability persist in the context of an expanding Universe, or does the expansion vitiate the chaos?
To better understand various phenomena in the early Universe, attention has also focused on the behaviour of systems modeled as a small number of interacting low frequency modes (and, perhaps, an external environment, typically visualised as a stochastic bath). The resulting description is usually Hamiltonian (albeit possibly perturbed by the external environment), but, because of the expansion of the Universe, which induces a systematic redshifting of the modes, the Hamiltonian is typically time-dependent. Were this time-dependence completely ignored, as might be appropriate in flat space, the resulting solutions would divide naturally into "regular" and "chaotic." The obvious question is to what extent such appellations continue to make sense when one allows properly for an expanding Universe? One might be concerned that such systems are intrinsically quantum, and that there is no such thing as "quantum chaos". [4] However, at least in flat space classical distinctions between regular and chaotic behaviour are typically manifested in the semi-classical behaviour of true quantum systems, so that one might anticipate that any diminution of chaos associated with an expanding Universe could have important implications for such phenomena as decoherence and the classical-to-quantum transition. [5]

Section II discusses the onset of chaos in time-independent Hamiltonian systems as resulting from parametric instability and, by generalising the discussion to time-dependent Hamiltonians, makes specific predictions as to how chaos should be manifested in the context of an expanding Universe. Section III confirms and extends these predictions with numerical simulations performed for a simple class of models, namely two- and three-degree-of-freedom generalisations of the dihedral potential [6]. Section IV concludes with speculations on the implications of these results for real systems in the early Universe.
II. CHAOS AND PARAMETRIC INSTABILITY
To facilitate sensible predictions as to the manifestations of chaos in time-dependent Hamiltonian systems, it is worth recalling why, in a time-independent Hamiltonian system, chaos implies an exponential dependence on initial conditions. A time-independent Hamiltonian of the form
H(r, p) = p^2/2 + V(r)   (1)
leads immediately to the evolution equation
d^2 r_a / dt^2 = − ∂V(r)/∂r_a .   (2)
Whether or not an orbit generated as a solution to this equation is chaotic depends on how the orbit responds to an infinitesimal perturbation. In particular, one definition of Lyapunov exponents is that they represent the average eigenvalues of the linearised stability matrix as evaluated along the orbit in an asymptotic t → ∞ limit. [7] To understand whether or not the orbit is chaotic is thus equivalent to understanding the qualitative properties of the linearised evolution equation,
d^2 δr_a / dt^2 = − [ ∂^2 V / ∂r_b ∂r_a ]_{r_0(t)} δr_b .   (3)
At each point along the orbit, this relation can be viewed as an eigenvalue equation, with different eigenvectors δr A satisfying
d^2 δr_A / dt^2 = − Ω_A^2(t) δr_A ,   (4)
where Ω_A^2 represents some (in general) complicated function of time. Formally, one can expand Ω_A^2 in a complete set of modes, both discrete and continuous, so that
d^2 δr_A / dt^2 = − [ C_0^A + Σ_α C_α^A cos(ω_α t + ϕ_α) ] δr_A ,   (5)
where of course the sum is interpreted as a Stieltjes integral. The important question is then what sorts of solutions this equation admits.
Were Ω_A^2 constant and negative, this corresponding to an imaginary frequency, it is clear that the solution to eq. (4) would diverge exponentially, so that the unperturbed orbit must be chaotic. This is in fact the essence of Hopf's [8] proof that (in modern language) a geodesic flow on a compact manifold of constant negative curvature forms a K-flow. However, chaos does not require that Ω_A^2 be negative. Indeed, as stressed, e.g., by Pettini [9] in a somewhat more formal setting, eq. (4) can admit solutions that grow exponentially even if Ω_A^2 is everywhere positive.
As a simple example, suppose that, in eq. (5), only one time-dependent mode need be considered, so that (with an appropriate rescaling) this relation reduces to the Mathieu equation [10]
d^2 δr_A / dt^2 = − (α + β cos 2t) δr_A .   (6)
For positive α and |β| < α, Ω^2 is always positive; nevertheless, for appropriate values of α and β, eq. (6) exhibits a parametric instability which triggers solutions that grow exponentially in time.
Eq. (6) provides a simple way to understand physically why, if one probes a curve of initial conditions in the phase space of some Hamiltonian system, one finds generically that that curve decomposes into disjoint regular and chaotic regions. Moving along the phase space curve corresponds to motion through the α−β Mathieu plane, but a generic curve in this plane will typically intersect both stable and unstable regions, i.e., regions where δr_A remains bounded and regions where δr_A grows exponentially.
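The location of these stable and unstable regions is easy to probe numerically. The following sketch (Python; illustrative, not part of the original analysis) integrates the two fundamental solutions of eq. (6) over one forcing period π and applies the standard Floquet criterion: solutions remain bounded iff the trace of the monodromy matrix satisfies |tr M| ≤ 2, and grow exponentially (parametric instability) otherwise:

```python
import numpy as np

def mathieu_monodromy_trace(alpha, beta, n_steps=2000):
    """Trace of the monodromy matrix for x'' = -(alpha + beta cos 2t) x.
    The forcing has period pi; by Floquet theory the motion is stable
    iff |trace| <= 2.  Plain RK4 integration of the two fundamental
    solutions over one period."""
    def rhs(t, y):
        x, v = y
        return np.array([v, -(alpha + beta * np.cos(2.0 * t)) * x])

    def integrate(y0):
        y = np.array(y0, dtype=float)
        t, h = 0.0, np.pi / n_steps
        for _ in range(n_steps):
            k1 = rhs(t, y)
            k2 = rhs(t + h / 2, y + h / 2 * k1)
            k3 = rhs(t + h / 2, y + h / 2 * k2)
            k4 = rhs(t + h, y + h * k3)
            y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
            t += h
        return y

    col1 = integrate([1.0, 0.0])   # solution with x(0) = 1, x'(0) = 0
    col2 = integrate([0.0, 1.0])   # solution with x(0) = 0, x'(0) = 1
    return col1[0] + col2[1]       # trace of the monodromy matrix
```

For example, (α, β) = (2.0, 0.5) lies between the resonance tongues and is stable, while (α, β) = (1.0, 1.0) sits inside the principal tongue and yields |tr M| > 2; in the β = 0 limit the trace reduces to 2 cos(π√α).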
Given this logic, it is easy to see what changes might arise if the unperturbed evolution equation is altered to allow for the expansion of the Universe. Suppose that one is considering a Universe idealised as a spatially flat Friedmann cosmology, for which
ds^2 = − dt^2 + a^2(t) δ_{ab} dx^a dx^b .   (7)
The natural starting point for most field theories is then an action
S = ∫ d^4x a^3(t) [ (∂_t Φ)^2 − a^{−2} δ^{ab} ∂_a Φ ∂_b Φ − V(Φ) ]   (8)
which leads to mode equations of the form
d^2 φ_k / dt^2 + 3 (ȧ/a) dφ_k/dt = − (k^2/a^2) φ_k − ∂V/∂φ_k .   (9)
Similarly, when formulated in the average co-moving frame, the evolution equation for a particle moving in a fixed potential yields a proper peculiar velocity satisfying [11]

dv_a/dt + (ȧ/a) v_a = − ∂V(r, a(t))/∂r_a .   (10)
In either case, eq. (2) has been changed in two significant ways, namely through the introduction of a frame-dragging term ∝ ȧ/a and the explicit time-dependence now entering into the right hand side. The frame-dragging contribution can always be scaled away by a redefinition of the basic field variable, but the time-dependence on the right hand side cannot in general be eliminated. It follows that, generically, eq. (2) will be replaced by
d^2 r_a / dt^2 = − ∂V(r, a(t))/∂r_a .   (11)
The simplest case arises when the time-dependence enters only as an overall multiplicative factor, so that
d^2 r_a / dt^2 = − R[a(t)] ∂V(r)/∂r_a .   (12)
To the extent that perturbations of a solution to eq. (2) can be represented qualitatively by the Mathieu eq. (6), it is not unreasonable to suppose that perturbations of its time-dependent generalisation (12) can be represented by a generalised Mathieu equation of the form
d^2 δr_A / dt^2 = − R(t) (α + β cos 2t) δr_A ,   (13)
or, perhaps, some generalisation thereof in which 2t is replaced by some T(t).
The qualitative character of the solutions to this equation is easy to predict theoretically and to corroborate numerically, at least when the time-dependence of R is not too strong and this time variability can be treated in an adiabatic approximation: There is still a sharp distinction between stable and unstable behaviour, but "unstable" does not in general correspond to strictly exponential growth. For the true Mathieu equation (6), unstable solutions do grow exponentially, with δr_A ∝ exp(ωt), but allowing for a nontrivial R(t) leads instead to solutions of the form
δr_A ∼ exp[ ∫ R^{1/2}(t) ω dt ] .   (14)
If, e.g., R(t) ∝ t^p, chaos should correspond to the existence of perturbations that grow as
δr_A ∼ exp[ t^{1+p/2} ] .   (15)
In other words, the evolution of δr_A will be sub- or super-exponential. Two other points should also be clear. First and foremost, the distinction between regular and chaotic should no longer be absolute. Equation (13) can be interpreted as an ordinary Mathieu equation with time-dependent "constants" α̃ = R^{1/2} α and β̃ = R^{1/2} β. However, the fact that these "constants" change in time means that the equation satisfied by δr_A is drifting through the α−β plane, so that the solutions can drift into and/or out of resonance. In other words, the behaviour of a small perturbation can in principle change from stable to unstable and/or vice versa, which corresponds to the possibility of transitions between regular and chaotic behaviour.
The other point is that, if the time-dependence is too strong, the adiabatic approximation could fail and the expected behaviour could be quite different. In particular, for R ∝ t^p with p → −2, eq. (15) implies that the subexponential evolution expected for p somewhat smaller than zero will degenerate into a simple power law evolution where, seemingly, all hints of chaos are lost. An important question would seem to be how negative p must become before the distinctions between chaos and regularity are completely obliterated.
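The scaling (15) can be checked directly in the simplest unstable limit of eq. (13): taking β = 0 and α < 0 gives δr'' = |α| R(t) δr, for which a WKB estimate predicts ln δr ∝ t^{1+p/2}. The sketch below (Python; an illustration under these simplifying assumptions with |α| = 1, not the production code of the simulations) integrates this equation with R(t) = t^p, renormalising the perturbation to avoid overflow, and fits the exponent q in ξ(t) = ln||δZ(t)|| ∼ t^q:

```python
import numpy as np

def growth_exponent(p, t_end=100.0, h=0.01):
    """Integrate delta_r'' = t**p * delta_r (the beta = 0, alpha < 0
    limit of eq. (13)) from t = 1 and fit the power q in
    xi(t) = ln|delta_Z(t)| ~ t**q.  Eq. (15) predicts q = 1 + p/2."""
    t = 1.0
    y = np.array([1.0, 0.0])       # (delta_r, d delta_r / dt)
    log_norm = 0.0                 # accumulated log of renormalisation factors
    ts, xis = [], []

    def rhs(t, y):
        return np.array([y[1], (t ** p) * y[0]])

    while t < t_end:
        k1 = rhs(t, y)
        k2 = rhs(t + h / 2, y + h / 2 * k1)
        k3 = rhs(t + h / 2, y + h / 2 * k2)
        k4 = rhs(t + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        norm = np.hypot(y[0], y[1])
        if norm > 1e100:           # renormalise to keep the numbers finite
            log_norm += np.log(norm)
            y /= norm
        ts.append(t)
        xis.append(log_norm + np.log(np.hypot(y[0], y[1])))

    ts, xis = np.array(ts), np.array(xis)
    late = ts > 20.0               # fit only the late-time growth
    q, _ = np.polyfit(np.log(ts[late]), np.log(xis[late]), 1)
    return q
```

For p = 0 the fit returns q ≈ 1 (purely exponential growth), while p = 1 yields q ≈ 1.5, consistent with eq. (15).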
Corroboration of the behaviour predicted in eq. (15) and a partial answer to this last question are provided by the numerical computations described in the following section.
III. NUMERICAL SIMULATIONS
The experiments described here were performed for time-dependent extensions of the potential
V_0(x, y, z) = − (x^2 + y^2 + z^2) + (1/4)(x^2 + y^2 + z^2)^2 − (1/4)(y^2 z^2 + b z^2 x^2 + c x^2 y^2) ,   (16)
with variable parameters b and c of order unity, which constitutes a three-dimensional analogue of the so-called dihedral potential [6]. For b = c = 1, the case treated in greatest detail, this corresponds to a slightly cubed Mexican hat potential. At high energies, the potential is essentially quartic, although the quadratic couplings ensure that there are significant measures of both regular and chaotic orbits. At relatively low energies, somewhat less than zero, orbits are confined to a three-dimensional trough where, qualitatively, they behave like orbits in the "stadium problem," [12] scattering off the walls in a fashion that renders them largely chaotic. Some of the experiments involved allowing for a time-dependence of the form
V(x, y, z, t) = R(t) V_0(x, y, z)   (17)
with R(t) ∝ t^p. Others involved the alternative time-dependence
V(x, y, z, t) = V_0[R(t) x, R(t) y, R(t) z] ,   (18)
again with R(t) ∝ t^p. Individual sets of experiments involved freezing the energy at some fixed value, usually E[V_0] = 10.0, and selecting an ensemble of some 1000 to 5000 initial conditions. Three-degree-of-freedom initial conditions were generated by setting x = 0, z = const, uniformly sampling the energetically allowed regions of the y−z−p_y−p_z hypercube, and then solving for p_x = p_x(y, z, p_y, p_z, E). Two-degree-of-freedom initial conditions were generated by setting x = z = p_z = 0, uniformly sampling the energetically allowed regions of the y−p_y plane, and solving for p_x = p_x(y, p_y, E). The computations were started at t = 1, t = 10, or t = 100 and ran for a total time T = 256 or T = 512, this corresponding in the absence of any time-dependence to ∼ 100−200 characteristic crossing times. The orbital data were recorded at intervals ∆t = 0.25. An estimate of the largest short-time Lyapunov exponent [13] was computed in the usual way [6] by simultaneously evolving a slightly perturbed orbit (initial perturbation δx = 1.0 × 10^{−8}) which was periodically renormalised at intervals ∆t = 1.0 to keep the perturbation small. This led for each orbit to a numerical approximation to the quantity
χ(t) = lim_{δZ(0)→0} (1/t) ln[ ||δZ(t)|| / ||δZ(0)|| ]   (19)
for a phase space perturbation of norm ||δZ|| = (|δr|^2 + |δp|^2)^{1/2}. Since interest focuses on the possibility that "chaos" does not correspond to a purely exponential growth, the data were reinterpreted by partitioning the Lyapunov data into bits of length ∆t = 1.0, each of which probed the growth of the perturbation during the interval ∆t_i:
χ(∆t_i) = [ χ(t_i + ∆t)(t_i + ∆t) − χ(t_i) t_i ] / ∆t .   (20)
These bits were recombined to generate partial sums
ξ(t_i) = Σ_{j=1}^{i−1} χ(∆t_j) = (1/∆t) ln[ ||δZ(t_i + t_0)|| / ||δZ(t_0)|| ] ,   (21)
which capture the net growth of the logarithm of the perturbation. These partial sums were then fit to a polynomial growth law
ξ = A + B t^q .   (22)
A purely exponential growth would yield q = 1; sub- and super-exponential growth correspond, respectively, to q < 1 and q > 1. If the perturbation only grows as a power law, eq. (22) should not yield a reasonable fit.

The principal conclusion of the computations is that, overall, the adiabatic approximation works very well, so that, for a broad range of values of p in the potential (17), eq. (15) appears to be satisfied. This is illustrated in Fig. 1 (a), which exhibits the best fit exponent q as a function of p. The data in this plot combine experiments for both two- and three-degree-of-freedom chaotic orbits which allowed for several different values of b and c. Changing the values of b and/or c can significantly change the rate at which a small initial perturbation grows, thus altering the characteristic value of B in eq. (22), but the exponent q seems independent of these details.
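The procedure just described — a fiducial orbit, a neighbour offset by 10^{−8}, renormalisation at intervals ∆t = 1.0, and the accumulated partial sums ξ(t_i) of eq. (21) — can be sketched as follows for the two-degree-of-freedom potential (eq. (16) with z = 0 and b = c = 1) in the time-independent case (Python; a simplified illustration with a plain RK4 integrator, not the original code):

```python
import numpy as np

def V0(x, y):
    # Two-degree-of-freedom potential: eq. (16) with z = 0 and c = 1
    r2 = x * x + y * y
    return -r2 + 0.25 * r2 * r2 - 0.25 * x * x * y * y

def grad_V0(x, y):
    r2 = x * x + y * y
    return np.array([-2.0 * x + r2 * x - 0.5 * x * y * y,
                     -2.0 * y + r2 * y - 0.5 * y * x * x])

def rhs(w):
    # w = (x, y, px, py); time-independent Hamiltonian flow
    g = grad_V0(w[0], w[1])
    return np.array([w[2], w[3], -g[0], -g[1]])

def rk4_step(w, h):
    k1 = rhs(w)
    k2 = rhs(w + 0.5 * h * k1)
    k3 = rhs(w + 0.5 * h * k2)
    k4 = rhs(w + h * k3)
    return w + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def xi_history(w0, t_total=64.0, h=0.01, dt_renorm=1.0, d0=1e-8):
    """Partial sums xi(t_i) of eq. (21): evolve a fiducial orbit and a
    neighbour offset by d0, renormalising the separation every dt_renorm
    and accumulating the logarithmic growth."""
    w, wp = w0.copy(), w0.copy()
    wp[0] += d0
    steps = int(round(dt_renorm / h))
    xi, out = 0.0, []
    for _ in range(int(round(t_total / dt_renorm))):
        for _ in range(steps):
            w, wp = rk4_step(w, h), rk4_step(wp, h)
        d = np.linalg.norm(wp - w)
        xi += np.log(d / d0)            # chi(dt_i) * dt with dt = 1
        wp = w + (wp - w) * (d0 / d)    # renormalise the perturbation
        out.append(xi)
    return np.array(out)

# Example: an orbit with E = 10, started at (x, y, px, py) = (0, 1, p_x(E), 0)
w0 = np.array([0.0, 1.0, np.sqrt(2.0 * (10.0 - V0(0.0, 1.0))), 0.0])
xi = xi_history(w0, t_total=16.0)
```

The final ξ(t_i) series would then be fit to eq. (22) to extract q; a symplectic integrator and the full three-degree-of-freedom potential would be natural refinements.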
As a concrete example, consider the special case of two-degree-of-freedom orbits. For p = 0, i.e., no time-dependence, initial conditions with E[V_0] = 10.0 evolved in (17) exhibit a clean split into regular and chaotic, with some 72% of the orbits chaotic and the remainder regular. Moreover, for this initial energy cantori are relatively unimportant, so that there are few "sticky" chaotic orbits trapped near regular islands. N[ξ], the distribution of the final values of ξ, is strongly bimodal, and it is almost always easy to distinguish visually between regularity and chaos. For the chaotic orbits one finds (as must be the case) excellent agreement with eq. (15).
As p increases to assume small positive values, the final ξ's continue to yield a bimodal distribution, N[ξ], which indicates that, in terms of their sensitive dependence on initial conditions, the orbits still divide into two distinct classes. However, there is a systematic decrease in the abundance of regular orbits, so much so that, for p > 0.3, there are few if any regular orbits. Moreover, a detailed examination of individual orbits shows that they can exhibit abrupt changes in behaviour between chaotic intervals where ξ grows as t^{1+p/2} and regular intervals where ξ exhibits little if any growth. For 0.3 < p < 0.6 (almost) every orbit in the evolved ensembles of initial conditions seems a member of a single chaotic population with q = 1 + p/2.
However, for larger values of p orbits which began as chaotic subsequently exhibit abrupt transitions to regularity and remain regular for the duration of the integration. This behaviour reflects the fact that, at sufficiently late times, orbits with p > 0 become trapped in one of the four global minima of the potential with V_0(x_0, y_0) = −4/3, located at x_0 = ±(4/3)^{1/2}, y_0 = ±(4/3)^{1/2}, and oscillate in what is essentially an integrable quadratic potential. Indeed, when the orbit becomes trapped close enough to one of these minima, so that V is strictly negative, the orbit quickly loses energy until it comes to sit almost exactly at rest. This implies that, eventually, ξ decreases with time, this reflecting the fact that all orbits in the given basin of attraction are driven towards the same final point.
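The location and depth of these minima are easy to verify: along the diagonal x = y = s of the two-degree-of-freedom potential (eq. (16) with z = 0, c = 1), V_0 = −2s^2 + (3/4)s^4, which is stationary at s^2 = 4/3 with V_0 = −4/3. A quick numerical check (Python, illustration only):

```python
import numpy as np

def V0(x, y):
    # eq. (16) restricted to z = 0, with c = 1
    r2 = x * x + y * y
    return -r2 + 0.25 * r2 * r2 - 0.25 * x * x * y * y

def grad_V0(x, y):
    r2 = x * x + y * y
    return np.array([-2.0 * x + r2 * x - 0.5 * x * y * y,
                     -2.0 * y + r2 * y - 0.5 * y * x * x])

s = np.sqrt(4.0 / 3.0)
for sx in (1.0, -1.0):
    for sy in (1.0, -1.0):
        # the gradient vanishes and V0 = -4/3 at each of the four minima
        assert np.allclose(grad_V0(sx * s, sy * s), 0.0)
        assert np.isclose(V0(sx * s, sy * s), -4.0 / 3.0)
```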
As p decreases from zero to somewhat negative values, two more or less distinct populations again appear to persist, at least initially, although now the relative abundance of chaotic orbits decreases rapidly. For relatively small values of p, say p < −0.5 or so, ξ grows so slowly that the chaotic and regular contributions to N [ξ] exhibit some considerable overlap, and it is no longer easy in every case to distinguish regular from chaotic. For p < −1.0 the relative measure of chaotic orbits appears to be extremely small, and for p < −1.8 it is unclear whether any "chaotic" orbits exist at all. To the extent that "chaotic orbits" do exist, they are nearly indistinguishable from "regular" orbits.
This behaviour can be contrasted with what obtains for two-degree-of-freedom orbits in the same potential V_0 if one now allows for the time-dependence given by eq. (18). Here once again one finds that changing p leads to sub- or super-exponential sensitivity and, as is evident from Fig. 1 (b), that increasing p tends to yield larger values of q. Moreover, there is often tangible evidence for two distinct types of orbits, seemingly chaotic and regular. However, the details change appreciably.
In this case, one finds that increasing p from zero to slightly positive values tends to increase the overall abundance of regular orbits, and that even those orbits which seem chaotic overall often exhibit regular intervals during which ξ exhibits little, if any, growth. For values as large as p = 0.4, it seems that most -- albeit not all -- of the orbits are regular nearly all the time. However, for somewhat larger values of p the relative importance of chaos again begins to increase, although one continues to observe regular intervals. For sufficiently high values of p, one finds that, as for the potential (17), all the orbits eventually get trapped near one of the four minima of the potential, at which point the orbits become completely regular and ξ becomes negative.
Alternatively, if one passes from zero to negative values of p, one finds that the relative abundance of regular orbits rapidly decreases. Indeed, for p < −0.1 or so there appear to be virtually no regular orbits. However, the chaos is vitiated in the sense that the growth of a small perturbation is decidedly subexponential. Indeed, as p decreases, one quickly reaches a point where the dependence on initial conditions is so weak that, even though it seems reasonable to term the orbits chaotic, that chaos must be viewed as extremely weak.
In terms of their sensitive dependence on initial conditions, orbits in these time-dependent Hamiltonian systems tend to divide relatively cleanly into two distinct classes; and, for the case of a time-dependence of the form given by eq. (17), those that exhibit such a sensitive dependence are well fit overall by the scaling predicted by eq. (15). However, to justify designating these orbit classes "regular" and "chaotic," it is important to verify that these distinctions also coincide with the general shapes of the orbits, as manifested visually or through the computation of a Fourier transform. This is in fact easily done. Orbits that are chaotic in terms of their sensitive dependence on initial conditions tend systematically to look "more irregular" and to have "messier" Fourier spectra than do those which do not manifest a sensitive dependence on initial conditions. For time-independent Hamiltonian systems, one can distinguish between regular and chaotic by determining the extent to which most of the power is concentrated in a few special frequencies: in a t → ∞ limit, the power spectrum for a regular orbit will only have support at a countable set of frequencies whereas a chaotic orbit will typically have power for a continuous range of frequencies. However, this is not the case for a time-dependent Hamiltonian system! As the energy changes in time, the characteristic frequencies of an orbit must change, so that a long time integration necessarily yields broad band power. However, what is true for a regular orbit is that this power tends to vary comparatively smoothly with frequency.
This can be understood easily in the context of an adiabatic approximation. At any given instant, it makes sense to speak of the two principal frequencies that define (say) a two-degree-of-freedom loop orbit; but, as the energy changes, the values of these two frequencies will change continuously with time. Indeed, the phase space can eventually evolve to the point that an orbit that starts as a loop becomes unstable and is transformed into a chaotic orbit or, perhaps, a different type of regular orbit.
Representative examples of regular and chaotic orbits in time-dependent Hamiltonians are given in Figs. 6 and 7 which, respectively, exhibit data generated for the potential (17) with p = −0.4 and the potential (18) with p = 0.3. [14] In each Figure, the left hand panels correspond to two-degree-of-freedom chaotic orbits; the right hand sides correspond to two-degree-of-freedom regular orbits. The top panels display the orbits in the x − y plane. The middle panels exhibit the power spectra, |x(ω)| and |y(ω)|. The bottom panels show the total ξ(t).
IV. PHYSICAL IMPLICATIONS
At least for the time-dependent potentials described in this paper, it seems possible to make sharp distinctions between regularity and chaos, even if these distinctions are not absolute: as its energy changes, an orbit can evolve from regular to chaotic and/or vice versa. However, the time-dependence alters the exponential dependence on initial conditions manifested in the absence of any time-dependence, yielding instead a sub- or super-exponential sensitivity. For the models described in Section III, p < 0 corresponds physically to an expanding Universe; and, as such, the computations corroborate the physical expectation that the expansion should suppress chaos. Whether or not in the real Universe this expansion is strong enough to kill the chaos completely depends on the details of the assumed behaviour of both the scale factor and the dynamical model.
But why should one care? Why would the presence or absence of chaos matter? Perhaps the most important implication of chaos for the bulk evolution of self-gravitating systems is its potential role in violent relaxation, the coarse-grained evolution of a non-dissipative system towards a well-mixed state. As formulated originally by Lynden-Bell, [15] violent relaxation is a phase mixing process whereby a generic blob of collisionless matter, be it localised in space or characterised by any other phase space distribution, tends to disperse until it approaches some equilibrium or steady state. The crucial question is one of efficiency. How quickly, and how efficiently, will the matter disperse? At a given level of resolution, how long must one wait before the matter constitutes a reasonable sampling of a near-equilibrium?
The important point, then, is that the answer to these questions depends crucially on whether the flow be regular or chaotic. [16] If, in the absence of an expanding Universe, the matter executes regular, nonchaotic trajectories, the approach towards equilibrium will be comparatively inefficient. As probed by a coarse-grained distribution function and/or lower order phase space moments, there is a coarse-grained evolution towards equilibrium, but it proceeds only as a power law in time. If, however, the matter executes chaotic orbits, this approach will be exponential in time, proceeding at a rate that correlates with the values of the positive Lyapunov exponents.
The obvious inference is that to the extent that the expansion of the Universe weakens chaos, it should also weaken this chaotic mixing. This suggests that phase mixing will be a comparatively inefficient mechanism at early times, when the dynamics is dominated by the overall expansion, and that it can only begin to play an important role at later times, once an overdense region has "pinched off" from the overall expansion and begun to evolve more or less independently.
These general trends are manifested in Figs. 2 (a)-(f), which exhibit the final N[ξ] at t = 266 for integrations with, respectively, p = −1.8, p = −0.5, p = 0.0, p = 0.2, p = 0.5, and p = 2.5. Further insights into the behaviour of small perturbations can be inferred from Figs. 3 (a)-(f), which plot ξ(t) for 150 randomly chosen orbits from each of these integrations.
This behaviour is exhibited in Figs. 4 (a)-(f), which plot N[ξ] at t = 266 for integrations with, respectively, p = −0.6, p = −0.3, p = −0.05, p = 0.5, p = 0.9, and p = 1.25. Figs. 5 (a)-(f) plot ξ(t) for 150 randomly chosen orbits from each of these integrations.
FIG. 2. (a) Normalised distribution N[ξ(t = 522)] for two-degree-of-freedom orbits with a = b = 1 evolved for the interval 10 < t < 266 in the potential V = t^p V_0(r) with p = −1.8. (b) The same for p = −0.5. (c) p = 0.0. (d) p = 0.2. (e) p = 0.5. (f) p = 2.5.

FIG. 3. (a) ξ(t) for 150 representative two-degree-of-freedom orbits evolved in the potential V = t^p V_0(r) with p = −1.8. (b) The same for p = −0.5. (c) p = 0.0. (d) p = 0.2. (e) p = 0.5. (f) p = 2.5.

FIG. 4. (a) Normalised distribution N[ξ(t = 522)] for two-degree-of-freedom orbits with a = b = 1 evolved for the interval 10 < t < 266 in the potential V = V_0(t^p r) with p = −0.6. (b) The same for p = −0.3. (c) p = −0.05. (d) p = 0.5. (e) p = 0.9. (f) p = 1.25.

FIG. 5. (a) ξ(t) for 150 representative two-degree-of-freedom orbits evolved in the potential V = V_0(t^p r) with p = −0.6. (b) The same for p = −0.3. (c) p = −0.05. (d) p = 0.3. (e) p = 0.9. (f) p = 1.25.

FIG. 6. (a) A chaotic two-degree-of-freedom orbit evolved in the potential (17) with p = −0.4. (b) A regular orbit evolved with the same value of p. (c) The Fourier transformed quantities 10^{−3}|x(ω)| and 10^{−3}|y(ω)| (the latter translated upwards by 0.5) generated from the orbit in (a). (d) The same for the orbit in (b). (e) ξ(t) for the orbit in (a). (f) The same for the orbit in (b).
FIG. 7 .
7(a) A chaotic two-degree-of-freedom orbit evolved in the potential (18) with p = 0.3. (b) A regular orbit evolved with the same value of p. (c) The Fourier transformed quantities |x(ω)| and |y(ω)| (the latter translated upwards by 40) generated from the orbit in (a). (d) The same for the orbit in (b). (e) ξ(t) for the orbit in (a). (f) The same for the orbit in (b).
ACKNOWLEDGMENTS

The author would like to acknowledge arguments with himself, most of which he lost.
Strictly speaking, orbits need not be aperiodic if and only if they exhibit an exponentially sensitive dependence on initial conditions but, as a practical matter, these two characterisations of chaos typically go hand in hand. See, e.g., M. Tabor, Chaos and Integrability in Nonlinear Dynamics (Wiley, New York, 1989).
The regular examples in Figs. 6 and 7 both correspond to loop orbits, which are characterised by a net sense of rotation. However, with or without a time-dependence these potentials also admit large numbers of so-called box orbits which exhibit no net sense of rotation and which, topologically, resemble Lissajous figures.
FIG. 1. (a) Best fit exponent q for a growth rate ∝ exp(at^q) for two-degree-of-freedom chaotic orbits evolving in the potential V = t^p V0(r) with a = b = 1. The line yields the prediction of an adiabatic approximation. (b) The same, except for the potential V = V0(t^p r).
Genome-Wide Genetic Association of Complex Traits in Heterogeneous Stock Mice

M Scutari, I Mackay, D Balding, W Valdar, L C Solberg, D Gauguier, S Burnett, P Klenerman, W O Cookson

Theor Appl Genet
The prediction of phenotypic traits using high-density genomic data has many applications such as the selection of plants and animals of commercial interest; and it is expected to play an increasing role in medical diagnostics. Statistical models used for this task are usually tested using cross-validation, which implicitly assumes that new individuals (whose phenotypes we would like to predict) originate from the same population the genomic prediction model is trained on. In this paper we propose an approach based on clustering and resampling to investigate the effect of increasing genetic distance between training and target populations when predicting quantitative traits. This is important for plant and animal genetics, where genomic selection programs rely on the precision of predictions in future rounds of breeding. Therefore, estimating how quickly predictive accuracy decays is important in deciding which training population to use and how often the model has to be recalibrated. We find that the correlation between true and predicted values decays approximately linearly with respect to either F_ST or mean kinship between the training and the target populations. We illustrate this relationship using simulations and a collection of data sets from mice, wheat and human genetics.
DOI: 10.1371/journal.pgen.1006288
arXiv: 1509.00415
Received: September 4, 2015; Accepted: August 10, 2016
Editor: John Michael Hickey, The Roslin Institute, The University of Edinburgh, UNITED KINGDOM
Data Availability Statement: Data are publicly available within the following papers: Bentley AR, Scutari M, Gosman N, Faure S, Bedford F, Howell P, et al., Applying Association Mapping and Genomic Selection to the Dissection of Key Traits in Elite European Wheat [42]; and Li JZ, Absher DM, Tang H, Southwick AM, Casto AM, Ramachandran S, et al., Worldwide Human Relationships Inferred from Genome-Wide Patterns of Variation [43].
Author Summary
The availability of increasing amounts of genomic data is making the use of statistical models to predict traits of interest a mainstay of many applications in life sciences. Applications range from medical diagnostics for common and rare diseases to breeding characteristics such as disease resistance in plants and animals of commercial interest. We explored an implicit assumption of how such prediction models are often assessed: that the individuals whose traits we would like to predict originate from the same population as those that are used to train the models. This is commonly not the case, especially in the case of plants and animals that are parts of selection programs. To study this problem we proposed a model-agnostic approach to infer the accuracy of prediction models as a function of two common measures of genetic distance. Using data from plant, animal and human genetics, we find that accuracy decays approximately linearly in either of those measures. Quantifying this decay has fundamental applications in all branches of genetics, as it measures how studies generalise to different populations.

Introduction

Predicting unobserved phenotypes using high-density SNP or sequence data is the foundation of many applications in medical diagnostics [1][2][3], plant [4,5] and animal [6] breeding. The accuracy of genomic predictions will depend on a number of factors: relatedness among genotyped individuals [7,8]; the density of the markers [7,9,10]; and the genetic architecture of the trait, in particular the allele frequencies of causal variants [11,12] and the distribution of their effect sizes [7].
Most of these issues have been explored in the literature, and have been tackled in various ways either from a methodological perspective or by producing larger data sets and more accurate phenotyping. However, the extent to which predictive models generalise from the populations used to train them to distantly related target populations appears not to have been widely investigated (two exceptions are [7,13]). The accuracy of prediction models is often evaluated in a general setting using cross-validation with random splits, which implicitly assumes that test individuals are drawn from the same population as the training sample; in that case accuracy to predict phenotypes is only bounded by heritability, although unaccounted "missing heritability" is common [14,15]. However, this assumption is violated in many practical applications, such as genomic selection, that require predictions of individuals that are genetically distinct from the training sample: for instance, causal variants may differ in both frequency and effect size between different ancestry groups (in humans, e.g. [16] for lactose persistence), subspecies (in plants and animals, e.g. [17] for rice) or even families [18]. In such cases cross-validation with random splits may overestimate predictive accuracy due to the mismatch between model validation and the prediction problem of interest [19,20] even when population structure is taken into account [21]. The more distantly the target population is related to the training population, the lower the average predictive accuracy of a genomic model; this has been demonstrated on both simulated and real dairy cattle data [20,22,23].
In this paper we will investigate the relationship between genetic distance and predictive accuracy in the prediction of quantitative traits. We will simulate training and target samples with varying genetic distances by splitting the training population into a sequence of pairs of subsets with increasing genetic differentiation. We will measure predictive accuracy with Pearson's correlation, which we will estimate by performing genomic prediction from one subset to the other in each pair. Among various measures of relatedness available in the literature, we will consider mean kinship and F_ST, although we will only focus on the latter. We will then study the mean Pearson's correlation as a function of genetic distance, which we will refer to as the "decay curve" of the former over the latter.
This approach is valuable in addressing several key questions in the implementation of genomic selection programs, such as: How often (e.g., in terms of future generations) will the genomic prediction model have to be re-estimated to maintain a minimum required accuracy in the predictions of the phenotypes? How should we structure our training population to maximise that accuracy? Which new, distantly related individuals would be beneficial to introduce in a selection program for the purpose of maintaining a sufficient level of genetic variability?
Materials and Methods
Genomic Prediction Models
A baseline model for genomic prediction of quantitative traits is the genomic BLUP (GBLUP; [24,25]), which is usually written as
y = μ + Zg + ε,  with g ∼ N(0, K σ²_g) and ε ∼ N(0, σ²_ε),   (1)
where g is a vector of genetic random effects, Z is a design matrix that can be used to indicate the same genotype exposed to different environments, K is a kinship matrix and ε is the error term. Many of its properties are available in closed form thanks to its simple definition and normality assumptions, including closed form expressions of and upper bounds on predictive accuracy that take into account possible model misspecification [15]. Other common choices are additive linear regression models of the form
y = μ + Xβ + ε   (2)
where y is the trait of interest; X are the markers (such as SNP allele counts coded as 0, 1 and 2, with 1 the heterozygote); β are the marker effects; and ε are independent, normally-distributed errors with variance σ²_ε. Depending on the choice of the prior distribution for β, we can obtain different models from the literature such as BayesA and BayesB [25], ridge regression [26], the LASSO [27] or the elastic net [28]. The model in Eq (1) is equivalent to that in Eq (2) if the kinship matrix K is computed from the markers X and has the form XX^T, and β ∼ N(0, VAR(β)) [29,30]. In the remainder of the paper we will focus on the elastic net, which we have found to outperform other predictive models on real-world data [31]. This has been recently confirmed in [32].
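The general recipe behind these models (simulate or observe a marker matrix, fit a penalised linear model, score held-out individuals by Pearson's correlation) can be sketched in a few lines. The paper fits the elastic net with glmnet in R; the sketch below is our own illustration in Python, with closed-form ridge regression standing in for the elastic net and all parameter values chosen arbitrarily.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge estimate: beta = (X'X + lam*I)^(-1) X'y."""
    m = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(m), X.T @ y)

rng = np.random.default_rng(1)
n, m = 300, 400
X = rng.integers(0, 3, size=(n, m)).astype(float)   # SNP allele counts 0/1/2
beta = np.zeros(m)
causal = rng.choice(m, size=20, replace=False)
beta[causal] = rng.normal(0.0, 0.5, size=20)        # sparse additive effects
y = X @ beta + rng.normal(0.0, 1.0, size=n)         # phenotype = signal + noise

train, test = np.arange(200), np.arange(200, 300)
mu_x, mu_y = X[train].mean(axis=0), y[train].mean()
b_hat = ridge_fit(X[train] - mu_x, y[train] - mu_y, lam=10.0)

# Predictive accuracy: Pearson correlation on held-out individuals.
pred = (X[test] - mu_x) @ b_hat + mu_y
r = float(np.corrcoef(y[test], pred)[0, 1])
```

With a strongly heritable simulated trait like this one, the held-out correlation is well above zero; swapping in an elastic net only changes the fitting step, not the evaluation.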
Predictive accuracy is often measured by the Pearson correlation (r) between the predicted and observed phenotypes. When we use the fitted values from the training population as the predicted phenotypes, and assuming that the model is correctly specified, r̂² coincides with the proportion of genetic variance of the trait explained by the model and therefore r̂² ⩽ h², the heritability of the trait. (An incorrect model may lead to overfitting, and in that case r̂² ⩾ h².) When using cross-validation with random splits, r̂_CV ⩽ r̂ and typically the difference will be noticeable (r̂_CV ≪ r̂). However, r̂_CV may still overestimate the actual predictive accuracy r̂_D in practical applications where target individuals for prediction are more different from the training population than the test samples generated using cross-validation [14]. This problem may be addressed by the use of alternative model validation schemes that mirror more closely the prediction task of interest; for instance, by simulating progeny of the training population to assess predictive accuracy for a genomic selection program. This approach is known as forward prediction and is common in animal breeding [19,33].
Another possible choice is the prediction error variance (PEV). It is commonly used in conjunction with GBLUP because, for that model, it can be estimated (for small samples) or approximated (for large samples) in closed form from Henderson's mixed model equations [34]. In the general case no closed form estimate is available, but PEV can still be derived from Pearson's correlation [35] for any kind of model as both carry the same information:
PEV = (1 − r̂²) × VAR(y).   (3)
For consistency with our previous work [31] and with [4], whose results we partially replicate below, we will only consider predictive correlation in the following.
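Eq (3) is trivial to apply to any model that reports a predictive correlation; a one-function sketch (the function name is ours, purely for illustration):

```python
import numpy as np

def pev(r, y):
    """Prediction error variance from a predictive correlation r and
    observed phenotypes y, as in Eq (3): PEV = (1 - r^2) * VAR(y)."""
    return (1.0 - r ** 2) * np.var(y)

y = np.array([1.0, 2.0, 3.0, 4.0])
print(pev(0.0, y))   # a useless predictor leaves all of VAR(y) unexplained
print(pev(1.0, y))   # a perfect predictor leaves none
```

This is why the two measures "carry the same information": given VAR(y), each can be computed from the other.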
Kinship Coefficients and F_ST
A common measure of kinship from marker data is average allelic correlation [24,36], which is defined as K = [k_ij] with
k_ij = (1/m) Σ_{k=1}^{m} X̃_ik X̃_jk   (4)
where X̃_ik and X̃_jk are the standardised allele counts for the ith and jth individuals and the kth marker. An important property of allelic correlation is that it is inversely proportional to the Euclidean distance between the marker profiles X_i, X_j of the corresponding individuals: if the markers are standardised,

√(2n − 2k_ij) = √(2n − 2 Σ_{k=1}^{m} X̃_ik X̃_jk) = √(Σ_{k=1}^{m} (X̃²_ik + X̃²_jk − 2 X̃_ik X̃_jk)) = √(Σ_{k=1}^{m} (X̃_ik − X̃_jk)²).   (5)
This result has been used in conjunction with clustering methods such as k-means or partitioning around medoids (PAM; [37]) to produce subsets of minimally related individuals from a given sample by maximising the Euclidean distance [14,19,38].
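The identity behind Eq (5) is easy to check numerically: with column-standardised markers, the allelic-correlation kinship of Eq (4) and the Euclidean distance between two marker profiles carry the same information. A small numpy check (all variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(20, 50)).astype(float)  # 20 individuals, 50 markers

# Standardise each marker (column); drop any monomorphic columns first.
std = X.std(axis=0)
Xs = (X[:, std > 0] - X[:, std > 0].mean(axis=0)) / std[std > 0]
m = Xs.shape[1]

# Eq (4): allelic-correlation kinship K = [k_ij].
K = (Xs @ Xs.T) / m

# Eq (5): the Euclidean distance between profiles i and j can be
# recovered from the cross-product term m * k_ij.
i, j = 0, 1
dist = np.linalg.norm(Xs[i] - Xs[j])
via_k = np.sqrt((Xs[i] ** 2).sum() + (Xs[j] ** 2).sum() - 2.0 * m * K[i, j])
```

Since the two quantities agree, maximising Euclidean distance (as k-means and PAM do) is the same as minimising allelic correlation.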
At the population level, the divergence between two populations due to drift, environmental adaptation, or artificial selection is commonly measured with F_ST. Several estimators are available in the literature, and reviewed in [39]. In this paper we will adopt the estimator from [40], which is obtained by maximising the Beta-Binomial likelihood of the allele frequencies as a function of F_ST. F̂_ST then describes how far the target population has diverged from the training population, which translates to "how far" a genomic prediction model will be required to predict. In terms of kinship, we know from the literature that the mean kinship coefficient k between two individuals in different populations is inversely related to F_ST [41]: kinship can be interpreted as the probability that two alleles are identical by descent, which is inversely related to F_ST, which is a mean inbreeding coefficient. Intuitively, the fact that individuals in the two populations are closely related implies that the latter have not diverged much from the former: if k is large, the marker profiles (and therefore the corresponding allele frequencies) will on average be similar. As a result, any clustering method that uses the Euclidean distance to partition a population into subsets will maximise their F_ST by minimising k. The simulations and data analyses below confirm experimentally that k and F̂_ST are highly correlated, which makes them equivalent in building the decay curves; thus we will report results only for F̂_ST (see Section C in S1 Text).
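For illustration, a simple moment-based (Hudson-type) F_ST estimator can stand in for the Beta-Binomial maximum-likelihood estimator of [40] that the paper actually uses; the sketch below is our own and only meant to show the shape of the computation.

```python
import numpy as np

def fst_hudson(X1, X2):
    """Ratio-of-averages Hudson-type Fst between two populations of
    diploid genotypes coded 0/1/2 (rows = individuals, cols = markers)."""
    n1, n2 = 2 * X1.shape[0], 2 * X2.shape[0]           # numbers of alleles
    p1, p2 = X1.mean(axis=0) / 2, X2.mean(axis=0) / 2   # allele frequencies
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    keep = den > 0                                      # skip fixed markers
    return float(num[keep].sum() / den[keep].sum())

rng = np.random.default_rng(2)
m = 2000
p = rng.uniform(0.1, 0.9, m)
pa = np.clip(p + rng.normal(0, 0.15, m), 0.01, 0.99)    # two diverged
pb = np.clip(p - rng.normal(0, 0.15, m), 0.01, 0.99)    # daughter populations

Xa = rng.binomial(2, pa, (100, m)).astype(float)
Xb = rng.binomial(2, pb, (100, m)).astype(float)
Xc = rng.binomial(2, pa, (100, m)).astype(float)        # same frequencies as Xa

far, close = fst_hudson(Xa, Xb), fst_hudson(Xa, Xc)
```

Two samples drawn from the same frequencies give an estimate near zero, while the diverged pair gives a clearly positive value, matching the interpretation of F̂_ST as "how far" the target population has drifted.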
Real-World Data Sets
We evaluate our approach to construct decay curves for predictive accuracy using two publicly-available real-world data sets with continuous phenotypic traits, and a third, human, genotype data set.
WHEAT. We consider 376 wheat varieties from the TriticeaeGenome project, described in [4]. Varieties collected from those registered in France (210 varieties), Germany (90 varieties) and the UK (75 varieties) between 1946 and 2007 were genotyped using a combination of 2712 predominantly DArT markers. Several traits were recorded; in this paper we will focus on grain yield, height, flowering time, and grain protein content. Genotype-environment interactions were accounted for by an incomplete block design over trial fields in different countries, to prevent genomic prediction being biased by the country of registration of each variety. As in [4], we also group varieties in three groups based on their year of registration: pre-1990 (103 varieties), 1990 to 1999 (120 varieties), and post-1999 (153 varieties).
MICE.
The heterogeneous mice population from [42] consists of 1940 individuals genotyped with 12545 SNPs; among the recorded traits, we consider growth rate and weight. The data include a number of inbred families, the largest being F005 (287 mice), F008 (293 mice), F010 (332 mice) and F016 (309 mice).
HUMAN. The marker profiles from the Human Genetic Diversity Panel [43] include 1043 individuals from different ancestry groups: 151 from Africa, 108 from America, 435 from Asia, 167 from Europe, 146 from the Middle East and 36 from Oceania. Each has been genotyped with 650,000 SNPs; for computational reasons we only use those in chromosomes 1 and 2, for a total of 90,487 SNPs.
All data sets have been pre-processed by removing markers with minor allele frequencies < 1% and those with > 20% missing data. The missing data in the remaining markers have been imputed using the impute R package [44]. Finally, we removed one marker from each pair whose allele counts have correlation > 0.95 to increase the numerical stability of the genomic prediction models.
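The filtering steps above translate directly into a few lines of array code. The sketch below applies the MAF and pairwise-correlation filters to simulated genotypes; the imputation step is omitted (the paper uses the impute R package), and the greedy pruning order is our own choice.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.binomial(2, rng.uniform(0.0, 0.5, 200), (100, 200)).astype(float)

# Remove markers with minor allele frequency < 1%.
p = X.mean(axis=0) / 2
maf = np.minimum(p, 1 - p)
X = X[:, maf >= 0.01]

# Remove one marker from each pair whose allele counts have
# correlation > 0.95 (greedily dropping the later marker of each pair).
C = np.corrcoef(X, rowvar=False)
keep = np.ones(X.shape[1], dtype=bool)
for i in range(X.shape[1]):
    if keep[i]:
        too_close = np.where(C[i] > 0.95)[0]
        keep[too_close[too_close > i]] = False
X = X[:, keep]
```

After these two passes every retained marker is polymorphic and no retained pair is nearly collinear, which improves the numerical stability of the downstream penalised regressions.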
Decay Curves for Predictive Accuracy
We estimate a decay curve of r̂_D as a function of F_ST as follows:
1. Produce a pair of minimally related subsets (i.e., with maximum F_ST) from our training population using k-means clustering, k = 2 in R [45]. PAM was also considered as an alternative clustering method, but produced subsets identical to those from k-means for all the data sets studied in this paper. The larger of the two subsets will be used to train the genomic prediction model, and will be considered the ancestral population for the purposes of computing F_ST; the smaller will be the target used for prediction. In the following we will call them the training subsample and the target subsample, respectively.
2. Compute F̂^(0)_ST and r̂^(0)_D for the pair of subsets with a genomic prediction model. We compute F̂^(0)_ST using the Beta-Binomial estimator from [40]; and we compute r̂^(0)_D with the elastic net implementation in the glmnet R package [46]. Other models can be used: the proposed approach is model-agnostic as it only requires the chosen model to be able to produce estimates of its predictive correlation. The optimal values for the penalty parameters of the elastic net are chosen to maximise r̂_CV on the training subset using 5 runs of 10-fold cross-validation as in [47].

3. Generate further pairs of subsamples with progressively smaller genetic distance than the original pair, and compute the corresponding (F̂^(m)_ST, r̂^(m)_D) as in step 2.

4. Estimate the decay curve from the (F̂^(m)_ST, r̂^(m)_D) points ([48]), which can be used to produce both the mean and its 95% confidence interval at any point in the range of observed F̂_ST. We denote with r̂_D the resulting estimate of predictive correlation for any given F̂_ST.
The pair of subsets produced by k-means corresponds to m = 0, hence the notation (F̂^(0)_ST, r̂^(0)_D), and we increase m by steps of 2 to 20 until the F̂_ST between the subsamples is at most 0.005. We choose the stepping for each data set to be sufficiently small to cover the interval [0, F̂^(0)_ST] as uniformly as possible. The larger m is, the smaller we can expect F̂^(m)_ST to be. We repeat steps 3(a) and 3(b) 40 times for each m to achieve the precision needed for an acceptably smooth curve.
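The subsample-generation scheme can be sketched numerically. The snippet below is our illustration, not the authors' code: a minimal Lloyd's k-means (the paper uses R's kmeans) produces the maximally differentiated pair, and exchanging randomly chosen individuals between the two subsets — our assumed mechanism for generating the intermediate pairs — yields pairs with progressively smaller divergence, here measured with a crude ratio-of-averages statistic rather than the Beta-Binomial estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

def divergence(X1, X2):
    # Crude ratio-of-averages divergence between allele frequencies
    # (a stand-in for the Beta-Binomial Fst estimator of the paper).
    p1, p2 = X1.mean(axis=0) / 2, X2.mean(axis=0) / 2
    den = p1 * (1 - p2) + p2 * (1 - p1)
    keep = den > 0
    return float(((p1 - p2)[keep] ** 2).sum() / den[keep].sum())

def kmeans2(X, iters=25):
    # Minimal Lloyd's algorithm (k = 2), seeded with the two most
    # distant marker profiles for a deterministic, well-separated start.
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    i0, j0 = np.unravel_index(d2.argmax(), d2.shape)
    c = X[[i0, j0]].astype(float)
    for _ in range(iters):
        lab = ((X[:, None, :] - c[None]) ** 2).sum(-1).argmin(1)
        for g in (0, 1):
            if (lab == g).any():
                c[g] = X[lab == g].mean(axis=0)
    return lab

# Simulated sample containing two latent populations.
m = 400
p = rng.uniform(0.2, 0.8, m)
X = np.vstack([rng.binomial(2, np.clip(p + 0.15, 0.05, 0.95), (100, m)),
               rng.binomial(2, np.clip(p - 0.15, 0.05, 0.95), (100, m))]).astype(float)

# Step 1: maximally differentiated pair of subsets.
lab = kmeans2(X)
a, b = np.where(lab == 0)[0], np.where(lab == 1)[0]

# Exchanging individuals between the two subsets produces pairs of
# subsamples with progressively smaller divergence.
divergences = []
for n_swap in (0, 10, 20, 40):
    n_swap = min(n_swap, len(a), len(b))
    a2, b2 = a.copy(), b.copy()
    sa = rng.choice(len(a2), n_swap, replace=False)
    sb = rng.choice(len(b2), n_swap, replace=False)
    a2[sa], b2[sb] = b2[sb].copy(), a2[sa].copy()
    divergences.append(divergence(X[a2], X[b2]))
```

The recorded (divergence, predictive correlation) pairs — here only the divergences are shown — are the raw material from which the decay curve is smoothed.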
As an alternative approach, we also consider estimating the decay rate of r̂_D by linear regression of the r̂^(m)_D against the F̂^(m)_ST; we will denote the resulting predictive accuracy estimates with r̂_L. For any set value of F̂_ST, we compare the r̂_L at that F̂_ST with the corresponding value r̂_D from the decay curve, estimated by averaging all the r̂^(m)_D for which |F̂^(m)_ST − F̂_ST| ⩽ 0.01. Assuming that the decay curve is in fact a straight line reduces the number of subsamples that we need to generate, enforces smoothness and makes it possible to compute r̂_L for values of F_ST larger than F̂^(0)_ST. On the other hand, the estimated r̂_L will be increasingly unreliable as r̂_L → 0, because the regression line will provide negative r̂_L instead of converging asymptotically to zero. We also regress the (r̂^(m)_D)² against the (F̂^(m)_ST)² to investigate whether they have a stronger linear relationship than the r̂^(m)_D with the F̂^(m)_ST, as suggested in [22] using simulated genotypes and phenotypes mimicking a dairy cattle population.
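The straight-line variant is just an ordinary least-squares fit of the correlations on the F_ST values; a sketch with invented, purely illustrative points:

```python
import numpy as np

# Hypothetical (Fst, predictive correlation) pairs from subsample pairs.
fst = np.array([0.01, 0.02, 0.04, 0.06, 0.08, 0.10])
r = np.array([0.62, 0.58, 0.51, 0.44, 0.39, 0.31])

slope, intercept = np.polyfit(fst, r, deg=1)

def r_L(f):
    """Linear decay estimate of the predictive correlation at Fst = f."""
    return slope * f + intercept

# The line can extrapolate beyond the largest observed Fst, but it will
# eventually cross zero instead of levelling off, so small r_L values
# should not be trusted.
```

The negative slope is the decay rate; the caveat in the comment mirrors the unreliability of r̂_L as it approaches zero.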
The size of the training (n_TR) and target (n_TA) subsamples is determined by k-means. For the data used in this paper, k-means splits the training populations in two subsamples of comparable size; but we may require a smaller n_TA ≪ n_TR to estimate r̂^(0)_D and the r̂^(m)_D, while at the same time a larger n_TR is needed to fit the genomic prediction model. In that case, we increase n_TR by moving individuals from the target subsample while keeping the F̂^(0)_ST between the two as large as possible. The impact on the estimated F̂_ST is likely to be small, because its precision depends more on the number of markers than on n_TR and n_TA [40]. The estimated r̂^(0)_D and r̂^(m)_D might be inflated because we are altering the subsets, even when F̂_ST does not change appreciably. Their variance, which can be approximated as in [49], decreases linearly in n_TA, but a small n_TA can be compensated by generating more pairs of subsamples for each value of m.
Simulation Studies
We study the behaviour of the decay curves via two simulation studies.

Genomic selection. We simulate a genomic selection program using the wheat varieties registered in the last 5 years of the WHEAT data as founders. The simulation is a forward simulation implemented as follows for 10, 50, 200 and 1000 causal variants, and decay curves are produced for each.
1. We set up a training population of 200 founders: 96 varieties from the WHEAT data, 104 obtained from the former via random mating without selfing using the HaploSim R package [50]. HaploSim assumes that markers are allocated at regular intervals across the genome; we allocated them uniformly in 21 chromosomes (wheat varieties in the WHEAT data are allohexaploid, with 2n = 6x = 42) to obtain roughly the desired amount of recombination and to preserve the linkage disequilibrium patterns as much as possible.
2. We generate phenotypes by selecting causal variants at random among markers with minor allele frequency > 5% and assigning them normally-distributed additive effects with mean zero. Noise is likewise normally distributed with mean zero and standard deviation 1, and the standard deviation of the additive effects is set such that h² ≈ 0.55. We choose this value as the mid-point of a range of heritabilities, [0.40, 0.70], that we consider to be of interest.
3. We fit a genomic prediction model on the whole training population.
4. For 100 times, we perform a sequence of 10 rounds of selection. In each round:
a. we generate the marker profiles of 200 progeny via random mating, again without selfing;
b. we generate the phenotypes for the progeny as in step 2;
c. we compute the F̂_ST between the training population and the progeny generated in 4a;
d. we use the marker profiles from step 4a and the genomic prediction model from 3 to obtain predicted values for the phenotypes, which are then used together with those from step 4b to compute predictive correlation;
e. we select the 20 individuals with the largest phenotypes as the parents of the next round of selection.
5. We compute the average predictive correlation r̄ and the average F̂_ST for each round of selection; these (F̄_ST, r̄) pairs are used as reference points to assess how well the results of the genomic selection simulation are predicted by the decay curve.
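The forward simulation above can be caricatured in a few dozen lines. The sketch below is deliberately crude and entirely ours: it ignores linkage (unlike HaploSim), uses unlinked biallelic markers, and replaces the elastic net with ridge regression; only the structure of steps 2-4 (phenotype generation scaled to h² ≈ 0.55, a model fitted once on the founders, repeated rounds of mating, prediction and truncation selection) is preserved.

```python
import numpy as np

rng = np.random.default_rng(5)
n_founders, m, n_causal = 200, 500, 50

# Founders and a sparse additive trait scaled so that h^2 ~ 0.55
# (noise sd fixed at 1, as in step 2).
X0 = rng.binomial(2, rng.uniform(0.1, 0.9, m), (n_founders, m)).astype(float)
beta = np.zeros(m)
beta[rng.choice(m, n_causal, replace=False)] = rng.normal(0, 1, n_causal)
beta *= np.sqrt((0.55 / 0.45) / (X0 @ beta).var())

def phenotype(X):
    return X @ beta + rng.normal(0, 1, len(X))

def mate(parents, n_prog):
    # Very crude random mating: one allele per marker from each of two
    # distinct parents, with free recombination (no linkage).
    out = np.empty((n_prog, parents.shape[1]))
    for i in range(n_prog):
        pa, pb = parents[rng.choice(len(parents), 2, replace=False)]
        out[i] = rng.binomial(1, pa / 2) + rng.binomial(1, pb / 2)
    return out

# Step 3: fit the model once on the founders (ridge as a stand-in).
y0 = phenotype(X0)
mu_x, mu_y = X0.mean(axis=0), y0.mean()
b_hat = np.linalg.solve((X0 - mu_x).T @ (X0 - mu_x) + 50 * np.eye(m),
                        (X0 - mu_x).T @ (y0 - mu_y))

# Step 4: rounds of selection, tracking predictive correlation.
pop, corrs = X0, []
for _ in range(5):
    prog = mate(pop, 200)                        # 4a
    y_prog = phenotype(prog)                     # 4b
    pred = (prog - mu_x) @ b_hat + mu_y          # 4d
    corrs.append(float(np.corrcoef(y_prog, pred)[0, 1]))
    pop = prog[np.argsort(y_prog)[-20:]]         # 4e: truncation selection
```

Plotting `corrs` against the genetic distance between the founders and each round's progeny gives exactly the kind of (F̄_ST, r̄) reference points used in step 5.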
We then repeat this simulation after adding the varieties available at the end of the second round of selection to the training population while considering the scenario with 200 and 1000 causal variants. The size of the training population is thus increased to 800 varieties, allowing us to explore the effects of a larger sample size and of considering new varieties from the breeding program to update the genomic prediction models when their predictive accuracy is no longer acceptable. In the following, we refer to this second population as the "augmented population" as opposed to the "original population" including only the 200 varieties described in steps 1 and 2 above.
Cross-population prediction. We explore cross-population predictions using the HUMAN data and simulated phenotypes. Similarly to the above, we pick 5, 20, 100, 2000, 10000 and 50000 causal variants at random among those with minor allele frequency > 5% and we assign them normally-distributed effects such that h² ≈ 0.55. The same effect sizes are used for all populations. We then use individuals from Asia as the training population to estimate the decay curves. Those from other continents are the target populations for which we are assessing predictive accuracy, and we compute their F̂_ST and the corresponding predictive correlations r̂_P. We use the (F̂_ST, r̂_P) points as terms of comparison to assess the quality of the curve, which should be close to them or at least cross the respective 95% confidence intervals.
Real-World Data Analyses
Finally, we estimate the decay curves for some of the phenotypes available in the WHEAT and MICE data. For both data sets we also produce and average 40 values of r̂_CV using hold-out cross-validation. In hold-out cross-validation we repeatedly split the data at random into training and target subsamples whose sizes are fixed to be the same as those arising from clustering in step 1 of the decay curve estimation. Then we fit an elastic net model on the training subsamples and predict the phenotypes in the target subsamples to estimate r̂_CV. Ideally, the decay curve should cross the area in which the (F̂_ST, r̂_CV) points cluster.
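The hold-out scheme itself is simple to reproduce; the sketch below (ridge in place of the elastic net, simulated data, all settings ours) averages the predictive correlation over 40 random splits with fixed subset sizes:

```python
import numpy as np

rng = np.random.default_rng(7)

def ridge_predict(X_tr, y_tr, X_te, lam=10.0):
    """Fit centred ridge regression on (X_tr, y_tr), predict X_te."""
    mu_x, mu_y = X_tr.mean(axis=0), y_tr.mean()
    Xc = X_tr - mu_x
    b = np.linalg.solve(Xc.T @ Xc + lam * np.eye(X_tr.shape[1]),
                        Xc.T @ (y_tr - mu_y))
    return (X_te - mu_x) @ b + mu_y

# Simulated genotypes and a moderately heritable additive trait.
n, m = 200, 300
X = rng.binomial(2, rng.uniform(0.1, 0.9, m), (n, m)).astype(float)
beta = rng.normal(0, 0.2, m) * (rng.random(m) < 0.1)
y = X @ beta + rng.normal(0, 1, n)

# Hold-out cross-validation: repeated random splits with fixed sizes.
n_train, r_cv = 140, []
for _ in range(40):
    idx = rng.permutation(n)
    tr, te = idx[:n_train], idx[n_train:]
    pred = ridge_predict(X[tr], y[tr], X[te])
    r_cv.append(float(np.corrcoef(y[te], pred)[0, 1]))

r_cv_mean = float(np.mean(r_cv))
```

Because the random splits keep training and target subsamples genetically close, the resulting points sit at the low-F̂_ST end of the decay curve, as observed in the real-data analyses.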
WHEAT data. For the WHEAT data, we construct decay curves for grain yield, height, flowering time and grain protein content using the French wheat varieties as the training population. UK and German varieties are the target populations, for which we estimate (F̂_ST, r̂_P). Furthermore, we also construct a second decay curve for yield using the varieties registered before 1990 as the training population, as in [4]. Varieties registered between 1990 and 1999, and those registered after 2000, are used as target populations.
MICE data. For the MICE data, we construct decay curves for both growth rate and weight using each of the F005, F008, F010 and F016 inbred families in turn as the training population; the remaining families are used as target populations.
Results
General Considerations
The decay curves from the simulations are shown in Figs 1, 2 and 3, and the corresponding predictive correlations are reported in Tables 1 and 2 and S1 Text. The predictive correlations for the WHEAT and MICE data sets are reported in Table 2, and the decay curves are shown in Figs 1, 2 and 3 and S1 Text. A summary of the different predictive correlations defined in the Methods and discussed here is provided in Table 1.
In all the simulations and the real-world data analyses the r̂_D from the decay curve is close to the linear interpolation r̂_L, considering all the reference populations in Table 2 represented by r̂_L (p = 0.784, see Section D in S1 Text). The range of the predictive correlations r̂_D^(m) around the decay curves varies between 0.05 and 0.10, and it is constant over the range of observed F̂_ST for each curve. It does not appear to be related to either the size of the training subsample or the number of causal variants. This is apparent in particular from the genomic selection simulation, in which both are jointly set to different combinations of values. Similarly, there seems to be no relationship between the spread and the magnitude of the predictive correlations (r̂_D^(m) ∈ [0, 0.75]). This amount of variability is comparable to that of other studies (e.g., the range of the r̂_D^(m) is smaller than that of the cross-validated correlations in [32]) once we take into account that the (F̂_ST^(m), r̂_D^(m)) are individual predictions and are not averaged over multiple repetitions. Furthermore, subsampling further reduces the size of the training subpopulations; and fitting the elastic net requires a search over a grid of values for its two tuning parameters, which may get stuck in local optima.
Real-World Data Analyses
Several interesting points arise from the analysis of the real phenotypes in the WHEAT and MICE data, shown in Table 2 and in Figs B.1, B.2 and B.3 in S1 Text. Firstly, cross-validation always produces pairs of subsamples with F̂_ST ≤ 0.01 and high r̂_CV that are located at the left end of the decay curve. The average F̂_ST is 0.006 for the WHEAT data and 0.001 for the MICE data, and the difference between the average r̂_CV and the corresponding r̂_D is ≤ 0.02 ten times out of 12 (83%, see Table B.4 in S1 Text). The spread of the r̂_CV is also similar to that of the r̂_D^(m). Secondly, we note that in the WHEAT data all decay curves but that for flowering time cross the 95% confidence intervals for the cross-country predictive correlations r̂_P for Germany and the UK reported in [4]. Even in the MICE data, in which all families are near the end or beyond the reach of the decay curves, the latter (or their linear approximations) cross the 95% confidence intervals for the r̂_P 18 times out of 24 (75%). However, we also note that those intervals are wide due to the limited sizes of those populations. Furthermore, the decay curves for the phenotypes in the WHEAT data confirm two additional considerations originally made in [4]. Firstly, [4] noted that the distribution of the Ppd-D1a gene, which is a major driver of flowering time, varies substantially with the country of registration and thus cross-country predictions are not reliable. Fig B.1 in S1 Text shows that the decay curve vastly overestimates the predictive correlation for both Germany and the UK. Splitting the WHEAT data in two halves that contain equal proportions of both alleles of Ppd-D1a and that are genetically closer overall (F̂_ST = 0.04), we obtain a decay curve that fits the predictive correlations reported in the original paper (r̂_D = 0.77, r̂_P = 0.79). Secondly, we also split the data according to their year of registration and use the oldest varieties (pre-1990) as a training sample for predicting yield.
Again the decay curve crosses the 95% confidence intervals for the predictive correlations reported in [4]
Simulation Studies
The decay curves from the genomic selection simulation on the original training population (200 varieties), shown in blue in Fig 1, span two rounds of selection and three generations. When considering 200 or 1000 causal variants, the curve overlaps the mean behaviour of the simulated data points (shown in green) almost perfectly: the difference between the generation means r̄ and the decay curve is ≤ 0.06 for the first three generations, with the exception of the first generation in the simulation with 1000 variants (|r̄ − r̂_D| = 0.09). As the number of causal variants decreases (50, 10), the decay curve increasingly overestimates r̄, although the difference remains ≤ 0.10 for the first two generations, and it shows a slower decay than the r̄. This appears to be due to a few alleles of large effect becoming fixed by the selection, leading to a rapid decrease of r̄ without a corresponding rapid increase in F̂_ST. The decay curves fitted on the augmented training populations (800 varieties, now including those available at the end of the second round of selection, Fig 2) fit the first four generations well (|r̄ − r̂_D| ≤ 0.04 for the first two, |r̄ − r̂_D| ≤ 0.06 for the third and the fourth). As before, the only exception is the first generation in the simulation with 1000 variants, with an absolute difference of 0.09. However, the decay curves are also able to capture the long-range decay rates through their linear approximations. When considering 200 causal variants, |r̄ − r̂_L| ≈ 0.08 for generations 5 to 7 and ≈ 0.10 for generations 8 and 9; and |r̄ − r̂_L| ≤ 0.05 for generations 4 to 9 when considering 1000 causal variants. This can be attributed to the increased sample size of the training population, which both improves the goodness of fit of the estimated decay curve and makes the decay rate of the r̄ closer to linear, thus making it possible for the r̂_L to approximate it well over a large range of F̂_ST values.
To investigate this phenomenon, we gradually increased the initial training population to 4000 varieties through random mating and we observed that for such a large sample size r̄ indeed decreases linearly as a function of F̂_ST. We conjecture that this is due to a combination of the higher values observed for r̄ and their slower rate of decay, which prevents the decay rate from gradually flattening because r̄ is still far from zero after 10 generations. In addition, we note that increasing the number of causal variants has a similar effect; with 200 and 1000 causal variants r̄ indeed decreases with an approximately linear trend, which is not the case with 10 and 50 causal variants.
The cross-population prediction simulation based on the HUMAN data (Fig 3) generated results consistent with those above. As before, the number of causal variants appears to influence the behaviour of the decay curve: while the r̂_D^(m) decrease linearly for 20, 100 and 2000 causal variants, they converge to 0.65 for 5 causal variants. However, unlike in the genomic selection simulation, the quality of the estimated decay curve does not appear to degrade as the number of causal variants decreases. This difference may depend on the lack of a systematic selection pressure in the current simulation, which made the decay curve overestimate predictive correlation when considering 10 variants in the previous simulation. Finally, as in the analysis of the MICE data, the linear approximation r̂_L to the decay curve provides a way to extend the reach of the decay curve to estimate predictive correlations r̂_P for distantly related populations (AMERICA, AFRICA, OCEANIA). Again we observe some loss in precision (see Table 2), but the extension still crosses the 95% confidence intervals of those r̂_P 14 times out of 18 (78%).

doi:10.1371/journal.pgen.1006288.g003

Table 1. Summary of the predictive correlations defined in the Methods.

r̂_CV: Predictive correlation computed on the whole training population by hold-out cross-validation with random splits.
Discussion
Being able to assess the predictive accuracy is important in many applications, and will assist in the development of new models and in the choice of training populations. A number of papers have discussed various aspects of the relationship between training and target populations in genomic prediction, and of characterising predictive accuracy given some combination of genotypes and pedigree information. For instance, [51] discusses how to choose which individuals to include in the training population to maximise prediction accuracy for a given target population using the coefficient of determination. [52] separates the contributions of linkage disequilibrium, co-segregation and additive genetic relationships to predictive accuracy, which can help in setting expectations about the possible performance of prediction. [53] and [22] link predictive accuracy to kinship in a simulation study of dairy cattle breeding; and [54] investigates the impact of population size, population structure and replication in simulated biparental maize populations. The approach we take in this paper is different in a few, important ways. Firstly, we choose to avoid the parametric assumptions underlying GBLUP and the corresponding approximations based on Henderson's equations that provide closed-form results on predictive accuracy in the literature. It has been noted in our previous work [31] and in the literature (e.g.
[32]) that in some settings GBLUP may not be competitive for genomic prediction; hence we prefer to use models with better predictive accuracy such as the elastic net, for which the parametric assumptions do not hold. Our model-agnostic approach is beneficial also because decay curves can then be constructed for current and future competitive models, since the only requirement of our approach is that they must be able to produce an estimate of predictive correlation. Secondly, we demonstrate that the decay curves estimated with the proposed approach are accurate in different settings and on human, plant and animal real-world data sets. This complements previous work that often used synthetic genotypes and analysed predictive accuracy in a single domain, such as forward simulation studies on dairy cattle data. Finally, we recognise that the target population whose phenotypes we would like to predict may not be available or even known when training the model. In plant and animal selection programs, one or more future rounds of crossings may not yet have been performed; in human genetics, prediction may be required for different demographic groups for which no training data are available. Therefore, we are often limited to extrapolating a r̂_D to estimate the r̂_P we would observe if the target population were available. Prior information on F̂_ST values is available for many species such as humans [39,43], and can be used to extract the corresponding r̂_D from a decay curve. We observe that the decay rate of r̂_D is approximately linear in F̂_ST for most of the curves, suggesting that regressing the r̂_D^(m) against the F̂_ST^(m) is a viable estimation approach. This has the advantage of being computationally cheaper than producing a smooth curve with LOESS, since it requires fewer (F̂_ST^(m), r̂_D^(m)) points and thus fewer genomic prediction models to be fitted.
In fact, if we assume that the decay rate is linear we could also estimate it as the slope of the line passing through (F̂_ST ≈ 0, r̂_CV) and (F̂_ST^(m), r̂_D^(m)) for a single, small value of m. It should be noted, however, that several factors can cause departures from linearity, including the number of causal variants underlying the trait, the use of small training populations and the confounding effect of exogenous factors. In the case of the MICE data, for instance, predictions may be influenced by cage effects; in the case of the WHEAT data, environmental and seasonal effects might not be perfectly captured and removed by the trials' experimental design. We also note that the decay curves for traits with small heritabilities will almost never be linear, because r̂_D converges asymptotically to zero. Unlike the results reported in [22], we do not find a statistically significant difference between the strength of the linear relationship between r̂_D and F̂_ST and that between the respective squares. There may be several reasons for this discrepancy; the simulation study in [22] was markedly different from the analyses presented in this paper, since it used simulated genotypes to generate the population structure typical of dairy cattle and since it used GBLUP as a genomic prediction model.
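Both linear estimates discussed above can be sketched in a few lines; this is an illustration of the arithmetic only, with assumed function names (the paper's r̂_L is the regression-based variant).

```python
import numpy as np

def r_L(fst_points, r_points, fst_target):
    """Linear approximation to the decay curve: regress the r_D^(m)
    against the F_ST^(m) and evaluate the fitted line at the target
    population's F_ST."""
    slope, intercept = np.polyfit(fst_points, r_points, 1)
    return slope * fst_target + intercept

def r_L_two_point(r_cv, fst_m, r_m, fst_target):
    """Cheaper variant discussed in the text: the slope of the line
    through (F_ST ~ 0, r_CV) and a single (F_ST^(m), r_D^(m)) pair,
    assuming the decay is linear."""
    slope = (r_m - r_cv) / fst_m
    return r_cv + slope * fst_target
```

The two-point variant needs only one refitted model, but it inherits the departures from linearity discussed above.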
We also observe that when F̂_ST^(m) ≈ 0, both r̂_D^(m) and r̂_L are, as expected, similar to the r̂_CV obtained by applying cross-validation to the training populations selected from the WHEAT and MICE data. This suggests that indeed r̂_CV is an accurate measure of predictive accuracy only when the target individuals for prediction are drawn from the same population as the training sample, as previously argued by [14] and [19], among others.
Some limitations of the proposed approach are also apparent from the results presented in the previous section. The most important of these limitations appears to be that in the context of a breeding program the performance of the decay curve depends on the polygenic nature of the trait being predicted, as we can see by comparing the panels in Fig 1. This can be explained by the fact that causal variants underlying less polygenic, highly and moderately heritable traits will necessarily have some individually large effects. As each of those variants approaches fixation due to selection pressure, allele frequencies in key areas of the genome will depart from those in the training population and the accuracy of any genomic prediction model will rapidly decrease [21]. However, these selection effects are genomically local and so have little impact on F̂_ST. A similar effect has been observed for flowering time in the WHEAT data. [4] notes that the Ppd-D1a gene is a major driver of early flowering, but it is nearly monomorphic in one allele in French wheat varieties and nearly monomorphic in the other allele in Germany and the UK. As a result, even though the F̂_ST for those countries are as small as 0.031 and 0.042, r̂_D widely overestimates r̂_P in both cases. A possible solution would be to compute F̂_ST only on the relevant regions of the genome or, if their precise location is unknown, on the relevant chromosomes; or to weight F̂_ST to promote genomic regions of interest.
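The region-restricted computation suggested above can be sketched as follows. As an assumption for illustration, we use a Hudson-type ratio-of-averages F_ST estimator; the paper's exact estimator (defined in its Methods) may differ, and the `variants` argument is our illustrative hook for restricting the estimate to regions of interest.

```python
import numpy as np

def hudson_fst(geno1, geno2, variants=None):
    """Hudson-type F_ST estimate (ratio-of-averages form) between two
    populations given (individuals x variants) 0/1/2 dosage matrices.
    `variants` optionally restricts the estimate to a subset of
    columns, e.g. the genomic region driving the trait."""
    if variants is not None:
        geno1, geno2 = geno1[:, variants], geno2[:, variants]
    n1, n2 = geno1.shape[0], geno2.shape[0]
    p1, p2 = geno1.mean(axis=0) / 2.0, geno2.mean(axis=0) / 2.0
    # sample-size corrections use allele counts (2n per population)
    num = (p1 - p2) ** 2 \
        - p1 * (1 - p1) / (2 * n1 - 1) - p2 * (1 - p2) / (2 * n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    keep = den > 0                     # drop variants fixed in both groups
    return float(num[keep].sum() / den[keep].sum())
```

Weighting F̂_ST towards regions of interest would amount to multiplying `num` and `den` by per-variant weights before summing.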
On the other hand, in the case of more polygenic traits a larger portion of the genome will be in linkage disequilibrium with at least one causal variant, and their effects will be individually small. Therefore, F̂_ST will increase more quickly in response to selection pressure and changes in predictive accuracy will be smoother, thus allowing r̂_D to track them more easily. Indeed, in the WHEAT data the genomic prediction model for flowering time has a much smaller number of non-zero coefficients (28) compared to yield (91), height (286) and grain protein content (121). Similarly, in the MICE data the model fitted on F010 to predict growth rate has only 168 non-zero coefficients, while the others range from 212 to 1169 non-zero coefficients. By contrast, all models fitted for predicting weight, which correspond to curves that approximate other families' r̂_P well, have between 1128 and 2288 non-zero coefficients.
The simulation on the HUMAN data suggests that different considerations apply to outbred species. Having some large-effect causal variants does not necessarily result in low-quality decay curves; on the contrary, if we assume that the trait is controlled by the same causal variants in the training and target populations it is possible to have a good level of agreement between the r̂_D and the r̂_P. Intuitively, we expect strong effects to carry well across populations and thus r̂_D does not decrease beyond a certain F̂_ST. However, this will mean that the curves will not be linear and r̂_L will underestimate r̂_P (see Fig 3, top left panel). We also note that effect sizes are the same in all the populations, which may make our estimates of predictive accuracy optimistic.
Another important consideration is that since the decay curve is extrapolated from the training population, its precision decreases as F̂_ST increases, as can be seen from both simulations and by comparing the WHEAT and MICE data. Predictions will be poor in practice if the target and the training populations are too genetically distinct; an example are rice subspecies [17], which have been subject to intensive inbreeding. The trait to be predicted must have a common genetic basis across training and target populations. However, the availability of denser genomic data and of larger samples may improve both predictive accuracy and the precision of the decay curve for large F̂_ST. Furthermore, the range of the decay curve in terms of F̂_ST depends on the amount of genetic variability present in the training population: the more homogeneous it is, the more unlikely it is that k-means clustering will be able to split it into two subsets with high F̂_ST^(0). One solution is to assume the decay is linear and use r̂_L instead of r̂_D to estimate r̂_P; but as we noted above this is only possible if r̂_P ≫ 0. If r̂_P ≈ 0, the decay curve estimated with LOESS from the r̂_D can converge asymptotically to zero as F̂_ST increases; but the linear regression used to estimate r̂_L will continue to decrease until r̂_L < 0. Another possible solution is to try to increase F̂_ST by moving observations between the two subsets, but improvements are marginal at best and there is a risk of inflating r̂_D.
Even with such limitations, estimating a decay curve for predictive correlation has many possible uses. In the context of plant and animal breeding, it is a useful tool to answer many key questions in planning genomic selection programs. Firstly, different training populations (in terms of allele frequencies, sample size, presence of different families, etc.) can be compared to choose that which results in the slowest decay rate. Secondly, the decay curve can be used to decide when genomic prediction can no longer be assumed to be accurate enough for selection purposes, and thus how often the model should be re-trained on a new set of phenotypes. Unlike genotyping costs, phenotyping costs for productivity traits have not decreased over the years. Furthermore, the rate of phenotypic improvements (i.e. selection cycle time) can be severely reduced by the need of performing progeny tests. Therefore, limiting phenotyping to once every few generations can reduce the cost and effort of running a breeding program. The presence of close ancestors in the training population suggests that decay curves are most likely reliable for this purpose, as we have shown both in the simulations and in predicting newer wheat varieties from older ones in the WHEAT data.
The other major application of decay curves is estimating the predictive accuracy of a model for target populations that, while not direct descendants of the training population, are assumed not to have strongly diverged and thus to have comparable genetic architectures. Some examples of such settings are the cross-country predictions for the WHEAT data, the cross-family predictions for the MICE data and predictions across human populations. In human genetics, decay curves could be used to study the accuracy of predictions and help predict the success of interventions in poorly-studied populations. In plant and animal breeding, on the other hand, it is common to incorporate distantly related samples in selection programs to maintain a sufficient level of genetic variability. Decay curves can provide an indication of how accurately the phenotypes for such samples are estimated, since the model has not been trained to predict them well and they are not as closely related as the individuals in the program.
Supporting Information
(F̂_ST^(0), r̂_D^(0)) will act as the far end of the decay curve (in terms of genetic distance).

3. For increasing numbers m of individuals:
   a. create a new pair of subsamples by swapping m individuals at random between the training and the test subsamples from step 1;
   b. fit a genomic prediction model on the new training subsample and use it to predict the new target subsample, thus obtaining (F̂_ST^(m), r̂_D^(m)) using the same algorithms as in step 2.

4. Estimate the decay curve from the sets of (F̂_ST^(m), r̂_D^(m)) points using local regression (LOESS;
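The swap-and-refit loop in steps 3 and 4 can be sketched as follows. This is a minimal, model-agnostic sketch with assumed names: `eval_fn` stands in for whatever genomic prediction model is used (the paper fits an elastic net), `fst_fn` for the F_ST estimator, and `loess_point` is a small tricube local-linear smoother standing in for a full LOESS implementation.

```python
import numpy as np

def loess_point(x, y, x0, frac=0.7):
    """Tiny local-linear (LOESS-style) smoother with tricube weights."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    k = max(3, int(np.ceil(frac * len(x))))
    d = np.abs(x - x0)
    idx = np.argsort(d)[:k]                      # k nearest points
    w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3  # tricube weights
    A = np.vstack([np.ones(k), x[idx]]).T
    beta = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y[idx]))
    return beta[0] + beta[1] * x0

def decay_points(X, y, train_idx, target_idx, swap_sizes, eval_fn, fst_fn,
                 seed=0):
    """Step 3: for each m, swap m individuals at random between the
    training and target subsamples from step 1, refit the model and
    record (F_ST^(m), r_D^(m)).  `eval_fn(X_tr, y_tr, X_ta, y_ta)` must
    return a predictive correlation and `fst_fn(G1, G2)` an F_ST
    estimate; both are supplied by the caller."""
    rng = np.random.default_rng(seed)
    points = []
    for m in swap_sizes:
        tr, ta = np.array(train_idx), np.array(target_idx)
        i = rng.choice(len(tr), size=m, replace=False)
        j = rng.choice(len(ta), size=m, replace=False)
        tr[i], ta[j] = ta[j].copy(), tr[i].copy()   # swap m individuals
        points.append((fst_fn(X[tr], X[ta]),
                       eval_fn(X[tr], y[tr], X[ta], y[ta])))
    return points
```

Step 4 then amounts to evaluating `loess_point` over the collected `(F_ST^(m), r_D^(m))` pairs at the F̂_ST values of interest.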
…r̂_D) and its linear approximation r̂_L from the training population, and we compare it with the average (F̂_ST,
and the generation means in Tables A.1 and A.2 in S1 Text, |r̂_D − r̂_L| ≤ 0.02 41 times out of 47 (87%). Both estimates of predictive correlation are close to the respective reference values r̄ and r̂_P; the difference (in absolute value) is ≤ 0.05 39 times (41%) and ≤ 0.10 69 times (73%) out of 94. The proportion of small differences increases when considering only target populations that fall within the span of the decay curve: 23 out of 44 (52%) are ≤ 0.05 and 38 are ≤ 0.10 (84%). This is expected because the decay curve is already an extrapolation from the training population, so extending it further with the linear interpolation r̂_L reduces its precision. Regressing the (r̂_D^(m))² against the (F̂_ST^(m))² does not produce a stronger linear relationship than that
Fig 1. Simulation of a 10-generation breeding program using 200 varieties from the WHEAT data. Simulation of a 10-generation breeding program developed using 200 varieties generated from 2002-2007 WHEAT data with 10 (top left), 50 (top right), 200 (bottom left) and 1000 (bottom right) causal variants. The decay curves, the r̂_D^(0) and the r̂_D^(m) are in blue, and their linear interpolation (r̂_L) is shown as a dashed blue line. The open green circles are predictive correlations for the simulated populations, and the green solid points are the mean (F̂_ST, r̄) for each generation.

doi:10.1371/journal.pgen.1006288.g001
and the correlations themselves are within 0.05 of the average r̂_D from the decay curve both for 1990-1999 (F̂_ST = 0.028, r̂_D = 0.44, r̂_P = 0.40) and post-2000 (F̂_ST = 0.033, r̂_D = 0.44, r̂_P = 0.42) varieties.
Fig 2. Simulation of a 10-generation breeding program with a training population augmented to 800 varieties, after two rounds of selection. Simulation of a 10-generation breeding program with an updated genomic prediction model. The updated model is fitted on the 800 varieties available after the second round of selection in the simulations for 200 (left) and 1000 (right) causal variants in Fig 1. Formatting is the same as in Fig 1.

doi:10.1371/journal.pgen.1006288.g002
Fig 3. Simulation of quantitative traits from the HUMAN data. Simulation of quantitative traits with 5 (top left), 20 (top right), 100 (middle left), 2000 (middle right), 10000 (bottom left) and 50000 (bottom right) causal variants from the Asian individuals in the HUMAN data. The blue circles are the r̂_D^(m) used to build the curve, and the red point is r̂_D^(0). The blue line is the mean decay trend, with a shaded 95% confidence interval, and the dashed blue line is the linear interpolation provided by the r̂_L. The red squares labelled EUROPE, MIDDLE EAST, AMERICA, AFRICA and OCEANIA correspond to the r̂_P for the individuals from those continents, and the red brackets are the respective 95% confidence intervals.
r̂_D^(m): Predictive correlation for a target subsample computed from a genomic prediction model fitted on the corresponding training subsample after swapping m individuals between the two. Used to construct the decay curve via LOESS together with the corresponding F̂_ST^(m). The subsamples are created from the training population via clustering to be minimally related.

r̂_D: Predictive correlation estimated by the decay curve at a given F̂_ST.

r̂_L: Linear approximation to the decay curve computed by regressing the r̂_D^(m) against the associated F̂_ST^(m).

r̂_P: Predictive correlation for a target population computed by fitting a genomic prediction model on the whole training population, used as a reference point in assessing the decay curve.

r̄: Mean predictive correlation for a generation in the genomic selection simulation, computed from a genomic prediction model fitted on the founders.

doi:10.1371/journal.pgen.1006288.t001
S1 Text. Supplementary information on the Methods and the Results. Figures for the decay curves from the simulation studies. Relationship between F̂_ST and k. Comparison of the linear relationships between ρ² and F̂_ST² and between ρ and F̂_ST. (PDF)
Table 2. Predictive correlations for the analyses shown in Figs B.1, B.2 and B.3 in S1 Text.

| Trait | Training Population | Target Population | n_TR | n_TA | F̂_ST^(0) | r̂_P | r̂_D | r̂_L |
|---|---|---|---|---|---|---|---|---|
| WHEAT, Yield | France | UK | 132 | 70 | 0.031 | 0.55 | 0.60 | 0.58 |
| WHEAT, Yield | France | Germany | 132 | 70 | 0.042 | 0.56 | 0.56 | 0.51 |
| WHEAT, Height | France | UK | 132 | 70 | 0.031 | 0.57 | 0.63 | 0.58 |
| WHEAT, Height | France | Germany | 132 | 70 | 0.042 | 0.60 | 0.55 | 0.54 |
| WHEAT, Flowering time | France | UK | 132 | 70 | 0.031 | 0.36 | 0.70 | 0.70 |
| WHEAT, Flowering time | France | Germany | 132 | 70 | 0.042 | 0.23 | 0.67 | 0.68 |
| WHEAT, Grain protein content | France | UK | 132 | 70 | 0.031 | 0.59 | 0.54 | 0.51 |
| WHEAT, Grain protein content | France | Germany | 132 | 70 | 0.042 | 0.47 | 0.46 | 0.45 |
| MICE, Weight | F005 | F008 | 155 | 132 | 0.065 | 0.14 | 0.18 | 0.21 |
| MICE, Weight | F005 | F010 | 155 | 132 | 0.062 | 0.17 | 0.20 | 0.21 |
| MICE, Weight | F005 | F016 | 155 | 132 | 0.061 | 0.15 | 0.20 | 0.22 |
| MICE, Weight | F008 | F005 | 203 | 90* | 0.066 | 0.24 | - | 0.30 |
| MICE, Weight | F008 | F010 | 203 | 90* | 0.063 | 0.21 | - | 0.31 |
| MICE, Weight | F008 | F016 | 203 | 90* | 0.056 | 0.16 | - | 0.34 |
| MICE, Weight | F010 | F005 | 241 | 90* | 0.063 | 0.39 | - | 0.52 |
| MICE, Weight | F010 | F008 | 241 | 90* | 0.062 | 0.22 | - | 0.52 |
| MICE, Weight | F010 | F016 | 241 | 90* | 0.067 | 0.18 | - | 0.52 |
| MICE, Weight | F016 | F005 | 238 | 70* | 0.063 | 0.34 | 0.29 | 0.35 |
| MICE, Weight | F016 | F008 | 238 | 70* | 0.057 | 0.07 | 0.32 | 0.35 |
| MICE, Weight | F016 | F010 | 238 | 70* | 0.069 | 0.27 | - | 0.30 |
| MICE, Growth rate | F005 | F008 | 207 | 80* | 0.065 | 0.10 | 0.19 | 0.20 |
| MICE, Growth rate | F005 | F010 | 207 | 80* | 0.062 | 0.02 | 0.19 | 0.20 |
| MICE, Growth rate | F005 | F016 | 207 | 80* | 0.061 | 0.05 | 0.20 | 0.20 |
| MICE, Growth rate | F008 | F005 | 199 | 90* | 0.066 | 0.18 | - | 0.19 |
| MICE, Growth rate | F008 | F010 | 199 | 90* | 0.063 | 0.08 | - | 0.19 |
| MICE, Growth rate | F008 | F016 | 199 | 90* | 0.056 | 0.05 | - | 0.21 |
| MICE, Growth rate | F010 | F005 | 237 | 90* | 0.063 | 0.03 | 0.12 | 0.13 |
| MICE, Growth rate | F010 | F008 | 237 | 90* | 0.062 | 0.07 | 0.12 | 0.14 |
| MICE, Growth rate | F010 | F016 | 237 | 90* | 0.067 | 0.01 | - | 0.11 |
| MICE, Growth rate | F016 | F005 | 219 | 90* | 0.063 | 0.00 | - | 0.05 |
| MICE, Growth rate | F016 | F008 | 219 | 90* | 0.057 | 0.06 | 0.07 | 0.06 |
| MICE, Growth rate | F016 | F010 | 219 | 90* | 0.069 | 0.04 | - | 0.03 |

r̂_P is the predictive correlation for the target population from the full training population. r̂_D is the decay curve estimate of r̂_P, and is only available if the target population falls within the span of the decay curve. r̂_L is the corresponding estimate from the linear extrapolation. n_TR is the size of the training subsamples and n_TA is the size of the target subsamples; those marked with an asterisk have been reduced to increase n_TR.
PLOS Genetics | DOI:10.1371/journal.pgen.1006288 September 2, 2016
Acknowledgments

The work presented in this paper forms part of the MIDRIB project ("Molecular Improvement of Disease Resistance in Barley"), which was a collaboration with Limagrain UK Ltd.; in particular, we would like to thank Anne-Marie Bochard and Mark Glew for their contributions. We would also like to thank Jonathan Marchini (Department of Statistics, University of Oxford) and his group for their insightful comments and suggestions.

Author Contributions

Analyzed the data: MS. Wrote the paper: MS IM DB.
References

1. Chiu RWK, Chan KCA, Gao Y, Lau VYM, Zheng W, Leung TY, et al. Noninvasive Prenatal Diagnosis of Fetal Chromosomal Aneuploidy by Massively Parallel Genomic Sequencing of DNA in Maternal Plasma. PNAS. 2008;105(51):20458-20463. doi: 10.1073/pnas.0810641105 PMID: 19073917
2. Frampton GM, Fichtenholtz A, Otto GA, Wang K, Downing SR, He J, et al. Development and Validation of a Clinical Cancer Genomic Profiling Test Based on Massively Parallel DNA Sequencing. Nat Biotechnol. 2013;31:1023-1031. doi: 10.1038/nbt.2696 PMID: 24142049
3. Abraham G, Tye-Din JA, Bhalala OG, Kowalczyk A, Zobel J, Inouye M. Accurate and Robust Genomic Prediction of Celiac Disease Using Statistical Learning. PLoS Genet. 2014;10(2):e1004137. doi: 10.1371/journal.pgen.1004137 PMID: 24550740
4. Bentley AR, Scutari M, Gosman N, Faure S, Bedford F, Howell P, et al. Applying Association Mapping and Genomic Selection to the Dissection of Key Traits in Elite European Wheat. Theor Appl Genet. 2014;127(12):2619-2633. doi: 10.1007/s00122-014-2403-y PMID: 25273129
5. Spindel J, Begum H, Akdemir D, Virk P, Collard B, Redoña E, et al. Genomic Selection and Association Mapping in Rice (Oryza sativa): Effect of Trait Genetic Architecture, Training Population Composition, Marker Number and Statistical Model on Accuracy of Rice Genomic Selection in Elite, Tropical Rice Breeding Lines. PLoS Genet. 2015;11(2):e1004982. doi: 10.1371/journal.pgen.1004982 PMID: 25689273
6. Goddard ME, Hayes BJ. Mapping Genes for Complex Traits in Domestic Animals and Their Use in Breeding Programmes. Nat Rev Genet. 2009;10:381-391. doi: 10.1038/nrg2575 PMID: 19448663
7. Goddard ME. Genomic Selection: Prediction of Accuracy and Maximisation of Long Term Response. Genetica. 2009;136:245-257. doi: 10.1007/s10709-008-9308-0 PMID: 18704696
8. Speed D, Balding DJ. Relatedness in the Post-Genomic Era: Is It Still Useful? Nat Rev Genet. 2015;16:33-44. PMID: 25404112
9. Yang J, Benyamin B, McEvoy BP, Gordon S, Henders AK, Nyholt DR, et al. Common SNPs Explain a Large Proportion of the Heritability for Human Height. Nat Genet. 2010;42(7):565-569. doi: 10.1038/ng.608 PMID: 20562875
10. Dudbridge F. Power and Predictive Accuracy of Polygenic Risk Scores. PLoS Genet. 2013;9(3):e1003348. doi: 10.1371/journal.pgen.1003348 PMID: 23555274
11. Cohen JC, Kiss RS, Pertsemlidis A, Marcel YL, McPherson R, Hobbs HH. Multiple Rare Alleles Contribute to Low Plasma Levels of HDL Cholesterol. Science. 2004;305(5685):869-872. doi: 10.1126/science.1099870
12. McClellan JM, Susser E, King MC. Schizophrenia: a Common Disease Caused by Multiple Rare Alleles. Br J Psychiatry. 2007;190(3):194-199. doi: 10.1192/bjp.bp.106.025585 PMID: 17329737
13. Wientjes YCJ, Bijma P, Veerkamp RF, Calus MPL. An Equation to Predict the Accuracy of Genomic Values by Combining Data from Multiple Traits, Populations, or Environments. Genetics. 2016;202(2):799-823. doi: 10.1534/genetics.115.183269 PMID: 26637542
14. Makowsky R, Pajewski NM, Klimentidis YC, Vazquez AI, Duarte CW, Allison DB, et al. Beyond Missing Heritability: Prediction of Complex Traits. PLoS Genet. 2011;7(4):e1002051. doi: 10.1371/journal.pgen.1002051 PMID: 21552331
15. de los Campos G, Vazquez AI, Fernando RL, Klimentidis YC, Sorensen D. Prediction of Complex Human Traits Using the Genomic Best Linear Unbiased Predictor. PLoS Genet. 2013;9(7):e1003608. doi: 10.1371/journal.pgen.1003608 PMID: 23874214
16. Tishkoff SA, Reed FA, Ranciaro A, Voight BF, Babbitt CC, Silverman JS, et al. Convergent Adaptation of Human Lactase Persistence in Africa and Europe. Nat Genet. 2006;39(1):31-40. doi: 10.1038/ng1946 PMID: 17159977
17. Zhao K, Tung C, Eizenga GC, Wright MH, Ali ML, Price AH, et al. Genome-Wide Association Mapping Reveals a Rich Genetic Architecture of Complex Traits in Oryza Sativa. Nat Commun. 2011;2:467. doi: 10.1038/ncomms1467 PMID: 21915109
18. Hickey JM, Dreisigacker S, Crossaa J, Hearne S, Babu R, Prasanna BM, et al. Evaluation of Genomic Selection Training Population Designs and Genotyping Strategies in Plant Breeding Programs Using Simulation. Crop Sci. 2014;54(4):1476-1488. doi: 10.2135/cropsci2013.03.0195
Genomic Prediction in Animals and Plants: Simulation of Data, Validation, Reporting, and Benchmarking. H D Daetwyler, Mpl Calus, R Pong-Wong, G De Los Campos, J M Hickey, 10.1534/genetics.112.14798323222650Genetics. 1932Daetwyler HD, Calus MPL, Pong-Wong R, de los Campos G, Hickey JM. Genomic Prediction in Ani- mals and Plants: Simulation of Data, Validation, Reporting, and Benchmarking. Genetics. 2013; 193 (2):347-365. doi: 10.1534/genetics.112.147983 PMID: 23222650
The Impact of Genetic Relationship Information on Genomic Breeding Values in German Holstein Cattle. D Habier, J Tetens, F R Seefried, P Lichtner, G Thaller, 10.1186/1297-9686-42-520170500Genet Sel Evol. 425Habier D, Tetens J, Seefried FR, Lichtner P, Thaller G. The Impact of Genetic Relationship Information on Genomic Breeding Values in German Holstein Cattle. Genet Sel Evol. 2010; 42:5. doi: 10.1186/ 1297-9686-42-5 PMID: 20170500
Reliability of Genomic Predictions Across Multiple Populations. Apw De Roos, B J Hayes, M E Goddard, 10.1534/genetics.109.10493519822733Genetics. 1834de Roos APW, Hayes BJ, Goddard ME. Reliability of Genomic Predictions Across Multiple Populations. Genetics. 2009; 183(4):1545-1553. doi: 10.1534/genetics.109.104935 PMID: 19822733
Reliability of Direct Genomic Values for Animals with Different Relationships within and to the Reference Population. M Pszczola, T Strabel, A Mulder, Mpl Calus, 10.3168/jds.2011-433822192218J Dairy Sci. 951Pszczola M, Strabel T, Mulder A, Calus MPL. Reliability of Direct Genomic Values for Animals with Dif- ferent Relationships within and to the Reference Population. J Dairy Sci. 2012; 95(1):389-400. doi: 10. 3168/jds.2011-4338 PMID: 22192218
The Importance of Information on Relatives for the Prediction of Genomic Breeding Values and the Implications for the Makeup of Reference Data Sets in Livestock Breeding Schemes. S A Clark, J M Hickey, H D Daetwyler, Jhj Van Der Werf, 10.1186/1297-9686-44-4Genet Sel Evol. 444Clark SA, Hickey JM, Daetwyler HD, van der Werf JHJ. The Importance of Information on Relatives for the Prediction of Genomic Breeding Values and the Implications for the Makeup of Reference Data Sets in Livestock Breeding Schemes. Genet Sel Evol. 2012; 44:4. doi: 10.1186/1297-9686-44-4 PMID: 22321529
Efficient Methods to Compute Genomic Predictions. P M Vanraden, 10.3168/jds.2007-0980J Dairy Sci. 9111VanRaden PM. Efficient Methods to Compute Genomic Predictions. J Dairy Sci. 2008; 91(11):4414- 4423. doi: 10.3168/jds.2007-0980 PMID: 18946147
Prediction of Total Genetic Value Using Genome-Wide Dense Marker Maps. The Meuwissen, B J Hayes, M E Goddard, 11290733Genetics. 157Meuwissen THE, Hayes BJ, Goddard ME. Prediction of Total Genetic Value Using Genome-Wide Dense Marker Maps. Genetics. 2001; 157:1819-1829. PMID: 11290733
Ridge Regression: Biased Estimation for Nonorthogonal Problems. A E Hoerl, R W Kennard, 10.1080/00401706.1970.1048863412TechnometricsHoerl AE, Kennard RW. Ridge Regression: Biased Estimation for Nonorthogonal Problems. Techno- metrics. 1970; 12(1):55-67. doi: 10.1080/00401706.1970.10488634
Regression Shrinkage and Selection via the Lasso. R Tibshirani, J R Stat Soc Series B. 581Tibshirani R. Regression Shrinkage and Selection via the Lasso. J R Stat Soc Series B. 1996; 58 (1):267-288.
Regularization and Variable Selection via the Elastic Net. H Zou, T J Hastie, 10.1111/j.1467-9868.2005.00503.xJ R Stat Soc Series B. 672Zou H, Hastie TJ. Regularization and Variable Selection via the Elastic Net. J R Stat Soc Series B. 2005; 67(2):301-320. doi: 10.1111/j.1467-9868.2005.00503.x
Efficient Computation of Ridge-Regression Best Linear Unbiased Prediction in Genomic Selection in Plant Breeding. H P Piepho, J O Ogutu, T Schulz-Streeck, B Estaghvirou, A Gordillo, F Technow, 10.2135/cropsci2011.11.0592Crop Sci. 523Piepho HP, Ogutu JO, Schulz-Streeck T, Estaghvirou B, Gordillo A, Technow F. Efficient Computation of Ridge-Regression Best Linear Unbiased Prediction in Genomic Selection in Plant Breeding. Crop Sci. 2012; 52(3):1093-1104. doi: 10.2135/cropsci2011.11.0592
Technical note: Derivation of Equivalent Computing Algorithms for Genomic Predictions and Reliabilities of Animal Merit. I Strandén, D J Garrick, 10.3168/jds.2008-1929J Dairy Sci. 926Strandén I, Garrick DJ. Technical note: Derivation of Equivalent Computing Algorithms for Genomic Predictions and Reliabilities of Animal Merit. J Dairy Sci. 2009; 92(6):2971-2975. doi: 10.3168/jds. 2008-1929 PMID: 19448030
Improving the Efficiency of Genomic Selection. M Scutari, I Mackay, D J Balding, 23934612Stat Appl Genet Mol Biol. 124Scutari M, Mackay I, Balding DJ. Improving the Efficiency of Genomic Selection. Stat Appl Genet Mol Biol. 2013; 12(4):517-527. PMID: 23934612
Variable-Selection Emerges on Top in Empirical Comparison of Whole-Genome Complex-Trait Prediction Methods. D C Haws, I Rish, S Teyssedre, D He, A C Lozano, P Kambadur, 10.1371/journal.pone.013890326439851PLoS One. 1010138903Haws DC, Rish I, Teyssedre S, He D, Lozano AC, Kambadur P, et al. Variable-Selection Emerges on Top in Empirical Comparison of Whole-Genome Complex-Trait Prediction Methods. PLoS One. 2015; 10(10):e0138903. doi: 10.1371/journal.pone.0138903 PMID: 26439851
Accuracy of Genome-Enabled Prediction in a Dairy Cattle Population Using Different Cross-Validation Layouts. M A Pérez-Cabal, A I Vazquez, D Gianola, Gjm Rosa, K A Weigel, 10.3389/fgene.2012.0002722403583Front Genet. 327Pérez-Cabal MA, Vazquez AI, Gianola D, Rosa GJM, Weigel KA. Accuracy of Genome-Enabled Pre- diction in a Dairy Cattle Population Using Different Cross-Validation Layouts. Front Genet. 2012; 3:27. doi: 10.3389/fgene.2012.00027 PMID: 22403583
Approximating Prediction Error Covariances among Additive Genetic Effects within Animals in Multiple-Trait and Random Regression Models. B Tier, K Meyer, 10.1111/j.1439-0388.2003.00444.xJ Anim Breed Genet. 121Tier B, Meyer K. Approximating Prediction Error Covariances among Additive Genetic Effects within Animals in Multiple-Trait and Random Regression Models. J Anim Breed Genet. 2004; 121:77-89. doi: 10.1111/j.1439-0388.2003.00444.x
On the Distance of Genetic Relationship and the Accuracy of Genomic Prediction in Pig Breeding. The Meuwissen, J Odegard, I Andersen-Ranberg, E Grindflek, 10.1186/1297-9686-46-4925158793Genet Sel Evol. 4649Meuwissen THE, Odegard J, Andersen-Ranberg I, Grindflek E. On the Distance of Genetic Relation- ship and the Accuracy of Genomic Prediction in Pig Breeding. Genet Sel Evol. 2014; 46:49. doi: 10. 1186/1297-9686-46-49 PMID: 25158793
Population Structure and Cryptic Relatedness in Genetic Association Studies. W Astle, D J Balding, 10.1214/09-STS307Stat Sci. 244Astle W, Balding DJ. Population Structure and Cryptic Relatedness in Genetic Association Studies. Stat Sci. 2009; 24(4):451-471. doi: 10.1214/09-STS307
C M Bishop, Pattern Recognition and Machine Learning. New YorkSpringerBishop CM. Pattern Recognition and Machine Learning. New York: Springer; 2006.
Accuracies of Genomic Breeding Values in American Angus Beef Cattle Using K-means Clustering for Cross-Validation. M Saatchi, M C Mcclure, S D Mckay, M M Rolf, J Kim, J E Decker, 10.1186/1297-9686-43-40Genet Sel Evol. 43140Saatchi M, McClure MC, McKay SD, Rolf MM, Kim J, Decker JE, et al. Accuracies of Genomic Breeding Values in American Angus Beef Cattle Using K-means Clustering for Cross-Validation. Genet Sel Evol. 2011; 43(1):40. doi: 10.1186/1297-9686-43-40 PMID: 22122853
Estimating and Interpreting F ST : The Impact of Rare Variants. G Bhatia, N Patterson, S Sankararaman, A L Price, 10.1101/gr.154831.11323861382Genome Res. 239Bhatia G, Patterson N, Sankararaman S, Price AL. Estimating and Interpreting F ST : The Impact of Rare Variants. Genome Res. 2013; 23(9):1514-1521. doi: 10.1101/gr.154831.113 PMID: 23861382
Likelihood-Based Inference for Genetic Correlation Coefficients. D J Balding, 10.1016/S0040-5809(03)00007-8Theor Popul Biol. 633Balding DJ. Likelihood-Based Inference for Genetic Correlation Coefficients. Theor Popul Biol. 2003; 63(3):221-230. doi: 10.1016/S0040-5809(03)00007-8 PMID: 12689793
An Introduction to Populations Genetics Theory. Harper and Row. J F Crow, M Kimura, Crow JF, Kimura M. An Introduction to Populations Genetics Theory. Harper and Row; 1970.
Genome-Wide Genetic Association of Complex Traits in Heterogeneous Stock Mice. W Valdar, L C Solberg, D Gauguier, S Burnett, P Klenerman, W O Cookson, 10.1038/ng184016832355Nat Genet. 38Valdar W, Solberg LC, Gauguier D, Burnett S, Klenerman P, Cookson WO, et al. Genome-Wide Genetic Association of Complex Traits in Heterogeneous Stock Mice. Nat Genet. 2006; 38:879-887. doi: 10.1038/ng1840 PMID: 16832355
Worldwide Human Relationships Inferred from Genome-Wide Patterns of Variation. J Z Li, D M Absher, H Tang, A M Southwick, A M Casto, S Ramachandran, 10.1126/science.115371718292342Science. 3195866Li JZ, Absher DM, Tang H, Southwick AM, Casto AM, Ramachandran S, et al. Worldwide Human Rela- tionships Inferred from Genome-Wide Patterns of Variation. Science. 2008; 319(5866):1100-1104. doi: 10.1126/science.1153717 PMID: 18292342
impute: Imputation for Microarray Data. T J Hastie, R Tibshirani, B Narasimhan, G Chu, R package version 1.42.0Hastie TJ, Tibshirani R, Narasimhan B, Chu G. impute: Imputation for Microarray Data; 2014. R pack- age version 1.42.0.
R: A Language and Environment for Statistical Computing. R Core Team, Vienna, AustriaR Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria; 2015.
Regularization Paths for Generalized Linear Models via Coordinate Descent. J H Friedman, T J Hastie, R Tibshirani, 10.18637/jss.v033.i0120808728J Stat Softw. 331Friedman JH, Hastie TJ, Tibshirani R. Regularization Paths for Generalized Linear Models via Coordi- nate Descent. J Stat Softw. 2010; 33(1):1-22. doi: 10.18637/jss.v033.i01 PMID: 20808728
Optimized Application of Penalized Regression Methods to Diverse Genomic Data. L Waldron, M Pintilie, M S Tsao, F A Shepherd, C Huttenhower, I Jurisica, 10.1093/bioinformatics/btr59122156367Bioinformatics. 2724Waldron L, Pintilie M, Tsao MS, Shepherd FA, Huttenhower C, Jurisica I. Optimized Application of Penalized Regression Methods to Diverse Genomic Data. Bioinformatics. 2011; 27(24):3399-3406. doi: 10.1093/bioinformatics/btr591 PMID: 22156367
Local Regression Models. W S Cleveland, E Grosse, W M Shyu, Statistical Models in S. Chambers JM, Hastie TJHobokenChapman & HallCleveland WS, Grosse E, Shyu WM. Local Regression Models. In: Chambers JM, Hastie TJ, editors. Statistical Models in S. Hoboken: Chapman & Hall; 1993.
The Sampling Variance of Correlation Coefficients Under Assumptions of Fixed and Mixed Variates. J W Hooper, 10.2307/2333193Biometrika. 453/4Hooper JW. The Sampling Variance of Correlation Coefficients Under Assumptions of Fixed and Mixed Variates. Biometrika. 1958; 45(3/4):471-477. doi: 10.2307/2333193
HaploSim: Functions to Simulate Haplotypes. A Coster, J Bastiaansen, R package version 1.8.4.Coster A, Bastiaansen J. HaploSim: Functions to Simulate Haplotypes; 2013. R package version 1.8.4.
Maximizing the Reliability of Genomic Selection by Optimizing the Calibration Set of Reference Individuals: Comparison of Methods in Two Diverse Groups of Maize Inbreds (Zea mays L.). R Rincent, D Laloë, S Nicolas, T Altmann, D Brunel, P Revilla, 10.1534/genetics.112.14147322865733Genetics. 1922Rincent R, Laloë D, Nicolas S, Altmann T, Brunel D, Revilla P, et al. Maximizing the Reliability of Geno- mic Selection by Optimizing the Calibration Set of Reference Individuals: Comparison of Methods in Two Diverse Groups of Maize Inbreds (Zea mays L.). Genetics. 2012; 192(2):715-728. doi: 10.1534/ genetics.112.141473 PMID: 22865733
Genomic BLUP Decoded: A Look into the Black Box of Genomic Prediction. D Habier, R L Fernando, D J Garrick, 10.1534/genetics.113.15220723640517Genetics. 1943Habier D, Fernando RL, Garrick DJ. Genomic BLUP Decoded: A Look into the Black Box of Genomic Prediction. Genetics. 2013; 194(3):597-607. doi: 10.1534/genetics.113.152207 PMID: 23640517
The Impact of Genetic relationship Information on Genome-Assisted Breeding Balues. D Habier, R L Fernando, Jcm Dekkers, 10.1534/genetics.107.08119018073436Genetics. 1774Habier D, Fernando RL, Dekkers JCM. The Impact of Genetic relationship Information on Genome- Assisted Breeding Balues. Genetics. 2007; 177(4):2389-2397. doi: 10.1534/genetics.107.081190 PMID: 18073436
Resource Allocation for Maximizing Prediction Accuracy and Genetic Gain of Genomic Selection in Plant Breeding: a Simulation Experiment. A J Lorenz, 10.1534/g3.112.00491123450123G3. 33Lorenz AJ. Resource Allocation for Maximizing Prediction Accuracy and Genetic Gain of Genomic Selection in Plant Breeding: a Simulation Experiment. G3. 2013; 3(3):481-491. doi: 10.1534/g3.112. 004911 PMID: 23450123
|
[] |
[
"Estimating Information-Theoretic Quantities",
"Estimating Information-Theoretic Quantities"
] |
[
"Robin A A Ince \nInstitute of Neuroscience and Psychology\nUniversity of Glasgow\n58 Hillhead StreetG12 8QBGlasgowUK\n",
"Simon R Schultz \nDepartment of Bioengineering\nImperial College London\nSouth KensingtonSW7 2AZLondonUK\n",
"Stefano Panzeri \nInstitute of Neuroscience and Psychology\nUniversity of Glasgow\n58 Hillhead StreetG12 8QBGlasgowUK\n\nCenter For Neuroscience and Cognitive Systems\nItalian Institute of Technology\nCorso Bettini31 -38068Rovereto\n"
] |
[
"Institute of Neuroscience and Psychology\nUniversity of Glasgow\n58 Hillhead StreetG12 8QBGlasgowUK",
"Department of Bioengineering\nImperial College London\nSouth KensingtonSW7 2AZLondonUK",
"Institute of Neuroscience and Psychology\nUniversity of Glasgow\n58 Hillhead StreetG12 8QBGlasgowUK",
"Center For Neuroscience and Cognitive Systems\nItalian Institute of Technology\nCorso Bettini31 -38068Rovereto"
] |
[] |
Definition: Information theory is a practical and theoretical framework developed for the study of communication over noisy channels. Its probabilistic basis and capacity to relate statistical structure to function make it ideally suited for studying information flow in the nervous system. It has a number of useful properties: it is a general measure sensitive to any relationship, not only linear effects; it has meaningful units which in many cases allow direct comparison between different experiments; and it can be used to study how much information can be gained by observing neural responses in single trials, rather than in averages over multiple trials. A variety of information theoretic quantities are in common use in neuroscience (see entry "Summary of Information--Theoretic Quantities"). Estimating these quantities in an accurate and unbiased way from real neurophysiological data frequently presents challenges, which are explained in this entry.
|
10.1007/978-1-4614-7320-6_140-1
|
[
"https://arxiv.org/pdf/1501.01863v1.pdf"
] | 10,144,347 |
1501.01863
|
4219451ce510a9c21c5cd2552a88d99548532f9d
|
Estimating Information-Theoretic Quantities
Definition

Information theory is a practical and theoretical framework developed for the study of communication over noisy channels. Its probabilistic basis and capacity to relate statistical structure to function make it ideally suited for studying information flow in the nervous system. It has a number of useful properties: it is a general measure sensitive to any relationship, not only linear effects; it has meaningful units which in many cases allow direct comparison between different experiments; and it can be used to study how much information can be gained by observing neural responses in single trials, rather than in averages over multiple trials. A variety of information theoretic quantities are in common use in neuroscience (see entry "Summary of Information--Theoretic Quantities"). Estimating these quantities in an accurate and unbiased way from real neurophysiological data frequently presents challenges, which are explained in this entry.
Detailed Description

Information theoretic quantities
Information theory provides a powerful theoretical framework for studying communication in a quantitative way. Within the framework there are many different quantities to assess different properties of systems, many of which have been applied fruitfully to analysis of the nervous system and neural communication. These include the entropy (H; which measures the uncertainty associated with a stochastic variable), mutual information (I; which measures the dependence relationship between two stochastic variables), and several more general divergence measures for probability distributions (Kullback--Leibler, Jensen--Shannon). For full definitions and details of these quantities, see the "Summary of Information--Theoretic Quantities" entry.
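As a concrete reference for these definitions, the sketch below (Python; information in bits) computes entropy and mutual information directly from a known joint probability table. The two-stimulus channel at the end is a made-up example.

```python
import numpy as np

def entropy(p):
    """Shannon entropy, in bits, of a probability vector p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))

def mutual_information(p_joint):
    """I(S;R) in bits from a joint probability table p_joint[s, r]."""
    p_joint = np.asarray(p_joint, dtype=float)
    p_s = p_joint.sum(axis=1)          # stimulus marginal P(s)
    p_r = p_joint.sum(axis=0)          # response marginal P(r)
    return entropy(p_s) + entropy(p_r) - entropy(p_joint.ravel())

# A noiseless two-stimulus channel: the response identifies the stimulus
# exactly, so it carries the full 1 bit of stimulus entropy.
p = np.array([[0.5, 0.0],
              [0.0, 0.5]])
print(mutual_information(p))           # 1.0
```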
The Finite Sampling Problem
A major difficulty when applying techniques involving information theoretic quantities to experimental systems is that they require measurement of the full probability distributions of the variables involved. If we had an infinite amount of data, we could measure the true stimulus--response probabilities precisely. However, any real experiment only yields a finite number of trials from which these probabilities must be estimated. The estimated probabilities are subject to statistical error and necessarily fluctuate around their true values. The significance of these finite sampling fluctuations is that they lead to both statistical error (variance) and systematic error (called limited sampling bias) in estimates of entropies and information. This bias is the difference between the expected value of the quantity considered, computed from probability distributions estimated with N trials or samples, and its value computed from the true probability distribution. This is illustrated in Figure 1, which shows histogram estimates of the response distribution for two different stimuli calculated from 40 trials per stimulus. While the true probabilities are uniform (dotted black lines) the calculated maximum likelihood estimates show some variation around the true values. This causes spurious differences -in this example the sampled probabilities indicate that obtaining a large response suggests stimulus 1 is presented -which results in a positive value of the information. Figure 1C shows a histogram of information values obtained from many simulations of this system. The bias constitutes a significant practical problem, because its magnitude is often of the order of the information values to be evaluated, and because it cannot be alleviated simply by averaging over many neurons with similar characteristics.
Direct estimation of information theoretic quantities
The most direct way to compute information and entropies is to estimate the response probabilities as the experimental histogram of the frequency of each response across the available trials, and then plug these empirical probability estimates into the definitions of the entropies and information (see entry "Summary of Information--Theoretic Quantities"). We refer to this as the "plug--in" method.
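A minimal sketch of the plug-in method, assuming discrete stimulus and response labels: build the empirical joint histogram and insert the maximum-likelihood probability estimates into the information formula. The function name is illustrative.

```python
import numpy as np

def plugin_information(stim, resp):
    """Plug-in estimate of I(S;R) in bits from paired arrays of discrete labels."""
    _, s = np.unique(stim, return_inverse=True)
    _, r = np.unique(resp, return_inverse=True)
    counts = np.zeros((s.max() + 1, r.max() + 1))
    np.add.at(counts, (s, r), 1)       # empirical joint histogram
    p = counts / counts.sum()          # maximum-likelihood estimate of P(s, r)
    H = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))
    return H(p.sum(axis=1)) + H(p.sum(axis=0)) - H(p.ravel())

# A response that deterministically copies a binary stimulus yields 1 bit.
stim = np.array([0, 1] * 50)
print(plugin_information(stim, stim))  # 1.0
```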
The plug--in estimate of entropy is biased downwards (estimated values are lower than the true value) while the plug--in estimate of information shows an upward bias. An intuitive explanation for why entropy is biased downwards comes from the fact that entropy is a measure of variability. As we saw in Figure 1, variance in the maximum likelihood probability estimates introduces extra structure in the probability distributions; this additional variation (which is dependent on the number of trials from the asymptotic normality of the ML estimate) always serves to make the estimated distribution less uniform, i.e. more structured, than the true distribution. Consequently, entropy estimates are lower than their true values, and the effect of finite sampling on entropies is a downward bias. For information, an explanation for why bias is typically positive is that finite sampling can introduce spurious stimulus--dependent differences in the response probabilities, which make the stimuli seem more discernible and hence the neuron more informative than it really is. Alternatively, viewing the information as a difference of unconditional and conditional entropies (see "Summary of Information--Theoretic Quantities"), the conditional entropy for each stimulus value is estimated from a reduced number of trials compared to the unconditional response entropy (only the trials corresponding to that stimulus can be used), and so its bias is considerably larger. Since entropy is biased down and the conditional entropy term appears with a negative sign, this causes the upward bias in information.
In the above explanations, the source of the bias is the variability in the estimates of the probability distributions considered: variance in the maximum likelihood histogram estimators of the probabilities induces additional "noise" structure affecting the entropy and information values.
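This bias is easy to reproduce in simulation. Below, stimulus and response are drawn independently, so the true information is zero, yet the plug-in estimate is positive on average; the trial and bin counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def plugin_mi(stim, resp, n_s, n_r):
    """Plug-in I(S;R) in bits from trial-label arrays with known bin counts."""
    counts = np.zeros((n_s, n_r))
    np.add.at(counts, (stim, resp), 1)
    p = counts / counts.sum()
    H = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))
    return H(p.sum(axis=1)) + H(p.sum(axis=0)) - H(p.ravel())

n_trials, n_s, n_r = 40, 2, 10
estimates = [
    plugin_mi(rng.integers(n_s, size=n_trials),   # stimulus labels
              rng.integers(n_r, size=n_trials),   # responses, independent of stimulus
              n_s, n_r)
    for _ in range(1000)
]
# True I(S;R) = 0, but the mean estimate sits near the first-order bias
# (S-1)(R-1) / (2 N ln 2), roughly 0.16 bits for these numbers.
print(np.mean(estimates))
```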
Asymptotic estimates of the limited sampling bias
To understand the sampling behaviour of information and entropy better, it is useful to find analytical approximations to the bias. This can be done in the so--called "asymptotic sampling regime". Roughly speaking this is when the number of trials is "large". More rigorously, the asymptotic sampling regime is defined as N being large enough that every possible response occurs many times: that is, N_s P(r|s) >> 1 for each stimulus--response pair s,r such that P(r|s) > 0. In this regime, the bias of the entropies and information can be expanded in inverse powers of N and analytical approximations obtained (Miller, 1955; Panzeri and Treves, 1996). To leading order,

BIAS[H(R)] = -(R - 1) / (2N ln 2)
BIAS[H(R|S)] = -Σ_s (R_s - 1) / (2N ln 2)
BIAS[I(R;S)] = [Σ_s (R_s - 1) - (R - 1)] / (2N ln 2)    (1)

where R is the number of "relevant" responses (those with nonzero probability of being observed) and R_s is the number of relevant responses to stimulus s. Eq. (1) shows that, even if N_s is constant, the bias increases with the number of responses R. This has important implications for comparing neural codes. For example, a response consisting of a given number of spikes can arise from many different possible temporal patterns of spikes. Thus, R is typically much greater for a spike timing code than for a spike count code, and it follows from Eq. (1) that the estimate of information conveyed by a spike timing code is more biased than that measured for the same neurons through a spike count code. If bias is not eliminated, there is therefore a danger of concluding that spike timing is important even when it is not.
A further important feature of Eq. (1) is that, although the bias is not the same for all probability distributions, in the asymptotic sampling regime it depends on some remarkably simple details of the response distribution (the number of trials and the number of relevant responses). Thus Eq. (1) makes it possible to derive simple rules of thumb for estimating the bias magnitude and for comparing the relative bias in different situations. As detailed in the next section, the simplicity of Eq. (1) can also be exploited in order to correct effectively for the bias (Panzeri and Treves, 1996).
Bias Correction Methods
A number of techniques have been developed to address the issue of bias, and to allow much more accurate estimates of information theoretic quantities than the "plug--in" method described above. Reviews of such methods are available (see, e.g., Victor, 2006); a selection of them is briefly outlined here.
Panzeri-Treves (PT)
In the so--called asymptotic sampling regime, when the number of trials is large enough that every possible response occurs many times, an analytical approximation for the bias (i.e. the difference between the true value and the plug--in estimate) of entropies and information can be obtained (Miller, 1955; Panzeri and Treves, 1996) (Eq. (1)). The value of the bias computed from the above expressions is then subtracted from the plug--in estimate to obtain the corrected value. This requires an estimate of the number of relevant responses for each stimulus, R_s. The simplest approach is to approximate R_s by the count of responses that are observed at least once -- this is the "naive" count. However, due to finite sampling this will be an underestimate of the true value. A Bayesian procedure (Panzeri and Treves, 1996) can be used to obtain a more accurate value.
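A sketch of the correction, assuming the "naive" count of relevant responses (the Bayesian refinement mentioned above is omitted); function names are illustrative.

```python
import numpy as np

def plugin_mi_from_counts(counts):
    """Plug-in I(S;R) in bits from a count table counts[s, r]."""
    p = counts / counts.sum()
    H = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))
    return H(p.sum(axis=1)) + H(p.sum(axis=0)) - H(p.ravel())

def pt_corrected_mi(counts):
    """Subtract the first-order (1/N) information bias from the plug-in value.

    counts[s, r] = number of trials on which stimulus s evoked response r.
    Relevant responses use the 'naive' estimate: bins seen at least once,
    which underestimates the true count for small N.
    """
    N = counts.sum()
    R = np.count_nonzero(counts.sum(axis=0))     # relevant responses overall
    R_s = np.count_nonzero(counts, axis=1)       # relevant responses per stimulus
    bias = (np.sum(R_s - 1) - (R - 1)) / (2 * N * np.log(2))
    return plugin_mi_from_counts(counts) - bias
```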
Quadratic Extrapolation (QE)
In the asymptotic sampling regime, the bias of entropies and information can be approximated as second order expansions in 1/N, where N is the number of trials (Treves and Panzeri, 1995;Strong et al., 1998). For example, for information:
I_plugin(R;S) = I_true(R;S) + a/N + b/N^2    (2)
This property can be exploited by calculating the estimates with subsets of the original data, with N/2 and N/4 trials, and fitting the resulting values to the polynomial expression above. This allows an estimate of the parameters a and b and hence of I_true(R;S). To use all available data, estimates from two subsets of size N/2 and four subsets of size N/4 are averaged to obtain the values for the extrapolation. Together with the full length data calculation, this requires seven different evaluations of the quantity being estimated.
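A sketch of the extrapolation procedure: average the plug-in estimate over 1, 2 and 4 disjoint random subsets (the seven evaluations mentioned above) and solve Eq. (2) exactly for the N → ∞ intercept. The random-split details and function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def plugin_mi(stim, resp):
    """Plug-in I(S;R) in bits from paired arrays of discrete labels."""
    _, s = np.unique(stim, return_inverse=True)
    _, r = np.unique(resp, return_inverse=True)
    c = np.zeros((s.max() + 1, r.max() + 1))
    np.add.at(c, (s, r), 1)
    p = c / c.sum()
    H = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))
    return H(p.sum(axis=1)) + H(p.sum(axis=0)) - H(p.ravel())

def qe_mi(stim, resp):
    """Quadratic extrapolation: fit I_plugin(N) = I_true + a/N + b/N**2."""
    stim, resp = np.asarray(stim), np.asarray(resp)
    order = rng.permutation(len(stim))
    stim, resp = stim[order], resp[order]          # random assignment to subsets
    def mean_over_subsets(k):                      # average over k disjoint subsets
        return np.mean([plugin_mi(stim[i::k], resp[i::k]) for i in range(k)])
    Ns = np.array([len(stim), len(stim) / 2, len(stim) / 4])
    Is = np.array([mean_over_subsets(1), mean_over_subsets(2), mean_over_subsets(4)])
    A = np.stack([np.ones(3), 1 / Ns, 1 / Ns ** 2], axis=1)
    return np.linalg.solve(A, Is)[0]               # the extrapolated I_true term
```

The three rows of the linear system correspond to the estimates at N, N/2 and N/4 trials, so the 1/N and 1/N^2 bias terms are removed exactly within this quadratic model.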
Nemenman-Shafee-Bialek (NSB)
The NSB method (Nemenman et al., 2002, 2004) utilises a Bayesian inference approach and does not rely on the assumption of the asymptotic sampling regime. It is based on the principle that when estimating a quantity, the least bias will be achieved when assuming an a priori uniform distribution over the quantity. This method is more challenging to implement than the other methods, involving a large amount of function inversion and numerical integration. However, it often gives a significant improvement in the accuracy of the bias correction (Montani et al., 2007; Montemurro et al., 2007; Nemenman et al., 2008).
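Full NSB is too involved for a short sketch, but its building block is the closed-form posterior mean of the entropy under a fixed symmetric Dirichlet prior (a standard result); NSB then averages this quantity over the concentration parameter, with a prior chosen to be flat on the induced expected entropy. The digamma approximation below (recurrence plus asymptotic series) and the function names are illustrative, and the NSB integration step itself is deliberately omitted.

```python
import numpy as np

def digamma(x):
    """Digamma via recurrence to x >= 6, then an asymptotic series."""
    x = np.array(x, dtype=float, ndmin=1)
    r = np.zeros_like(x)
    m = x < 6
    while m.any():                      # psi(x) = psi(x + 1) - 1/x
        r[m] -= 1.0 / x[m]
        x[m] += 1.0
        m = x < 6
    return r + np.log(x) - 1/(2*x) - 1/(12*x**2) + 1/(120*x**4) - 1/(252*x**6)

def dirichlet_mean_entropy(counts, alpha):
    """Posterior mean of H (in nats) for a symmetric Dirichlet(alpha) prior.

    counts must include all possible response bins, even unobserved ones.
    NSB averages this closed form over alpha rather than fixing it.
    """
    n = np.asarray(counts, dtype=float)
    A = n.size                          # alphabet size
    N = n.sum()
    kappa = A * alpha
    nk = n + alpha
    return float(digamma(N + kappa + 1)[0]
                 - np.sum(nk / (N + kappa) * digamma(nk + 1)))
```

For well-sampled uniform counts the value approaches ln(A), while for sparse counts it interpolates between the near-zero plug-in entropy (small alpha) and the maximum entropy (large alpha), which is exactly the dependence NSB integrates out.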
Shuffled Information Estimator (I sh )
Recently, an alternative method of estimating the mutual information has been proposed. Unlike the methods above, this is a method for calculating the information only, and is not a general entropy bias correction. However, it can be used with the entropy corrections described above to obtain more accurate results. In brief, the method uses the addition and subtraction of different terms (in which correlations given a fixed stimulus are removed either by empirical shuffling or by marginalizing and then multiplying response probabilities) that do not change the information value in the limit of an infinite number of trials, but do remove a large part of the bias for a finite number of trials. This greatly reduces the bias of the information estimate, at the price of only a marginal increase of variance. Note that this procedure works well for weakly correlated data, which is frequently the case for simultaneously recorded neurons.

Examples of bias correction performance

Figure 2A reports the performance of the bias correction procedures, both in terms of bias and root mean square (RMS) error, on a set of simulated spike trains from eight simulated neurons. Each of these neurons emitted spikes with a probability obtained from a Bernoulli process. The spiking probabilities were set equal to those measured, in the 10-15 ms post-stimulus interval, from eight neurons in rat somatosensory cortex responding to 13 stimuli consisting of whisker vibrations of different amplitude and frequency (Arabzadeh et al., 2004). The 10-15 ms interval was chosen since it was found to be the interval containing the highest information values. Figure 2A shows that all bias correction procedures generally improve the estimate of I(S;R) with respect to the plug--in estimator, and the NSB correction is especially effective. Figure 2B shows that the bias--corrected estimation of information (and RMS error; lower panel) is much improved by using I_sh(R;S) rather than I(R;S). The use of I_sh(R;S) makes the residual errors in the estimation of information much smaller and almost independent of the bias correction method used. Taking into account both bias correction performance and computation time, for this simulated system the best method to use is the shuffled information estimator combined with the Panzeri-Treves analytical correction. Using this, an accurate estimate of the information is possible even when the number of samples per stimulus is comparable to R, the dimension of the response space.
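The construction behind I_sh can be sketched for a pair of simultaneously recorded cells. The combination used below -- plug-in information, minus the information of responses shuffled across trials within each stimulus, plus the information of the product of single-cell conditional distributions -- is one standard form of the estimator; the data layout and names are illustrative, and every stimulus is assumed to occur at least once.

```python
import numpy as np

rng = np.random.default_rng(2)

def info_bits(p):
    """I(S;R) in bits from a joint probability table p[s, r1, r2]."""
    H = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))
    return H(p.sum(axis=(1, 2))) + H(p.sum(axis=0).ravel()) - H(p.ravel())

def joint_counts(stim, r1, r2, n_s, n_r):
    c = np.zeros((n_s, n_r, n_r))
    np.add.at(c, (stim, r1, r2), 1)
    return c

def shuffled_estimator(stim, r1, r2, n_s, n_r):
    c = joint_counts(stim, r1, r2, n_s, n_r)
    p = c / c.sum()
    # Shuffle each cell's responses across trials within each stimulus,
    # empirically destroying within-stimulus (noise) correlations.
    r1_sh, r2_sh = r1.copy(), r2.copy()
    for s in range(n_s):
        idx = np.flatnonzero(stim == s)
        r1_sh[idx] = r1_sh[rng.permutation(idx)]
        r2_sh[idx] = r2_sh[rng.permutation(idx)]
    p_sh = joint_counts(stim, r1_sh, r2_sh, n_s, n_r) / len(stim)
    # Product-of-marginals distribution P(s) P(r1|s) P(r2|s).
    p_s = p.sum(axis=(1, 2))
    p1 = p.sum(axis=2) / p_s[:, None]            # P(r1|s)
    p2 = p.sum(axis=1) / p_s[:, None]            # P(r2|s)
    p_ind = p_s[:, None, None] * p1[:, :, None] * p2[:, None, :]
    # The two middle terms coincide in the infinite-data limit, but at
    # finite N their biases largely cancel the bias of the plug-in term.
    return info_bits(p) - info_bits(p_sh) + info_bits(p_ind)
```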
It should be noted that while bias correction techniques such as those discussed above are essential if accurate estimation of the information is required, whether to employ them depends upon the question being addressed. If the aim is quantitative comparison of information values between different experiments, stimuli or behaviours, they are essential. If instead the goal is simply to detect the presence of a significant interaction, the statistical power of information as a test of independence is greater if the uncorrected plug--in estimate is used (Ince et al., 2012). This is because all the bias correction techniques introduce additional variance; for a hypothesis test which is unaffected by underlying bias, this variance reduces the statistical power.
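For the pure detection setting just described, the uncorrected plug-in estimate can serve directly as the test statistic of a permutation test, since the observed and permuted values share the same limited-sampling bias; the sketch below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def plugin_mi(stim, resp):
    """Plug-in I(S;R) in bits from paired arrays of discrete labels."""
    _, s = np.unique(stim, return_inverse=True)
    _, r = np.unique(resp, return_inverse=True)
    c = np.zeros((s.max() + 1, r.max() + 1))
    np.add.at(c, (s, r), 1)
    p = c / c.sum()
    H = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))
    return H(p.sum(axis=1)) + H(p.sum(axis=0)) - H(p.ravel())

def mi_permutation_test(stim, resp, n_perm=500):
    """One-sided permutation p-value for the hypothesis of independence."""
    obs = plugin_mi(stim, resp)
    null = np.array([plugin_mi(stim, rng.permutation(resp))
                     for _ in range(n_perm)])
    # The observed and permuted estimates carry the same limited-sampling
    # bias, so no bias correction is needed for the test itself.
    return (1 + np.sum(null >= obs)) / (1 + n_perm)
```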
Information Component Analysis Techniques
The principal use of Information Theory in neuroscience is to study the neural code -and one of the commonly studied problems in neural coding is to take a given neurophysiological dataset, and to determine how the information about some external correlate is represented. One way to do this is to compute the mutual information that the neural responses convey on average about the stimuli (or other external correlate), under several different assumptions about the nature of the underlying code (e.g. firing rate, spike latency, etc). The assumption is that if one code yields substantially greater mutual information, then downstream detectors are more likely to be using that code (the Efficient Coding Hypothesis).
A conceptual improvement on this procedure, where it can be done, is to take the total information available from all patterns of spikes (and silences), and to break it down (mathematically) into components reflecting different encoding mechanisms, such as firing rates, pairwise correlations, etc. (Panzeri et al., 1999; Panzeri and Schultz, 2001). One way to do this is to perform an asymptotic expansion of the mutual information, grouping terms in such a way that they reflect meaningful coding mechanisms.
Taylor series expansion of the mutual information
A Taylor series expansion of the mutual information is one such approach. For short time windows (and we will discuss presently what "short" means), the mutual information can be approximated as

$$I(R;S) = T\,I_t + \frac{T^2}{2}\,I_{tt} + \dots \tag{3}$$
where the subscript t indicates the derivative with respect to the time window length T. For time windows sufficiently short that only a pair of spikes are contained within the time window (either from a population of cells or from a single spike train), the first two terms are all that is needed. If the number of spikes exceeds two, it may still be a good approximation, but higher order terms would be needed to capture all of the information. One possibility is that Equation (3) could be extended to incorporate higher order terms, however we have instead found it better in practice to take an alternative approach (see next section).
With only a few spikes to deal with, it is possible to use combinatorics to write out the expressions for the probabilities of observing different spike patterns. This was initially done for the information contained in the spikes fired by a small population of cells (Panzeri et al., 1999), and then later extended to the information carried by the spatiotemporal dynamics of a small population of neurons over a finite temporal word length. In the former formulation, we define r_i(s) to be the mean response rate (number of spikes in T divided by T) of cell i (of C cells) to stimulus s over all trials where s was presented. We then define the signal cross-correlation density to be
$$\nu_{ij} = \frac{\langle r_i(s)\, r_j(s)\rangle_s}{\langle r_i(s)\rangle_s\, \langle r_j(s)\rangle_s} - 1 \tag{4}$$

where $\langle\cdot\rangle_s$ denotes an average over stimuli. This quantity captures the correlation in the mean response profiles of each cell to different stimuli (e.g. correlated tuning curves). Analogously, we define the noise cross-correlation density to be

$$\gamma_{ij}(s) = \frac{\overline{r_i(s)\, r_j(s)}}{r_i(s)\, r_j(s)} - 1 \tag{5}$$

where the overline denotes an average over trials at fixed stimulus s, so that the numerator is the trial-averaged product of the single-trial responses and the denominator is the product of the trial-mean rates.
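Both densities can be estimated directly from a trials-by-cells array of responses. The sketch below is illustrative (the function and variable names are our own, not from any toolbox), assumes equiprobable stimuli, and checks the expected values on two independent Poisson cells with identical tuning curves: identical tuning gives ν₁₂ = ⟨r²⟩ₛ/⟨r⟩ₛ² − 1 = (14/3)/4 − 1 = 1/6 for the tuning curve (1, 2, 3), while independence keeps the off-diagonal noise correlation near zero.

```python
import numpy as np

def correlation_densities(rates):
    """rates: array (n_stim, n_trials, n_cells) of single-trial responses.
    Returns nu (n_cells, n_cells), the signal cross-correlation densities,
    and gamma (n_stim, n_cells, n_cells), the noise cross-correlation densities."""
    rbar = rates.mean(axis=1)                      # trial-mean rate r_i(s)
    # Signal correlation, Eq. (4): stimulus average of products of trial means.
    num = np.mean(rbar[:, :, None] * rbar[:, None, :], axis=0)
    den = rbar.mean(axis=0)[:, None] * rbar.mean(axis=0)[None, :]
    nu = num / den - 1.0
    # Noise correlation, Eq. (5): trial average of products at fixed stimulus.
    prod = np.mean(rates[:, :, :, None] * rates[:, :, None, :], axis=1)
    gamma = prod / (rbar[:, :, None] * rbar[:, None, :]) - 1.0
    return nu, gamma

# Two independent Poisson cells sharing the tuning curve r(s) = 1, 2, 3.
rng = np.random.default_rng(1)
tuning = np.array([1.0, 2.0, 3.0])
rates = rng.poisson(tuning[:, None, None], size=(3, 5000, 2)).astype(float)
nu, gamma = correlation_densities(rates)
print(nu[0, 1], gamma[:, 0, 1].mean())  # ~1/6 and ~0 respectively
```

Note that the diagonal entries γ_ii pick up the Poisson self-term (autocorrelation), which is why only the cross terms are checked here.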
The mutual information I(R;S) can then be written as the sum of four components,
$$I = I_{\text{lin}} + I_{\text{sig-sim}} + I_{\text{cor-ind}} + I_{\text{cor-dep}}, \tag{6}$$
with the first component

$$I_{\text{lin}} = T \sum_{i=1}^{C} \left\langle r_i(s)\,\log_2 \frac{r_i(s)}{\langle r_i(s')\rangle_{s'}} \right\rangle_s \tag{7}$$

capturing the rate component of the information, i.e. that which survives when there is no correlation between cells (even of their tuning curves) present at all. This quantity is positive semi-definite, and adds linearly across neurons. It is identical to the "information per spike" approximation calculated by a number of authors (Skaggs et al., 1993; Brenner et al., 2000; Sharpee et al., 2004). I_sig-sim is the correction to the information required to take account of correlation in the tuning of individual neurons, or signal similarity:

$$I_{\text{sig-sim}} = \frac{T^2}{2\ln 2} \sum_{i=1}^{C}\sum_{j=1}^{C} \langle r_i(s)\rangle_s\, \langle r_j(s)\rangle_s \left[\nu_{ij} + (1+\nu_{ij})\ln\frac{1}{1+\nu_{ij}}\right] \tag{8}$$

This is negative semi-definite. I_cor-ind is the effect on the transmitted information of the average level of noise correlation (correlation at fixed stimulus) between neurons:

$$I_{\text{cor-ind}} = \frac{T^2}{2} \sum_{i=1}^{C}\sum_{j=1}^{C} \bigl\langle r_i(s)\, r_j(s)\, \gamma_{ij}(s) \bigr\rangle_s \,\log_2 \frac{1}{1+\nu_{ij}} \tag{9}$$

I_cor-ind can take either positive or negative values. I_cor-dep is the contribution of stimulus-dependence in the correlation to the information, as would be present, for instance, if synchrony was modulated by a stimulus parameter:

$$I_{\text{cor-dep}} = \frac{T^2}{2} \sum_{i=1}^{C}\sum_{j=1}^{C} \left\langle r_i(s)\, r_j(s)\,\bigl(1+\gamma_{ij}(s)\bigr)\, \log_2 \frac{\langle r_i(s')\, r_j(s')\rangle_{s'}\,\bigl(1+\gamma_{ij}(s)\bigr)}{\bigl\langle r_i(s')\, r_j(s')\,\bigl(1+\gamma_{ij}(s')\bigr)\bigr\rangle_{s'}} \right\rangle_s \tag{10}$$
An application of this approach to break out components reflecting rate and correlational coding is illustrated in Figure 3, using simulated spike trains with controlled correlation. This approach has been used by a number of authors to dissect out contributions to neural coding (e.g. Scaglione et al., 2008).
This approach extends naturally to the consideration of temporal correlations between spike times within a spike train. Note that the Taylor series approach can be applied to the entropy as well as the mutual information; this was used to break out the effect of spatiotemporal correlations on the entropy of a small population of neural spike trains. While the Taylor series approach can be extremely useful in teasing out contributions of different mechanisms to the transmission of information, it is not recommended as a method for estimation of the total information, as the validity of the approximation can break down quickly as the time window grows beyond that sufficient to contain more than a few spikes from the population. It is however useful as an additional inspection tool after the total information has been computed, which allows the information to be broken down into not only mechanistic components but also their contributions from individual cells and time bins (for instance by visualizing the quantity after the summations in Equation (10) as a matrix).
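Assuming the trial-mean rates r_i(s) and the noise correlation densities γ_ij(s) are known (in practice they would be estimated from data), Equations (7)-(10) can be evaluated in a few lines. The sketch below is our own illustration with made-up rates, equiprobable stimuli and a stimulus-dependent synchrony term; it is not a substitute for the published toolboxes.

```python
import numpy as np

def taylor_components(T, rbar, gamma):
    """Eqs. (7)-(10) for equiprobable stimuli.
    rbar: (n_stim, n_cells) trial-mean rates; gamma: (n_stim, n_cells, n_cells)."""
    rs = rbar.mean(axis=0)                              # <r_i(s)>_s
    # Signal cross-correlation density, Eq. (4).
    nu = np.mean(rbar[:, :, None] * rbar[:, None, :], axis=0) / np.outer(rs, rs) - 1.0
    # Eq. (7): rate (linear) component.
    I_lin = T * np.sum(np.mean(rbar * np.log2(rbar / rs), axis=0))
    # Eq. (8): signal-similarity correction (negative semi-definite).
    I_sig = T**2 / (2 * np.log(2)) * np.sum(
        np.outer(rs, rs) * (nu + (1 + nu) * np.log(1.0 / (1 + nu))))
    # Eq. (9): stimulus-independent correlation component.
    pair_s = rbar[:, :, None] * rbar[:, None, :] * gamma    # r_i(s) r_j(s) gamma_ij(s)
    I_ci = T**2 / 2 * np.sum(pair_s.mean(axis=0) * np.log2(1.0 / (1 + nu)))
    # Eq. (10): stimulus-dependent correlation component.
    pp = rbar[:, :, None] * rbar[:, None, :]                # r_i(s) r_j(s)
    num = pp.mean(axis=0)[None] * (1 + gamma)
    den = (pp * (1 + gamma)).mean(axis=0)[None]
    I_cd = T**2 / 2 * np.sum(np.mean(pp * (1 + gamma) * np.log2(num / den), axis=0))
    return I_lin, I_sig, I_ci, I_cd

# Two cells, two stimuli; the pairwise synchrony differs across stimuli.
rbar = np.array([[10.0, 12.0], [14.0, 8.0]])
gamma = np.array([[[0.0, g], [g, 0.0]] for g in (0.3, -0.2)])
I_lin, I_sig, I_ci, I_cd = taylor_components(0.01, rbar, gamma)
print(I_lin, I_sig, I_ci, I_cd)
```

Visualizing the per-pair summands (the arrays before the final `np.sum`) as matrices gives exactly the kind of inspection tool described above.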
A more general information component analysis approach
A limitation of the Taylor series approach is that it is restricted to the analysis of quite short time windows of neural responses. However, an exact breakdown approach allowed a very similar information component approach to be used (Pola et al., 2003). In this approach, we consider a population response (spike pattern) r, and define the normalized noise cross-correlation to be

$$\gamma(\mathbf{r}\,|\,s) = \begin{cases} \dfrac{P(\mathbf{r}\,|\,s)}{P_{\text{ind}}(\mathbf{r}\,|\,s)} - 1 & \text{if } P_{\text{ind}}(\mathbf{r}\,|\,s) \neq 0 \\ 0 & \text{if } P_{\text{ind}}(\mathbf{r}\,|\,s) = 0 \end{cases} \tag{11}$$

where the conditionally independent response distribution is the product of the single-cell marginals,

$$P_{\text{ind}}(\mathbf{r}\,|\,s) = \prod_{c=1}^{C} P(r_c\,|\,s). \tag{12}$$

Compare the expressions in Equations (5) and (12). Similarly, the signal correlation becomes

$$\nu(\mathbf{r}) = \begin{cases} \dfrac{P_{\text{ind}}(\mathbf{r})}{\prod_{c} P(r_c)} - 1 & \text{if } \prod_{c} P(r_c) \neq 0 \\ 0 & \text{if } \prod_{c} P(r_c) = 0 \end{cases} \tag{13}$$

Using these definitions, the information components can now be written exactly (without the approximation of short time windows) as

$$I_{\text{lin}} = \sum_{c=1}^{C} \left\langle \sum_{r_c} P(r_c\,|\,s)\,\log_2 \frac{P(r_c\,|\,s)}{P(r_c)} \right\rangle_s \tag{14}$$

$$I_{\text{sig-sim}} = \frac{1}{\ln 2} \sum_{\mathbf{r}} \left( \prod_{c} P(r_c) \right) \left[ \nu(\mathbf{r}) + \bigl(1+\nu(\mathbf{r})\bigr)\ln\frac{1}{1+\nu(\mathbf{r})} \right] \tag{15}$$

$$I_{\text{cor-ind}} = \sum_{\mathbf{r}} \bigl\langle P_{\text{ind}}(\mathbf{r}\,|\,s)\,\gamma(\mathbf{r}\,|\,s) \bigr\rangle_s \,\log_2 \frac{1}{1+\nu(\mathbf{r})} \tag{16}$$

$$I_{\text{cor-dep}} = \sum_{\mathbf{r}} \left\langle P_{\text{ind}}(\mathbf{r}\,|\,s)\bigl(1+\gamma(\mathbf{r}\,|\,s)\bigr)\, \log_2 \frac{\langle P_{\text{ind}}(\mathbf{r}\,|\,s')\rangle_{s'}\,\bigl(1+\gamma(\mathbf{r}\,|\,s)\bigr)}{\bigl\langle P_{\text{ind}}(\mathbf{r}\,|\,s')\bigl(1+\gamma(\mathbf{r}\,|\,s')\bigr)\bigr\rangle_{s'}} \right\rangle_s \tag{17}$$
This latter component is identical to the quantity "ΔI" introduced by Latham and colleagues (Nirenberg et al., 2001) to describe the information lost due to a decoder ignoring correlations. It is zero if and only if P(s|r) = P_ind(s|r) for every s and r.
The expressions shown above are perhaps the most useful description for obtaining insight into the behavior of the information components under different statistical assumptions relating to the neural code; however, they are not necessarily the best way to estimate the individual components. The components can also be written as a sum of entropies and entropy-like quantities, which can then be computed using the entropy estimation algorithms described earlier in this chapter (Pola et al., 2003; Montani et al., 2007; Schultz et al., 2009). Note that, as shown by Scaglione and colleagues (Scaglione et al., 2008, 2010), the components in Eqs. (14)-(17) can, under appropriate conditions, be further decomposed to tease apart the relative role of autocorrelations (spikes from the same cells) and cross-correlations (spikes from different cells).
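For a small population the exact breakdown can be verified numerically. The sketch below (our own illustrative code) implements Equations (11)-(17) for two cells with binary responses and equiprobable stimuli, choosing a strictly positive response table so that the zero-probability branches of Equations (11) and (13) are never needed, and confirms that the four components sum to the full mutual information.

```python
import numpy as np

def exact_breakdown(P_rs):
    """P_rs: (n_stim, n_r1, n_r2) conditional table P(r|s), equiprobable s.
    Returns (I_total, I_lin, I_sig_sim, I_cor_ind, I_cor_dep) in bits."""
    n_s = P_rs.shape[0]
    P_r = P_rs.mean(axis=0)                          # P(r)
    Pc1_s = P_rs.sum(axis=2)                         # P(r1|s)
    Pc2_s = P_rs.sum(axis=1)                         # P(r2|s)
    Pc1, Pc2 = Pc1_s.mean(axis=0), Pc2_s.mean(axis=0)
    P_ind_s = Pc1_s[:, :, None] * Pc2_s[:, None, :]  # Eq. (12)
    P_ind = P_ind_s.mean(axis=0)
    P_prod = Pc1[:, None] * Pc2[None, :]             # prod_c P(r_c)
    gamma = P_rs / P_ind_s - 1.0                     # Eq. (11), all probs > 0
    nu = P_ind / P_prod - 1.0                        # Eq. (13)

    I_total = np.sum(P_rs * np.log2(P_rs / P_r)) / n_s
    I_lin = (np.sum(Pc1_s * np.log2(Pc1_s / Pc1)) +
             np.sum(Pc2_s * np.log2(Pc2_s / Pc2))) / n_s                        # Eq. (14)
    I_sig = np.sum(P_prod * (nu + (1 + nu) * np.log(1 / (1 + nu)))) / np.log(2)  # Eq. (15)
    I_ci = np.sum((P_ind_s * gamma).mean(axis=0) * np.log2(1 / (1 + nu)))        # Eq. (16)
    I_cd = np.sum(np.mean(P_rs * np.log2(P_ind[None] * (1 + gamma) /
                  (P_ind_s * (1 + gamma)).mean(axis=0)[None]), axis=0))          # Eq. (17)
    return I_total, I_lin, I_sig, I_ci, I_cd

# Arbitrary strictly positive conditional response tables for two stimuli.
P_rs = np.array([[[0.5, 0.2], [0.1, 0.2]],
                 [[0.1, 0.2], [0.3, 0.4]]])
I, Il, Iss, Ici, Icd = exact_breakdown(P_rs)
print(I, Il + Iss + Ici + Icd)  # the four components sum exactly to the total
```

Note how Eq. (17) simplifies in the code: P_ind(r|s)(1+γ(r|s)) is just P(r|s), and the stimulus average in the denominator of the logarithm is just P(r).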
Maximum entropy approach to information component analysis
The conceptual basis of the information component approach is to make an assumption that constrains the statistics of the response distribution P(r|s), compute the mutual information subject to this assumption, and, by evaluating the difference between this and the "full" information, calculate the contribution to the information of relaxing this constraint. By making further assumptions, the contributions of additional mechanisms can in many cases be dissected out hierarchically. As an example, by assuming that the conditional distribution of responses given stimuli factorizes across cells, P(r|s) = P_ind(r|s), and substituting into the mutual information equation, one can define an information component I_ind. This component can then be further broken up
$$I_{\text{ind}} = I_{\text{lin}} + I_{\text{sig-sim}}, \tag{18}$$
with I_lin and I_sig-sim as defined in the previous section. The correlational component, I_cor, is then just the difference between the total mutual information and I_ind. This can also be further broken up, as
$$I_{\text{cor}} = I_{\text{cor-ind}} + I_{\text{cor-dep}}. \tag{19}$$
This approach can be extended further. For instance, the assumption of a Markov approximation can be made, with a memory extending back q timesteps, and the information computed under this assumption (Pola et al., 2005). More generally, any simplified model can be used, although the maximum entropy models are of special interest (Schaub and Schultz, 2012):

$$P_{\text{simp}}(\mathbf{r}\,|\,s) = \frac{1}{Z}\exp\left\{\lambda_0 - 1 + \sum_{i=1}^{m} \lambda_i\, g_i(\mathbf{r})\right\} \tag{20}$$
with parameters λ_i implementing a set of constraints reflecting assumptions made about what are the important (non-noise) properties of the empirical distribution, and the partition function Z ensuring normalisation. An example of this is the Ising model, which has been used with some success to model neural population response distributions (Schneidman et al., 2006; Shlens et al., 2006; Schaub and Schultz, 2012). Estimation of the Ising model for large neural populations can be extremely computationally intensive if brute force methods for computing the partition function are employed; however, mean field approximations can be employed in practice with good results (Roudi et al., 2009; Schaub and Schultz, 2012).
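As an illustration of Equation (20), the pairwise (Ising) case can be evaluated by brute force for a small population: with biases h_i and couplings J_ij playing the role of the λ parameters, and the constraint functions g_i given by single-unit and pairwise products, the partition function Z is just a sum over all 2^C binary words. The parameters below are made up for illustration; the exponential cost of this enumeration is exactly why mean-field methods are needed for large populations.

```python
import itertools
import numpy as np

def ising_distribution(h, J):
    """P(r) = exp(sum_i h_i r_i + sum_{i<j} J_ij r_i r_j) / Z over binary
    words r, computed by exhaustive enumeration (feasible only for small C)."""
    C = len(h)
    words = np.array(list(itertools.product([0, 1], repeat=C)), dtype=float)
    # 0.5 factor because J is symmetric with zero diagonal.
    energies = words @ h + 0.5 * np.einsum('ki,ij,kj->k', words, J, words)
    weights = np.exp(energies)
    Z = weights.sum()                       # partition function
    return words, weights / Z

rng = np.random.default_rng(2)
C = 4
h = rng.normal(0, 1, C)
J = rng.normal(0, 0.5, (C, C))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)                    # no self-coupling
words, P = ising_distribution(h, J)
entropy = -np.sum(P * np.log2(P))           # entropy of the fitted model, bits
print(P.sum(), entropy)
```

The same enumeration gives the model entropy and, given a table of P(r|s) per stimulus, the information under the pairwise assumption.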
Binless methods for estimating information
In applications in neuroscience, typically at least one of the random variables (stimuli or responses) is discrete, and thus the approach of discretizing one or more of the variables is often taken. However, where stimuli and responses are both continuous (an example might be local field potential responses to a white noise sensory stimulus), it may be advantageous to use techniques better suited to continuous signals, such as kernel density estimators (Moon et al., 1995), nearest neighbor estimators (Kraskov et al., 2004) or binless metric space methods (Victor, 2002). We refer to the entry "Bin-Less Estimators for Information Quantities" for an in-depth discussion of these techniques.
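To give a flavour of the binless approach, the sketch below implements the classic Kozachenko-Leonenko nearest-neighbour estimator of differential entropy in one dimension, the building block underlying the Kraskov et al. (2004) mutual information estimator. It is our own illustrative code, not taken from any toolbox, and the digamma function is replaced by its asymptotic expansion to keep the example dependency-free.

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def kl_entropy_1d(x):
    """Kozachenko-Leonenko estimate of differential entropy (nats) from 1-d
    samples: H ~ psi(N) - psi(1) + log 2 + mean(log eps_i), where eps_i is
    the distance from sample x_i to its nearest neighbour."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    d = np.diff(x)                            # gaps between sorted points
    eps = np.empty(n)
    eps[0], eps[-1] = d[0], d[-1]             # endpoints have one neighbour
    eps[1:-1] = np.minimum(d[:-1], d[1:])     # nearest of the two gaps
    psi_n = np.log(n) - 1.0 / (2 * n)         # asymptotic digamma(N)
    return psi_n + EULER_GAMMA + np.log(2.0) + np.mean(np.log(eps))

# Standard Gaussian: true differential entropy is 0.5*log(2*pi*e) ~ 1.4189 nats.
rng = np.random.default_rng(3)
h_est = kl_entropy_1d(rng.normal(size=4000))
print(h_est)
```

No binning is involved: the local density is inferred from nearest-neighbour distances, which is what makes such estimators attractive for continuous signals.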
Calculation of information theoretic quantities from parametric models
An alternative to measuring information-theoretic quantities directly from observed data is to fit the data to a model, and then to either analytically or numerically (depending upon the complexity of the model) compute the information quantity from the model. Examples include analytical calculations of the information flow through models of neural circuits with parameters fit to anatomical and physiological data (Treves and Panzeri, 1995; Schultz and Treves, 1998; Schultz and Rolls, 1999), and parametric fitting of probabilistic models of neural spike firing (Paninski, 2004; Pillow et al., 2005, 2008), of spike count distributions (Gershon et al., 1998; Clifford and Ibbotson, 2000), or of neural mass signals (Magri et al., 2009). It should come as no surprise that "there is no free lunch": although in principle information quantities can be computed exactly under such circumstances, the problem is moved to one of assessing model validity, since an incorrect model can lead to a bias in either direction in the information computed.

Figures and Figure Captions

Figure 1: The origin of the limited sampling bias in information measures. (A, B) Simulation of a toy uninformative neuron, responding on each trial with a uniform distribution of spike counts ranging from 0 to 9, regardless of which of two stimuli (S = 1 in (A) and S = 2 in (B)) is presented. The black dotted horizontal line is the true response distribution; solid red lines are estimates sampled from 40 trials. The limited sampling causes the appearance of spurious differences in the two estimated conditional response distributions, leading to an artificial positive value of mutual information. (C) The distribution (over 5000 simulations) of the mutual information values obtained (without using any bias correction) estimating Eq. 1 from the stimulus-response probabilities computed with 40 trials. The dashed green vertical line indicates the true value of the mutual information carried by the simulated system (which equals 0 bits); the difference between this and the mean observed value (dotted green line) is the bias. Redrawn with permission from (Ince et al., 2010).

Figure 2: Performance of various bias correction methods. Several bias correction methods were applied to spike trains from eight simulated somatosensory cortical neurons. The information estimates (upper panels) and root mean square (RMS) error (lower panels) are plotted as a function of the simulated number of trials per stimulus. (A) Mean +/- SD (upper panel; over 50 simulations) and RMS error (lower panel) of I(S;R). (B) Mean +/- SD (upper panel; over 50 simulations) and RMS error (lower panel) of I_sh(S;R).

Figure 3: Information component analysis of simulated data (for a five-cell ensemble). (a) Poisson cells, with the only difference between the stimuli being firing rate. (b) Integrate-and-fire cells, with common input due to sharing of one third of connections, resulting in a cross-correlogram as depicted, leads to (c) a contribution to the information from I_cor-ind. (d) Integrate-and-fire cells with two of the five cells increasing their correlation due to increased shared input, for one of the stimuli, where the others remain randomly correlated, leading to cross-correlograms shown in panel (e). Firing rates are approximately balanced between the two stimuli. This results in a substantial contribution to the information from the stimulus-dependent correlation component, I_cor-dep. From (Panzeri et al., 1999) with permission.
The leading terms in the biases are, respectively:

$$\text{BIAS}\bigl[H(R)\bigr] = -\frac{1}{2N\ln 2}\,\bigl(\bar{R} - 1\bigr)$$

$$\text{BIAS}\bigl[H(R\,|\,S)\bigr] = -\frac{1}{2N\ln 2}\,\sum_{s}\bigl(\bar{R}_s - 1\bigr)$$

$$\text{BIAS}\bigl[I(S;R)\bigr] = \frac{1}{2N\ln 2}\,\left[\sum_{s}\bigl(\bar{R}_s - 1\bigr) - \bigl(\bar{R} - 1\bigr)\right]$$

where N is the total number of trials, R̄ is the number of response bins with nonzero probability of being occupied, and R̄_s is the number of response bins with nonzero probability of being occupied given stimulus s.
Acknowledgements
References

Arabzadeh E, Panzeri S, Diamond ME (2004) Whisker Vibration Information Carried by Rat Barrel Cortex Neurons. J Neurosci 24:6011-6020.
Brenner N, Strong SP, Koberle R, Bialek W, Steveninck RRR (2000) Synergy in a neural code. Neural Comput 12:1531-1552.
Clifford CW, Ibbotson MR (2000) Response variability and information transfer in directional neurons of the mammalian horizontal optokinetic system. Vis Neurosci 17:207-215.
Gershon ED, Wiener MC, Latham PE, Richmond BJ (1998) Coding Strategies in Monkey V1 and Inferior Temporal Cortices. J Neurophysiol 79:1135-1144.
Ince RAA, Mazzoni A, Bartels A, Logothetis NK, Panzeri S (2012) A novel test to determine the significance of neural selectivity to single and multiple potentially correlated stimulus features. J Neurosci Methods 210:49-65.
Ince RAA, Mazzoni A, Petersen RS, Panzeri S (2010) Open source tools for the information theoretic analysis of neural data. Front Neurosci 4:62-70.
Kennel MB, Shlens J, Abarbanel HDI, Chichilnisky E (2005) Estimating entropy rates with Bayesian confidence intervals. Neural Comput 17:1531-1576.
Kraskov A, Stögbauer H, Grassberger P (2004) Estimating mutual information. Phys Rev E 69:066138.
Magri C, Whittingstall K, Singh V, Logothetis NK, Panzeri S (2009) A toolbox for the fast information analysis of multiple-site LFP, EEG and spike train recordings. BMC Neurosci 10:81.
Miller G (1955) Note on the bias of information estimates. In: Information Theory in Psychology: Problems and Methods, pp 95-100.
Montani F, Kohn A, Smith MA, Schultz SR (2007) The Role of Correlations in Direction and Contrast Coding in the Primary Visual Cortex. J Neurosci 27:2338.
Montemurro MA, Senatore R, Panzeri S (2007) Tight Data-Robust Bounds to Mutual Information Combining Shuffling and Model Selection Techniques. Neural Comput 19:2913-2957.
Moon YI, Rajagopalan B, Lall U (1995) Estimation of mutual information using kernel density estimators. Phys Rev E 52:2318.
Nemenman I, Bialek W, de Ruyter van Steveninck R (2004) Entropy and information in neural spike trains: Progress on the sampling problem. Phys Rev E 69:056111.
Nemenman I, Lewen GD, Bialek W, de Ruyter van Steveninck RR (2008) Neural Coding of Natural Stimuli: Information at Sub-Millisecond Resolution. PLoS Comput Biol 4:e1000025.
Nemenman I, Shafee F, Bialek W (2002) Entropy and Inference, Revisited. Adv Neural Inf Process Syst 14:95-100.
Nirenberg S, Carcieri SM, Jacobs AL, Latham PE (2001) Retinal ganglion cells act largely as independent encoders. Nature 411:698-701.
Paninski L (2004) Estimating Entropy on Bins Given Fewer Than Samples. IEEE Trans Inf Theory 50:2201.
Panzeri S, Schultz SR (2001) A unified approach to the study of temporal, correlational, and rate coding. Neural Comput 13:1311-1349.
Panzeri S, Schultz SR, Treves A, Rolls ET (1999) Correlations and the encoding of information in the nervous system. Proc Biol Sci 266:1001-1012.
Panzeri S, Senatore R, Montemurro MA, Petersen RS (2007) Correcting for the Sampling Bias Problem in Spike Train Information Measures. J Neurophysiol 98:1064-1072.
Panzeri S, Treves A (1996) Analytical estimates of limited sampling biases in different information measures. Netw Comput Neural Syst 7:87-107.
Pillow JW, Paninski L, Uzzell VJ, Simoncelli EP, Chichilnisky E (2005) Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model. J Neurosci 25:11003-11013.
Pillow JW, Shlens J, Paninski L, Sher A, Litke AM, Chichilnisky E, Simoncelli EP (2008) Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature 454:995-999.
Pola G, Petersen RS, Thiele A, Young MP, Panzeri S (2005) Data-robust tight lower bounds to the information carried by spike times of a neuronal population. Neural Comput 17:1962-2005.
Pola G, Thiele A, Hoffmann KP, Panzeri S (2003) An exact method to quantify the information transmitted by different mechanisms of correlational coding. Netw Comput Neural Syst 14:35-60.
Roudi Y, Aurell E, Hertz J (2009) Statistical physics of pairwise probability models. arXiv preprint arXiv:0905.1410.
Scaglione A, Foffani G, Scannella G, Cerutti S, Moxon K (2008) Mutual information expansion for studying the role of correlations in population codes: How important are autocorrelations? Neural Comput 20:2662-2695.
Scaglione A, Moxon KA, Foffani G (2010) General Poisson Exact Breakdown of the Mutual Information to Study the Role of Correlations in Populations of Neurons. Neural Comput 22:1445-1467.
Schaub MT, Schultz SR (2012) The Ising decoder: reading out the activity of large neural ensembles. J Comput Neurosci 32:101-118.
Schneidman E, Berry MJ II, Segev R, Bialek W (2006) Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 440:1007-1012.
Schultz S, Treves A (1998) Stability of the replica-symmetric solution for the information conveyed by a neural network. Phys Rev E 57:3302-3310.
Schultz SR, Kitamura K, Post-Uiterweer A, Krupic J, Häusser M (2009) Spatial Pattern Coding of Sensory Information by Climbing Fiber-Evoked Calcium Signals in Networks of Neighboring Cerebellar Purkinje Cells. J Neurosci 29:8005-8015.
Schultz SR, Panzeri S (2001) Temporal Correlations and Neural Spike Train Entropy. Phys Rev Lett 86:5823-5826.
Schultz SR, Rolls ET (1999) Analysis of information transmission in the Schaffer collaterals. Hippocampus 9:582-598.
Sharpee T, Rust NC, Bialek W (2004) Analyzing Neural Responses to Natural Signals: Maximally Informative Dimensions. Neural Comput 16:223-250.
Shlens J, Field GD, Gauthier JL, Grivich MI, Petrusca D, Sher A, Litke AM, Chichilnisky EJ (2006) The Structure of Multi-Neuron Firing Patterns in Primate Retina. J Neurosci 26:8254-8266.
Shlens J, Kennel MB, Abarbanel HDI, Chichilnisky E (2007) Estimating information rates with confidence intervals in neural spike trains. Neural Comput 19:1683-1719.
Skaggs WE, McNaughton BL, Gothard KM (1993) An Information-Theoretic Approach to Deciphering the Hippocampal Code. In: Advances in Neural Information Processing Systems 5 (NIPS Conference), pp 1030-1037. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. Available at: http://dl.acm.org/citation.cfm?id=645753.668057.
Strong SP, Koberle R, de Ruyter van Steveninck RR, Bialek W (1998) Entropy and Information in Neural Spike Trains. Phys Rev Lett 80:197-200.
Treves A, Panzeri S (1995) The Upward Bias in Measures of Information Derived from Limited Data Samples. Neural Comput 7:399-407.
Victor J (2006) Approaches to information-theoretic analysis of neural activity. Biol Theory 1:302-316.
Victor JD (2002) Binless strategies for estimation of information from neural data. Phys Rev E 66:051903.
Dielectric screening of surface states in a topological insulator

J. P. F. LeBlanc (Max-Planck-Institute for the Physics of Complex Systems, 01187 Dresden, Germany)
J. P. Carbotte (Department of Physics and Astronomy, McMaster University, Hamilton, Ontario L8S 4M1, Canada; The Canadian Institute for Advanced Research, Toronto, ON M5G 1Z8, Canada)

arXiv:1401.2340 (10 Jan 2014); DOI: 10.1103/PhysRevB.89.035419
(Dated: January 13, 2014)
PACS numbers: 73.20.-r, 71.45.Gm, 77.22.Ch

Hexagonal warping provides an anisotropy to the dispersion curves of the helical Dirac fermions that exist at the surface of a topological insulator. A sub-dominant quadratic in momentum term leads to an asymmetry between conduction and valence band. A gap can also be opened through magnetic doping. We show how these various modifications to the Dirac spectrum change the polarization function of the surface states and employ our results to discuss their effect on the plasmons. In the long wavelength limit, the plasmon dispersion retains its square root dependence on its momentum, q, but its slope is modified and it can acquire a weak dependence on the direction of q. Further, we find the existence of several plasmon branches, one which is damped for all values of q, and extract the plasmon scattering rate for a representative case.
I. INTRODUCTION
The dielectric properties of Dirac fermions have been extensively studied since graphene, a single monolayer of carbon atoms, was first isolated. [1][2][3][4][5][6][7] The electron dynamics in this two dimensional membrane, remarkably, are governed by the relativistic Dirac equation and this has many consequences such as a distinctive signature in the integer quantum Hall effect. [8][9][10] More recent works on the density-density correlation function include extensions to account for a mass term 11,12 and both Rashba and intrinsic spin-orbit coupling. 13 These are relevant to topological insulators which are insulating in the bulk with metallic surface states protected by topology which exhibit a Dirac spectrum between bulk bands. [14][15][16][17][18] While in graphene, the Dirac charge carriers have a pseudospin associated with the two atoms per unit cell honeycomb lattice in topological insulators, the spins are real electron spins with spin-momentum locking. 17 Unlike graphene, a gap in the energy spectrum of the helical Dirac electrons can be opened by doping with magnetic impurities. 19 While intrinsic graphene is often described by models with particle-hole symmetry, topological insulators show asymmetry, modelled by an additional quadratic [20][21][22] in momentum (Schrödinger) term in their energy dispersion curves in addition to the dominant linear Dirac term. This leads to a goblet or hourglass shape 17,23,24 which replaces the perfect Dirac cones of graphene with a surface state valence band which fans out in relation to the surface state conduction band. There is also an important hexagonal warping contribution [25][26][27] to the surface state Hamiltonian. This leads to significant changes in the associated Fermi surface which starts as circular for small values of chemical potential, µ, and gradually acquires a hexagonal or snowflake shape as µ is increased. This change in geometry has been observed in angular resolved photoemission (ARPES) data. 
18 Fu 25 showed that the Fermi-surface data could be understood by adding a hexagonal warping cubic term to the Hamiltonian, and it has subsequently been shown 26 that this term can have a profound effect on interband optical transitions. While in graphene the interband transitions lead to a constant uniform background conductivity [28][29][30] of σ₀ = πe²/2h, the inclusion of hexagonal warping leads instead to a background which increases with increasing photon energy above the threshold for interband absorption, which has an onset at twice the value of the chemical potential. 26 In this paper we collect these contributions (hexagonal warping, a gap, and a sub-dominant quadratic-in-momentum term) and study their effects on the dielectric screening properties of the surface carriers. [31][32][33][34] The hexagonal warping is particularly interesting because it leads to a directional anisotropy. The density-density response, or polarization function, Π(q, ω), can then depend on the angle of the scattering momentum vector q, defined relative to the Γ → K direction in the hexagonal honeycomb lattice. This anisotropy is expected to grow as the chemical potential is increased and the shape of the Fermi surface deviates more from a circle. Consequently, the plasmons which form in the system will depend not only on the absolute value of their momentum but also on its angle. Recently, Di Pietro et al. 35 have reported experimental results on Dirac plasmons in the topological insulator Bi2Se3 but did not consider warping effects in their analysis. This motivates us to study fully what effect, if any, the warping has on the plasmon dispersion.
In section II we specify our model Hamiltonian and give the expression for the polarization function, Π(q, ω), as a function of scattering momentum, q, and energy, ω. Numerical results for the real and imaginary parts of Π(q, ω) are presented in Sec. III. We also provide color map plots for the imaginary part of the inverse dielectric function. Section IV deals with the plasmon dispersion, ω p (q), wherein we also provide simplified expressions for the slope of ω p (q) in the long wavelength limit. Numerical results for the plasmon dispersion are also provided which go beyond the small q limit. A summary of our findings and concluding remarks are found in Sec. V followed by a brief appendix which contains relevant algebra.
II. MODEL AND POLARIZATION
We begin with the Kane Mele Hamiltonian 36 for helical Dirac fermions at the Γ point of the surface state Brillouin zone of a topological insulator which further includes a gap, ∆, a cubic hexagonal warping term of strength λ, and a sub-dominant quadratic in momentum Schrödinger term. Together these can be written as
H = \hbar v\,(k_x\sigma_y - k_y\sigma_x) + \frac{\lambda}{2}\,(k_+^3 + k_-^3)\,\sigma_z + \Delta\,\sigma_z + E(k) \quad (1)
where σ_x, σ_y, and σ_z are the Pauli spin matrices and v is the velocity of the Dirac part of the fermion dispersion, which is linear in momentum. In the hexagonal warping term, k_± = k_x ± ik_y, with k_x, k_y the momentum components in the surface plane. ∆ is a gap, and E(k) = ℏ²k²/(2m) ≡ E₀k² is a quadratic dispersion. We are interested in the case where the first, Dirac, term in Eq. (1) is dominant and E₀ is by comparison small. To be specific we take v = 2.8 × 10⁵ m/s and m equal to the electron mass m_e, which we refer to as E₀ = 1 in the appropriate units of ℏ²/(2m_e). These values are illustrative only. For the specific case of Bi2Te3, for example, v = 4.3 × 10⁵ m/s and m = 0.9m_e. 26 A fit to angle-resolved photoemission data on Bi2Te3 by Fu gave a value of λ ≈ 250 meVÅ³, which sets the order of magnitude for this coupling. 22,25 The energies are given by
E_s(k) = E(k) + s\sqrt{\hbar^2 v^2 k^2 + \bigl[\Delta + \lambda\,(k_x^3 - 3k_x k_y^2)\bigr]^2}\,. \quad (2)

The directionally dependent part in Eq. (2) can be rewritten in terms of the polar angle θ_k of the vector k as

\Delta(k, \theta_k) \equiv \Delta + \lambda k^3\cos(3\theta_k). \quad (3)

The eigenvectors then depend on the magnitude and direction of the momentum k, as well as on the band index s = ±1, and are given by
u(k, s) = \frac{\hbar v k\,\Bigl(1,\ \frac{1}{\hbar v k^2}\bigl[\Delta(k,\theta_k) - s\sqrt{\hbar^2 v^2 k^2 + \Delta(k,\theta_k)^2}\bigr](-ik_x + k_y)\Bigr)^{T}}{\sqrt{\hbar^2 v^2 k^2 + \bigl(\Delta(k,\theta_k) - s\sqrt{\hbar^2 v^2 k^2 + \Delta(k,\theta_k)^2}\bigr)^2}}\,. \quad (4)
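The spectrum and eigenvectors above are easy to generate numerically by diagonalizing the 2×2 Hamiltonian of Eq. (1) directly, rather than coding Eq. (4) by hand. A minimal sketch (parameter values and function names are ours; λ is entered as 0.25 eVÅ³, i.e. the 250 meVÅ³ scale quoted in the text):

```python
import numpy as np

HBAR_V, LAM, DELTA, E0 = 1.84, 0.25, 0.0, 0.0   # eV*A, eV*A^3, eV, eV*A^2

SX = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
SY = np.array([[0.0, -1.0j], [1.0j, 0.0]])
SZ = np.diag([1.0, -1.0]).astype(complex)

def hamiltonian(kx, ky):
    """2x2 Bloch Hamiltonian of Eq. (1); note (lam/2)(k_+^3 + k_-^3) = lam(kx^3 - 3 kx ky^2)."""
    warp = LAM * (kx**3 - 3.0 * kx * ky**2)
    return (HBAR_V * (kx * SY - ky * SX) + (DELTA + warp) * SZ
            + E0 * (kx**2 + ky**2) * np.eye(2))

def band(kx, ky, s):
    """Energy and eigenvector of band s = -1 (lower) or +1 (upper); cf. Eqs. (2) and (4)."""
    w, v = np.linalg.eigh(hamiltonian(kx, ky))   # eigenvalues in ascending order
    i = 0 if s < 0 else 1
    return w[i], v[:, i]

def overlap2(k1, k2, s1=+1, s2=+1):
    """|<u(k1,s1)|u(k2,s2)>|^2, the matrix element entering Eq. (5)."""
    _, u1 = band(k1[0], k1[1], s1)
    _, u2 = band(k2[0], k2[1], s2)
    return abs(np.vdot(u1, u2))**2
```

At θ_k = π/6 the warping vanishes (cos 3θ_k = 0), so with ∆ = E₀ = 0 the upper-band energy reduces to ℏvk there; same-momentum overlaps between the two bands vanish, as orthogonality requires.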
To orient the reader, we first discuss the impact of these various terms on the electronic dispersion, which we sketch schematically in Fig. 1. In order from left to right we show the dispersions for the pure Dirac case, with a hexagonal warping term, with an additional gap, and, in the final frame, with a sub-dominant Schrödinger term without a gap. The polarization, Π(q, ω), is the fundamental quantity needed to describe the dielectric properties of an electronic system. For our model Hamiltonian it can be written as
\Pi(\mathbf{q},\omega) = \sum_{s s'}\int \frac{d\mathbf{k}}{(2\pi)^2}\,\bigl[f(E_s(\mathbf{k})) - f(E_{s'}(\mathbf{k}+\mathbf{q}))\bigr]\,\bigl|\langle u(\mathbf{k},s)|u(\mathbf{k}+\mathbf{q},s')\rangle\bigr|^2\,\frac{1}{\omega - E_s(\mathbf{k}) + E_{s'}(\mathbf{k}+\mathbf{q}) + i\Gamma} \quad (5)
where f is the Fermi function, Γ is a small intrinsic scattering rate, and the overlap matrix element squared represents scattering from the state |k, s⟩ to the state |k + q, s′⟩; it depends not only on the magnitude of q but also on its direction with respect to the two-dimensional surface-state Brillouin zone, as a result of the hexagonal warping, which introduces a new element of complexity not encountered in the pure Dirac spectrum.

FIG. 2 (panel details): Left: the momenta of k′ as a function of k_x and k_y, with a color scale representing the overlap |⟨u_{k′}|u_k⟩|² for an initial momentum k marked in each frame by an '×'. Right: corresponding polar plot of |⟨u_{k′}|u_k⟩|² as a function of θ_{k′} − θ_k. (a) λ = 0, Dirac case. (b) λ = 250 meVÅ³, initial momentum along a hexagonal side, k = (k_F, θ_k = 0). (c) λ = 250 meVÅ³, initial momentum along a hexagonal vertex, k = (k_F, θ_k = π/6).
The overlap between states at the Fermi level is known to be essential to the low-energy physics of graphene. An example is the well-known chirality-induced removal of backscattering processes, which can be understood from the overlap shown in Fig. 2(a). Here there is a strong preference for forward-scattering processes for states scattering within the same cone, which we illustrate in the left-hand column using a blue (red) color scale for strong (weak) scattering amplitudes; these are given explicitly in the right-hand column for an electron with initial momentum along k_x, or θ_k = 0. The inclusion of hexagonal warping complicates this in a non-trivial manner. There is a very different angular overlap depending on the direction of the initial momentum, k, of the electron. If the initial momentum is along the side of the hexagon, as shown in Fig. 2(b) for θ_k = 0, then for relevant values of λ there is now a strong preference for scattering through an angle of 2π/3, which results in enhanced scattering between alternating sides of the hexagonal Fermi surface. If instead the initial momentum is along the corner of the hexagon, shown in Fig. 2(c) for θ_k = π/6, then the scattering amplitude is not significantly different from the case of a circular Fermi surface.
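These overlaps enter Eq. (5) directly. A rough, brute-force sketch of how Eq. (5) can be evaluated on a k-grid follows (the discretization, thermal smearing T, and Γ are our own assumed values, and we write the energy denominator in the form that reproduces the small-q intraband limit of Eq. (9) below):

```python
import numpy as np

HBAR_V, LAM, E0, DELTA = 1.84, 0.0, 0.0, 0.0   # eV*A, eV*A^3, eV*A^2, eV (pure Dirac here)
MU, T, GAMMA = 0.25, 0.02, 1e-4                # chemical potential, smearing, broadening (eV)

def bands(kx, ky):
    """Stacked eigensystem of the 2x2 Hamiltonian of Eq. (1) on flat arrays of momenta."""
    n = kx.size
    h = np.zeros((n, 2, 2), dtype=complex)
    mass = DELTA + LAM * (kx**3 - 3.0 * kx * ky**2)
    h[:, 0, 0] = E0 * (kx**2 + ky**2) + mass
    h[:, 1, 1] = E0 * (kx**2 + ky**2) - mass
    h[:, 0, 1] = -HBAR_V * (ky + 1j * kx)       # off-diagonal part of v(kx*sy - ky*sx)
    h[:, 1, 0] = np.conj(h[:, 0, 1])
    return np.linalg.eigh(h)                    # eigenvalues ascending: [lower, upper]

def fermi(e):
    return 1.0 / (1.0 + np.exp((e - MU) / T))

def polarization(qx, qy, omega, kmax=0.3, n=240):
    """Brute-force k-grid sum for Eq. (5)."""
    ks = np.linspace(-kmax, kmax, n)
    kx, ky = [a.ravel() for a in np.meshgrid(ks, ks)]
    e1, u1 = bands(kx, ky)
    e2, u2 = bands(kx + qx, ky + qy)
    ov = np.abs(np.einsum('kas,kap->ksp', u1.conj(), u2))**2   # |<u(k,s)|u(k+q,s')>|^2
    df = fermi(e1)[:, :, None] - fermi(e2)[:, None, :]
    den = omega + e1[:, :, None] - e2[:, None, :] + 1j * GAMMA
    dk2 = (ks[1] - ks[0])**2
    return np.sum(df * ov / den) * dk2 / (2.0 * np.pi)**2
```

For λ = ∆ = E₀ = 0 and ℏvq ≪ ω ≪ 2µ this reproduces the graphene-like intraband form ReΠ ≈ µq²/(4πω²) to within grid accuracy, while ImΠ ≈ 0 in the Pauli-blocked region.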
III. NUMERICAL RESULTS
The inclusion of the anisotropic hexagonal term poses a significant hurdle to the analytic evaluation of Eq. (5).
Here we instead describe the full Hamiltonian within a single framework which can be evaluated numerically. Because of the large number of parameters, what we present is by no means an exhaustive description of possible results, but rather a minimal set providing a qualitative description of the effects of each term in the Hamiltonian. In addition, some analytic results have been evaluated previously for a pure Dirac system, 1,37 as well as for a system with a gap term. 13 In order to compare qualitatively with these results, we proceed numerically in the clean limit of scattering, Γ → 0⁺ ≪ ℏvq. We will see later that this is an essential element for correctly describing the plasmon dispersions from numerics. In the case of graphene it is common to scale momenta by their value at the Fermi level, and frequency by the chemical potential. Here we do not do this, for several reasons. One reason is that the inclusion of anisotropy due to hexagonal warping creates an angularly dependent value of the Fermi momentum, k_F(θ). Further, changing each parameter changes the relative Fermi level. We therefore maintain a single set of unscaled units, so that these differences can be seen in our axis labels. With this in mind we proceed with an experimentally relevant chemical potential of µ = 250 meV.
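The angularly dependent Fermi momentum mentioned above can be generated by solving E₊(k_F, θ) = µ along each direction. A small sketch (bisection tolerance and parameter values are our own choices):

```python
import numpy as np

HBAR_V, LAM, E0, DELTA, MU = 1.84, 0.25, 0.0, 0.0, 0.25  # eV*A, eV*A^3, eV*A^2, eV, eV

def e_plus(k, theta):
    """Upper-band dispersion of Eq. (2) along direction theta."""
    gap = DELTA + LAM * k**3 * np.cos(3.0 * theta)
    return E0 * k**2 + np.sqrt((HBAR_V * k)**2 + gap**2)

def k_fermi(theta, kmax=1.0, tol=1e-10):
    """Solve E_+(k_F, theta) = MU by bisection (E_+ is monotone in k for these parameters)."""
    lo, hi = 0.0, kmax
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if e_plus(mid, theta) < MU:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

thetas = np.linspace(0.0, 2.0 * np.pi, 361)
kf = np.array([k_fermi(t) for t in thetas])
anisotropy = kf.max() / kf.min()   # 1 for a circular Fermi surface; grows with lam and mu
```

By the cos(3θ) symmetry of Eq. (3), k_F(θ) has period π/3; along the vertices (θ = π/6, where cos 3θ = 0, with ∆ = E₀ = 0) it reduces to µ/ℏv exactly.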
In Fig. 3 we evaluate numerically the real part of the polarization function of Eq. (5) for several choices of the parameters λ, ∆, and E₀. Fig. 3(a) is for comparison with previous analytic work on graphene and shows the real part of Π(q, ω), in units of Å⁻²/eV, as a function of q. These numerical results (scaling aside) agree precisely with the analytic results presented in Fig. 3(c) of Kotov et al. 6 and elsewhere. 1,38 Results are presented in the same format for four values of ω/µ = 0.5 (black), 1.0 (red), 1.5 (blue), and 2.0 (green). Frame (b) is for the same values of ω but now includes hexagonal warping with λ = 50 meVÅ³. An important new feature is that the polarization now depends not only on the magnitude of q but also on its direction. Results for θ_q = 0 are represented with solid lines, while for θ_q = π/6 we use dashed lines with similar color coding. From this it is clear that there can be a great deal of anisotropy, directly related to the warping term. Differences are small, however, in the limit of q going to zero, since in this region the solid and dashed curves overlap. As q is increased the departures between the two can be very significant; not just small quantitative changes, but large ones which change the qualitative behavior. As an example, the dashed black curve for θ_q = π/6 at ω/µ = 0.5 shows a single maximum around q ≅ 0.05 Å⁻¹, while for θ_q = 0 the peak is split into two, and the second peak in the doubled-maxima structure is shifted to the right, to higher values of q. Comparison with the curves in frame (a) for the pure Dirac case shows that new structures are introduced by the hexagonal warping term. In particular, the most prominent maximum of the graphene case is now less prominent and essentially spread over a range of momenta because of the anisotropy that is introduced. The splitting between the two angles at low ω and small q can be understood from our description of the overlaps in Fig. 2.
In the θ_q = 0 case the curves rise until a momentum which begins to sample the adjoining hexagonal sides (light red in Fig. 2(b)), where the overlap is small, and then have a second rise once q is large enough to access the opposing hexagonal faces (light blue in Fig. 2(b)). This behaviour is not seen for θ_q = π/6, which has overlaps that are more Dirac-like. We will see this behaviour, that the θ_q = π/6 direction is more Dirac-like, for various quantities throughout this paper. Turning next to Fig. 3(c), we show that the addition of a small Schrödinger piece to the Hamiltonian (E₀ = 1) leads to further quantitative changes. In particular, for θ_q = 0 the solid black curve has lost its second peak, which is replaced by a weak shoulder around q ≈ 0.05 Å⁻¹ instead. Anisotropy with angle θ_q remains most significant in the intermediate-q range. Frame (d) is for λ = 50 meVÅ³ with E₀ = 0, but now we have also included a gap of ∆ = 50 meV. Comparing with (b), we note that the introduction of a gap leads to many changes, particularly in the height of the peaks. We note that in all cases there is no indication of angular anisotropy at small q, and that the most notable qualitative changes are seen in the dashed (blue and green) curves at the higher values of frequency and momentum transfer. Here the single minimum at large q in (b) splits into two in (d) due to the presence of the gap for θ_q = π/6. This does not occur along θ_q = 0. Corresponding results for the imaginary part of the polarization function, ImΠ(q, ω) vs q, for the same four values of ω are presented in Fig. 4. Comparing our new results shown in Fig. 4(b), which include warping contributions, against the pure Dirac case of Fig. 4(a), we emphasize two features. First, the onset of non-zero damping is shifted to lower energies and momenta by the anisotropic warping term and now also depends on the direction of q. Secondly, as we have already noted in discussion of Fig.
3 the sharp peak of the Dirac case has become spread over a range of values of q and as can be seen in the black curve of frame (b) the structure is split into two pieces. Quantitative modifications arise when a quadratic piece is included. For example, in Fig. 4(c) the first peak in the black solid curve is higher than the second in contrast to the E 0 = 0 case where this is reversed. In order to properly understand the source of these complicated new features, we plot in Fig. 5 a set of full color-map plots of our computed polarization function on a fine grid of q and ω. The first four top frames give our results for the real part of the polarization function. Of these, the upper two frames include no Schrödinger piece (E 0 = 0) but show two directions of the scattering q; (a) for θ q = 0, (b) for θ q = π/6. In the absence of anisotropy these would be identical. However, we note a great deal of anisotropy for finite but small values of |q|. Also the boundary between negative and positive regions of ReΠ(q, ω) in (a) merge into a single region in (b). The next row, frames (c) and (d), includes a small Schrödinger term of E 0 = 1 in addition to the dominant Dirac term and the hexagonal warping term. This introduces further changes, particularly in the blue region at small q and ω which corresponds to the plasmon region. In all cases this plasmon region, where ReΠ(q, ω) > 0, is extremely restricted and does not extend far along the ω = vq line as it would in the pure Dirac case. We will see later that this has significant consequences on the plasmon dispersions. The most striking change due to the inclusion of a Schrödinger term is the complete removal of the splitting feature which is seen along θ q = 0 in Fig. 5(a). The lower four frames give the corresponding imaginary part of the polarization and also show quantitative changes with value of E 0 and angle θ q . In particular, the boundaries of the particle-hole continuum change with λ, E 0 , and θ q . 
These differences are seen in (e) and (f) for the imaginary part. Also, it is in the imaginary part of the polarization that the regions in q and ω space are most evident. Examining first Fig. 5(e), we see at low ω first the Pauli blocked region, then the intraband region as q is increased, as well as the edge of the intraband piece which occurs beyond q = 2k F for ω = 0. In this case, unlike the pure Dirac case, the intraband region has a non-linear edge due to the hexagonal warping. One can also see a split residual linear feature as was pointed to in Fig. 5(a) and accentuated there as the lower blue region where ReΠ(q, ω) > 0. Fig. 5(f) shows the θ q = π/6 direction where the imaginary part is again much more Dirac-like. The primary difference between directions of q is the lack of a split feature in the intraband continuum. It is clear that with λ = 0 the boundaries of the inter and intraband particle-hole excitations shift with angle θ q . These changes will have an effect on the stability of the plasmons because the plasmons remain undamped only where ImΠ(q, ω) = 0, shown in white. We will see in Sec. IV how these considerations restrict the extent of the plasmon dispersion with variation in hexagonal warping, and other terms in the Hamiltonian.
We next turn to the effects of λ and E 0 on the loss function, Imε −1 (q, ω), which is a measurable quantity. The full dielectric function of the surface states is given by
\varepsilon(\mathbf{q},\omega) = 1 - V(q)\,\Pi(\mathbf{q},\omega), \quad (6)
where V(q) = 2πe²/(ǫ₀|q|) is the Coulomb potential and ǫ₀ is the effective dielectric constant of the medium. Results for Imε⁻¹(q, ω) are presented in Fig. 6. Fig. 6(a) provides results for the pure Dirac spectrum as a comparison case. One can see clearly the boundaries of the intra- and interband parts of the particle-hole continuum, which correspond to the shaded regions. The regions which are white are values of q and ω for which there are no particle-hole excitations, due to Pauli blocking. The sharp contour emerging at small q is the plasmon dispersion; here this curve has a width because the use of a small residual scattering rate results in a small but finite value of ImΠ(q, ω), which is then susceptible to the plasmon pole, creating a very sharp peak in ε⁻¹(q, ω). At higher values of ω and q the plasmon branch enters the particle-hole continuum and becomes Landau damped. The inclusion of warping shifts the boundaries of the particle-hole continuum, and in the case θ_q = 0 of Fig. 6(b) the plasmons become damped at lower q and ω relative to the λ = 0 case. Further changes in the plasmon dispersion are seen in the lowest left frame, which is for θ_q = π/6. Although this case includes the same warping contribution as (b), it resembles instead the results of (a) for the pure Dirac case, except for the nonlinear intraband onset. Finally, Fig. 6(d) includes a gap of 100 meV. The dielectric function of a gapped Dirac spectrum has been examined previously. 13 Here, the inclusion of hexagonal warping causes the intraband piece to be non-linear, resulting in no actual gap between the intraband and interband regions of the particle-hole continuum. This severely disrupts the long-lived plasmons regardless of the direction of q.
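Given any numerical Π(q, ω), Eq. (6) and the loss function follow in a few lines (we parametrize the Coulomb potential through α_FS = e²/(ǫ₀ℏv), with an assumed value; the overall sign of the loss function is a convention):

```python
import numpy as np

HBAR_V = 1.84        # hbar*v in eV*Angstrom
ALPHA_FS = 1.0       # effective fine-structure constant e^2/(eps0*hbar*v); assumed value

def coulomb(q):
    """2D Coulomb potential V(q) = 2*pi*e^2/(eps0*|q|), via alpha_FS = e^2/(eps0*hbar*v)."""
    return 2.0 * np.pi * ALPHA_FS * HBAR_V / q

def dielectric(q, pi_q):
    """eps(q, w) = 1 - V(q)*Pi(q, w), Eq. (6); pi_q is a (complex) value of Pi."""
    return 1.0 - coulomb(q) * pi_q

def loss_function(q, pi_q):
    """Loss function -Im[1/eps]; its peaks trace the plasmon features of Fig. 6."""
    return -np.imag(1.0 / dielectric(q, pi_q))
```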
IV. LONG WAVELENGTH LIMIT OF PLASMON DISPERSION
Prior to considering numerics it is useful to derive, in the limit q → 0, an analytic expression for the plasmon dispersion, 1,39,40 ω_p(q), which follows from the solution of

1 = V(q)\,\mathrm{Re}\,\Pi(q, \omega_p). \quad (7)

We can use such analytic work to guide our expectations for how various terms modify the plasmon dispersion. For definiteness we take the chemical potential to fall in the upper Dirac cone. For q → 0 the low-energy plasmon dispersion then depends only on the intraband piece, the case s = s′ = 1. In this case
\mathrm{Re}\,\Pi(q,\omega) = \int_0^{\Lambda}\frac{k\,dk}{2\pi}\int_0^{2\pi}\frac{d\theta}{2\pi}\,\frac{f(E_+(k')) - f(E_+(k))}{[E_+(k') - E_+(k)]^2 - \omega^2}\,[E_+(k') - E_+(k)], \quad (8)
where Λ is a cutoff, k′ = k + q, and where we have used the symmetry ReΠ(q, ω) = ReΠ(q, −ω). To lowest order in q we write E₊(k + q) ≅ E₊(k) + qβ(k, θ, α), where θ defines the direction of k, α defines the angle of q, and β(k, θ, α) is given in Eq. (A4). When the polarization does not depend on α, by symmetry q can be taken along the k_x axis and only θ remains, which is integrated over in Eq. (8). The Fermi factors in the numerator of this equation give a factor of qβ(k, θ, α)\,∂f(E₊(k))/∂E₊(k), and E₊(k + q) − E₊(k) gives another factor of qβ(k, θ, α). This last factor also appears in the denominator, but there it can be dropped relative to ω, since its square is of order q² while ω² will turn out to be of order q. Putting all of this together we obtain
\mathrm{Re}\,\Pi(q,\omega) = \frac{1}{2\pi}\int_0^{\Lambda} k\,dk\int_0^{2\pi}\frac{d\theta}{2\pi}\left(-\frac{\partial f(E_+(k))}{\partial E_+(k)}\right)\frac{q^2\,\beta^2(k,\theta,\alpha)}{\omega^2}, \quad (9)
which goes like q² and holds at any temperature T. In the limit of zero temperature the derivative of the Fermi function becomes a Dirac delta function, δ(E₊(k) − µ). For values of µ much smaller than the bandwidth, which is the case of interest here, the cutoff Λ on |k| is of no consequence, and the delta function for a given angle θ contributes only at k = k_c(θ), where E₊(k_c(θ), θ) = µ. Doing the k integration first gives
\mathrm{Re}\,\Pi(q,\omega) = \frac{q^2}{\omega^2}\int_0^{2\pi}\frac{d\theta}{(2\pi)^2}\,\frac{k_c(\theta)\,\beta^2(k_c(\theta),\theta,\alpha)}{\left.\frac{dE_+(k,\theta)}{dk}\right|_{k=k_c(\theta)}}, \quad (10)
where β is a function defined in the appendix, Eq. (A4). As advertised, this results in ω_p ∝ √q, which justifies the approximations made. An integral over the angles of k remains to be performed. For the pure Dirac dispersion E₊(k) = ℏvk, and the function β(k_c(θ), θ, α) = ℏv\cos(θ − α), such that
\omega_p(q) = \sqrt{\frac{e^2\mu}{2\epsilon_0}}\,\sqrt{q} = \sqrt{\frac{\alpha_{FS}\,\mu\,\hbar v\,q}{2}}, \quad (11)
which agrees with the known result for graphene, where α_FS = e²/(ǫ₀ℏv) is the effective screened fine-structure constant. 1,41 In that case the factor of 2 in Eq. (11) sits in the numerator, because of the spin/valley degeneracy which is present in graphene but not in a topological insulator.
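Equation (11) can be cross-checked by solving the plasmon condition of Eq. (7) directly, using the small-q intraband form ReΠ = µq²/(4πω²) implied by Eq. (9) for the pure Dirac case (the bisection bracket and the α_FS value are our own assumptions):

```python
import numpy as np

HBAR_V, MU, ALPHA_FS = 1.84, 0.25, 1.0    # hbar*v (eV*A), mu (eV), assumed coupling

def re_pi_intra(q, w):
    """Small-q intraband polarization mu*q^2/(4*pi*w^2), the pure Dirac limit of Eq. (9)."""
    return MU * q**2 / (4.0 * np.pi * w**2)

def coulomb(q):
    return 2.0 * np.pi * ALPHA_FS * HBAR_V / q

def omega_p_numeric(q, lo=1e-6, hi=1.0, tol=1e-12):
    """Bisection solve of Eq. (7), 1 = V(q) Re Pi(q, w); V*RePi decreases with w."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if coulomb(q) * re_pi_intra(q, mid) > 1.0:
            lo = mid      # w too small: still on the high-RePi side of the crossing
        else:
            hi = mid
    return 0.5 * (lo + hi)

def omega_p_analytic(q):
    """Closed form of Eq. (11): omega_p = sqrt(alpha_FS * mu * hbar*v * q / 2)."""
    return np.sqrt(ALPHA_FS * MU * HBAR_V * q / 2.0)
```

For q = 10⁻³ Å⁻¹ both routes give ω_p ≈ 15 meV with these assumed parameters; the gap and Schrödinger corrections derived below rescale the coefficient of √q in the same way.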
It is interesting to examine other simple limits complementary to our numerical work. For example, we can take a Dirac term plus a gap, in which case

\beta(k,\theta,\alpha) = \frac{\hbar^2 v^2\,k\cos\theta}{\sqrt{\hbar^2 v^2 k_c^2 + \Delta^2}}

and \sqrt{\hbar^2 v^2 k_c^2 + \Delta^2} = \mu, which leads to

\omega_p(q) = \sqrt{\frac{e^2\mu}{2\epsilon_0}\,q\left[1 - \left(\frac{\Delta}{\mu}\right)^2\right]}. \quad (12)
The magnitude of the gap is assumed to be much less than µ, and has the effect of reducing the slope of the √ q dependence of ω p (q). Next we take a Dirac term plus a Schrödinger term. Assuming E 0 is small gives the lowest order correction
\omega_p(q) = \sqrt{\frac{e^2\mu}{2\epsilon_0}\,q\left[1 + \frac{2E_0\,\mu}{(\hbar v)^2}\right]}. \quad (13)
In this case the slope of the √ q dependence of the plasmon dispersion is increased. The case of Dirac plus hexagonal warping involves more complex, but still straightforward algebra. Some of the necessary work is described in the appendix. The result is:
\omega_P(q) = \sqrt{\frac{e^2\mu}{\epsilon_0}\,q\left[\frac{1}{2} + \left(\frac{\lambda}{\hbar v}\right)^2\left(\frac{\mu}{\hbar v}\right)^4 h(\alpha)\right]} \quad (14)
where h(α) is a function of α obtained by integrating over θ (see Appendix), which surprisingly comes out to the constant value h(α) = 1/2. To obtain Eq. (14) we have assumed that λ is small and worked to lowest order. In general, α will not drop out of h(α), and the plasmon dispersion will depend on the angle of q relative to the Brillouin-zone axis. There is, however, no simple analytic formula covering the higher-order case, and we need to proceed numerically, as we do next.
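The claim that h(α) is the constant 1/2 at this order can be checked by averaging the angular factor that appears in Eq. (A10) of the appendix over θ. A numerical sketch of that average:

```python
import numpy as np

def h_of_alpha(alpha, n=4096):
    """Angular average of the warping factor in Eq. (A10):
    h(a) = (1/2pi) Int_0^{2pi} dt [6 cos(3t) cos(2t+a) cos(t-a)
                                   - 4 cos^2(3t) cos^2(t-a)]."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    g = (6.0 * np.cos(3 * t) * np.cos(2 * t + alpha) * np.cos(t - alpha)
         - 4.0 * np.cos(3 * t)**2 * np.cos(t - alpha)**2)
    return g.mean()   # uniform samples over a full period: exact for this trig polynomial
```

One finds h(α) = 1/2 for every α to machine precision, confirming that the long-wavelength slope of Eq. (14) is isotropic at lowest order in λ.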
Numerical results for the plasmon dispersion are presented in Fig. 7. In Fig. 7(a) we explore the variation of ω_p(q) vs |q| for the case θ_q = 0, E₀ = 0 as the hexagonal warping term is increased. The black circles are for λ = 0, the pure Dirac case, and are shown for comparison. The blue circles apply for λ = 50 meVÅ³ and the red for λ = 200 meVÅ³. It is clear that these data conform with our expectation, based on Eq. (14), that the slope of the plasmon dispersion curve increases with λ in the long-wavelength limit. Differences between the black, blue, and red data rapidly increase with increasing scattering momentum |q|. Several other features are to be noted. The red and blue curves first increase out of q = 0, come to a maximum, and then fold back to smaller values of ω. For the single case of λ = 50 meVÅ³ we have marked the intraband transition line (dashed blue line) where ImΠ(q, ω) goes from zero to a finite value. We can see that the uppermost line is undamped until it passes into the particle-hole continuum, where it becomes damped. This curve then folds back as the dispersion gets close to a sign change in the real part of the polarization function, shown for example in the color plots of Fig. 5(a). For the blue curve there is even a third region which lies entirely within the Landau-damped region, related to the onset of a residual linear intraband transition. While these data points correspond to solutions of Eq. (7), they do not represent undamped plasmons, as they overlap with the particle-hole continuum. In Fig. 6(a) we saw a single plasmon dispersion in the Pauli-blocked region which continues into and merges with the interband particle-hole continuum. This contour in Fig. 6(a) corresponds to the black circles of Fig. 7(a). In Fig. 6(b) the hexagonal warping increased the region of the particle-hole continuum, so that the plasmon line now ends at smaller values of ω and q. This is shown in Fig. 7(a) for variation in hexagonal warping strength.
We see in all cases the √ q behavior in the uppermost branch of ω p which comes from intraband transitions as in our analytic work.
In Fig. 7(b) we compare results for λ = 50 meVÅ³ with and without a Schrödinger contribution, and for the two key angles θ_q = 0 and π/6. The slopes out of q = 0 are only weakly modified by angle, as expected from Eq. (14), but show strong dependence on the inclusion of a Schrödinger piece (E₀ = 1), in agreement with our approximate but analytic result in Eq. (13). The numerical agreement between angular directions confirms our assertion from Eq. (14) that the slope of the plasmon dispersion is unaffected by the anisotropy of the hexagonal warping for small but relevant values of λ, and that this remains true even for larger values of λ. Fig. 7(c) gives our results for two values of the gap, ∆ = 50 and 100 meV, again for the two relevant angles in the presence of weak hexagonal warping, λ = 50 meVÅ³. We can see that the value of the gap has very little effect on the uppermost plasmon solutions, but a more noticeable impact on the lower branches, which occur at the intraband/interband continuum boundaries. For large gaps these boundaries can be substantially modified. As predicted by our analytics in Eq. (12), we see that an increase in ∆ results in a small reduction of the slope out of q = 0. The final frame, Fig. 7(d), shows how the plasmon dispersions change when the dielectric constant of the environment is changed, written here in terms of the effective fine-structure and Fermi-velocity factors, ℏvα_FS. Decreasing this quantity decreases the slope of the plasmon out of q = 0, in agreement with all analytic results. As the coupling strength α_FS is reduced, we see that the upper and lower plasmon branches merge and shrink towards lower q and ω.
Finally we examine the existence of separate plasmon branches in Fig. 8. Here we pull a single case from Fig. 7(b), at the angle θ_q = π/6 (green points), which contains two branches: one with a square-root dependence, and a second, linear plasmon branch at small q. We identify these branches in Fig. 8(a) as black and red (circles and squares), respectively. To illustrate the truly distinct nature of these branches, we plot in Fig. 8(b) the corresponding plasmon decay rate, given by 1,42
\gamma(q) = \frac{\mathrm{Im}\,\Pi(q,\omega_p)}{\left.\frac{\partial\,\mathrm{Re}\,\Pi(q,\omega)}{\partial\omega}\right|_{\omega=\omega_p}}, \quad (15)
which we have evaluated numerically. From this we can see that the upper plasmon branch is indeed undamped for small q within the Pauli-blocked region. However, unlike the case in graphene, where the plasmon branch would encounter the particle-hole continuum and slowly acquire a scattering rate, 1 here the branch is deflected and acquires a sharp increase in plasmon scattering, which then drops further along the branch. This behavior is distinct from that of the plasmons in the linear branch, which are always located within the particle-hole continuum. In that case the decay rate is always finite and slowly rises as q is increased, exhibiting a broad peak and then decreasing at larger q. Because γ(q) rises only slowly with q, these excitations, although damped, are still seen to rise above the background in the Imε⁻¹(q, ω) plot of Fig. 6. It is interesting to ask why there are two branches when hexagonal warping is included, while the pure Dirac case shows only one. Shown in Fig. 8(c) is a comparison of the plasmon poles, which can be expressed as the intersection q/(ℏvα_FS) = (2π)ReΠ(q, ω). The solid black and dashed curves are reproduced from Fig. 3(b) for ω = 0.5µ and show only their positive part at smaller q. These curves are different from those of the pure Dirac case, which would diverge at ω = vq. Here the anisotropy brought in by the hexagonal warping broadens the peak over a range of momentum q, and this leads to two crossings of ReΠ(q, ω) with the straight line q/(ℏvα_FS), and consequently to two plasmon branches. This is true for both values of θ_q shown. Turning to θ_q = π/6, we show two arrows, black and red, that mark the plasmon poles for this case; these are further identified in Fig. 8(a) along the line ω = 0.5µ (dashed blue). As emphasized, the second branch (red squares) will always have a finite lifetime, but our numerical data indicate that it might still be seen in Imε⁻¹(q, ω) plots. Returning to Fig.
8(c), we can see from this graphical representation that if one reduces α_FS, then the q/(ℏvα_FS) line will increase in slope. This will modify the plasmon poles as in Fig. 7(d), and for sufficiently small α_FS the q/(ℏvα_FS) line will pass above the peak in the real part of the polarization and thus produce no suitable plasmon pole.
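In practice, Eq. (15) only needs ImΠ at the plasmon pole and an ω-derivative of ReΠ, which can be taken by finite differences. A generic sketch (the step size dw, and the toy Π used in the check below, are our own):

```python
def plasmon_decay(pi_func, q, wp, dw=1e-4):
    """Plasmon decay rate of Eq. (15):
    gamma(q) = Im Pi(q, w_p) / (d Re Pi / d w)|_{w = w_p},
    with the omega-derivative taken by a central finite difference."""
    d_re = (pi_func(q, wp + dw).real - pi_func(q, wp - dw).real) / (2.0 * dw)
    return pi_func(q, wp).imag / d_re
```

Any callable Π(q, ω), such as the grid evaluation of Eq. (5), can be passed in as `pi_func`.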
V. SUMMARY AND CONCLUSIONS
We have presented numerical results for the real and imaginary parts of the polarization function associated with the surface states of a topological insulator. The Hamiltonian includes a Dirac term, a sub-dominant Schrödinger quadratic, hexagonal warping, and a gap. We study how the introduction of these contributions modifies screening and plasmon behavior in these surface states. In particular, the hexagonal warping introduces an anisotropy into the polarization, Π(q, ω). Without this term Π(q, ω) depends only on the magnitude |q| of the momentum transfer but with warping it acquires a dependence on the angle which the scattering momentum, q, makes with respect to the axis of the 2D surface state Brillouin zone. This anisotropy is small in the long wavelength limit where q → 0, but becomes large as q increases to the order of 0.01Å −1 . In particular, for a fixed value of ω the peaks in the imaginary part of the polarization shown in Fig. 4 can shift significantly. Introducing a sub-dominant Schrödinger term produces further quantitative changes in Π(q, ω). Inclusion of a large gap can result in the splitting of peaks in both the polarization and dielectric functions at large q values. In all cases an important difference with the pure Dirac case is that the boundaries of the particle-hole continuum shift with inclusion of a warping term which has consequences on plasmon damping processes. In the pure Dirac case the intraband and interband particle-hole continuum occupy separate parts of the (q, ω) space. With warping these regions can overlap. More importantly for us here the region of no damping becomes smaller and this affects importantly the range of ω and q for which plasmons remain undamped. This range also depends on the direction of q and is further affected by the introduction of a Schrödinger term or of a gap.
Our numerical calculations have revealed that anisotropy can provide new plasmon branches which, while damped by falling in the particle-hole continuum, might still be seen in Imε⁻¹(q, ω) as peaks above the background, because the damping is not large. Further, we have shown numerically that hexagonal warping restricts the region in q and ω where ReΠ(q, ω) > 0, which results in only a small window where plasmons can exist. Finally, we have derived simple analytic formulas for the slope of the plasmon dispersion curve ω_p(q) as it comes out of q = 0. In all cases it goes ∝ √q, but the coefficient of this dependence changes as warping, Schrödinger term, and gap are introduced. Our simple formulas for the lowest-order corrections confirm our numerical work in that we find the slope toward q = 0 to be reduced by a gap but increased by warping and Schrödinger pieces. The anisotropy in the plasmon dispersion ω_p(q), while in principle always present, is small for small values of q and becomes noticeable only at finite q, before the plasmons become damped as the particle-hole continuum is reached. The analytic results we have presented are relevant to recent experimental work 35 which has observed the Dirac plasmons at very small q ≈ 10⁻⁵ Å⁻¹ in Bi2Se3, while our numerics show deviations from √q behavior at larger scattering momenta, which will be relevant to near-field optics techniques that have been successfully applied to graphene. 43,44 These results may have further implications for understanding interactions in modified Dirac fermion systems, 45 an area which has yet to be fully explored in the context of surface states of topological insulators.
which is a cubic equation in k_c². We can solve this equation assuming λ to be small and then expand to get

ℏv k_c ≅ μ[1 − (1/2)(λ/ℏv)²(μ/ℏv)⁴ cos²(3θ)]   (A8)

which leads to

I ≈ [ℏ²v²k_c² cos(θ − α) + 3λ²(μ/ℏv)⁶ cos(3θ) cos(2θ + α)]² / (μ[3μ² − 2ℏ²v²k_c²])   (A9)

Substituting Eq. (A8) into the denominator of Eq. (A9), we find that Eq. (A9) reduces to

I ≈ μ cos²(θ − α) + μ(λ/ℏv)²(μ/ℏv)⁴ {6 cos(3θ) cos(2θ + α) cos(θ − α) − 4 cos²(3θ) cos²(θ − α)}   (A10)

and after integration over θ we get

I ≈ (μ/2)[1 + 2(λ/ℏv)²(μ/ℏv)⁴ h(α)]   (A11)

where h(α) is

h(α) = (1/2π) ∫₀^{2π} dθ [6 cos(3θ) cos(2θ + α) cos(θ − α) − 4 cos²(3θ) cos²(θ − α)].

We see that h(α) reduces to 1/2 and is independent of the angle α in this limit, giving

ω_p(q) = √(e²μq/(2ε₀ℏ²)) [1 + (λ/ℏv)²(μ/ℏv)⁴]^(1/2).

In higher orders, however, we expect to find anisotropy, and the angle α will no longer drop out, as we have established in the numerical work described in the main text.
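The claim that h(α) = 1/2, independently of α, can be checked directly by numerical quadrature of the angular integrand 6cos(3θ)cos(2θ+α)cos(θ−α) − 4cos²(3θ)cos²(θ−α). A uniform grid is essentially exact here because the integrand is a trigonometric polynomial:

```python
import math

# h(alpha) = (1/2pi) * Int_0^{2pi} dtheta
#            [6 cos(3t) cos(2t + a) cos(t - a) - 4 cos^2(3t) cos^2(t - a)]
def h(alpha, n=4096):
    # Uniform Riemann sum over one full period: exact up to rounding
    # for a trigonometric polynomial such as this integrand.
    total = 0.0
    for i in range(n):
        t = 2.0 * math.pi * i / n
        total += (6.0 * math.cos(3.0 * t) * math.cos(2.0 * t + alpha) * math.cos(t - alpha)
                  - 4.0 * math.cos(3.0 * t)**2 * math.cos(t - alpha)**2)
    return total / n

print([round(h(a), 6) for a in (0.0, 0.4, math.pi / 6.0, 1.3)])  # -> [0.5, 0.5, 0.5, 0.5]
```

Expanding the products of cosines confirms this analytically: the first term averages to 3/2 and the second to 1 over a period, leaving 1/2 for every α.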
FIG. 1: (Color online) Schematic energy dispersions for (a) only the Dirac term, (b) both Dirac and hexagonal warping terms, (c) Dirac, hexagonal warping and gap, (d) Dirac, hexagonal warping and a Schrödinger term quadratic in momentum.

FIG. 2: (Color online) Overlap between states at the Fermi level for μ = 0.25 eV, with initial momentum k = (k_F, θ_k) and final momentum k′ = (k_F, θ_k′), measured in Å⁻¹.

FIG. 3: (Color online) Real part of the polarization function Re Π(q, ω) for four values of ω as a function of q in units of Å⁻¹. (a) Graphene case for comparison. (b) Including hexagonal warping of λ = 50 meV Å³, with E₀ = 0 and ∆ = 0. (c) λ = 50 meV Å³, E₀ = 1.0 and ∆ = 0. (d) λ = 50 meV Å³, E₀ = 0 and ∆ = 50 meV. θ_q = 0 and θ_q = π/6 are shown as solid and dashed lines, respectively, in each frame.

FIG. 4: (Color online) Imaginary part of the polarization function Im Π(q, ω) for four values of ω as a function of q in units of Å⁻¹. (a) Graphene case for comparison. (b) Including hexagonal warping of λ = 50 meV Å³, with E₀ = 0 and ∆ = 0. (c) λ = 50 meV Å³, E₀ = 1.0 and ∆ = 0. (d) λ = 50 meV Å³, E₀ = 0 and ∆ = 50 meV. θ_q = 0 and θ_q = π/6 are shown as solid and dashed lines, respectively, in each frame.

FIG. 5: (Color online) Color plots of the real and imaginary parts of the polarization function for parameters labelled in each frame in the format (λ, E₀, θ_q).

FIG. 6: (Color online) Color plots of the imaginary part of the inverse dielectric function Im ε⁻¹(q, ω) for parameters labelled in each frame in the format (λ, E₀, θ_q, ∆), with vα_FS = 5 and T = 0. In the lower left corner of the figure, the prominent blue curve is the plasmon dispersion curve.

FIG. 7: (Color online) Numerical extraction of ω_p(q) from Eqs. (5) and (7) for: (a) variation in hexagonal warping strength, λ = 50 and 200 meV Å³, along θ_q = 0; (b) λ = 50 meV Å³ and inclusion of the E₀ term for θ_q = 0 and π/6; (c) λ = 50 meV Å³ and a gap of ∆ = 50, 100 meV; (d) λ = 50 meV Å³ for variation in coupling strength α_FS, with θ_q = 0 and E₀ = 0. In (a)-(c) a value of vα_FS = 5 is taken for illustrative purposes.

FIG. 8: (Color online) Examination of multiple plasmon poles for the single case of λ = 50 meV Å³, E₀ = 0, ∆ = 0, and vα_FS = 5. (a) Separate plasmon branches ω_p(q), shown in red/black (circles/squares) for θ_q = π/6. (b) The corresponding plasmon decay rate γ(q) for each plasmon branch in frame (a). (c) A cut of the real part of the polarization, Re Π(ω = 0.5μ, q), along the dashed blue line in (a). Black and red arrows in frames (a) and (c) mark the corresponding plasmon poles and their multiple solutions.
ACKNOWLEDGMENTS

This research was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canadian Institute for Advanced Research (CIFAR).

Appendix A: Derivation of q → 0 plasmon dispersions

We begin with the energy dispersion for the conduction band for Dirac plus hexagonal warping only. It has the form

E(k) = √(ℏ²v²k² + λ²k⁶ cos²(3θ))   (A1)

For E(k + q) we define θ as the angle of k and α as the angle of q, such that the angle between k and q is θ − α. In this case, to leading order in the absolute value of q, the λ² term can be reduced to

and the energy

where

β(k, θ, α) = [ℏ²v²k cos(θ − α) + 3λ²k⁵ cos(3θ) cos(2θ + α)] / E(k)   (A5)

so that the required integrand of Eq. (10) is

where we have suppressed the label θ on k_c(θ). For a given θ and α, k_c(θ) is given by the solution of E(k_c(θ), θ) = μ, which can be simplified to

ℏ²v²k_c² + λ²k_c⁶ cos²(3θ) = μ²   (A7)
[1] B. Wunsch, T. Stauber, F. Sols, and F. Guinea, New Journal of Physics 8, 318 (2006).
[2] S. Das Sarma and Q. Li, Phys. Rev. B 87, 235418 (2013).
[3] E. H. Hwang and S. Das Sarma, Phys. Rev. B 75, 205418 (2007).
[4] A. Principi, M. Polini, and G. Vignale, Phys. Rev. B 80, 075418 (2009).
[5] A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. 81, 109 (2009).
[6] V. N. Kotov, B. Uchoa, V. M. Pereira, F. Guinea, and A. H. C. Neto, Rev. Mod. Phys. 84, 1067 (2012).
[7] K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonov, I. V. Grigorieva, and A. A. Firsov, Science 306, 666 (2004).
[8] Y. Zhang, Y.-W. Tam, H. L. Stormer, and P. Kim, Nature 438, 201 (2005).
[9] K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov, Nature 438, 197 (2005).
[10] V. P. Gusynin and S. G. Sharapov, Phys. Rev. Lett. 95, 146801 (2005).
[11] A. Scholz and J. Schliemann, Phys. Rev. B 83, 235409 (2011).
[12] A. Scholz, T. Stauber, and J. Schliemann, Phys. Rev. B 88, 035135 (2013).
[13] A. Scholz, T. Stauber, and J. Schliemann, Phys. Rev. B 86, 195424 (2012).
[14] M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045 (2010).
[15] X. L. Qi and S. C. Zhang, Rev. Mod. Phys. 83, 1057 (2011).
[16] J. E. Moore, Nature 464, 194 (2010).
[17] D. Hsieh, Y. Xia, L. Wray, D. Qian, A. Pal, J. H. Dil, J. Osterwalder, F. Meier, G. Bihlmayer, C. L. Kane, Y. S. Hor, R. J. Cava, and M. Z. Hasan, Science 323, 919 (2009).
[18] Y. L. Chen, J. G. Analytis, J. H. Chu, Z. K. Liu, S. K. Mo, X. L. Qi, H. J. Zhang, D. H. Lu, X. Dai, Z. Fang, S. C. Zhang, I. R. Fisher, Z. Hussain, and Z. X. Shen, Science 325, 178 (2009).
[19] Y. L. Chen, J. H. Chu, J. G. Analytic, Z. K. Liu, K. Igarashi, H. H. Kuo, X. L. Qi, S. K. Mo, R. G. Moore, D. H. Lu, M. Hashimoto, T. Sasagawa, S. C. Zhang, I. R. Fisher, Z. Hussain, and Z. X. Shen, Science 329, 659 (2010).
[20] A. R. Wright and R. H. McKenzie, Phys. Rev. B 87, 085411 (2013).
[21] A. A. Taskin and Y. Ando, Phys. Rev. B 84, 035301 (2011).
[22] Z. Li and J. P. Carbotte, Phys. Rev. B 88, 045414 (2013).
[23] S. Y. Xu, Y. Xia, L. A. Wray, S. Jia, F. Meier, J. H. Dil, J. Osterwalder, B. Slomski, A. Bansil, H. Lin, R. J. Cava, and M. Z. Hasan, Science 332, 560 (2011).
[24] X. L. Qi and S. C. Zhang, Physics Today 63, 33 (2010).
[25] L. Fu, Phys. Rev. Lett. 103, 266801 (2009).
[26] Z. Li and J. P. Carbotte, Phys. Rev. B 87, 155416 (2013).
[27] X. Xiao and W. Wen, Phys. Rev. B 88, 045442 (2013).
[28] V. P. Gusynin, S. G. Sharapov, and J. P. Carbotte, Phys. Rev. Lett. 98, 157402 (2007).
[29] V. P. Gusynin, S. G. Sharapov, and J. P. Carbotte, New Journal of Physics 11, 095013 (2009).
[30] Z. Li, E. A. Henriksen, Z. Jiang, Z. Hao, M. C. Martin, P. Kim, H. L. Stormer, and D. N. Basov, Nat. Phys. 4, 532 (2008).
[31] M. Bianchi, D. Guan, S. Bao, J. Mi, B. Brummerstedt Iversen, P. D. C. King, and P. Hofmann, Nature Communications 1, 128 (2010).
[32] E. H. Hwang and S. Das Sarma, Phys. Rev. B 79, 165404 (2009).
[33] S. Adam, E. H. Hwang, and S. Das Sarma, Phys. Rev. B 85, 235413 (2012).
[34] S. Das Sarma and Q. Li, Phys. Rev. B 88, 081404(R) (2013).
[35] P. Di Pietro, M. Ortolani, O. Limaj, A. Di Gaspare, V. Giliberti, F. Giorgianni, M. Brahlek, N. Bansal, N. Koirala, S. Oh, P. Calvani, and S. Lupi, Nature Nano. 8, 556 (2013).
[36] C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 226801 (2005).
[37] Y. Barlas, T. Pereg-Barnea, M. Polini, R. Asgari, and A. H. MacDonald, Phys. Rev. Lett. 98, 236601 (2007).
[38] J. P. F. LeBlanc, J. P. Carbotte, and E. J. Nicol, Phys. Rev. B 84, 165448 (2011).
[39] S. Das Sarma and E. H. Hwang, Phys. Rev. Lett. 102, 206412 (2009).
[40] K. Kechedzhi and S. Das Sarma, Phys. Rev. B 88, 085403 (2013).
[41] C. Jang, S. Adam, J.-H. Chen, E. D. Williams, S. Das Sarma, and M. S. Fuhrer, Phys. Rev. Lett. 101, 146805 (2008).
[42] P. M. Krstajić and F. M. Peeters, Phys. Rev. B 85, 205454 (2012).
[43] Z. Fei, S. Rodin, G. O. Andreev, W. Bao, A. S. McLeod, M. Wagner, L. M. Zhang, Z. Zhao, G. Dominguez, M. Thiemens, M. M. Fogler, A. H. Castro-Neto, C. N. Lau, F. Keilmann, and D. N. Basov, Nature 487, 82 (2012).
[44] J. Chen, M. Badioli, P. Alonso-González, S. Thongrattanasiri, F. Huth, J. Osmond, M. Spasenović, A. Centeno, A. Pesquera, P. Godignon, A. Z. Elorza, N. Camara, F. J. G. de Abajo, R. Hillenbrand, and F. H. L. Koppens, Nature 487, 77 (2012).
[45] J. P. Carbotte, J. P. F. LeBlanc, and E. J. Nicol, Phys. Rev. B 85, 201411(R) (2012).
arXiv:quant-ph/0509125v1 19 Sep 2005

Feedback cooling of a single trapped ion

Pavel Bushev, Daniel Rotter, Alex Wilson, François Dubin, Christoph Becher, Jürgen Eschner, Rainer Blatt, Viktor Steixner, Peter Rabl, and Peter Zoller

Institute for Experimental Physics, University of Innsbruck, Technikerstr. 25, A-6020 Innsbruck, Austria
Institute for Theoretical Physics, University of Innsbruck, Technikerstr. 25, A-6020 Innsbruck, Austria
Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, 6020 Innsbruck, Austria

doi: 10.1103/PhysRevLett.96.043003

Based on a real-time measurement of the motion of a single ion in a Paul trap, we demonstrate its electro-mechanical cooling below the Doppler limit by homodyne feedback control (cold damping). The feedback cooling results are well described by a model based on a quantum mechanical Master Equation.
Quantum optics, and more recently mesoscopic condensed matter physics, have taken a leading role in realizing individual quantum systems, which can be monitored continuously in quantum limited measurements, and at the same time can be controlled by external fields on time scales fast in comparison with the system evolution. Examples include cold trapped ions and atoms [1], cavity QED [2,3,4,5] and nanomechanical systems [6]. This setting opens the possibility of manipulating individual quantum systems by feedback, a problem which is not only of a fundamental interest in quantum mechanics, but also promises a new route to generating interesting quantum states in the laboratory. First experimental efforts to realize quantum feedback have been reported only recently. While not all of them may qualify as quantum feedback in a strict sense, feedback has been applied to various quantum systems [5,7,8,9,10,11]. On the theory side, this has motivated during the last decade the development of a quantum feedback theory [12,13], where the basic ingredients are the interplay between quantum dynamics and the back-action of the measurement on the system evolution. In this letter we report a first experiment to demonstrate quantum feedback control, i.e. quantum feedback cooling, of a single trapped ion by monitoring the fluorescence of the laser driven ion in front of a mirror. We establish a continuous measurement of the position of the ion which allows us to act back in a feedback loop demonstrating "cold damping" [14,15]. We will show that quantum control theory based on a quantum optical modelling of the system dynamics and continuous measurement theory of photodetection provides a quantitative understanding of the experimental results.
We study a single 138 Ba + ion in a miniature Paul trap which is continuously laser-excited and laser-cooled to the Doppler limit on its S 1/2 to P 1/2 transition at 493 nm, as outlined in Fig. 1. The ion is driven by a laser near the atomic resonance, and the scattered light is emitted both into the radiation modes reflected by the mirror, as well as the other (background) modes of the quantized light field [16]. Light scattered into the mirror modes can either reach the photodetector directly, or after reflection from the mirror. From the resulting interference the motion of the ion (its projection onto the ion-mirror axis) is detected as a vibrational sideband in the fluctuation spectrum of the photon counting signal [17]. Of the three sidebands at about (1,1.2,2.3) MHz, corresponding to the three axes of vibration, we observe the one at ν = 1 MHz. It has a width Γ ≈ 400 Hz and is superimposed on the background shot noise generated by the photon counting process.
Our goal is to continuously read the position of the ion, and feed back a damping force proportional to the momentum to achieve feedback cooling. For a weakly driven atom the emitted light is dominantly elastic scattering at the laser frequency, and the information on the motion of
the ion is encoded in the sidebands of the scattered light, displaced by the trap frequency ν. For an ion trapped in the Lamb-Dicke regime (η √ N ≪ 1, with the Lamb Dicke parameter η = 2πa 0 /λ ∼ 0.07, a 0 the r.m.s. size of the trap ground state, N the mean excitation number of the motional oscillator) the motional sidebands are suppressed by η relative to the elastic component at the laser frequency. Thus the light reaching the detector will be the sum of the elastic component and weak sidebands [17], a situation reminiscent of homodyne detection, where a strong oscillator beats with the signal field to provide a homodyne current at the detector. This physical picture allows us to formulate the continuous readout of the position of the ion as well as the quantum feedback cooling in the well-developed language of homodyne detection and quantum feedback.
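As a consistency check of the numbers quoted above, the Lamb-Dicke parameter η = 2πa₀/λ can be evaluated directly from the values given in the text (¹³⁸Ba⁺, the observed ν = 1 MHz sideband, λ = 493 nm), with a₀ = √(ℏ/2mω) the r.m.s. size of the trap ground state:

```python
import math

# Lamb-Dicke parameter eta = 2*pi*a0/lambda for 138Ba+ at the observed
# nu = 1 MHz sideband and lambda = 493 nm (numbers quoted in the text).
hbar = 1.054571817e-34          # J s
m = 138 * 1.66053906660e-27     # kg, mass of 138Ba+
omega = 2.0 * math.pi * 1.0e6   # rad/s, trap oscillation frequency
wavelength = 493e-9             # m

a0 = math.sqrt(hbar / (2.0 * m * omega))  # r.m.s. size of the trap ground state
eta = 2.0 * math.pi * a0 / wavelength
print(round(eta, 3))  # ~0.077, consistent with the quoted eta ~ 0.07
```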
The homodyne current at the photodetector (see Fig. 1) with the (large) signal from the elastic light scattering subtracted has the form
I_c(t) = γη⟨ẑ⟩_c(t) + (√γ/2) ξ(t).   (1)
The first term is proportional to the conditioned expectation value of the position of the trapped ion, ⟨ẑ⟩_c(t), and the second term is a shot-noise contribution with Gaussian white noise ξ(t). We have defined ẑ = a + a† ≡ z/a₀, with a (a†) the destruction (creation) operator of the harmonic oscillator, and we have assumed that the trap center is placed at a distance L = nλ/2 + λ/8 (n integer) from the mirror, corresponding to a point of maximum slope of the standing-wave intensity of the mirror mode. The current I_c(t) scales with γ ∝ ε, which is the light scattering rate into the solid angle (4πε) of the mirror mode induced by the laser. The expectation value ⟨·⟩_c ≡ Tr{· ρ_c(t)} is defined with respect to a conditioned density operator ρ_c(t), which reflects our knowledge of the motional state of the ion for the given history of the photocurrent. According to the theory of homodyne detection, ρ_c(t) obeys the Ito stochastic differential equation [18]

dρ_c(t) = −iν[a†a, ρ_c(t)]dt + L₀ρ_c(t)dt + √(2γη²) Hρ_c(t)dW(t),   (2)

where Hρ_c(t) = ẑρ_c(t) + ρ_c(t)ẑ − 2⟨ẑ⟩_c(t)ρ_c(t). The first line determines the unobserved evolution of the ion, including the harmonic motion in the trap with frequency ν and the dissipative dynamics, L₀, due to photon scattering. The latter is given by
L₀ρ = Γ(N + 1)D[a]ρ + ΓN D[a†]ρ,   (3)
where we have defined the superoperator D[c]ρ ≡ cρc† − (c†cρ + ρc†c)/2. The laser cooling rate Γ and the steady-state occupation number, N = ⟨a†a⟩, can either be estimated from the motional sidebands or deduced from the cooling laser parameters [1]. In the present experiment, N ≈ 17 corresponds to the Doppler limit. The last term of Eq. (2) is proportional to the Wiener increment dW(t) ≡ ξ(t)dt and corresponds to an update of the observer's knowledge about the system according to a certain measurement result I_c(t). In summary, Eq. (1) demonstrates that observation of the sidebands of the light scattered into the mirror mode provides us with information on the position of the ion, while the system density matrix is updated according to Eq. (2). This is the basis for describing feedback control of the ion, as shown in the following. For feedback cooling, the vibrational sideband is extracted with a bandpass filter of bandwidth B = 30 kHz, phase-shifted by (−π/2), amplified, and the resulting output voltage is applied to an electrode which is close to the trap inside the vacuum. Thereby we create a driving force which is proportional to (and opposed to) the instantaneous velocity of the ion and which thus adds to the damping of its vibration. The overall gain of the feedback loop depends on many factors such as the fringe contrast of the interference, the PMT characteristics, etc. It is varied electronically by setting the amplification G of the final amplifier in the loop. We can also set the phase to values other than the optimum of (−π/2), which will be used in order to compare experiment and theory.
To analyze the result of the feedback we look at the changes in the sideband spectrum. The modified spectra require careful interpretation. The spectrum observed inside the feedback loop ("in-loop" or PMT1 in Fig. 1) shows not only the motion of the ion, but the sum of the motion and the shot noise. As the feedback correlates these fluctuations, a reduction of the signal below the shot noise level may occur, similar in appearance to signatures of squeezed light. This effect is known as "squashing" [19], or "anti-correlated state of light" in an opto-electronic loop [20], and it does not constitute a quantum mechanical squeezing of the fluctuations [21,22]. The effect on the motion can only be reliably detected by splitting the optical signal before it is measured and recording it outside the feedback loop with a second PMT ("out-loop" in Fig. 1), whose shot noise is not correlated with the motion.
In Fig. 2 we show spectra recorded with the spectrum analyzer, measured outside and inside the feedback loop. The first curve of each row, showing the largest sideband, is the one without feedback (gain G = 0). The other two curves are recorded with the loop closed (gain values G = 1.3 and 8.7). The sub-shot-noise fluctuations inside the loop, when the ion is driven to move in antiphase with the shot noise, are clearly visible in Fig. 2(f). The main cooling results are curves (b) and (c), which show the motional sideband reduced in size and broadened, indicating a reduced energy (proportional to the area under the curve) and a higher damping rate (the width). From case (b) to (c) the area increases, as the injected and amplified shot noise overcompensates the increased damping. As shown below, in our model the incorporation of quantum feedback competing with laser cooling predicts such behavior, i.e., the existence of an optimal gain for maximal cooling (for a detailed description cf. [23]).
We model the effect of the feedback force acting on the ion by extending Eq. (2) with the feedback contribution,
[dρ_c(t)]_fb = −iG̃ I_fb,c(t − τ)[ẑ, ρ_c(t)]dt,   (4)
where I_fb,c(t) denotes the measured current after the feedback circuit. The latter is proportional to the voltage applied to the trap electrodes. All conversion factors between the feedback current and the actual force applied on the ion are included in the overall gain G̃. The time delay τ in the feedback loop preserves causality and is small compared to the fastest timescale ν⁻¹ of the motion of the ion, which allows us to consider the Markovian limit (τ → 0⁺).
To obtain an expression for the feedback current I_fb,c(t), we change into a frame rotating with the trap frequency ν and define the density operator μ_c(t) ≡ exp(iνa†a t)ρ_c(t)exp(−iνa†a t), evolving on the (slow) cooling timescales. This is convenient due to the large separation between the timescale of the harmonic oscillations, ν⁻¹, and the timescale of laser and feedback cooling in our experiment. For our experimental parameters, ν ≫ B ≫ Γ, the feedback current for a phase shift of (−π/2) has the form [23]

I_fb,c(t) = γη⟨p̂⟩_c(t) + (√γ/2) Ξ(t) cos(νt),   (5)
where ⟨p̂⟩_c(t) ≡ Tr{p̂ μ_c(t)}, and p̂ ≡ i(a† − a) is the momentum operator conjugate to ẑ. The first term in this current therefore provides damping for the motion of the ion. The second term of Eq. (5) describes the shot noise which passes through the electronic circuit and is fed back to the ion. The stochastic variable Ξ(t) is Gaussian white noise on a timescale given by the inverse bandwidth B⁻¹, whereby B ≫ Γ implies that it is spectrally flat on the frequency range of the cooling dynamics. For a full record of the photocurrent I_c(t), Eqs. (2) and (4) determine the evolution of the ion's motional state in the presence of feedback. As it is impractical to keep track of the whole photocurrent in the experiment, we derive a master equation for the density operator averaged over all possible realisations of I_c(t), μ(t). Along the lines of the Wiseman-Milburn theory of quantum feedback [12], for a phase shift of (−π/2), we obtain the quantum feedback master equation [23]
μ̇ = L₀μ − iG̃(γη/4)[ẑ, p̂μ + μp̂] − G̃²(γ/16)[ẑ, [ẑ, μ]].   (6)
The second and third terms are the additional contributions due to the feedback. The part linear in G̃ induces damping of the motion of the ion, and the term quadratic in G̃ describes the effect of the fed-back noise, leading to diffusion of the momentum. The competition between laser cooling, damping and the injected noise leads to the characteristic behavior of the steady-state number expectation value
n_ss = [N + ηγG̃(2N − 1)/(2Γ) + γG̃²/(8Γ)] / [1 + 2ηγG̃/Γ].   (7)
For small gain, damping dominates, and the energy of the ion is decreased below the Doppler limit. For higher gain, the diffusive term describing the noise fed back into the system overcompensates cooling, i.e., heats the ion. Consequently, for (−π/2) feedback phase and optimal gain conditions the steady-state energy is minimized. On the contrary, for a π phase shift, (−ẑ) replaces p̂ in the second term of Eq. (6); the feedback force then merely induces a frequency shift, ∆ω = G̃γη/2, but no damping. Increasing the gain then always enhances the steady-state number expectation value, i.e., the mean ion energy. We now compare the theoretical predictions and the experimental results for the feedback phases of physical interest, namely (−π/2) and π. The measured ion energy as a function of the feedback electronic gain is shown in Fig. 3. On the one hand, as expected, for a (−π/2) feedback phase, cooling by more than 30% below the Doppler limit is achieved, while further increase of the gain drives the shot noise and therefore heats the motion of the ion. On the other hand, a π phase shift in the feedback loop does not yield any damping; in such conditions the motion of the ion is only driven. This results in an increase of the measured sideband area (as shown in the inset of Fig. 3), as well as a shift of the sideband center frequency (graph not shown). Both cases demonstrate good agreement between experiment and theory. Finally, let us stress that the optimal cooling rate is governed by the collection efficiency of the fluorescence going into the mirror mode, ε. In the experiments presented above, ε ≈ 1% leads to a decrease of the steady-state occupation number N from 17 to 12, while N ≈ 3 can be reached for ε ≈ 15%, experimentally achievable for optimal optical coupling.
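The gain dependence described above, cooling at small gain and noise-driven heating at large gain, can be sketched directly from Eq. (7). N and η below are taken from the text; the ratio γ/Γ = 0.05 is an assumed illustrative value (in the experiment it is set by the collection efficiency ε):

```python
# Steady-state occupation of Eq. (7) versus feedback gain G.
# N and eta are taken from the text; g = gamma/Gamma = 0.05 is an assumed
# illustrative ratio (it depends on the collection efficiency epsilon).
N = 17        # Doppler-limit occupation number
eta = 0.07    # Lamb-Dicke parameter
g = 0.05      # gamma / Gamma (assumption)

def n_ss(G):
    num = N + eta * g * G * (2 * N - 1) / 2.0 + g * G**2 / 8.0
    den = 1.0 + 2.0 * eta * g * G
    return num / den

# Zero gain reproduces the Doppler limit; a moderate gain cools below it,
# while large gain feeds back so much shot noise that the ion is heated.
best = min(n_ss(0.1 * i) for i in range(1001))
print(n_ss(0.0), best, n_ss(100.0))
```

With these (assumed) numbers the minimum is shallow; larger γ/Γ, i.e. better collection efficiency, deepens it, in line with the quoted drop from N = 17 to 12 at ε ≈ 1%.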
To summarize, we have demonstrated real-time feedback cooling of the motion of a single trapped ion. Electro-mechanical back-action based on a sensitive real-time measurement of the motion of the ion in the trap allowed us to cool one motional degree of freedom by 30% below the Doppler limit. Unlike laser cooling, the presented method allows one of the ion's motional modes to be cooled without heating the other two, and our procedure can easily be extended to cooling all motional modes. The cooling process is shot-noise limited, and the fraction of scattered photons recorded to observe the motion of the ion limits the ultimate cooling at optimum gain. The latter can yield a steady-state occupation number N = 3 for realistic experimental conditions. In conclusion, our feedback scheme offers a possible way to very efficiently cool the motion of ions unsuitable for sideband cooling.
Acknowledgements. This work has been supported by the Austrian Science Fund (FWF) in the project SFB15, by the European Commission (QUEST network, HPRNCT-2000-00121, QUBITS network, IST-1999-13021), and by the "Institut für Quanteninformation GmbH". We thank S. Mancini, A. Masalov, and D. Vitali for clarifying discussions. P. B. thanks the group of theoretical physics at U. Camerino, Italy, for hospitality.
FIG. 1: A single 138 Ba+ ion in a Paul trap (parabola) is laser-excited and -cooled on its S 1/2 to P 1/2 transition at 493 nm. A retro-reflecting mirror 25 cm away from the trap and a lens (not shown) focus back the fluorescence onto the ion. The resulting interference fringes with up to 73% contrast are observed by a photomultiplier (PMT 1). The ion's oscillation in the trap creates an intensity modulation of the PMT signal which is observed as a sideband on a spectrum analyser (rfsa) [17]. For feedback cooling, the sideband signal is filtered, phase-shifted, and applied to the ion as a voltage on a trap electrode.
FIG. 2: Feedback cooling spectra. The vibrational sideband around ν = 1 MHz is shown on top of the spectrally flat shot-noise background, to which the spectra are normalized. The upper curves (a)-(c) are measured outside the feedback loop, while the lower curves (d)-(f) are the in-loop results. Spectra (a) and (d) are for laser cooling only; the other curves are recorded with feedback at the indicated gain values. The feedback phase is set to (−π/2). The gain values indicate the settings of the final amplifier in the feedback loop.
FIG. 3: Steady-state energy of the cooled oscillator: measured sideband area, normalized to the value without feedback, versus gain of the feedback loop, for (−π/2) (a) and π (b) feedback phase. The curves are the model calculations. The gain axis is scaled to the experimental values of the electronic gain.
[*] Present address: Laboratory of Physical Chemistry, ETH Zürich, Switzerland.
[†] Present address: Fachrichtung Technische Physik, Saarland University, Saarbrücken, Germany.
[‡] Present address: ICFO - Institut de Ciències Fotòniques, Barcelona, Spain.
D. Leibfried, R. Blatt, C. Monroe, and D. Wineland, Rev. Mod. Phys. 75, 281 (2003), and references cited.
M. Keller, B. Lange, K. Hayasaka, W. Lange, H. Walther, Nature 431, 1075 (2004).
J. McKeever, A. Boca, A. D. Boozer, R. Miller, J. R. Buck, A. Kuzmich, and H. J. Kimble, Science 303, 1992 (2004);
P. Maunz, T. Puppe, I. Schuster, N. Syassen, P. W. H. Pinkse, and G. Rempe, Nature 428, 50 (2004)
T. Legero, T. Wilk, M. Hennrich, G. Rempe, and A. Kuhn, Phys. Rev. Lett. 93, 070503 (2004).
J. E. Reiner, W. P. Smith, L. A. Orozco, H. M. Wiseman, and Jay Gambetta, Phys. Rev. A 70, 023819 (2004).
A. Hopkins, K. Jacobs, S. Habib, and K. Schwab, Phys. Rev. B 68, 235328 (2003)
D. A. Steck, K. Jacobs, H. Mabuchi, T. Bhattacharya, and S. Habib, Phys. Rev. Lett. 92, 223004 (2004).
B. D'Urso, B. Odom, and G. Gabrielse, Phys. Rev. Lett. 90, 043001 (2003).
T. Fischer, P. Maunz, P. W. H. Pinkse, T. Puppe, and G. Rempe, Phys. Rev. Lett. 88, 163002 (2002).
N. V. Morrow, S. K. Dutta, and G. Reithel, Phys. Rev. Lett. 88, 093003 (2002).
J. M. Geremia, J. K. Stockton, H. Mabuchi, Science 304, 270 (2004);
D. Oblak, P. G. Petrov, C. L. Garrido Alzar, W. Tittel, A. K. Vershovski, J. K. Mikkelsen, J. L. Sørensen, and E. S. Polzik, Phys. Rev. A 71, 043807 (2005).
H. M. Wiseman and G. J. Milburn, Phys. Rev. Lett. 70, 548-551 (1993).
S. Mancini, D. Vitali, and P. Tombesi, Phys. Rev. Lett. 80, 688-691 (1998);
D. Vitali, S. Mancini, L. Ribichini, and P. Tombesi, Phys. Rev. A 65, 063803 (2002).
J. M. W. Milatz and J. J. Van Zolingen, Physica XIX, 181 (1953).
P. F. Cohadon, A. Heidmann, and M. Pinard, Phys. Rev. Lett. 83, 3174 (1999).
U. Dorner and P. Zoller, Phys. Rev. A 66, 023816 (2002).
P. Bushev, A. Wilson, J. Eschner, C. Raab, F. Schmidt-Kaler, C. Becher, and R. Blatt, Phys. Rev. Lett. 92, 223602 (2004);
J. Eschner, Ch. Raab, F. Schmidt-Kaler, and R. Blatt, Nature 413, 495-498 (2001).
C. W. Gardiner and P. Zoller, Quantum Noise (Springer, Berlin, 2004), and references cited.
B. C. Buchler, M. B. Gray, D. A. Shaddock, T. C. Ralph, D. E. McClelland, Opt. Lett. 24, 259 (1999).
A. V. Masalov, A. A. Putilin, and M. V. Vasilyev, J. Mod. Optics 41, 1941 (1994).
H. A. Haus, Y. Yamamoto, Phys. Rev. A 34, 270-292 (1986).
J. H. Shapiro, G. Saplakoglu, S.-T. Ho, P. Kumar, B. E. A. Saleh, M. C. Teich, J. Opt. Soc. Am. B 4, 1604-1620 (1987).
V. Steixner, P. Rabl, and P. Zoller, accepted by Phys. Rev. A, quant-ph/0506187.
|
[] |
[
"CONTACT DISCONTINUITIES IN MODELS OF CONTACT BINARIES UNDERGOING THERMAL RELAXATION OSCILLATIONS",
"CONTACT DISCONTINUITIES IN MODELS OF CONTACT BINARIES UNDERGOING THERMAL RELAXATION OSCILLATIONS"
] |
[
"Jian-Min Wang [email protected] \nInstitute of High Energy Physics\nLaboratory of Cosmic-ray and High Energy Astrophysics\nCAS\n100039Beijing\n\nChinese Academy of Sciences -Peking University Joint Beijing Astrophysical Center (CAS-PKU.BAC)\n100871BeijingP.R. China\n"
] |
[
"Institute of High Energy Physics\nLaboratory of Cosmic-ray and High Energy Astrophysics\nCAS\n100039Beijing",
"Chinese Academy of Sciences -Peking University Joint Beijing Astrophysical Center (CAS-PKU.BAC)\n100871BeijingP.R. China"
] |
[
"Astronomical Journal"
] |
In this paper we pursue the suggestion by Shu, Lubow & Anderson (1979) and Wang (1995) that a contact discontinuity (DSC) may exist in the secondary in the expansion TRO (thermal relaxation oscillation) state. It is demonstrated that there is a mass exchange instability in some range of mass ratio for the two components. We show that the assumption of constant volume of the secondary should be relaxed in the DSC model. For all mass ratios the secondary always satisfies the condition that no mass flow returns to the primary through the inner Lagrangian point. The secondary will expand in order to equilibrate the interaction between the common convective envelope and the secondary. The contact discontinuity in a contact binary undergoing thermal relaxation does not violate the second law of thermodynamics. The maintaining condition of the contact discontinuity is derived in the time-dependent model. It is desired to improve the TRO model with the advanced contact discontinuity layer in future detailed calculations.
|
10.1086/301049
|
[
"https://arxiv.org/pdf/astro-ph/9908227v1.pdf"
] | 9,584,007 |
astro-ph/9908227
|
d704b2e0524e0adaa01e3e020f20cfbdc9de8952
|
CONTACT DISCONTINUITIES IN MODELS OF CONTACT BINARIES UNDERGOING THERMAL RELAXATION OSCILLATIONS
October 1999
Jian-Min Wang [email protected]
Institute of High Energy Physics
Laboratory of Cosmic-ray and High Energy Astrophysics
CAS
100039Beijing
Chinese Academy of Sciences -Peking University Joint Beijing Astrophysical Center (CAS-PKU.BAC)
100871BeijingP.R. China
CONTACT DISCONTINUITIES IN MODELS OF CONTACT BINARIES UNDERGOING THERMAL RELAXATION OSCILLATIONS
Astronomical Journal
118, October 1999. Preprint typeset using LaTeX style emulateapj v. 04/03/99. Subject headings: star: contact binary - general
In this paper we pursue the suggestion by Shu, Lubow & Anderson (1979) and Wang (1995) that a contact discontinuity (DSC) may exist in the secondary in the expansion TRO (thermal relaxation oscillation) state. It is demonstrated that there is a mass exchange instability in some range of mass ratio for the two components. We show that the assumption of constant volume of the secondary should be relaxed in the DSC model. For all mass ratios the secondary always satisfies the condition that no mass flow returns to the primary through the inner Lagrangian point. The secondary will expand in order to equilibrate the interaction between the common convective envelope and the secondary. The contact discontinuity in a contact binary undergoing thermal relaxation does not violate the second law of thermodynamics. The maintaining condition of the contact discontinuity is derived in the time-dependent model. It is desired to improve the TRO model with the advanced contact discontinuity layer in future detailed calculations.
INTRODUCTION
W Ursae Majoris (W UMa) binary stars were first thought to be a particular binary population due to their abnormal mass-radius relationship, namely, the so-called Kuiper's paradox, $R_2/R_1 = (M_2/M_1)^{0.46}$ (Kuiper 1941). These particular binaries appear to consist of two main-sequence stars that possess photospheres exhibiting almost the same effective temperatures for the two components despite the fact that the typical mass ratio in a system is 0.5. It was originally proposed that a common convective envelope may be formed due to dynamic equilibrium (Osaki 1965), and mass and energy transfers would take place in the CCE in order to interpret Kuiper's paradox (Lucy 1968), although the specific mechanism of energy transfer for the circulation has not been fully understood (Robertson 1980; Sinjab, Robertson & Smith 1990; Tassoul 1992). It is now firmly believed that W UMa stars are contact binaries in which both components fill their Roche lobes, showing strong interactions (Mochnacki 1981). Lucy's iso-entropy model (1968), as a zero-order model with thermal equilibrium, however, cannot explain the color-period diagram by Eggen (1967), which led to the establishment of two parallel first-order theories of thermal relaxation oscillation (TRO) and contact discontinuity (DSC). The TRO model was advanced by Lucy (1976), Flannery (1976), and Robertson & Eggleton (1977), who suggest that a contact system cannot reach thermal equilibrium at the dynamical equilibrium configuration and may thus undergo thermal relaxation oscillations. The DSC theory was proposed by Shu and his collaborators [Shu, Lubow & Anderson 1976; Lubow & Shu 1977; also see Biermann & Thomas (1972) and Vilhu (1973) for some earlier elements of the DSC model], who hypothesize that a contact binary can attain thermal and dynamical equilibrium but there is a temperature inversion layer in the secondary.
With great effort the so-called Kuiper's paradox and the period-colour diagram may be resolved by the two different hypotheses independently. However, both theories have some difficulties in explaining observations, such as the so-called W-phenomenon, i.e., the less massive star is hotter than the massive component (Binnendijik 1970) (see a concise summary of observations and theory by Smith 1984). Especially, there are great debates between the two in nature in their simplest versions (Kähler 1989). Observational studies continue and the theoretical controversy still remains (Ruciński 1997). These imply that the first-order theories of contact binaries (i.e. TRO and DSC) should be improved.
The intensive disputes by the two contending schools (Lucy & Wilson 1979; Shu, Lubow & Anderson 1979) led to an intriguing suggestion by Shu, Lubow & Anderson (1979), Shu (1980) (from the theoretical viewpoint) and Wang (1995) (from the analysis of observational data) that TRO theory needs the contact discontinuity in some phases. Some observations seem to support TRO theory (Lucy & Wilson 1979; Hilditch et al. 1989; Samec et al. 1998).
Although some criticisms of the DSC model exist (Shu, Lubow & Anderson 1980), this theory is still attractive because it is successful in many aspects (Smith 1984). The contact discontinuity may be ironed out within the thermal timescale (Webbink 1977; Hazlehurst & Refsdal 1978; Papaloizou & Pringle 1979; Smith, Robertson & Smith 1980) in a steady state; however, the existence of a time-dependent contact discontinuity cannot be excluded because it does not violate the second law of thermodynamics (Papaloizou & Pringle 1979). However, a detailed analysis is needed for this. It is highly desirable to reconcile the two theories not only for removing the discrepancies but also for explaining more detailed observations (Shu 1980).
The difficulties of the pure TRO and DSC models in explaining the W-phenomenon motivate us to explore the possibility of developing a second-order theory. The interaction between the secondary and the common convective envelope is thought to play an important role in the W-phenomenon. Wang (1994) finds that the W-phenomenon can be explained by the gravitational energy released by the secondary through its contraction, corresponding to the TRO contracting phase in W-type contact systems. This is encouraging and leads to the suggestion by Wang (1995), from a sample of 32 contact systems, that the A-type systems may undergo thermal relaxation oscillation with contact discontinuity whereas the contraction of the secondary in W-type systems irons out this contact discontinuity.
The over-riding virtue of a contact discontinuity is that it gives a clear mechanism for making the secondary physically larger, and the primary physically smaller, than their main-sequence single-star counterparts, as is needed to satisfy the Roche-lobe filling requirements of the Kuiper paradox (Lubow & Shu 1977). On the other hand, if the system cannot be maintained in steady state by heat-carrying flows, then the capping of the radiative heat flow from the secondary by the hotter overlying common envelope should lead to an expansion of the secondary, with a resulting transfer of mass from the secondary to the primary. Such Roche-lobe overflow, from a less massive star to a more massive one, is known to occur slowly, so the ultimate breaking of contact caused by the expansion of the binary orbit takes a relatively long time. Once contact has been broken, however, the common envelope disappears; the secondary is no longer capped; and it will begin to shrink toward its normal single-star size. Conversely, because the larger area of the common envelope is no longer available to carry away much of the primary's interior luminosity, the primary can no longer be maintained at its suppressed contact size, and it will begin to overflow its Roche lobe. The transfer of mass from a more massive star to a less massive one is known to be unstable (e.g. Paczyński 1971), and the rapid shrinkage of the binary orbit causes the system to come into contact again. The re-establishment of the common envelope and the capping of the secondary result in its refilling its Roche lobe. Thus the DSC hypothesis would provide the physical mechanism for the TRO hypothesis, together with a justification of why the duty cycle is long for the contact phase and short for the semidetached phase, as is required by the observational statistics. The rest of this paper attempts to establish the above ideas on a more quantitative basis.
This paper is organized as follows: the instability of the Roche lobe and its operation in contact systems are found in Sec. 2; the surviving condition of the DSC layer is derived from thermodynamics in Sec. 3; and the conclusions are given in the last section.
THE ROCHE LOBE INSTABILITY AND DSC MODEL
It is generally believed that the two components of W UMa stars share an optically thick common convective envelope due to dynamical equilibrium (Osaki 1965; Mochnacki 1981). The redistribution of the total luminosity (the sum of the luminosities of the two stars), which takes place in the CCE, involves comprehensive fluid processes (Lucy 1968). Why and how the luminosities are redistributed is the main task for theoreticians. The debate over the existing theories of contact binaries attracted much attention between the 1970s and 1980s (Lucy & Wilson 1979; Shu 1980; Shu, Lubow & Anderson 1981). Shu (1980) clearly stated, from his thought-provoking analysis, that the two superficially distinct theories are complementary, with the crucial theoretical issue to be resolved being the secular stability of the temperature inversion layer. Here we argue that the DSC layer is a natural result of TRO theory via the mechanism of the Roche lobe instability, showing the presence of the DSC layer during the expansion TRO phase.
In the following discussions we assume that the total mass and angular momentum are conserved, neglecting the spin angular momentum of the two components. These assumptions are basic and the same as in TRO theory, but they are unnecessary in the DSC theory. In principle, the two assumptions put stronger constraints on the theoretical model. In the conserved systems there are mainly two other parameters: the mass ratio q, and the rate of change of the mass ratio due to mass exchange, $\dot q$, which determine the structure of the contact binaries in TRO theory. The most serious shortcoming of TRO (mentioned in the previous section) is a strong indicator that we should relax some of the assumptions in the TRO model. One possible way to remove this shortcoming is to supplement the interaction between the CCE and the component. This inclusion may reconcile the two contending schools with each other (Wang 1995).
We first show that the instability of mass exchange may prevent the mass in the secondary from being pushed into the primary through the inner Lagrangian point L$_1$ due to the lid effect of the CCE placed on the secondary (Shu, Lubow & Anderson 1976). For a contact system with total angular momentum J in a circular orbit and total mass $M = M_1 + M_2$, the separation between the components reads
$$D = \frac{J^2}{GM^3}\,\frac{(1+q)^4}{q^2}, \qquad (1)$$
where $q = M_2/M_1$ (for convenience we take $q \le 1$), and G is the gravitational constant. The Roche lobe radius $R_L$ of the secondary is approximated for all mass ratios by (Eggleton 1983)
$$r_L = \frac{R_L}{D} = \frac{0.49}{0.6 + q^{2/3}\,\ln\left(1+q^{-1/3}\right)}. \qquad (2)$$
The Roche lobe of the primary will be obtained when we replace q by 1/q. It is important to note that the Roche lobe is changing due to the mass transfer between the two components. The variation rate of the Roche lobe due to mass transfer between the two components reads
$$\frac{d\ln R_L}{dq} = \frac{2r_L}{3q^{1/3}}\left[\frac{1}{1+q^{1/3}} - 2\ln\left(1+q^{-1/3}\right)\right] + \frac{2(q-1)}{q(1+q)}, \qquad (3)$$
and then we obtain the timescale for this change with the help of $d\ln R_L/dt = (d\ln R_L/dq)(dq/dt)$:
$$t_{R_L} = \left(\frac{d\ln R_L}{dt}\right)^{-1} = f(q)\,t_M, \qquad (4)$$
where $t_M$ is the timescale of mass transfer, defined as
$$t_M = \frac{M_1}{\dot M_1}, \qquad (5)$$
where the parameter $\dot M_1$ is the rate of mass transfer, and the function f(q) is
$$f(q) = \left\{\frac{2(1-q)}{q} - \frac{2r_L}{3q^{1/3}}\left[\frac{1}{1+q^{1/3}} - 2\ln\left(1+q^{-1/3}\right)\right](1+q)\right\}^{-1}. \qquad (6)$$
The function f(q) represents the ratio of the two timescales. We have calculated the function f(q) in Figure 1, showing its value for the range of q from 0.0 to 1.0. If f(q) > 0 then the Roche lobe will expand as q increases, or shrink as q decreases. If f(q) < 0 then the Roche lobe will shrink as q increases, or expand as q decreases. It is very important to note that if |f(q)| < 1 then the expansion or shrinkage will be more rapid than the process of mass transfer in the contact system, as is known from equation (4). From Figure 1, the Roche lobe of the secondary expands with increasing mass ratio. The expansion timescale is shorter than that of gaining mass from the primary until the mass ratio q > 0.8, indicating that the secondary is capable of swallowing more mass when q < 0.8. In this range of mass ratio the mass gaining is unstable. The Roche lobe of the primary will shrink due to the mass exchange. There is also an instability of mass exchange in the range q < 0.35. This implies that the timescale of Roche lobe shrinkage due to mass transfer is shorter than that of the mass transfer itself; i.e., the Roche lobe shrinks more rapidly than the mass is lost. This mass exchange instability plays an important role in the structure of contact binaries. The maximum mass ratio for the instability of the primary is 0.35. It is believed that the mass transfer will be more efficient when q < 0.35 with the presence of the instability of mass exchange. Thus it is expected that the mean mass ratio of A-type systems will be less than that of W-types. This is consistent with the observations. It should be noted that here we do not specify the mechanism for the energy and mass transfer. Of course the direction of mass exchange between the two components is determined by the relative potential of the star surfaces. Here we are trying to show the instability of mass exchange, namely, represented by |f(q)| < 1.
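As a numerical sanity check of equations (1)-(3) as reconstructed above, the following Python sketch compares the analytic derivative of eq. (3) (which keeps the implicit 1/0.49 ≈ 2 shorthand in its first term) against a direct numerical derivative of $\ln R_L = \ln r_L + \ln D$; the small residual comes only from that shorthand. The function names are ours, not the paper's.

```python
import math

def r_lobe(q):
    # Eq. (2): Roche-lobe radius in units of the separation D (Eggleton-type fit)
    return 0.49 / (0.6 + q ** (2.0 / 3.0) * math.log(1.0 + q ** (-1.0 / 3.0)))

def ln_sep(q):
    # Eq. (1) at fixed J and M: ln D = const + 4 ln(1+q) - 2 ln q
    return 4.0 * math.log(1.0 + q) - 2.0 * math.log(q)

def dlnRL_dq_analytic(q):
    # Eq. (3), keeping the 1/0.49 ~ 2 approximation in the first term
    bracket = 1.0 / (1.0 + q ** (1.0 / 3.0)) - 2.0 * math.log(1.0 + q ** (-1.0 / 3.0))
    return (2.0 * r_lobe(q) / (3.0 * q ** (1.0 / 3.0))) * bracket \
        + 2.0 * (q - 1.0) / (q * (1.0 + q))

def dlnRL_dq_numeric(q, h=1e-6):
    # central difference of ln R_L = ln r_L + ln D
    f = lambda x: math.log(r_lobe(x)) + ln_sep(x)
    return (f(q + h) - f(q - h)) / (2.0 * h)

for q in (0.2, 0.5, 0.8):
    print(q, round(dlnRL_dq_analytic(q), 4), round(dlnRL_dq_numeric(q), 4))
```

Over 0.2 ≤ q ≤ 0.8 the two derivatives agree to about one percent, which supports the reconstructed form of eq. (3).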
In the original DSC version the rising convective elements interior to the Roche lobe of the secondary cannot penetrate into the common convective envelope because the resulting buoyancy deficit opposes such penetration. SLA76 argued that there is a mass flow pushed by the slight excess of pressure due to a slight heating of the interior of the secondary under constant volume. It is very important to note that this mass flow process is based on the assumption of a constant volume of the secondary. The mass flow was estimated by SLA79; however, their estimation follows another assumption, namely that all the energy transferred from the primary to the secondary radiates again from the secondary, neglecting the interaction between the CCE and the secondary. Now we can work out a condition that inhibits the returning of mass flow to the primary through the inner Lagrangian point L$_1$. If there is no excess pressure, then the mass flow stops. This is equivalent to $d\bar\rho/dt \le 0$ if the contact discontinuity, being lower in temperature than the CCE, survives, namely, T = constant (we assume the gas is ideal), where $\bar\rho$ is the mean mass density within the Roche lobe of the secondary, defined as $\bar\rho \propto M_2/R_L^3$. We thus have
$$\frac{d\ln\bar\rho}{dt} \approx \frac{d}{dt}\ln\frac{q}{1+q} - \frac{3}{t_{R_L}}, \qquad (7)$$
then the condition of no returning mass flow reads
$$t_{R_L} \le 3q\,t_M. \qquad (8)$$
We draw the line $t_{R_L} = 3q\,t_M$ in Figure 1. It is obvious that the value of f(q) is always less than 3q. This means that all the cases satisfy the condition that no mass flow returns to the primary, even beyond the mass exchange instability. Therefore the assumption of constant volume of the secondary should be relaxed in the advanced DSC model. The contact discontinuity is time-dependent from this viewpoint at least, coinciding with the result that the DSC layer could be maintained in a time-dependent model (Papaloizou & Pringle 1979).
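The claim that f(q) always lies below 3q, i.e. that condition (8) holds for every mass ratio, can be probed with the sketch below. It assumes conserved total mass, so that $|dq/dt| = (1+q)/t_M$ and hence $t_{R_L}/t_M = 1/[(1+q)\,|d\ln R_L/dq|]$; this reconstruction of f(q) and the grid check are ours, not the calculation behind Figure 1.

```python
import math

def dlnRL_dq(q):
    # exact derivative of ln R_L = ln r_L + ln D from eqs. (1)-(2)
    r_l = 0.49 / (0.6 + q ** (2.0 / 3.0) * math.log(1.0 + q ** (-1.0 / 3.0)))
    bracket = 1.0 / (1.0 + q ** (1.0 / 3.0)) - 2.0 * math.log(1.0 + q ** (-1.0 / 3.0))
    return (r_l / 0.49) * bracket / (3.0 * q ** (1.0 / 3.0)) \
        + 2.0 * (q - 1.0) / (q * (1.0 + q))

def trl_over_tm(q):
    # conserved total mass: |dq/dt| = (1+q) |Mdot_1| / M_1 = (1+q)/t_M,
    # hence t_RL / t_M = 1 / [(1+q) |d ln R_L/dq|]
    return 1.0 / ((1.0 + q) * abs(dlnRL_dq(q)))

# the no-return condition (8): t_RL <= 3 q t_M, checked on a grid of mass ratios
qs = [i / 100.0 for i in range(1, 101)]
print(all(trl_over_tm(q) <= 3.0 * q for q in qs))
```

Under these assumptions the ratio stays below 3q over the whole grid, peaking near q = 1 at roughly 2.2, consistent with the statement above.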
THERMODYNAMICS OF DSC LAYER
By defining the thermal timescale as $t_{\rm Th} = \int_{M-\delta m}^{M}(4\pi r^2\rho v_c)^{-1}\,dm$, Webbink (1977) first showed that the thermal diffusion timescale in the common envelope is typically of the same order as the dynamical timescale (roughly one orbital period). This makes the contact discontinuity disappear within one orbital period. We call this thermal diffusive process the interaction $\epsilon$. Papaloizou & Pringle (1979) show that the steady contact discontinuity violates the second law of thermodynamics, but a time-dependent contact discontinuity may exist. However, in the time-dependent model it is the interaction $\epsilon$ that keeps the contact discontinuity in a contact system undergoing thermal relaxation oscillation. The controversy over the inner structure may be removed by this kind of interaction (Wang 1995). With the help of the conservation of mass and momentum we can rewrite the energy equation beyond the energy generation region as
$$\rho\frac{\partial}{\partial t}\left(\Psi + Ts\right) = -\rho T\,\boldsymbol{v}\cdot\nabla s - \nabla\cdot\boldsymbol{F} + \epsilon, \qquad (9)$$
for the inviscid fluid (e.g. Webbink 1977, and 1992, ApJ, 396, 378 for the erratum), where t is the time; $\rho$, the density; T, the temperature; $\boldsymbol{v}$, the velocity; s, the specific entropy; $\boldsymbol{F}$, the energy flux radiated from the star; $\epsilon$, the energy density absorbed by the secondary in unit time due to the interaction with the CCE; and $\Psi$, the gravitational energy per unit mass. Following the assumption by Shu, Lubow & Anderson (1980) that the specific entropy s can be decomposed into a barotropic and a baroclinic part as $s = s_0(\Psi_D) + s_1(\boldsymbol{x},t)$, we integrate the above equation over the volume enclosed by the equipotential surfaces C and D, and obtain
$$\frac{dS}{dt} = s_0(\Psi_D)\frac{d(\Delta M)}{dt} - \int_{CD}\frac{\rho}{T}\frac{\partial\Psi}{\partial t}\,dV + \frac{\epsilon}{T}\Delta V - \int_{CD}\rho\,s_1(\boldsymbol{x},t)\,\boldsymbol{v}\cdot\boldsymbol{n}\,dA - \int_{CD}\frac{1}{T}\nabla\cdot\boldsymbol{F}\,dV, \qquad (10)$$
where $S = \int\rho s\,dV$, $\Delta M = \int\rho\,dV$, $\Delta V = \int dV$ is the volume enclosed by the two surfaces C and D, dA is the area element of the surface of the contact discontinuity, and $\boldsymbol{n}$ is its normal vector. For the time-dependent case we assume that the last two terms offset approximately, as in the steady case (Shu, Lubow & Anderson 1980); thus we have a more physically concise form of equation (10):
$$\frac{dS}{dt} = s_0(\Psi_D)\frac{d(\Delta M)}{dt} - \int_{CD}\frac{\rho}{T}\frac{\partial\Psi}{\partial t}\,dV + \frac{\epsilon}{T}\Delta V. \qquad (11)$$
The first term on the right-hand side of equation (11) represents the entropy increase due to mass exchange between the CCE and the secondary; the last term has the same meaning but due to energy exchange; the second term represents the decrease of entropy due to the Roche lobe expansion. The enclosed volume is an open system undergoing mass and energy exchanges with its surroundings rather than an isolated volume. This equation also tells us the resulting expansion due to the interaction $\epsilon$ if the contact discontinuity layer survives: 1) if there is only an exchange of energy by thermal diffusion, namely, $\Delta M$ = const, we have
$$\int_{CD}\rho\frac{\partial\Psi}{\partial t}\,dV \ge \epsilon\,\Delta V. \qquad (12)$$
This clearly states that the survival of the contact discontinuity must be supported by the expansion. Detailed calculations should be done in the future. 2) If there is a mass exchange between the CCE and the secondary accompanying the energy interaction, i.e., $d(\Delta M)/dt > 0$, the expansion is at least
$$\int_{CD}\rho\frac{\partial\Psi}{\partial t}\,dV \ge \epsilon\,\Delta V + s_0(\Psi_D)\,T\,\frac{d(\Delta M)}{dt}. \qquad (13)$$
This equation predicts the secular change of the orbital period due to the shift of the mass ratio. 3) If the secondary keeps a constant volume, as originally suggested by Shu, Lubow & Anderson (1976), the term $d\Psi/dt = 0$, and we always have $dS/dt \ge 0$, which means the discontinuity will be ironed out within the thermal diffusive timescale. The only possible way to relax this condition is the inclusion of the changes of the Roche lobe. This will permit us to unify the two contending hypotheses. According to the simplest version of stellar structure, equations (12) and (13) provide the expansion velocity
$$v_{\rm int} \ge \frac{R_2}{\bar\epsilon}\,\epsilon\,\frac{\Delta V}{V_2} + T s_0(\Psi_D)\,g_2^{-1}\,\frac{d(\ln\Delta M)}{dt}, \qquad (14)$$
where $\bar\epsilon = (GM_2\Delta M/R_2)/V_2$ is the mean density of the gravitational energy between the secondary and the CCE, $g_2 = GM_2/R_2^2$ is the gravitational acceleration, and $V_2$ is the volume of the secondary. One should note that in the above estimation we neglect the downward propagation of energy to the interior on the dynamical timescale. Therefore the present estimate is somewhat higher than the actual one. Both the mass exchange and the energy interaction result in the expansion of the secondary; we thus have the minimum velocity
$$V = \max(v_{\rm int}, v_{R_L}), \qquad (15)$$
where $v_{R_L} = dR_L/dt$ is the expanding velocity of the Roche lobe. This is the condition maintaining the contact discontinuity. From the viewpoint of the conservation of total (nuclear-generated) energy, the energy expended (i.e. $\epsilon$) in expanding the secondary lowers the re-radiating efficiency of the energy transferred from the primary. The lower temperature of the secondary compared with the primary may be an indicator of the presence of the contact discontinuity.
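The first term of eq. (14) follows from eq. (12) with the estimate $\int_{CD}\rho(\partial\Psi/\partial t)\,dV \sim \Delta M\,g_2\,v_{\rm int}$ and the definition of $\bar\epsilon$; the sketch below verifies the underlying algebraic identity $(R_2/\bar\epsilon)(\epsilon\,\Delta V/V_2) = \epsilon\,\Delta V/(g_2\,\Delta M)$ with arbitrary, purely hypothetical numbers.

```python
# Arbitrary (hypothetical) cgs-like values, chosen only to exercise the algebra:
G = 6.674e-8                      # gravitational constant
M2, R2 = 1.6e33, 5.6e10           # secondary mass and radius (illustrative)
V2 = 4.0 / 3.0 * 3.14159 * R2**3  # secondary volume
dM, dV = 1.0e30, 0.05 * V2        # mass and volume between surfaces C and D
eps = 3.0e3                       # interaction energy density per unit time

eps_bar = (G * M2 * dM / R2) / V2  # mean gravitational energy density (text of eq. 14)
g2 = G * M2 / R2**2                # surface gravity of the secondary

lhs = (R2 / eps_bar) * eps * dV / V2  # first term of eq. (14)
rhs = eps * dV / (g2 * dM)            # eq. (12) with dPsi/dt ~ g2 * v_int
print(abs(lhs / rhs - 1.0) < 1e-12)
```

Since $\bar\epsilon V_2 = GM_2\Delta M/R_2 = g_2 R_2 \Delta M$, the two expressions agree identically, independent of the numbers chosen.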
CONCLUSIONS AND DISCUSSIONS
With the discontinuity of temperature introduced by Shu and his collaborators (1976, 1979) the thermal instability of the binary (Lucy 1976, Flannery 1976) can be suppressed, but the maintenance of the DSC layer remains open. In this paper we try to construct the physical scenario of a time-dependent model of contact binaries. Only two assumptions, that the total mass and angular momentum of the contact system are conserved, are employed in this paper. It is found that the mass exchange results in the instability of the Roche lobe in some ranges of mass ratio. We show that this instability always satisfies the condition that keeps the mean density of the secondary $d\ln\bar\rho/dt \le 0$. Therefore it ensures that no mass flow returns to the primary through the inner Lagrangian point L$_1$. The second-order theory predicts that the contact binary may undergo oscillations about a state with a contact discontinuity. The temperature difference across the DSC interface is determined by the expansion velocity.
The existing TRO and DSC theories (Lucy 1976; Flannery 1976; Robertson & Eggleton 1977; Shu, Lubow & Anderson 1976) neglect the effects of the interaction $\epsilon$ between the CCE and the secondary. Here we argue that it is the contact discontinuity layer that results in the interaction between the CCE and the secondary, and meanwhile it is the interaction that maintains the contact discontinuity. We find that this interaction $\epsilon$ can result in some interesting issues. First, it is the reason why the temperature of the secondary in A-types is lower than that of the primary. Second, the maintenance of the contact discontinuity needs faster mass transfer, which breaks down the deep contact. Thus the shortcoming of TRO theory will be removed. It is highly desirable that the unification of TRO theory and the DSC model be calculated in order to discover the nature of the contact binaries.
In the present work we do not specify the mechanism of the thermal diffusion process. Although we have not performed the time-dependent model unifying the TRO and DSC hypotheses, this time-dependent unified theory might give some predictions. First, the secular behavior of the period change of A-type systems is more violent than that in W-type systems in order to ensure the existence of the contact discontinuity. Second, the maintenance of the contact discontinuity may lead to radial oscillations of the secondary with periods from a few to several tens of minutes. The interaction between the CCE and the secondary drives such an oscillation, similar to the κ-mechanism working in other types of stars. It is thus expected to find the light variation during the primary eclipse as another probe of the contact discontinuity in A-type systems.
The author would like to express his honest thanks to the referee, Professor Frank H. Shu, for his input of physical insight to the title and the fourth paragraph of the first section, enhancing the scientific clarity of this paper. I appreciate the useful discussions with Drs. Zhanwen Han and Fangjun Lu. This project is supported by Climbing Plan of The Ministry of Science and Technology of China and the Natural Science Foundation of China under Grant No. 19800302.
Biermann, P. & Thomas, H.-C., 1972, A&A, 16, 60
Binnendijik, L., 1970, Vistas in Astronomy, 12, 217
Eggen, O.J., 1967, Mem.R.A.S., 70, 111
Eggleton, P.P., 1983, ApJ, 268, 368
Flannery, B.P., 1976, ApJ, 205, 217
Hazlehurst, J. & Refsdal, S., 1978, A&A, 62, L9
Hilditch, R.W., King, D.J. & Mcfarlance, T.M., 1988, MNRAS, 231, 341
Kähler, H., 1989, A&A, 209, 67
Kuiper, G.P., 1941, ApJ, 93, 133
Linnell, A.P., 1986, ApJ, 300, 304
Linnell, A.P., 1987, ApJ, 316, 389
Lubow, S.H. & Shu, F.H., 1977, ApJ, 216, 517
Lucy, L.B., 1968, ApJ, 151, 1123
Lucy, L.B., 1976, ApJ, 205, 208
Lucy, L.B. & Wilson, R.E., 1979, ApJ, 231, 502
Mochnacki, S., 1981, ApJ, 245, 650
Osaki, Y., 1965, PASJ, 17, 97
Paczyński, B., 1971, ARA&A, 9, 183
Papaloizou, J. & Pringle, J.E., 1979, MNRAS, 189, 5P
Robertson, J.A., 1980, MNRAS, 192, 263
Robertson, J.A. & Eggleton, P.P., 1977, MNRAS, 179, 359
Ruciński, S.M., 1997, AJ, 113, 1112
Samec, R.G., Carrigan, B.J., Gray, J.D., French, J.A., McDermith, R.J. & Padgen, E.E., 1998, AJ, 116, 895
Shu, F.H., 1980, in IAU Sym. 88, Close Binary Stars: Observations and Interpretation, ed. M.J. Plavec, D.M. Popper & R.K. Ulrich, pp. 477
Shu, F.H., Lubow, S.H. & Anderson, L., 1976, ApJ, 209, 536
Shu, F.H., Lubow, S.H. & Anderson, L., 1979, ApJ, 229, 223
Shu, F.H. & Lubow, S.H., 1981, ARA&A, 19, 277
Shu, F.H., Lubow, S.H. & Anderson, L., 1980, ApJ, 239, 937
Sinjab, I.M., Robertson, J.A. & Smith, R.C., 1990, MNRAS, 244, 619
Smith, D.H., Robertson, J.A. & Smith, R.C., 1990, MNRAS, 190, 177
Smith, R.C., 1984, QJRAS, 25, 405
Tassoul, J.L., 1992, ApJ, 389, 375
Vilhu, O., 1973, A&A, 26, 267
Wang, J.-M., 1994, ApJ, 434, 277
Wang, J.-M., 1995, AJ, 110, 782
Webbink, R.F., 1976, ApJ, 215, 851
Fig. 1. The function f(q) shows the t_RL/t_M for the two components. The solid line represents that of the secondary, and the dashed line that of the primary. See detail in the text.
|
[] |
[
"THE RICCI FLOW ON RIEMANN SURFACES",
"THE RICCI FLOW ON RIEMANN SURFACES"
] |
[
"S Abraham [email protected] \nGrupo de Modelización Interdisciplinar\nInstituto de Matemática Pura y Aplicada\nUniversidad Politécnica de Valencia\n46022ValenciaSpain\n",
"P Fernández De Córdoba \nGrupo de Modelización Interdisciplinar\nInstituto de Matemática Pura y Aplicada\nUniversidad Politécnica de Valencia\n46022ValenciaSpain\n",
"José M Isidro \nGrupo de Modelización Interdisciplinar\nInstituto de Matemática Pura y Aplicada\nUniversidad Politécnica de Valencia\n46022ValenciaSpain\n",
"J L G Santander \nGrupo de Modelización Interdisciplinar\nInstituto de Matemática Pura y Aplicada\nUniversidad Politécnica de Valencia\n46022ValenciaSpain\n"
] |
[
"Grupo de Modelización Interdisciplinar\nInstituto de Matemática Pura y Aplicada\nUniversidad Politécnica de Valencia\n46022ValenciaSpain",
"Grupo de Modelización Interdisciplinar\nInstituto de Matemática Pura y Aplicada\nUniversidad Politécnica de Valencia\n46022ValenciaSpain",
"Grupo de Modelización Interdisciplinar\nInstituto de Matemática Pura y Aplicada\nUniversidad Politécnica de Valencia\n46022ValenciaSpain",
"Grupo de Modelización Interdisciplinar\nInstituto de Matemática Pura y Aplicada\nUniversidad Politécnica de Valencia\n46022ValenciaSpain"
] |
[] |
We establish a 1-to-1 relation between metrics on compact Riemann surfaces without boundary, and mechanical systems having those surfaces as configuration spaces.
| null |
[
"https://arxiv.org/pdf/0810.2236v3.pdf"
] | 15,710,118 |
0810.2236
|
259379f521c0008a20c591f3feb922ff9be93e54
|
THE RICCI FLOW ON RIEMANN SURFACES
8 Dec 2008
S Abraham [email protected]
Grupo de Modelización Interdisciplinar
Instituto de Matemática Pura y Aplicada
Universidad Politécnica de Valencia
46022ValenciaSpain
P Fernández De Córdoba
Grupo de Modelización Interdisciplinar
Instituto de Matemática Pura y Aplicada
Universidad Politécnica de Valencia
46022ValenciaSpain
José M Isidro
Grupo de Modelización Interdisciplinar
Instituto de Matemática Pura y Aplicada
Universidad Politécnica de Valencia
46022ValenciaSpain
J L G Santander
Grupo de Modelización Interdisciplinar
Instituto de Matemática Pura y Aplicada
Universidad Politécnica de Valencia
46022ValenciaSpain
Introduction
Motivation
Let M be a smooth n-dimensional manifold endowed with the local coordinates q^i, i = 1, ..., n, that we regard as the configuration space of some classical mechanical system with the Lagrangian function L,

L = T − V = (1/2) a_{ij}(q) \dot{q}^i \dot{q}^j − V(q).   (1)

Here V denotes the potential energy of the system, and T is the kinetic energy (a positive definite quadratic form in the velocities \dot{q}^i). Using these data we can construct a Riemannian metric as follows. Consider the momenta p_i conjugate to the q^i,

p_i(q) = ∂L/∂\dot{q}^i = a_{ij}(q) \dot{q}^j.   (2)

Then the 1-form

p_i dq^i = a_{ij} \dot{q}^j dq^i = (1/dt) a_{ij} dq^i dq^j   (3)

is the integrand of Hamilton's principal function (or time-independent action):

S[q] := ∫ p_i dq^i.   (4)

Now conservation of energy implies that the Hamiltonian function H,

H = (1/2) a_{ij} \dot{q}^i \dot{q}^j + V(q),   (5)

is a constant of the motion, that we denote by E. We can solve (5) for the square root of the quadratic form,

√(a_{ij} dq^i dq^j) = √(2(E − V(q))) dt,   (6)

and substitute the result into (4) after using (3), to find

S[q] = ∫ √(2(E − V(q))) √(a_{ij} dq^i dq^j) =: ∫ ds.   (7)

Determining the actual trajectory followed by the particle is therefore equivalent to finding the shortest path between two given points, with distances measured with respect to (the square root of) the quadratic form ds²:

ds² := g_{ij}(q) dq^i dq^j,   g_{ij}(q) := 2(E − V(q)) a_{ij}(q).   (8)

The factor 2(E − V(q)) is positive away from those points at which the particle is at rest (where T = 0, hence E = V(q)). Let M′ denote the subset of all points of M at which the particle is not at rest:

M′ := {q ∈ M : T|_q > 0}.   (9)

We will assume that M′ qualifies as a manifold (possibly as a submanifold of M), and that the matrix a_{ij}(q) is everywhere nondegenerate on M′. This implies that g_{ij}(q) is nondegenerate on M′. Moreover, the quadratic form a_{ij}(q) is positive definite and symmetric. Altogether, M′ qualifies as a Riemannian manifold. On the latter, determining the actual trajectories of the particle is equivalent to determining the geodesics of the metric (8).
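The equivalence between trajectories and geodesics can be checked numerically. The sketch below (an illustration, not part of the paper) integrates a unit-mass particle (a_{ij} = δ_{ij}) in the 2-D harmonic potential V = (x² + y²)/2, an arbitrary illustrative choice, and verifies that the abbreviated action ∫ p_i dq^i of eq. (4) coincides with the Jacobi length ∫ √(2(E − V)) ds of eq. (7) along the computed orbit.

```python
import math

# Illustrative check (not from the paper): along an actual trajectory,
# the abbreviated action S = ∫ p_i dq^i of eq. (4) equals the Jacobi
# length ∫ sqrt(2(E - V)) ds of eq. (7).  Unit-mass particle
# (a_ij = δ_ij) in the 2-D harmonic potential V = (x² + y²)/2.

def V(x, y):
    return 0.5 * (x * x + y * y)

def deriv(s):
    x, y, vx, vy = s
    return (vx, vy, -x, -y)          # (dx/dt, dy/dt, -∂V/∂x, -∂V/∂y)

def rk4_step(s, dt):
    k1 = deriv(s)
    k2 = deriv([a + 0.5 * dt * b for a, b in zip(s, k1)])
    k3 = deriv([a + 0.5 * dt * b for a, b in zip(s, k2)])
    k4 = deriv([a + dt * b for a, b in zip(s, k3)])
    return [a + dt / 6.0 * (p + 2 * q + 2 * r + w)
            for a, p, q, r, w in zip(s, k1, k2, k3, k4)]

state = [1.0, 0.0, 0.0, 0.5]         # x, y, vx, vy (arbitrary choice)
E = V(state[0], state[1]) + 0.5 * (state[2] ** 2 + state[3] ** 2)

dt, action, jacobi_length = 1.0e-3, 0.0, 0.0
for _ in range(2000):                 # integrate over t in [0, 2]
    x, y, vx, vy = state
    speed2 = vx * vx + vy * vy
    action += speed2 * dt                                   # p·dq = |v|² dt
    jacobi_length += math.sqrt(2.0 * (E - V(x, y)) * speed2) * dt
    state = rk4_step(state, dt)

print(action, jacobi_length)          # the two integrals agree
```

Since 2(E − V) = |v|² along the exact orbit, the two integrands coincide pointwise; numerically they differ only by the integrator's energy drift.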
Of course, all of the above is well known in the literature [1]. The statement that the actual motion of the particle follows the geodesics of the metric (8) goes by the name of Fermat's principle (see, e.g., ref. [2] for a nice account). The latter is equivalent to the principle of least action in Lagrangian mechanics. In this paper we address the converse problem, namely: to determine a point mechanics starting from the knowledge of a Riemannian metric on a given manifold. The sought-after mechanical system must somehow be canonically associated with the given metric, in the sense that it must be a natural choice, so to speak, among all possible point mechanics that one can possibly define on the given Riemannian manifold.
This problem will turn out to be too hard to solve in all generality-if it possesses a solution at all. Indeed, on an n-dimensional manifold, a general metric is determined (in local coordinates) by the knowledge of n(n + 1)/2 coefficient functions g ij , out of which some potential function U and some positive-definite kinetic energy T must be concocted. We can, however, make some simplifying assumptions. An educated guess leads us to restrict our attention to 2-dimensional manifolds M, the simplest on which nontrivial metrics can exist. On the latter class of manifolds, any Riemannian metric is conformal, so it is univocally determined by the knowledge of just one function, the so-called conformal factor. Having got this far we can unashamedly declare M (our would-be configuration space) to be a compact Riemann surface without boundary. Compactness ensures the convergence of the integrals we will work with, without the need to impose further conditions on the integrands (such as, e.g., fast decay at infinity). The absence of a boundary ensures the possibility of integrating by parts without picking up boundary terms. However it should be realised that imposing these two requirements (compactness and the absence of a boundary) is a useful, but by no means necessary, condition to achieve our goal, namely: to relate metrics on configuration space with mechanical models on that same space. Contrary to the previous example, we will not require that geodesics of the metric be actual trajectories for the mechanics. This notwithstanding, interesting links between mechanics and geometry will be exposed.
We should point out that there is, of course, a natural choice of a mechanics for a given family of metrics on a manifold-namely the one defined by the Einstein-Hilbert gravitational action functional. However the latter defines not a point mechanics, but a field theory. Moreover, this field theory has the space of all metrics on M as its configuration space. We are interested in a point mechanics, the configuration space of which is the Riemann surface M. Surprisingly, the point mechanics we will construct will be intimately related with the gravitational action functional.
Setup
On our compact Riemann surface without boundary M there exist isothermal coordinates x, y, in which the metric reads [9] g_{ij} = e^{−f} δ_{ij},

ds² = e^{−f(x,y)} (dx² + dy²),   (10)

where f = f(x, y) is a function, hereafter referred to as the conformal factor. The volume element on M equals

√g dx dy = e^{−f} dx dy.   (11)
Given an arbitrary function ϕ(x, y) on M, we have the following expressions for the Laplacian ∇²ϕ and the squared gradient (∇ϕ)²:

∇²ϕ := (1/√g) ∂_m(√g g^{mn} ∂_n ϕ) = e^{f} (∂²_x ϕ + ∂²_y ϕ) =: e^{f} D²ϕ,   (12)

(∇ϕ)² := g^{mn} ∂_m ϕ ∂_n ϕ = e^{f} [(∂_x ϕ)² + (∂_y ϕ)²] =: e^{f} (Dϕ)²,   (13)

where D²ϕ and (Dϕ)² stand for the flat-space values of the Laplacian and the squared gradient, respectively. The Ricci tensor reads

R_{ij} = (1/2) D²f δ_{ij} = (1/2) e^{−f} ∇²f δ_{ij}.   (14)

From here we obtain the Ricci scalar

R = e^{f} D²f = ∇²f.   (15)
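Formula (15) can be sanity-checked on a standard example not discussed in the paper: for the round unit sphere in stereographic coordinates, the conformal factor is e^{−f} = 4/(1 + x² + y²)², i.e. f = 2 ln(1 + x² + y²) − ln 4, and R = e^f D²f should return the constant value 2. A minimal finite-difference sketch:

```python
import math

# Sanity check of eq. (15), R = e^f D²f, on the round unit sphere in
# stereographic coordinates (a standard example, not from the paper):
# e^{-f} = 4/(1+x²+y²)², so the Ricci scalar should come out as R = 2.

def f(x, y):
    return 2.0 * math.log(1.0 + x * x + y * y) - math.log(4.0)

def flat_laplacian(g, x, y, h=1.0e-4):
    # D²g = ∂²g/∂x² + ∂²g/∂y² by central differences
    return (g(x + h, y) + g(x - h, y) + g(x, y + h) + g(x, y - h)
            - 4.0 * g(x, y)) / (h * h)

x0, y0 = 0.3, -0.2                       # arbitrary test point
R = math.exp(f(x0, y0)) * flat_laplacian(f, x0, y0)
print(R)                                 # close to 2
```

Analytically, D²f = 8/(1 + r²)² and e^f = (1 + r²)²/4, so the product is 2 at every point, consistent with the constant curvature of the sphere.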
Now Perelman's functional F[ϕ, g_{ij}] on the Riemann surface M is defined as [13, 14]¹

F[ϕ, g_{ij}] := ∫_M e^{−ϕ} [(∇ϕ)² + R(g_{ij})] √g dx dy.   (16)

By (15) we can express F[ϕ, g_{ij}] as

F[ϕ, f] := F[ϕ, g_{ij}(f)] = ∫_M e^{−ϕ−f} [(∇ϕ)² + ∇²f] dx dy.   (17)

¹ Our conventions are g = |det g_{ij}| and R_{im} = g^{−1/2} ∂_n(Γ^n_{im} g^{1/2}) − ∂_i ∂_m(ln g^{1/2}) − Γ^r_{is} Γ^s_{mr} for the Ricci tensor, Γ^m_{ij} = g^{mh}(∂_i g_{jh} + ∂_j g_{hi} − ∂_h g_{ij})/2 being the Christoffel symbols.
The gradient flow of F is determined by the evolution equations

∂g_{ij}/∂t = −2 (R_{ij} + ∇_i ∇_j ϕ),   ∂ϕ/∂t = −∇²ϕ − R.   (18)

Via a time-dependent diffeomorphism, the above are equivalent to

∂g_{ij}/∂t = −2 R_{ij},   ∂ϕ/∂t = −∇²ϕ + (∇ϕ)² − R.   (19)

Setting now ϕ = f in (17) we have

F[f] := F[ϕ = f, f] = ∫_M e^{−2f} [(∇f)² + ∇²f] dx dy,   (20)

and the second eqn. in (19) becomes, by (15),

∂f̃/∂t + 2∇²f̃ − (∇f̃)² = 0.   (21)

In the time-flow eqn. (21) we have placed a tilde on top of the conformal factor in order to distinguish it from the time-independent f present in the functional (20). This improvement in notation will turn out to be convenient later on.
Summary of results
Proof of the theorem
A mechanics from a given Riemannian metric.
Starting from a knowledge of the metric (10) on M, we will construct a classical mechanical system having M as its configuration space. We recall that, for a point particle of mass m subject to a time-independent potential U, the Hamilton-Jacobi equation for the time-dependent action S̃ reads

∂S̃/∂t + (1/2m) (∇S̃)² + U = 0.   (22)

It is well known that, separating the time variable as per

S̃ = S − Et,   (23)

with S the time-independent action (Hamilton's principal function), one obtains

(1/2m) (∇S)² + U = E.   (24)
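The reduction from (22) to (24) is purely algebraic, and can be illustrated numerically: choose any smooth S(x, y) (the choice below is arbitrary), define U through (24), and check that S̃ = S − Et satisfies (22) pointwise. Finite differences stand in for the derivatives; m = 1/2 as chosen later in the text.

```python
import math

# Illustration (not from the paper) of the separation (23): with
# S̃ = S - Et, the time-dependent Hamilton-Jacobi equation (22)
# reduces to the time-independent form (24).  We pick an arbitrary
# smooth S(x, y), define U so that (24) holds, and evaluate the
# residual of (22) by finite differences.

m, E = 0.5, 1.7                      # m = 1/2 as in the text; E arbitrary

def S(x, y):
    return math.sin(x) * math.cosh(y)

def grad_sq(g, x, y, h=1.0e-6):
    gx = (g(x + h, y) - g(x - h, y)) / (2.0 * h)
    gy = (g(x, y + h) - g(x, y - h)) / (2.0 * h)
    return gx * gx + gy * gy

def U(x, y):                          # defined so that (24) holds
    return E - grad_sq(S, x, y) / (2.0 * m)

def S_tilde(x, y, t):                 # eq. (23)
    return S(x, y) - E * t

# residual of eq. (22): ∂S̃/∂t + (∇S̃)²/(2m) + U
x0, y0, t0, h = 0.4, -0.9, 2.3, 1.0e-6
dSdt = (S_tilde(x0, y0, t0 + h) - S_tilde(x0, y0, t0 - h)) / (2.0 * h)
residual = (dSdt
            + grad_sq(lambda x, y: S_tilde(x, y, t0), x0, y0) / (2.0 * m)
            + U(x0, y0))
print(abs(residual))   # ≈ 0
```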
Eqn. (23) suggests separating variables in (21) as per

f̃ = f + Et,   (25)

where the sign of the time variable is reversed² with respect to (23). Substituting (25) into (21) leads to

(∇f)² − 2∇²f = E.   (26)

Comparing (26) with (24) we conclude that, picking a value of the mass m = 1/2, the following identifications can be made:

S = f,   U = −2∇²f = −2R.   (27)
A Riemannian metric from a given mechanics.
Conversely, if we are given a classical mechanics as determined by an arbitrary timeindependent action S on M, and we are required to construct a conformal metric on M, then the solution reads f = S. This concludes the proof.
Discussion
With the 1-to-1 relation established above one can exchange a conformally flat metric for a time-independent action functional satisfying the Hamilton-Jacobi equation. In the second part of the theorem one defines a Riemannian metric, starting from the knowledge of a given mechanics. However, it is not guaranteed that the metric so obtained is the canonical one corresponding to the Riemann surface on which the given mechanics is defined. An example will illustrate this point. Riemann surfaces in genus greater than 1 can be obtained as the quotient of the open unit disc D ⊂ C by the action of a Fuchsian group Γ [9]. As the quotient space D/Γ, the Riemann surface M now carries a Riemannian metric of constant negative curvature, inherited from that on the disc D. This hyperbolic metric is the canonical metric to consider on M. On the other hand, the metric provided by our theorem need not be hyperbolic. For example, by (27) we have that the Ricci scalar curvature R and the potential function U carry opposite signs. Given a mechanics on M, this determines the sign of U (modulo additive constants), hence also the sign of R, which need not be the constant negative sign corresponding to a hyperbolic Riemann surface as explained above. However, there is no contradiction. It suffices to realise that the metric induced by the mechanics considered need not (and in general will not) coincide with the hyperbolic metric induced on D/Γ by D.
Our theorem may be regarded as providing a mechanical system that is naturally associated with a given metric. Although we have considered the classical mechanics associated with a given conformal factor, one can immediately construct the corresponding quantum mechanics, by means of the Schroedinger equation for the potential U . In fact the spectral problem for time-independent Schroedinger operators with the Ricci scalar as a potential function has been analysed in ref. [14]. We can therefore restate our result as follows: we have established a 1-to-1 relation between conformally flat metrics on configuration space, and quantum-mechanical systems on that same space. That the Ricci flow plays a key role in the quantum theory has been shown in refs. [3,4].
Moreover, the Perelman functional (16) also arises in the Brans-Dicke theory of gravitation, in models of conformal gravity (for a review see, e.g., ref. [5]), and in the semiclassical quantisation of the bosonic string [6]. Further applications have been worked out in refs. [10,11] in connection with emergent quantum mechanics [7,8].
After finishing this paper we became aware of ref. [12], where issues partially overlapping with ours are dealt with.
Altogether we see that the Ricci flow and the Perelman functional have important links to classical and quantum physics. Our conclusions here reaffirm the importance of these links.
Theorem. Let M be a compact Riemann surface without boundary, and regard M as the configuration space of a classical mechanical system, with a potential U proportional to the Ricci scalar curvature of M. Then there exists a 1-to-1 relation between conformal metrics on M, and classical mechanical models on the same space. Specifically the time-independent mechanical action S (Hamilton's principal function) equals the conformal factor f , while the potential function U equals minus two times the Ricci curvature of M.
² This time reversal is imposed on us by the time-flow eqn. (21), with respect to which time is reversed in the mechanical model. This is just a rewording of (part of) section 6.4 of ref. [14], where a corresponding heat flow is run backwards in time.
[1] V. Arnold, Mathematical Methods of Classical Mechanics, Springer, Berlin (1991).
[2] O. Bühler, A Brief Introduction to Classical, Statistical and Quantum Mechanics, Courant Lecture Notes in Mathematics 13, American Mathematical Society, Providence (2007).
[3] R. Carroll, Some remarks on Ricci flow and the quantum potential, arXiv:math-ph/0703065.
[4] R. Carroll, Ricci flow and quantum theory, arXiv:0710.4351 [math-ph].
[5] R. Carroll, Remarks on Weyl geometry and quantum mechanics, arXiv:0705.3921 [gr-qc].
[6] E. D'Hoker, String Theory, in Quantum Fields and Strings: A Course for Mathematicians, vol. 2, American Mathematical Society, Providence (1999).
[7] H.-T. Elze, Note on the existence theorem in "Emergent quantum mechanics and emergent symmetries", arXiv:0710.2765 [quant-ph].
[8] H.-T. Elze, The attractor and the quantum states, arXiv:0806.3408 [quant-ph].
[9] H. Farkas and I. Kra, Riemann Surfaces, Springer, Berlin (1980).
[10] J.M. Isidro, J.L.G. Santander and P. Fernández de Córdoba, Ricci flow, quantum mechanics and gravity, arXiv:0808.2351 [hep-th].
[11] J.M. Isidro, J.L.G. Santander and P. Fernández de Córdoba, A note on the quantum-mechanical Ricci flow, arXiv:0808.2717 [hep-th].
[12] B. Koch, Geometrical interpretation of the quantum Klein-Gordon equation, arXiv:0801.4635 [quant-ph].
[13] G. Perelman, The entropy formula for the Ricci flow and its geometric applications, arXiv:math/0211159 [math.DG].
[14] P. Topping, Lectures on the Ricci Flow, London Mathematical Society Lecture Notes Series 325, Cambridge University Press (2006).
|
[] |
[] |
[] |
[] |
[] |
Assuming a spherical symmetry, the extreme UV emitted by a very hot source ionizes low pressure molecular hydrogen making a transparent bubble of H II (Protons and electrons). For an increase of radius, intensity of extreme UV and temperature decrease, so that the plasma contains more and more atoms. A spherical shell, mainly of neutral atoms (H I) appears. If this shell is optically thick at Lyman frequencies of H I, it is superradiant and a competition of modes selects modes tangent to a sphere for which many atoms are excited. Thus, a shell of plasma emits, into a given direction, tangential rays showing a ring in which selected modes are brighter. While at Lyman frequencies, absorption of rays emitted by the source excites the atoms able to amplify the superradiance, a more powerful amplification of superradiance results from an induced scattering of the radial beams, which extends to feet of lines and progressively to the whole spectrum. Thermodynamics says that the brightness of radial and tangential beams tends to be equal; if the solid angle of observation is much larger for the ring than for the source, almost the whole light emitted by the source is transferred to the rings, and the source becomes invisible. Paradoxically, a glow due to incoherent scattering and impurities around the source remains visible. As the scattering decreases with the decrease of the radial intensity, the brightness of the ring decreases with radius. These characteristics are found in supernova remnant 1987A.Pacs 42.65.Es, 42.50.Md, 95.30.Jx
| null |
[
"https://arxiv.org/pdf/0707.3011v1.pdf"
] | 14,837,780 |
0707.3011
|
78c51ed02e3c2cba36378743c33ed430c5583dbb
|
Introduction
The aim of this paper is a theoretical, qualitative study of the interaction of light emitted by an extremely hot source, with a surrounding, low density, homogeneous cloud of initially cold hydrogen. As such systems may exist around stars, this model could be used in astrophysics.
A spherical symmetry is assumed around the centre of the source O. The density of the gas is supposed low, so that we need not take an index of refraction into account. Section 2 recalls well-known optics to set the notations.
Section 3 describes a superradiant emission of a spherical shell of excited neutral atomic hydrogen (H I ).
Section 4 describes a coherent scattering of bright light by this shell.
Section 5 shows similarities between this theoretical system and an observation of a supernova remnant.
Notations.
Let a and b be two states of identical molecules of a gas, and E_a < E_b the corresponding energies. Let N_a and N_b be the populations of these molecules (number of molecules in a unit volume); at a temperature of equilibrium T_{a,b}, N_a/N_b = exp[(E_b − E_a)/kT_{a,b}]. If the molecules are in a blackbody at temperature T_n, then T_{a,b} = T_n. In the blackbody, Planck's law [1, 2] correlates the amplitude of the electromagnetic field and the spectral brightness of a monochromatic beam with the temperature T_n and the wavelength λ = c/ν = hc/(E_b − E_a). Out of a blackbody, the relation remains true provided that T_n is replaced by T_{a,b}, defining the temperature of a monochromatic beam from its spectral brightness and its wavelength.
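The brightness-temperature definition above can be made concrete with Planck's law, B(λ, T) = (2hc²/λ⁵)/(exp(hc/λkT) − 1), inverted for T. The sketch below (illustrative values, not from the paper) assigns a temperature to a beam of given spectral brightness and checks the round trip at the Lyman-α wavelength:

```python
import math

# Hedged sketch of the "brightness temperature" used in the text:
# Planck's law is inverted to assign a temperature to a monochromatic
# beam of given spectral brightness.  SI constants; values illustrative.

h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(lam, T):
    # spectral radiance B(λ, T) = (2hc²/λ⁵) / (exp(hc/λkT) - 1)
    return (2.0 * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))

def brightness_temperature(lam, B):
    # inversion of Planck's law for T
    return h * c / (lam * k * math.log1p(2.0 * h * c**2 / (lam**5 * B)))

lam = 121.6e-9                 # Lyman-alpha wavelength
T_in = 5.0e4                   # of the order quoted in Sec. 5
B = planck(lam, T_in)
T_out = brightness_temperature(lam, B)
print(T_out)                   # recovers T_in
```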
Suppose that two holes are drilled in the blackbody, small enough for a negligible perturbation of the blackbody, but defining, at a wavelength λ, a Clausius invariant at least equal to λ², so that the diffraction of a beam propagating through the holes may be neglected. The coherent amplification coefficient of a beam propagating through the holes is larger than 1 (true amplification) if its initial temperature is lower than T_n, else lower than 1 (absorption).

If the output brightness of the beam does not depend on the path, this brightness corresponds to temperature T_n, and the gas is said to be optically thick.
Out of a blackbody, define a column density as the path integral of the density of a type of atoms, and a spectral column density as the path integral of N_b − N_a; following Einstein [3], the amplification of light is an exponential function of the spectral column density. In weak homogeneous sources, the total field remains close to the zero-point field, so that the increase of field is nearly proportional to the column density, that is, to the path. More precisely, assuming a constant amplification coefficient, the total field is an exponential function of the path, and the lines become sharper because their centres are more amplified than their feet; this is superradiance. In a strong, optically thick source, the lines saturate, tending to temperature T_{a,b} for any input temperature.
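The line narrowing produced by coherent exponential amplification can be illustrated with a toy model (not from the paper): amplify a Gaussian gain profile g(u) by exp(G g(u)) and measure the full width at half maximum of the output; for large peak gain G (proportional to the spectral column density) the width shrinks roughly as 1/√G.

```python
import math

# Toy illustration of superradiant line narrowing: the output
# intensity profile is exp(G * exp(-u²/2)), u in units of the gain
# linewidth, so the line centre grows much faster than the feet.

def fwhm_of_amplified_line(G):
    peak = math.exp(G)
    target = peak / 2.0
    lo, hi = 0.0, 10.0
    for _ in range(200):          # bisection for the half-maximum point
        mid = 0.5 * (lo + hi)
        if math.exp(G * math.exp(-mid * mid / 2.0)) > target:
            lo = mid
        else:
            hi = mid
    return 2.0 * lo               # full width at half maximum

w_small = fwhm_of_amplified_line(2.0)
w_large = fwhm_of_amplified_line(50.0)
print(w_small, w_large)           # strongly amplified line is much sharper
```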
3 Superradiance in a spherical shell of atomic hydrogen.
Absorption of the extreme UV (beyond the Lyman region) emitted by the source ionizes the gas into a plasma of protons and electrons, transparent to light. We suppose that some energy is lost by radiation of lines by impurities or during collisions, so that the transparency is not perfect, the ionization is limited, and the ionized atoms make a nearly transparent spherical bubble. For an increase of the radius R, the intensity, therefore the absorption, of extreme UV decreases, and the temperature decreases too, so that the proportion of neutral hydrogen atoms increases, becoming noticeable around 50 000 K. Supposing that the decrease of temperature continues, around 10 000 K the atoms start to combine into molecules. Thus the density of atoms in an excited state is maximal on (at least) a sphere Σ. At the Lyman line frequencies, the interactions of light with atomic hydrogen are strong, the oscillator strength being, for instance, 0.8324 for the α line. Study the spherical shell of excited atoms, supposing that it is optically very thick at the main Lyman frequencies, at least for beams crossing large spectral column densities in the shell.
A spontaneous emission into a virtual beam of Clausius invariant λ² is, in the usual model, made of pulses which are elementary modes; these modes are coherently amplified up to temperature T_{a,b}; as photoexcitations are large and collisions rare, some T_{a,b} may be much larger than the temperature of the gas, and some populations may be inverted; the amplification depletes the population of the excited state E_b (the 2P state for Ly α). Supposing that the gas has a low density, the amplification coefficient is low; during a pulse, the space- and time-coherent increases of amplitude add along an assumed long path, while, without coherence, it is the intensities of the spontaneous emissions which would add; thus, the temperature of space-coherent, superradiant beams tends quickly to the temperature of the gas. The beams emitted nearly tangentially to Σ depopulate the outer regions, forbidding the start of different emissions.
Evidently, the state of the atoms, and therefore the radius of Σ, depend on all light-matter interactions.

In a given direction, the superradiant beams are inside a hollow cylinder whose base is a ring centred on the source. The observed modes are defined by the pupil of the observer (in astrophysics, the mirror of the telescope) and a just-resolved region of the ring; an angular competition of modes making columns of light makes some regions appear brighter.
4 Stimulated scattering of radial beams by the superradiant beams.
In the ring, neutral atoms de-excited by superradiance may be re-excited by absorption of the Lyman lines radiated by the source, then de-excited again by superradiance; this process is weaker than a resonant stimulated scattering. A quasi-resonant stimulated scattering of the radial light emitted by the source may be induced by the feet of the superradiant lines, starting a permanent increase of the linewidth of the stimulated lines. Finally, the whole spectrum may be transferred from the radial beams to the tangential beams. As the radial beams are progressively absorbed, the brightnesses of the tangential beams decrease from the inner rim of the observed ring.

The brightnesses of the tangential beams tend to be equal to the brightnesses of the radial beams; if the solid angle of observation of a ring is much larger than the solid angle of observation of the source, the intensity received by an observer from the ring tends to be much larger than the intensity received from the source, which becomes invisible. On the contrary, if excited impurities and incoherent scattering emit a glow around the source, as the brightness of this glow is low, it is not absorbed, but slightly amplified in the shell.
The transfer of energy from radial to tangential beams starts at the radius R of Σ, where a fraction of the hydrogen remains ionized. The depopulation of the excited states, followed by collisions, cools the gas, so that the proportion of neutral atoms increases more quickly than for r < R (figure 1). This increase strengthens the superradiance and the scattering, which in turn increases the de-ionization, and so on. This feedback process may become catastrophic, abruptly transforming all ions into neutral atoms, scattering a large fraction of the radial intensity, and thus radiating very bright spots.
As the excited levels are strongly depopulated, it was sufficient to consider the strongest Lyman transitions. The emitted rays excite various mono- or polyatomic molecules, generating long columns of excited molecules. These molecules radiate various superradiant lines in the direction of the columns, so that these lines seem to have the same origin as the Lyman lines and the continuum; however, as the gas is colder than in the bubble, the lines are sharper.
5 Application to astrophysics.
The star of supernova remnant 1987A disappeared when an "equatorial ring" centered on it appeared [4] ; the density of neutral hydrogen increases mainly for a radius larger than 3/4 of the radius of the inner rim of the ring [5,6] and reaches a temperature of 50 000 K.
With densities of the order of 10 10 m −3 and paths of the order of 0.01 light-year, that is column densities of the order of 10 24 m −2 , this neutral atomic hydrogen is optically thick at Lyman frequencies [7].
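The quoted order of magnitude is easy to verify (a simple arithmetic check, not from the paper):

```python
# Order-of-magnitude check: a density of ~1e10 m^-3 over a path of
# ~0.01 light-year gives a column density of ~1e24 m^-2.

light_year_m = 9.4607e15        # metres in one light-year
density = 1.0e10                # m^-3
path = 0.01 * light_year_m      # m
column_density = density * path
print(column_density)           # ~1e24 m^-2
```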
The previous theoretical result may be a starting point for a mainly optical model of this remnant. Compare it, where applicable, to the standard model.
Main improvements:
The standard model cannot explain how the star can disappear, while the glow which surrounds it remains visible [7,8,5].
The brightness increases abruptly at the inner rim of the ring; in some places and time it starts so strongly that the name "hot spot" [4] is used; then the decrease of brightness is more regular, so that the outer rim of the ring is not well defined. Angular variations of brightness, generating the "pearls of the necklace" [4], look like the modes observed in a conical, coherent emission of light.
Remaining problems:
The necklace is not circular but elliptical, with irregularities: this may be explained by a larger density of the gas in the direction of an axis z′, and by irregularities of the density [9].

There are two other, larger, outer rings. We may suppose that they result from the formation of two shells of H I around two neutron stars ejected along the z′ axis during a previous explosion of the star; an accretion of the gas could emit mainly extreme UV able to build the shells, but light from the main star would be necessary to illuminate them.
Conclusion.
A theoretical model of interaction of light emitted by a very hot object with initially cold hydrogen, gives a repartition of light very similar to the observed images of supernova remnant 1987A.
The standard model is more complex, and its results are incomplete or less precise. Its computation of emission and scattering by incoherent interactions, added in a Monte-Carlo process close to Wolf's process [11], does not work well [12, 6] because coherence must be taken into account.

Taking into account the stimulation of emissions, energized both by de-excitations and by scatterings, the origin of many rings observed in astrophysics could be explained by coherent optics and thermodynamics applied to simple models.
Figure 1: Variation of the relative densities of H I, H II and excited atomic hydrogen H I*, relative intensities of light, and temperature along a radius of the shell of H I.
Planck M., "Eine neue Strahlungshypothese", Verh. Deutsch. Phys. Ges., 13, 138 (1911).
Nernst W., "Über einen Versuch, von quantentheoretischen Betrachtungen zur Annahme stetiger Energieänderungen zurückzukehren", Verh. Deutsch. Phys. Ges., 18, 83 (1916).
Einstein A., "Zur Quantentheorie der Strahlung", Phys. Zs., 18, 121 (1917).
Lawrence S., Sugerman B. E., Bouchet P. et al., "On the emergence and discovery of hot spots in SNR 1987A", Astrophys. J., 527, L123 (2000).
Lundqvist P., "Flash ionization of the partially ionized wind of the progenitor of SN 1987A", Astrophys. J., 511, 389 (1999).
Heng K., McCray R., Zhekov S. A. et al., "Evolution of the reverse shock emission from SNR 1987A", arXiv:astro-ph/0603151 (2006).
Graves G. J. M., Challis P. M., Chevalier R. A. et al., "Limits from the Hubble space telescope on a point source in SN 1987A", arXiv:astro-ph/0505066 (2005).
Sonneborn G., Pun C. S. J., Kimble R. A. et al., "Spatially resolved STIS spectroscopy of SN 1987A: Evidence for shock interaction with circumstellar gas", Astrophys. J., 492, L139 (1998).
Wang L., Wheeler J. C., Höflich P., "The Axially Symmetric Ejecta of Supernova 1987A", arXiv:astro-ph/0205337 (2002).
Sugerman B. E. K., Crotts A. P. S., Kunkel W. E., Heathcote S. R., Lawrence S. S., "A New View of the Circumstellar Environment of SN 1987A", arXiv:astro-ph/0502268 & 0502378 (2005).
Wolf E., "Non-cosmological redshifts of spectral lines", Nature, 326, 363 (1987).
Michael E., McCray R., Chevalier R. et al., "Hubble Space Telescope Observations of High-Velocity Lyα and Hα Emission from Supernova Remnant 1987A: The Structure and Development of the Reverse Shock", Astrophys. J., 593, 809 (2003).
|
[] |
[
"Ensemble in-equivalence in supernova matter within a simple model",
"Ensemble in-equivalence in supernova matter within a simple model"
] |
[
"F Gulminelli \nUMR6534\nCNRS\nF-14050Caen cédexLPCFrance\n\nUMR6534\nENSICAEN\nF-14050Caen cédexLPCFrance\n",
"Ad R Raduta \nNIPNE\nPOB-MG6Bucharest-MagureleRomania\n"
] |
[
"UMR6534\nCNRS\nF-14050Caen cédexLPCFrance",
"UMR6534\nENSICAEN\nF-14050Caen cédexLPCFrance",
"NIPNE\nPOB-MG6Bucharest-MagureleRomania"
] |
[] |
A simple, exactly solvable statistical model is presented for the description of baryonic matter in the thermodynamic conditions associated with the evolution of core-collapsing supernova. It is shown that the model presents a first order phase transition in the grandcanonical ensemble which is not observed in the canonical ensemble. Similar to other model systems studied in condensed matter physics, this ensemble in-equivalence is accompanied by negative susceptibility and discontinuities in the intensive observables conjugated to the order parameter. This peculiar behavior originates from the fact that baryonic matter is subject to attractive short range strong forces as well as repulsive long range electromagnetic interactions, partially screened by a background of electrons. As such, it is expected in any theoretical treatment of nuclear matter in the stellar environment. Consequences for the phenomenology of supernova dynamics are drawn.
|
10.1103/physrevc.85.025803
|
[
"https://arxiv.org/pdf/1110.2034v1.pdf"
] | 119,197,983 |
1110.2034
|
55d303e9261c04a1edeefe72957a696efc9d87ef
|
Ensemble in-equivalence in supernova matter within a simple model
10 Oct 2011 February 3, 2013
F Gulminelli
UMR6534
CNRS
F-14050Caen cédexLPCFrance
UMR6534
ENSICAEN
F-14050Caen cédexLPCFrance
Ad R Raduta
NIPNE
POB-MG6Bucharest-MagureleRomania
Ensemble in-equivalence in supernova matter within a simple model
10 Oct 2011 February 3, 2013 PACS numbers: 64.10.+h, 64.60.-i, 26.50.+x, 26.60.-c
A simple, exactly solvable statistical model is presented for the description of baryonic matter in the thermodynamic conditions associated with the evolution of core-collapsing supernova. It is shown that the model presents a first order phase transition in the grandcanonical ensemble which is not observed in the canonical ensemble. Similar to other model systems studied in condensed matter physics, this ensemble in-equivalence is accompanied by negative susceptibility and discontinuities in the intensive observables conjugated to the order parameter. This peculiar behavior originates from the fact that baryonic matter is subject to attractive short range strong forces as well as repulsive long range electromagnetic interactions, partially screened by a background of electrons. As such, it is expected in any theoretical treatment of nuclear matter in the stellar environment. Consequences for the phenomenology of supernova dynamics are drawn.
I. INTRODUCTION
Standard thermodynamics is based on the assumption that the physical properties of a system at equilibrium do not depend on the statistical ensemble which is used to describe it. Under this condition thermodynamics is unique and the different thermodynamic potentials are related via simple linear Legendre transforms. It is however well known that the equivalence between the different statistical ensembles can only be proved [1] at the thermodynamic limit and under the hypothesis of short range interactions, while non-standard thermostatistical tools have been developed over the years to deal with non-extensive and long-range interacting systems [2,3].
The issue of ensemble in-equivalence, namely the possible dependence of the observed physics on the externally applied constraints, has typically been associated with phase transitions, more precisely with phase separation quenching due to the external constraint. A well-known example in the literature concerns the possible occurrence of negative heat capacity in finite systems, which has been widely studied theoretically [4] and has also given rise to different experimental applications in nuclear and cluster physics [5]. In this specific example the phase separation is quenched by the microcanonical conservation constraints, leading to the thermodynamic anomaly of a non-monotonous equation of state.
Concerning macroscopic systems, different model applications have shown fingerprints of ensemble in-equivalence [3, 6-10], but phenomenological applications are scarce. In this paper we show that the dense matter which is produced in the explosion of core-collapse supernovae and in neutron stars is an example of a physical system which displays this in-equivalence.
We will limit our discussion to finite temperatures T ≈ 10^10 K and nuclear sub-saturation densities 10^10 < ρ < 10^14 g cm^-3, thermodynamic conditions which are known to be largely explored in the dynamics of supernova matter and in the cooling phase of proto-neutron stars [11,12]. The baryonic component of this stellar matter is given by a statistical equilibrium of neutrons and protons, the electric charge of the latter being screened by a homogeneous electron background.
If the electromagnetic interactions are ignored, this gives the standard model of nuclear matter, which is known to exhibit first and second order phase transitions with the baryonic density as an order parameter, meaning that the transition concerns a separation between a dense (ordered) and a diluted (disordered) phase [13]. It has however been known for decades to the astrophysical community that the situation is drastically different in stellar matter, where microscopic dishomogeneities are predicted at almost all values of temperature, density and proton fraction, and thermodynamical quantities change continuously at the phase transition [14,15]. This specific situation of stellar matter with respect to ordinary nuclear matter has been shown to be due to Coulomb frustration, which quenches the first order phase transition [16]. However, the thermodynamic consequences of this specific thermodynamics with long range interactions have, to our knowledge, never been addressed.
In this work we will show that these dishomogeneities imply ensemble in-equivalence, making neutron star matter, to our knowledge, the first astrophysical example of ensemble in-equivalence at the thermodynamic limit. Additionally, we will show that a consistent treatment of this specific thermodynamics can have sizeable effects on the equations of state which are currently used to describe the supernova phenomenology.
In a recent paper [17] we proposed a phenomenological hybrid model for supernova matter which was solved numerically by Monte-Carlo simulations. Since convergence is always an issue in Monte-Carlo calculations, we propose in this paper an analytic version of the same model. In order to obtain analytical results, we will restrict ourselves to the simplest version of the model, in which a schematic nuclear energy functional is used. It is clear that more sophisticated energy functionals will have to be implemented in order to have quantitative predictions for supernova simulations.
However, the qualitative conclusions of this paper depend only on the sign (attractive or repulsive) of the interactions and on the size dependence (in terms of volume and surface) of the nuclear binding energy. As such, they do not depend on the details of the model.
II. THE MODEL
Stellar matter at temperatures lower than the typical nuclear binding energy E_b ≈ 8 MeV/nucleon ≈ 10^11 K, and densities ρ lower than the saturation density of nuclear matter ρ_0 ≈ 0.16 fm^-3 ≈ 1.6 · 10^14 g cm^-3, can be viewed as a statistical mixture of free protons and neutrons with loosely interacting nuclear clusters at internal density ρ_0, immersed in a homogeneous electron background of density ρ_e which neutralizes the total positive charge density ρ_p over macroscopic length scales, ρ_e = ρ_p. Nucleons bound in clusters can be described by a phenomenological free energy functional depending on the cluster size a = n + z and chemical composition i = n − z, as well as on the temperature of the medium:
$$f^{\beta}_{a,i} = e_{a,i} + \langle e^{*}_{a,i} \rangle_{\beta} - T\, s^{\beta}_{a,i} \tag{1}$$
For nuclear (fermionic) clusters, both the average cluster excitation energy $\langle e^{*}_{a,i} \rangle_{\beta}$ and the entropy $s^{\beta}_{a,i}$ can be evaluated in the low temperature Fermi gas approximation:
$$\langle e^{*}_{a,i} \rangle_{\beta} = c_0\, a\, T^2; \tag{2}$$
$$s^{\beta}_{a,i} = \left( 2 c_0 T + c_S\, a^{-1/3} h(T) \right) a, \tag{3}$$
The surface term $c_S a^{2/3} h(T)$ in Eq. (3) effectively accounts for the entropy increase at finite temperature due to surface excitations, producing a vanishing surface free energy at a given temperature corresponding to the critical point $\beta_C = T_C^{-1}$ of nuclear matter. The cluster energy $e_{a,i}$ is modified with respect to the energy of the nucleus in the vacuum $e^{0}_{a,i}$ because of the electromagnetic interaction with the electron background which neutralizes the proton charge. In the Wigner-Seitz approximation the functional reads [18]
$$e_{a,i} = e^{0}_{a,i} - c_C\, z^2 a^{-1/3} \left[ \frac{3}{2} \left( \frac{\rho_p}{\rho_{0p}} \right)^{1/3} - \frac{1}{2}\, \frac{\rho_p}{\rho_{0p}} \right] \tag{4}$$
where ρ_p = ρ_e is the proton density and ρ_0p ≥ ρ_p the corresponding saturation value. The minimal scale at which charge neutrality is verified is called a Wigner-Seitz cell. We use in the following a simple liquid-drop parameterization for the cluster energies $e^{0}_{a,i}$ [17]:
$$e^{0}_{a,i} = \left( -c_V\, a + c_S\, a^{2/3} \right) \left( 1 - c_I \frac{i^2}{a^2} \right) + c_C\, z^2 a^{-1/3}. \tag{5}$$
but shell and pairing corrections can be readily incorporated [19]. Density dependent correction terms accounting for the nuclear interaction with the free nucleons [20][21][22][23] are also expected to improve the predictive power of the model, as well as a more sophisticated form for the cluster internal entropy using realistic a- and i-dependent densities of states [17]. Since none of these improvements is expected to change the qualitative results of this paper, we do not include them here, in order to keep an analytically solvable model. As a first approximation, one can consider that the system of interacting nucleons is equivalent to a system of non-interacting clusters, the nuclear interaction being completely exhausted by clusterization [24]. This classical model of clusterized nuclear matter is known in the literature as nuclear statistical equilibrium (NSE) [25][26][27][28]. This simple model can only describe diluted matter at ρ ≪ ρ_0, as found in the outer crust of neutron stars; nuclear interaction among nucleons and clusters has to be included for applications at higher density, when the average inter-particle distance becomes comparable to the range of the force.
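As a concrete illustration of how Eqs. (4)-(5) combine, the short Python sketch below evaluates the in-medium cluster energy. The coefficient values are generic liquid-drop numbers assumed purely for illustration (the paper does not quote its parameter set in this section), and the function names are ours:

```python
import math

# Assumed, illustrative liquid-drop coefficients (MeV); not the paper's fit.
C_V, C_S, C_I, C_C = 15.8, 18.3, 1.8, 0.72

def e0(a, i):
    """Vacuum liquid-drop energy e0_{a,i}, Eq. (5); a = n+z, i = n-z."""
    z = (a - i) / 2.0
    return (-C_V * a + C_S * a**(2.0/3.0)) * (1.0 - C_I * (i / a)**2) \
           + C_C * z**2 * a**(-1.0/3.0)

def e_medium(a, i, rho_p, rho_0p):
    """In-medium energy, Eq. (4): Wigner-Seitz screening of the Coulomb term."""
    z = (a - i) / 2.0
    u = rho_p / rho_0p
    screening = 1.5 * u**(1.0/3.0) - 0.5 * u
    return e0(a, i) - C_C * z**2 * a**(-1.0/3.0) * screening
```

At rho_p = rho_0p the screening factor equals one, so the correction exactly cancels the Coulomb self-energy of Eq. (5), which is the mechanism behind Eq. (8): the energy per nucleon tends to that of homogeneous neutral matter.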
In our model, interactions among composite clusters are taken into account in the simplified form of a hard sphere excluded volume. Since the nuclear density is (approximately) constant inside the clusters, the volume occupied by each species (a, i) is given by $V_{a,i} = a\, n_{a,i} / \rho_0$, where n_{a,i} is the abundance of the species (a, i). The volume fraction available to the clusters then reads:
$$\frac{V_F}{V} = 1 - \sum_{a>1,\,i} \frac{a\, n_{a,i}}{\rho_0 V} = 1 - \frac{\rho_{cl}}{\rho_0}, \tag{6}$$
where ρ_cl is the total density of nucleons bound in clusters. A slightly different expression was used in Refs. [17,19], which however does not change the results presented in this paper. In addition to the excluded volume effect for the clusters, the inter-particle nuclear interaction is also considered for the nucleons not bound in clusters. The self-energy of the free nucleons is computed in the self-consistent Hartree-Fock approximation with a phenomenological realistic effective interaction [29]. The energy density can be expressed as a function of the densities ρ_n and ρ_p of the neutrons and protons which are not bound in clusters as:
$$\begin{aligned}
\epsilon^{(HM)} = {}& \frac{\hbar^2}{2m_0}(\tau_n + \tau_p) + t_0 (x_0 + 2)(\rho_n + \rho_p)^2/4 - t_0 (2x_0 + 1)(\rho_n^2 + \rho_p^2)/4 \\
&+ t_3 (x_3 + 2)(\rho_n + \rho_p)^{\sigma+2}/24 - t_3 (2x_3 + 1)(\rho_n + \rho_p)^{\sigma} (\rho_n^2 + \rho_p^2)/24 \\
&+ \left[ t_1 (x_1 + 2) + t_2 (x_2 + 2) \right] (\rho_n + \rho_p)(\tau_n + \tau_p)/8 \\
&+ \left[ t_2 (2x_2 + 1) - t_1 (2x_1 + 1) \right] (\rho_n \tau_n + \rho_p \tau_p)/8,
\end{aligned} \tag{7}$$
where m_0 is the nucleon mass, t_0, t_1, t_2, t_3, x_0, x_1, x_2, x_3, σ are Skyrme parameters, and τ_n, τ_p represent the neutron and proton kinetic energy densities. We use the SkM* [29] parameterization for the numerical applications.
At density ρ ≥ ρ 0 the whole baryonic matter is expected to be homogeneous and described by eq.(7). The transition from inhomogeneous to homogeneous matter is physically realized in stellar matter at the interface between the crust and the core of a neutron star. As we can see from eq.(4), in our model at proton densities ρ p = ρ 0p the Wigner-Seitz correction exactly compensates the Coulomb self-energy of the cluster and the total Coulomb energy vanishes, reflecting matter homogeneity at supersaturation densities. In this regime the asymptotic cluster energy represents the energy density of homogeneous neutral nuclear matter
$$\lim_{a \to \infty} \frac{1}{a}\, e_{a,i}(\rho_p = \rho_{0p}) = \frac{1}{\rho}\, \epsilon^{HM}(\rho_n, \rho_{0p}), \tag{8}$$
where $\rho = \lim_{A,V \to \infty} A/V = \rho_n + \rho_p$, $\rho_I = \lim_{I,V \to \infty} I/V = \rho_n - \rho_p$, and A, I are the total particle number and chemical asymmetry. The crust-core transition can thus be seen equivalently as the melting of clusters inside dense homogeneous matter, or as the emergence of a percolating cluster of infinite size. Charge neutrality is imposed globally, but local charge dishomogeneities at the scale of the Wigner-Seitz cell naturally appear in the thermodynamic conditions where matter is clusterized. In turn, this gives rise to a long range monopole component of the Coulomb potential which extends over domains of the order of the cluster size, and which can potentially become macroscopic in the limit of very extended clusters close to the crust-core transition. As we will show in detail, these long range Coulomb correlations are at the origin of the specific thermodynamics.
Grandcanonical formulation
Considering that the center of mass of composite fragments can be treated as a classical degree of freedom, their grandcanonical partition sum reads [17,19]
$$Z^{a>1}_{\beta,\mu,\mu_I} = \prod_{a>1,\; i \in (-a,a)} \exp z_{a,i} \tag{9}$$
The partition sum associated to a cluster composed of n neutrons and z protons in a volume V is given by
$$z_{a,i} = V_F \left( \frac{2\pi a m_0}{\beta h^2} \right)^{3/2} \exp\left[ -\beta \left( f^{\beta}_{a,i} - \mu_{a,i} \right) \right] \tag{10}$$
Here, V F is the free volume associated to the cluster center of mass given by eq.(6), and the cluster chemical potential is a linear combination of the isoscalar and isovector chemical potentials µ, µ I which have to be introduced in the presence of two conserved charges
$$\mu_{a,i} = \mu\, a + \mu_I\, i. \tag{11}$$
Fermi statistics cannot be neglected when treating a = 1 fragments (protons and neutrons). This component of the baryonic partition sum is thus included in the finite temperature Hartree-Fock approximation [17]
$$Z^{a=1}_{\beta,\mu,\mu_I} = Z^{0}_{\beta,\mu,\mu_I} \exp\left[ -\beta \left( \frac{\partial}{\partial \beta} \ln Z^{0}_{\beta,\mu,\mu_I} + V \epsilon^{HM} \right) \right] \tag{12}$$
where the non-interacting part of the partition sum can be expressed as a functional of the kinetic energy density τ q for neutrons (q = n) or protons (q = p) :
$$\ln Z^{0}_{\beta,\mu,\mu_I} = \frac{2\beta V}{3} \sum_{q=n,p} \tau_q \frac{\partial \epsilon^{HM}}{\partial \tau_q} \tag{13}$$
with:
$$\tau_q = \frac{8\pi}{h^3} \frac{1}{\hbar^2} \int_0^{\infty} \frac{p^4\, dp}{1 + e^{\beta(e_q - \mu_q)}}. \tag{14}$$
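A Fermi integral of this type has no closed form at finite temperature and is evaluated numerically. The sketch below does this by simple quadrature; for illustration it assumes a free single-particle spectrum e(p) = p²/2m₀ rather than the full self-consistent Skyrme spectrum of Eq. (15), and the constants and function names are ours (units: MeV and fm, with c = 1):

```python
import math

HBARC = 197.327   # hbar*c in MeV fm
M0 = 939.0        # nucleon mass in MeV (assumed value)

def tau_free(T, mu, n_steps=4000, p_max=2000.0):
    """Kinetic-energy density (8*pi/h^3)(1/hbar^2) * int p^4 f(p) dp, in fm^-5.

    Midpoint rule; the exponent is clamped to avoid overflow deep in the tail."""
    beta = 1.0 / T
    h = 2.0 * math.pi * HBARC          # Planck constant times c, MeV fm
    dp = p_max / n_steps
    s = 0.0
    for k in range(1, n_steps + 1):
        p = (k - 0.5) * dp
        e = p * p / (2.0 * M0)         # free spectrum, an assumption here
        s += p**4 / (1.0 + math.exp(min(beta * (e - mu), 700.0)))
    return 8.0 * math.pi / h**3 / HBARC**2 * s * dp
```

As T → 0 this reproduces the step-function result k_F⁵/(5π²) with k_F = √(2m₀µ)/ℏc, which provides a convenient numerical check.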
The number density of free protons and neutrons, $\rho^{f}_q = (1/V)\, \partial \ln Z^{a=1}_{\beta,\mu,\mu_I} / \partial(\beta\mu_q)$, determines the single-particle energy which enters the self-consistency equation (14) according to
$$e_q = p^2 \frac{\partial \epsilon^{HM}}{\partial \tau^{f}_q} + \frac{\partial \epsilon^{HM}}{\partial \rho^{f}_q}. \tag{15}$$
The computation of all thermodynamical variables is straightforward using the standard grandcanonical expressions for the global baryonic partition sum $Z = Z^{a>1} Z^{a=1}$. In particular, the total baryonic pressure is simply $\beta p = \ln Z_{\beta,\mu,\mu_I}/V$, and the multiplicity of the different clusters is given by
$$n_{a,i} = \frac{\partial \ln Z_{\beta,\mu,\mu_I}}{\partial (\beta\mu_{a,i})} = z_{a,i} \tag{16}$$
and the total baryonic density is the sum of the clusterized (a > 1) and the unbound (a = 1) component
$$\rho = \rho_p + \rho_n = \sum_{a,i} \frac{a\, n_{a,i}}{V} + \rho^{f} = \rho_{cl} + \rho^{f} \tag{17}$$
where $\rho^{f} = \rho^{f}_p + \rho^{f}_n$ is the total density associated with the nucleons which are not bound in clusters. Expression (4) includes the electron self-energy inside the Wigner-Seitz cell. This means that in the global stellar partition sum the remaining electron contribution factorizes, $Z^{tot} = Z^{bar}_{\beta,\mu,\mu_I} Z^{el}_{\beta,\mu_e}$, where $Z^{bar}_{\beta,\mu,\mu_I}$ is given by Eq. (9) and $Z^{el}_{\beta,\mu_e}$ is a trivial ideal Fermi gas contribution which has no influence on the thermodynamics and will not be discussed further [18].
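To make the chain Eq. (10) → Eq. (16) → Eq. (17) concrete, the sketch below evaluates cluster abundances and the clusterized density for a hypothetical, hand-picked set of cluster free energies; the names, toy values, and the restriction to i = 0 are our illustrative assumptions, not the paper's parameterization (units: MeV and fm):

```python
import math

HBARC = 197.327   # MeV fm
M0 = 939.0        # nucleon mass, MeV

def z_cluster(a, f_a, mu, T, V_F):
    """Eq. (10) restricted to i = 0, so that mu_{a,i} = mu * a (Eq. 11).

    (2*pi*a*m0/(beta*h^2))^{3/2} is the inverse cubed thermal wavelength."""
    lam_inv3 = (2.0 * math.pi * a * M0 * T) ** 1.5 / (2.0 * math.pi * HBARC) ** 3
    return V_F * lam_inv3 * math.exp(-(f_a - mu * a) / T)

def rho_cl(mu, T, free_energies, V, V_F):
    """Clusterized part of Eq. (17): sum of a * n_{a,i} / V, with n = z (Eq. 16)."""
    return sum(a * z_cluster(a, f_a, mu, T, V_F)
               for a, f_a in free_energies.items()) / V
```

Since each abundance grows as exp(βµa), rho_cl increases steeply with µ, which is the mechanism behind the divergence of the cluster multiplicities at the limiting chemical potential discussed below, Eq. (19).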
The functional relation between the different grandcanonical variables is represented in Fig. 1 for a representative given value of T and µ I , relevant for the astrophysical applications. The constrained entropy per baryon s β,µI = σ β,µI /ρ is evaluated from the Legendre transform of the partition sum in the region where the density is defined:
$$\sigma_{\beta,\mu_I}(\rho) = \ln Z_{\beta,\mu,\mu_I}/V - \beta\mu\rho, \tag{18}$$
and shown in panel (a) as a function of the volume per baryon, or inverse density, v = 1/ρ. The first derivative of this function is the baryonic pressure P, given in panel (b). The first derivative of the entropy density σ_{β,µ_I} with respect to the density gives the other equation of state, namely the chemical potential µ = P/ρ − s_{β,µ_I}/β, represented in panel (c). Finally, phase equilibrium is best spotted by looking at the phase diagram given by the relation between intensive variables, as shown in panel (d). The grandcanonical thermodynamics leads to a first order phase transition. This can be inferred from the characteristic two-humped structure of the constrained entropy, which is better evidenced by subtraction of a straight line, as well as from the crossing of the two equations of state (panel (d)). In the presence of a first order phase transition, the transition region is accessible only in the ensemble where the order parameter is fixed (here, the canonical ensemble), while it is jumped over in the ensemble where the order parameter is fixed only on average (here, the grandcanonical one). As a consequence, a discontinuity is observed in the equations of state, covering a huge range of baryonic densities relevant for the description of supernova dynamics.
The equilibrium solution in the phase transition region corresponds to a linear combination of the two pure phases following the Gibbs rules. This construction is exactly equivalent to a one-dimensional Maxwell construction if we work in an ensemble where all intensive parameters but one are fixed, and it is represented by the dashed lines in Fig. 1 [30,31]. We note in passing that this simplification demands working in the non-standard ensemble (T, ρ, µ_I).
In astrophysical applications it is customary to work instead with the parameters (T, ρ, y_p), where y_p = ρ_p/ρ is the proton fraction. Within this ensemble, a full two-dimensional Gibbs construction is needed to correctly calculate the coexistence zone. The use of a Maxwell construction in the ensemble (T, ρ, y_p) is incorrect, yet it is often employed in the literature [18,19].
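The one-dimensional Gibbs/Maxwell construction just discussed amounts to convexifying the constrained thermodynamic potential and reading off the coexistence borders as the endpoints of the widest linear segment. The generic sketch below does this on a toy double-well function (purely illustrative, not the paper's functional):

```python
def lower_convex_hull(points):
    """Lower convex envelope of (x, y) samples sorted in x (monotone chain)."""
    hull = []
    for p in points:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or above the chord hull[-2] -> p
            if (y2 - y1) * (p[0] - x1) >= (p[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# Toy double-well "free energy" with minima at 0.2 and 0.8 (illustrative):
f = lambda rho: (rho - 0.2)**2 * (rho - 0.8)**2
rhos = [k / 1000.0 for k in range(1001)]
hull = lower_convex_hull([(r, f(r)) for r in rhos])

# Coexistence region = widest gap between consecutive hull points:
gaps = [(b[0] - a[0], a[0], b[0]) for a, b in zip(hull, hull[1:])]
width, rho_low, rho_high = max(gaps)
```

Inside (rho_low, rho_high) the equilibrium potential is the linear combination of the two pure phases, and the conjugate intensive variable stays constant; this is precisely what the dashed lines in Fig. 1 represent.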
The construction of a convex entropy envelope allows one to recognize that inside the density region indicated by dotted lines the grandcanonical solutions obtained do not correspond to an equilibrium, since a higher entropy solution can be obtained by making a linear combination of the two (dense and diluted) solutions marked by a filled circle, that is, by a first order phase transition. The pressure and chemical potential associated with the metastable branch on the low density side are also represented in Fig. 1. From the pressure viewpoint (panel (b)) such solutions appear equivalent to a phase coexistence (dashed line), but this is not true in terms of chemical potential (panel (c)).
To our knowledge, the presence of metastable solutions was never discussed in the framework of NSE models [19,[25][26][27][28]. It can be understood from the fact that, for chemical potentials higher than the Fermi energy of dense uniform matter, µ ≥ e_F ≈ −16 MeV, the equilibrium condition can be obtained either as a mixture of clusters and homogeneously distributed nucleons, µ_{a>1} = µ_{a=1} = µ, or alternatively by letting the clusterized component vanish (µ_{a=1} = µ and ρ_cl = 0). This gives a second stationary entropy solution, which turns out to correspond to the absolute entropy maximum and renders the clusterized solution at lower density metastable.
The ending point of the metastable branch can be worked out easily. Indeed, from Eq. (10) we can see that, for any temperature β^-1 and isovector chemical potential µ_I, there exists a limiting value of the chemical potential µ_max for which the cluster multiplicities n_{a,i} asymptotically diverge. This value is determined by the condition of an asymptotically negative Gibbs free energy
$$\lim_{a \to \infty} \left( f^{\beta}_{a,i} - \mu a - \mu_I i \right) < 0 \tag{19}$$
combined with the requirement that the Coulomb energy vanishes in the limit of an infinitely extended homogeneous cluster:
$$\frac{1}{V} \sum_{a,i} \frac{a - i}{2}\, n_{a,i} = \rho_{0p} \tag{20}$$
For chemical potentials above µ_max, clusterized partitions become unstable due to the emergence of a liquid phase. Coming back to the discontinuity shown in the equations of state in Figure 1: in standard thermodynamics, the fact that the Gibbs construction is made a posteriori to fill up the coexistence region is not a limitation, because ensemble equivalence guarantees that the very same linear-combination solution would have been obtained if we had worked in the canonical ensemble, explicitly constraining the density.
Because of the complexity of the phenomenology of stellar matter we have not proposed a Hamiltonian treatment of the problem. However, the phenomenological free energy functional Eq. (5) we have used implicitly contains the effect of the attractive short range nuclear forces, scaling proportionally to the number of particles in the thermodynamic limit, and of the repulsive long range Coulomb forces, scaling proportionally to the square of the number of particles and only partially screened. None of the possible improvements of the nuclear energy functional would change these very general scaling behaviors. It has been argued in Ref. [16] that this generic frustration effect should lead to a quenching of the first order phase transition. We have shown that the phase transition is nevertheless observed in the grandcanonical ensemble. We therefore turn to exploring the possibility that ensemble in-equivalence might be at play in the stellar environment, with the thermodynamic anomalies associated with the thermodynamics of long range interactions [3].
Canonical formulation
A fully canonical formulation of our model would imply the use of two independent extensive variables, the proton and neutron density (ρ_p, ρ_n), or equivalently the isoscalar and isovector density (ρ, ρ_I). It is however well established [32,33] that, contrary to other physical systems like binary alloys and molecular mixtures where phase transitions can also imply separation of the species [34], the isovector density is not an order parameter of the nuclear matter phase transition. Since the in-equivalence effect we are looking for is associated with the phenomenon of phase coexistence, we thus expect to see it even if we keep a grandcanonical treatment for the isovector density.
The effect we are looking for is due to the long range Coulomb interaction, which vanishes in homogeneous matter. This Coulomb interaction is neglected in the computation of the abundances of a = 1 particles, which are modeled as homogeneously distributed. For this reason we will also stick to a grandcanonical formulation for a = 1 particles according to Eq. (12), and assume that the approximate Legendre transformation analogous to Eq. (18),
$$\ln Z^{a=1}_{\beta,\mu_I}(\rho) = \ln Z^{a=1}_{\beta,\mu,\mu_I} - \beta\mu\rho V, \tag{21}$$
which is exact in the mean-field approximation we have employed, is physically correct. For the same reason we will consider that the standard thermodynamic assumption of the total canonical entropy being additive among independent components is verified, as expected at the thermodynamic limit within ensemble equivalence:
$$\sigma^{can}_{\beta,\mu_I}(\rho) = \ln Z^{a=1}_{\beta,\mu_I}(\rho^{f}) + \lim_{V \to \infty} \frac{1}{V} \ln Z^{a>1}_{\beta,\mu_I}(V \rho_{cl}) \tag{22}$$
where the density repartition between the clustered ρ_cl and unbound ρ^f components, Eq. (17), is uniquely defined by the condition of having a single chemical potential µ for both components. Possible extra deviations from ensemble equivalence originating from the a = 1 contribution would need a more sophisticated model where the polarization of the free protons would be explicitly accounted for [19]. For simplicity, in the following we will refer to this hybrid (β, µ_I, ρ) ensemble as the "canonical" ensemble.
To derive the expression of the canonical partition sum, we start from the general statistical mechanics relation which links the different statistical ensembles, restricted to composite clusters a > 1 only (the subscript a > 1 is omitted hereafter for simplicity):
$$Z_{\beta,\mu,\mu_I} = \sum_{A>1} Z_{\beta,\mu_I}(A)\, \exp(\beta\mu A) \tag{23}$$
Identification with Eq. (9) gives,
$$Z_{\beta,\mu_I}(A) = \sum_{\{n_a\}} \prod_{a>1} \frac{\omega_a^{\,n_a}}{n_a!} \tag{24}$$
where $n_a = \sum_i n_{a,i}$ is the occupation number of size a and the sum is restricted to combinations $\{n_a\} \equiv \{n_2, \ldots, n_A\}$ satisfying the canonical constraint,
$$\sum_{a=2}^{A} a\, n_a = A. \tag{25}$$
The weight of each cluster size is given by
$$\omega_a = \sum_{z=0}^{a} \omega_{a,a-2z}\, \exp\left( \beta\mu_I (a - 2z) \right), \tag{26}$$
and the weight of each nuclear species is determined by the free energy functional we have assumed,
$$\omega_{a,i} = V_F \left( \frac{2\pi a m_0}{\beta h^2} \right)^{3/2} \exp\left( -\beta f^{\beta}_{a,i} \right). \tag{27}$$
Following Ref. [35] we introduce an auxiliary canonical partition sum $Z^{m}_{\beta,\mu_I}(A)$ defined by the additional constraint that the total cluster multiplicity is fixed to m, $\sum_a n_a = m$. With the help of relation (24) we get
$$Z^{m-1}_{\beta,\mu_I}(A-a) = \frac{\langle n_a \rangle_m}{\omega_a}\, Z^{m}_{\beta,\mu_I}(A), \tag{28}$$
where $\langle n_a \rangle_m$ is the average multiplicity of size a under the additional constraint of total multiplicity m. Explicitly implementing the canonical constraint Eq. (25), we arrive at the recursion relation
$$Z^{m}_{\beta,\mu_I}(A) = \frac{1}{A} \sum_{a=2}^{A} a\, \omega_a\, Z^{m-1}_{\beta,\mu_I}(A-a), \tag{29}$$
which is valid for any value of A. Summing over all possible m values, we can see that the same relation holds for the unconstrained canonical partition sum
$$Z_{\beta,\mu_I}(A) = \frac{1}{A} \sum_{a=2}^{A} a\, \omega_a\, Z_{\beta,\mu_I}(A-a). \tag{30}$$
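Equation (30) translates directly into a recursive computation. The minimal sketch below implements it with placeholder weights ω_a (in the model these come from Eqs. (26)-(27)); the function name is ours:

```python
def canonical_Z(A_max, omega):
    """Z(A) for A = 0..A_max from Eq. (30): Z(A) = (1/A) sum_a a*omega[a]*Z(A-a).

    omega is a dict {a: weight} for cluster sizes a >= 2.  Seeding Z(0) = 1
    (and Z(1) = 0, since a = 1 is excluded) closes the recursion and
    reproduces the initial condition Z(2) = omega[2]."""
    Z = [0.0] * (A_max + 1)
    Z[0] = 1.0
    for A in range(2, A_max + 1):
        Z[A] = sum(a * omega.get(a, 0.0) * Z[A - a]
                   for a in range(2, A + 1)) / A
    return Z
```

A useful check is that the recursion reproduces the explicit partition-sum counting of Eq. (24): for instance Z(4) must equal ω₂²/2! + ω₄, the two partitions of four nucleons into clusters of size ≥ 2.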
This expression can be computed recursively with the initial condition $Z_{\beta,\mu_I}(2) = \omega_2$. A computational problem however arises: approaching the thermodynamic limit, the evaluation of the double sum implied by Eq. (26) becomes numerically very heavy. Above a sufficiently large minimal size a_min we therefore develop a continuous approximation for Eq. (26),
$$\omega_{a>a_{min}} \approx \frac{1}{2} \int_{-a}^{a} dy\, \exp g(y), \tag{31}$$
This integral is calculated in the saddle point approximation:
$$g(i) = \ln \omega_{a,i} + \beta\mu_I i \approx \ln \omega_{a,\langle i \rangle} + \beta\mu_I \langle i \rangle - \frac{(i - \langle i \rangle)^2}{2\sigma_a^2}, \tag{32}$$
where the most probable isotopic composition < i > of a cluster of size a depends on the temperature according to,
$$\mu_I = \left. \frac{\partial f_{a,i}}{\partial i} \right|_{i=\langle i \rangle}, \tag{33}$$
and the associated dispersion is given by
$$\frac{1}{\sigma_a^2} = \beta \left. \frac{\partial^2 f_{a,i}}{\partial i^2} \right|_{i=\langle i \rangle}. \tag{34}$$
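As a numerical sanity check of the Gaussian approximation Eqs. (31)-(34), the sketch below compares the exact isotopic sum of Eq. (26) with its saddle-point estimate for a toy quadratic free energy f_{a,i} = c i²/a (an assumed, symmetry-energy-like form; all numbers and names are illustrative):

```python
import math

def omega_exact(a, beta, mu_I, c, w0=1.0):
    """Eq. (26): sum over z of w0 * exp(-beta*f_{a,i} + beta*mu_I*i), i = a - 2z."""
    total = 0.0
    for z in range(a + 1):
        i = a - 2 * z
        total += w0 * math.exp(-beta * c * i * i / a + beta * mu_I * i)
    return total

def omega_saddle(a, beta, mu_I, c, w0=1.0):
    """Gaussian estimate around the saddle point, Eqs. (32)-(34)."""
    i0 = mu_I * a / (2.0 * c)            # mu_I = df/di, Eq. (33)
    sigma2 = a / (2.0 * beta * c)        # 1/sigma^2 = beta * d2f/di2, Eq. (34)
    g0 = -beta * c * i0 * i0 / a + beta * mu_I * i0
    # The 1/2 accounts for i stepping by 2, as in the 1/2 of Eq. (31):
    return w0 * math.exp(g0) * math.sqrt(2.0 * math.pi * sigma2) / 2.0
```

For a broad Gaussian (σ_a large compared with the spacing 2 of the allowed i values) the two estimates agree to well below the percent level, which is the regime in which the continuous approximation is used in the text.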
The global weight of size a finally results in
$$\omega_{a>a_{min}} \approx \omega_{a,\langle i \rangle} \sqrt{2\pi\sigma_a^2}\; \exp\left( \beta\mu_I \langle i \rangle \right) / 2, \tag{35}$$
and the value of a_min is chosen such that Eqs. (35) and (26) give estimates differing by less than 5%. The canonical thermodynamical potential $\sigma^{can}_{\beta,\mu_I}$ is then defined by Eq. (22), with the identification $Z_{\beta,\mu_I} \equiv Z^{a>1}_{\beta,\mu_I}$. Ensembles are equivalent if this function coincides with the entropy density $\sigma_{\beta,\mu_I}(\rho)$ defined by the Legendre transform of the grandcanonical partition sum, Eq. (18). More precisely, ensemble equivalence demands that the thermodynamic quantities calculated from the canonical partition sum,
$$\mu^{can} = -\frac{1}{\beta} \frac{\partial \sigma^{can}_{\beta,\mu_I}}{\partial \rho}, \qquad p^{can} = \frac{\sigma^{can}_{\beta,\mu_I}}{\beta} + \mu\rho, \qquad \epsilon^{can} = -\frac{\partial \sigma^{can}_{\beta,\mu_I}}{\partial \beta}, \tag{36}$$
coincide with the corresponding grandcanonical quantities. The canonical thermodynamics, in the same thermodynamic conditions as in Fig. 1, is displayed in Fig. 2. We can see that the canonical calculation interpolates between the dense and diluted branches observed in the grandcanonical ensemble, as expected. However, the interpolation is not linear, meaning that the chemical potential varies continuously as a function of the density. The discontinuity in the entropy slope at high density leads to a jump in the intensive observables close to the saturation density, in complete disagreement with the grandcanonical solution [3]. Even more interesting, the entropy presents a convex intruder, the behavior of the equations of state is not monotonous, and a clear back-bending is observed, qualitatively similar to the phenomenon observed in first order phase transitions in finite systems [4]. The canonical partition sum Eq. (30) is only defined for finite values of the total number of particles A. One may therefore suspect that this non-trivial behavior, and the similarity with finite-system thermodynamics, are due to the thermodynamic limit not being reached. Fig. 3 shows that this is not the case: the canonical calculation is convergent. In this figure the chemical potential (a), pressure (b) and cluster size distribution (c) are represented for different values of the total number of particles A, at a given density inside the in-equivalence region.
We can see that indeed a very large size has to be used before this limit is achieved. As a rule of thumb, the total system size typically has to be approximately ten times bigger than the most probable cluster size in order to obtain convergent results, meaning that the equivalent of a Wigner-Seitz cell contains on average ≈ 10 dominant clusters. This can be understood by considering that at finite temperature the distribution is very wide, and the often-used single-nucleus approximation [18] is not realistic.
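The convergence study of Fig. 3 can be mimicked in miniature. The sketch below is not the paper's model: it assumes an arbitrary Gaussian cluster weight favoring size ≈ 20 and a volume growing with A, and evaluates the canonical partition sum of a non-interacting cluster mixture exactly with the standard recursion Z_A = (1/A) Σ_a a ω_a Z_{A−a} (the relation of Chase and Mekjian, who appear in the reference list), from which the mean multiplicities ⟨n_a⟩ = ω_a Z_{A−a}/Z_A follow.

```python
import math

def log_sum_exp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def log_partition(log_w, A):
    """Chase-Mekjian recursion in log space: Z_A = (1/A) sum_a a w_a Z_{A-a}."""
    logZ = [0.0]  # Z_0 = 1
    for M in range(1, A + 1):
        terms = [math.log(a) + log_w(a) + logZ[M - a] for a in range(1, M + 1)]
        logZ.append(log_sum_exp(terms) - math.log(M))
    return logZ

# Toy weight (assumption): Gaussian preference for clusters of size ~20,
# with a volume factor proportional to A so that the density stays fixed.
def make_log_w(A):
    V = A / 5.0
    return lambda a: math.log(V) - (a - 20.0) ** 2 / 50.0

A = 300
log_w = make_log_w(A)
logZ = log_partition(log_w, A)

# Mean multiplicities <n_a> = w_a * Z_{A-a} / Z_A
n = [math.exp(log_w(a) + logZ[A - a] - logZ[A]) for a in range(1, A + 1)]

# Exact sum rule: the multiplicities must exhaust the total baryon number
total = sum(a * n[a - 1] for a in range(1, A + 1))
print(round(total, 3))  # -> 300.0
```

Re-running with increasing A (at fixed V/A) shows the multiplicity per unit volume stabilizing once A is roughly ten times the dominant cluster size, in line with the rule of thumb above.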
As it can be seen in Fig. 3, the grandcanonical equilibrium prediction for this thermodynamic condition would correspond to a macroscopic liquid fraction in equilibrium with essentially free particles (dashed line). An explicit computation of the coexistence region in the canonical ensemble shows that this is not the case, as the cluster sizes do not scale with the total system size A. As one may notice, this liquid fraction is replaced by a finite, though large, nucleus with a characteristic radius of the order of only 5 − 10 fm.
This finding is in agreement with all the theoretical microscopic models, which are naturally elaborated inside a single Wigner-Seitz cell with a fixed number of particles [36-39]. All these canonical studies agree in predicting that matter is clusterized at all subsaturation densities and that the cluster size and composition evolve continuously with the density, which is incompatible with a description based on phase coexistence [40], where only the relative proportion of the two phases varies through the phase transition. As argued in Ref. [16], the quenching of the phase transition is due to the high electron incompressibility. Because of the charge-neutrality constraint over macroscopic distances, an intermediate-density solution given by a linear combination of a high-density homogeneous region and a low-density clusterized region would imply an infinite repulsive interaction energy due to the electron density discontinuity at the (macroscopic) interface [16].
This argument requires electrons to be completely incompressible. Since the electron incompressibility, while high, is not infinite, one could have argued that a slightly modified coexistence region should be observed, in which the liquid fraction would consist of large but still mesoscopic clusters, such that the interface energy would not diverge. The comparison between the canonical and grandcanonical ensembles shown in Figs. 1, 2, 3 demonstrates that this is not the case: the presence of microscopic, rather than macroscopic, fluctuations qualitatively changes both the thermodynamics and the composition of matter.
III. PHENOMENOLOGICAL CONSEQUENCES OF ENSEMBLE IN-EQUIVALENCE
It is important to remark that in the density region where the grandcanonical ensemble is defined, the predictions of the two ensembles coincide. The in-equivalence is observed in the intermediate density region, where the grandcanonical first-order phase transition is not observed in the canonical ensemble. Since in this density domain the grandcanonical ensemble is not defined, there is no ambiguity about which ensemble should be chosen, meaning that the phenomenology of star matter has to be described with canonical thermodynamics.
A closer look at the cluster distribution in the in-equivalence region can be obtained from the left part of Fig. 4, which displays the cluster distributions as a function of the density in the ensemble in-equivalence region. We can see that this distribution varies continuously, as is expected physically, with very heavy nuclei and a very wide size distribution at the highest densities corresponding to the inner crust. These very massive nuclei disappear at a density close to saturation density, which defines the density corresponding to the crust-core transition in the model. This sudden process is at the origin of the non-differentiable point observed in the entropy in Fig. 2. The resulting discontinuity in chemical potential and pressure is therefore a physical effect in the framework of this model. A word of caution is however necessary. It is well known from the star-matter literature [41] that close to saturation density deformed extended nuclei are energetically favored, the so-called "pasta" phases. We expect that adding a deformation degree of freedom to the cluster energy functional would smooth this discontinuity. The right part of Fig. 4 represents the mass fraction of unclusterized matter (free nucleons and homogeneous dense matter) as a function of the density. Homogeneous matter dominates at the very low densities which physically correspond to the neutron star atmosphere, while clusters become increasingly dominant at higher density, until they melt into the homogeneous liquid core. Even in the absence of a first-order phase transition, a very sharp behavior is obtained, defining a relatively precise value for the crust-core transition density.
Besides the relevance of the issue of ensemble in-equivalence from the statistical-physics viewpoint, it is interesting to remark that the use of grandcanonical thermodynamics can lead to important qualitative and quantitative discrepancies in the computation of different physical quantities of interest for astrophysical applications. This is demonstrated in Fig. 5, which shows the cluster distribution for a chosen thermodynamic condition (temperature T = 1.6 MeV, baryonic density ρ = 3.3 · 10^11 g cm^-3, proton fraction Yp = 0.41) which is typical of the dynamics of supernova matter after the bounce and before the propagation of the shock wave [11,12]. We can see that the dominant cluster size is around A = 60, which is a particularly important size in the process of electron capture that determines the composition of the resulting neutron star [42-47]. It is clear that it is very important to correctly compute the abundances of such nuclei.
Conversely, in a grandcanonical formulation such as the widely used nuclear statistical equilibrium (NSE) [25-28], these partitions are simply not accessible, as they fall in the phase-transition region. An approach consisting of taking the metastable grandcanonical prediction and considering only the nuclei of such size that the chosen total density is obtained, as in Ref. [19], is shown by the dashed line in Fig. 5. It is clear that such an approach completely misses the correct cluster distribution. Alternatively, hybrid canonical-grandcanonical formulations are routinely used in the astrophysical community [18,48,49]. Such approaches do not share the drawback of grandcanonical NSE calculations, but they always introduce artificial Maxwell constructions to fill the high-density part of the equation of state. Moreover, they never address the fluctuations in the cluster composition, clusterized matter being modeled by a single representative nucleus. It is clear from Fig. 5 that this approximation is highly questionable at finite temperature, where distributions are wide and the largest cluster coincides neither with the average one nor with the most probable one. The model we have presented overcomes all these problems. It is clear that many improvements are still necessary in this model before it can be considered a reliable quantitative prediction for astrophysical simulations. Both improvements of the cluster energy functional and the inclusion of deformation degrees of freedom are in progress to this aim. However, we believe that the main results presented in this paper, namely the absence of a phase transition due to Coulomb frustration and the dominance of microscopic clusters with a large and continuous distribution of sizes extending over most of the subsaturation region, are general results which will not change with a more sophisticated model.
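The inadequacy of a single representative nucleus can be made quantitative with elementary statistics. The sketch below assumes an invented skewed cluster-size distribution p(a) ∝ a exp(−a/30), not the model's output, and compares the most probable size, the average size, and the expected largest cluster among N = 100 sampled clusters:

```python
import math

# Toy skewed cluster-size distribution (assumption): p(a) ~ a * exp(-a/30)
a_max = 2000
w = [a * math.exp(-a / 30.0) for a in range(1, a_max + 1)]
Z = sum(w)
p = [x / Z for x in w]

# Most probable size (mode) and average size (mean)
mode = 1 + max(range(a_max), key=lambda i: p[i])
mean = sum((i + 1) * p[i] for i in range(a_max))
print(mode, round(mean))  # -> 30 60

# Expected maximum of N independent draws: E[max] = sum_a (1 - F(a)^N),
# with F the cumulative distribution; it is far above both mode and mean.
N = 100
F, cum = [], 0.0
for x in p:
    cum += x
    F.append(cum)
expected_max = 1.0 + sum(1.0 - F[i] ** N for i in range(a_max))
print(round(expected_max))
```

For a wide distribution the three estimators differ by large factors, which is why a one-nucleus approximation cannot represent the composition of the matter.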
IV. CONCLUSIONS
To conclude, in this paper we have shown that dense stellar matter as it can be found in core-collapse supernovae and in the crust of neutron stars is a macroscopic physical example of ensemble in-equivalence. A first-order phase transition is observed in the grandcanonical ensemble, but when the region corresponding to the discontinuity is explored by explicitly constraining the density in the canonical ensemble, the macroscopic inhomogeneities associated with phase coexistence are seen to be replaced by microscopic inhomogeneities leading to cluster formation. As a consequence, the transition observed in the physical system is continuous. This phenomenon is due to the long-range Coulomb interactions, which quench the phase transition, thus giving rise to a thermodynamics qualitatively similar to that of finite systems, including thermodynamic anomalies. Specifically, the relation between density and chemical potential is non-monotonic, implying a negative susceptibility.
Accounting for this specificity can have sizeable effects on the computation of different quantities of interest for supernova dynamics. This work has been partly supported by ANR under the project NEXEN and by the IFIN-IN2P3 agreement nr. 07-44.
FIG. 1: Grandcanonical thermodynamics at T = 1.6 MeV and µ_I = 1.68 MeV. Constrained entropy per baryon (a) and associated pressure (b) as a function of the inverse baryonic density; to better evidence the concave behavior of s_{β,µ_I} a linear function (βPT/ρ − βµ_I T) is subtracted; (c) chemical potential as a function of the baryonic density; (d) pressure as a function of the chemical potential. Dashed lines: Gibbs construction.
FIG. 2: Constrained entropy (a), pressure (b),(c) and chemical potential (b),(d) evaluated in the canonical ensemble in the same thermodynamic conditions as in Fig. 1. The pressure and chemical potential discontinuity is indicated by a dotted line.
FIG. 3: Convergence study of the canonical ensemble. The baryonic chemical potential (a), baryonic pressure (b) and cluster multiplicity per unit volume as a function of the cluster size (c) are represented at a representative density inside the ensemble in-equivalence region, varying the total system size. Thick line in panel (c): cluster multiplicity distribution in the grandcanonical ensemble.
FIG. 4: (a) Cluster distributions as a function of the density in the ensemble in-equivalence region; (b) mass fraction of unclusterized matter as a function of the total baryonic density, in the same thermodynamic conditions as in (a). The dashed line gives the saturation density of nuclear matter.
FIG. 5: Comparison between canonical (full line) and grandcanonical (dashed line) predictions for the cluster distribution in a specific thermodynamic condition relevant for supernova dynamics.

Ad. R. R. acknowledges partial support from the Romanian National Authority for Scientific Research under grant PN-II-ID-PCE-2011-3-0092 and kind hospitality from LPC-Caen.
L. Van Hove, Physica 15, 951 (1949);
C. N. Yang and T. D. Lee, Phys. Rev. 87, 404 (1952);
K. Huang, Statistical Mechanics, John Wiley and Sons Inc. (1963), chap. 15.2 and appendix C.
C. Tsallis, Introduction to Nonextensive Statistical Mechanics: Approaching a Complex World, Springer, NY (2009).
T. Dauxois, S. Ruffo, E. Arimondo and M. Wilkens (eds.), Dynamics and Thermodynamics of Systems With Long Range Interactions, Lect. Notes in Phys. 602, Springer (2002);
A. Campa, T. Dauxois, S. Ruffo, Phys. Rep. 480, 57 (2009).
P. Chomaz et al., Phys. Rev. Lett. 85, 3587 (2000);
D. H. E. Gross, Microcanonical Thermodynamics: Phase Transitions in Finite Systems, Lecture Notes in Physics, vol. 66, World Scientific (2001).
M. D'Agostino et al., Phys. Lett. B 473, 219 (2000);
M. Schmidt et al., Phys. Rev. Lett. 86, 1191 (2001);
F. Gobet et al., Phys. Rev. Lett. 89, 183403 (2002).
R. Bachelard et al., Phys. Rev. Lett. 101, 260603 (2008);
S. Ruffo, Eur. Phys. J. B 64, 355 (2008).
F. Bouchet and J. Barré, J. Stat. Phys. 118, 1073 (2005).
F. Baldovin and E. Orlandini, Phys. Rev. Lett. 96, 240602 (2006).
S. Gupta, D. Mukamel, Phys. Rev. Lett. 105, 040602 (2010).
F. Gulminelli and Ph. Chomaz, Phys. Rev. E 66, 046108 (2002).
F. S. Kitaura, H. Th. Janka and W. Hillebrandt, Astron. Astrophys. 450, 345 (2006).
A. Marek and H. Th. Janka, Astrophys. Journ. 694, 664 (2009).
D. Vautherin, Adv. Nucl. Phys. 22, 123 (1996);
S. Shlomo and V. M. Kolomietz, Rep. Prog. Phys. 68, 1 (2005);
F. Douchin, P. Haensel and J. Meyer, Nucl. Phys. A 665, 419 (2000);
A. Rios, Nucl. Phys. A 845, 58 (2010).
J. M. Lattimer and M. Prakash, Science 304, 536 (2004).
P. Haensel, A. Y. Potekhin, D. G. Yakovlev, Neutron Stars: Equation of State and Structure, Springer, Berlin (2007).
P. Napolitani et al., Phys. Rev. Lett. 98, 131102 (2007);
C. Ducoin et al., Phys. Rev. C 75, 065805 (2007).
Ad. R. Raduta and F. Gulminelli, Phys. Rev. C 82, 065801 (2010).
J. M. Lattimer and F. Douglas Swesty, Nucl. Phys. A 535, 331 (1991).
M. Hempel and J. Schaffner-Bielich, Nucl. Phys. A 837, 210 (2010).
S. K. Samaddar, J. N. De, X. Vinas and M. Centelles, Phys. Rev. C 80, 035803 (2009).
A. Arcones et al., Phys. Rev. C 78, 015806 (2008).
S. Heckel, P. P. Schneider and A. Sedrakian, Phys. Rev. C 80, 015805 (2009).
S. Typel et al., Phys. Rev. C 81, 015803 (2010).
M. E. Fisher, Physics (NY) 3, 255 (1967).
A. C. Phillips, The Physics of Stars, John Wiley and Sons, Chichester, GB (1994).
A. S. Botvina, I. N. Mishustin, Nucl. Phys. A 843, 98 (2010).
S. I. Blinnikov, I. V. Panov, M. A. Rudzsky and K. Sumiyoshi, arXiv:0904.3849 [astro-ph].
S. R. Souza, A. W. Steiner, W. G. Lynch, R. Donangelo and M. A. Famiano, Astrophys. J. 707, 1495 (2009).
J. Bartel et al., Nucl. Phys. A 386, 79 (1982).
N. K. Glendenning, Phys. Rep. 342, 394 (2001).
M. Hempel, G. Pagliara, J. Schaffner-Bielich, Phys. Rev. D 80, 125014 (2009).
Ph. Chomaz, M. Colonna, J. Randrup, Phys. Rep. 389 (2004);
J. Margueron, Ph. Chomaz, Phys. Rev. C 67, 041602 (2003).
C. Ducoin, Ph. Chomaz and F. Gulminelli, Nucl. Phys. A 771, 68 (2006).
J. Sivardière, J. Phys. C 14, 3829 and 3845 (1981).
K. C. Chase and A. Z. Mekjian, Phys. Rev. C 52, R2339 (1995).
C. J. Horowitz, M. A. Pérez-Garcia, D. K. Berry and J. Piekarewicz, Phys. Rev. C 72, 035801 (2006).
H. Sonoda, G. Watanabe, K. Sato, K. Yasuoka and T. Ebisuzaki, Phys. Rev. C 77, 035806 (2008);
G. Watanabe, H. Sonoda, T. Maruyama, K. Sato, K. Yasuoka and T. Ebisuzaki, Phys. Rev. Lett. 103, 121101 (2009).
W. G. Newton and J. R. Stone, Phys. Rev. C 79, 055801 (2009).
F. Sebille, S. Figerou and V. de la Mota, Nucl. Phys. A 822, 51 (2009).
C. Ishizuka, A. Ohnishi, and K. Sumiyoshi, Prog. Theor. Phys. Suppl. 146, 373 (2002).
J. M. Lattimer, C. J. Pethick, D. G. Ravenhall and D. Q. Lamb, Nucl. Phys. A 432, 646 (1985).
H. A. Bethe, G. E. Brown, J. Applegate and J. M. Lattimer, Nucl. Phys. A 324, 487 (1979).
Ya. B. Zeldovich and I. D. Novikov, Relativistic Astrophysics, University of Chicago Press, Chicago (1971).
C. J. Horowitz, M. A. Pérez-Garcia, J. Carriere, D. K. Berry and J. Piekarewicz, Phys. Rev. C 70, 065806 (2004).
G. Martínez-Pinedo, M. Liebendorfer and D. Frekers, Nucl. Phys. A 777, 395 (2006).
H. T. Janka, K. Langanke, A. Marek, G. Martínez-Pinedo, and B. Müller, Phys. Rep. 442, 38 (2007).
H. Sonoda, G. Watanabe, K. Sato, T. Takiwaki, K. Yasuoka, T. Ebisuzaki, Phys. Rev. C 75, 042801(R) (2007).
H. Shen, H. Toki, K. Oyamatsu and K. Sumiyoshi, Nucl. Phys. A 637, 435 (1998);
H. Shen, H. Toki, K. Oyamatsu and K. Sumiyoshi, Prog. Theor. Phys. 100, 1013 (1998).
G. Shen, C. J. Horowitz and S. Teige, Phys. Rev. C 82, 015806 (2010).